| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5872 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5872/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5872/comments | https://api.github.com/repos/huggingface/datasets/issues/5872/events | https://github.com/huggingface/datasets/pull/5872 | 1,713,174,662 | PR_kwDODunzps5QrQ5o | 5,872 | Fix infer module for uppercase extensions | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007049 / 0.011353 (-0.004304) | 0.005034 / 0.011008 (-0.005974) | 0.097737 / 0.038508 (0.059229) | 0.033280 / 0.023109 (0.010170) | 0.301017 / 0.275898 (0.025119) | 0.336593 / 0.323480 (0.013113) | 0.005567 / 0.007986 (-0.002419) | 0.005384 / 0.004328 (0.001056) | 0.072980 / 0.004250 (0.068730) | 0.045030 / 0.037052 (0.007978) | 0.303280 / 0.258489 (0.044791) | 0.367528 / 0.293841 (0.073687) | 0.034131 / 0.128546 (-0.094415) | 0.012118 / 0.075646 (-0.063528) | 0.331677 / 0.419271 (-0.087594) | 0.049211 / 0.043533 (0.005678) | 0.297535 / 0.255139 (0.042396) | 0.318136 / 0.283200 (0.034936) | 0.101574 / 0.141683 (-0.040109) | 1.472769 / 1.452155 (0.020615) | 1.541724 / 1.492716 (0.049007) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.014646 / 0.018006 (-0.003360) | 0.439050 / 0.000490 (0.438560) | 0.008575 / 0.000200 (0.008375) | 0.000297 / 0.000054 (0.000242) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027591 / 0.037411 (-0.009820) | 0.111639 / 0.014526 (0.097113) | 0.117098 / 0.176557 (-0.059458) | 0.173281 / 0.737135 (-0.563855) | 0.123197 / 0.296338 (-0.173141) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397507 / 0.215209 (0.182298) | 3.971457 / 2.077655 (1.893803) | 1.781158 / 1.504120 (0.277038) | 1.590419 / 1.541195 (0.049224) | 1.716374 / 1.468490 
(0.247884) | 0.687150 / 4.584777 (-3.897627) | 3.691009 / 3.745712 (-0.054703) | 2.050900 / 5.269862 (-3.218961) | 1.304893 / 4.565676 (-3.260784) | 0.084507 / 0.424275 (-0.339768) | 0.012231 / 0.007607 (0.004624) | 0.493033 / 0.226044 (0.266988) | 4.929957 / 2.268929 (2.661028) | 2.209069 / 55.444624 (-53.235555) | 1.885992 / 6.876477 (-4.990485) | 2.007004 / 2.142072 (-0.135069) | 0.827265 / 4.805227 (-3.977963) | 0.168225 / 6.500664 (-6.332439) | 0.064988 / 0.075469 (-0.010481) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.182341 / 1.841788 (-0.659447) | 14.691983 / 8.074308 (6.617674) | 14.350720 / 10.191392 (4.159328) | 0.164307 / 0.680424 (-0.516117) | 0.017480 / 0.534201 (-0.516720) | 0.421843 / 0.579283 (-0.157441) | 0.417481 / 0.434364 (-0.016883) | 0.496587 / 0.540337 (-0.043751) | 0.581208 / 1.386936 (-0.805728) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007070 / 0.011353 (-0.004283) | 0.005083 / 0.011008 (-0.005926) | 0.075009 / 0.038508 (0.036500) | 0.032343 / 0.023109 (0.009234) | 0.366788 / 0.275898 (0.090890) | 0.392273 / 0.323480 (0.068794) | 0.005512 / 0.007986 (-0.002474) | 0.003999 / 0.004328 (-0.000329) | 0.073743 / 0.004250 (0.069492) | 0.046203 / 0.037052 (0.009151) | 0.367874 / 0.258489 (0.109385) | 0.409154 / 0.293841 (0.115313) | 0.035227 / 0.128546 (-0.093319) | 0.012223 / 0.075646 (-0.063424) | 0.087149 / 0.419271 (-0.332122) | 0.045648 / 0.043533 (0.002115) | 0.362414 / 0.255139 (0.107275) | 0.379970 / 0.283200 (0.096770) | 0.100631 / 0.141683 (-0.041052) | 1.439733 / 1.452155 (-0.012422) | 1.506266 / 1.492716 (0.013550) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227071 / 0.018006 (0.209065) | 0.451243 / 0.000490 (0.450753) | 0.000406 / 0.000200 (0.000206) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028952 / 0.037411 (-0.008459) | 0.111934 / 0.014526 (0.097408) | 0.124080 / 0.176557 (-0.052477) | 0.174022 / 0.737135 (-0.563113) | 0.126811 / 0.296338 (-0.169527) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436423 / 0.215209 (0.221214) | 4.331959 / 2.077655 (2.254304) | 2.111914 / 1.504120 (0.607794) | 1.921338 / 1.541195 (0.380143) | 1.994425 / 1.468490 (0.525935) | 0.699164 / 4.584777 (-3.885613) | 3.722143 / 3.745712 (-0.023569) | 3.516538 / 5.269862 (-1.753323) | 1.867245 / 4.565676 (-2.698431) | 0.085923 / 0.424275 (-0.338352) | 0.012059 / 0.007607 (0.004452) | 0.586147 / 0.226044 (0.360102) | 5.395823 / 2.268929 (3.126894) | 2.594430 / 55.444624 (-52.850194) | 2.275021 / 6.876477 (-4.601456) | 2.347810 / 2.142072 (0.205737) | 0.835118 / 4.805227 (-3.970109) | 0.167089 / 6.500664 (-6.333575) | 0.064893 / 0.075469 (-0.010576) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291423 / 1.841788 (-0.550365) | 14.992696 / 8.074308 (6.918388) | 13.307842 / 10.191392 (3.116450) | 0.163799 / 0.680424 (-0.516625) | 0.017315 / 0.534201 (-0.516886) | 0.461319 / 0.579283 (-0.117965) | 0.430474 / 0.434364 (-0.003889) | 0.568115 / 0.540337 (0.027777) | 0.647909 / 1.386936 (-0.739027) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a5161c9ecdcdde9cc99c7f212da13523d5ba6bdb \"CML watermark\")\n"
] | 2023-05-17T05:56:45 | 2023-05-17T14:26:59 | 2023-05-17T14:19:18 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5872",
"html_url": "https://github.com/huggingface/datasets/pull/5872",
"diff_url": "https://github.com/huggingface/datasets/pull/5872.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5872.patch",
"merged_at": "2023-05-17T14:19:18"
} | Fix the `infer_module_for_data_files` and `infer_module_for_data_files_in_archives` functions when passed a data file name with an uppercase extension, e.g. `filename.TXT`.
Previously, `None` was returned as the inferred module. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5872/timeline | null | null | true |
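The row above describes PR #5872, whose fix amounts to case-normalizing a file's extension before looking up its builder module. Below is a minimal illustrative sketch of that idea; the `_EXTENSION_TO_MODULE` mapping and the function body are assumptions for demonstration, not the actual `datasets` source:

```python
import os

# Hypothetical extension-to-module mapping; the real one in `datasets` is larger.
_EXTENSION_TO_MODULE = {"csv": "csv", "json": "json", "txt": "text"}

def infer_module_for_data_files(data_files: list[str]) -> str | None:
    """Return the builder module matching the files' extensions, or None."""
    for name in data_files:
        suffix = os.path.splitext(name)[1].lstrip(".")
        # The fix described in the PR: lowercase the suffix so "filename.TXT"
        # matches the lowercase key "txt" instead of falling through to None.
        module = _EXTENSION_TO_MODULE.get(suffix.lower())
        if module is not None:
            return module
    return None

print(infer_module_for_data_files(["filename.TXT"]))  # -> "text"
```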
https://api.github.com/repos/huggingface/datasets/issues/5868 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5868/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5868/comments | https://api.github.com/repos/huggingface/datasets/issues/5868/events | https://github.com/huggingface/datasets/issues/5868 | 1,711,173,098 | I_kwDODunzps5l_m3q | 5,868 | Is it possible to change a cached file and 're-cache' it instead of re-generating? | {
"login": "zyh3826",
"id": 31238754,
"node_id": "MDQ6VXNlcjMxMjM4NzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/31238754?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zyh3826",
"html_url": "https://github.com/zyh3826",
"followers_url": "https://api.github.com/users/zyh3826/followers",
"following_url": "https://api.github.com/users/zyh3826/following{/other_user}",
"gists_url": "https://api.github.com/users/zyh3826/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zyh3826/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zyh3826/subscriptions",
"organizations_url": "https://api.github.com/users/zyh3826/orgs",
"repos_url": "https://api.github.com/users/zyh3826/repos",
"events_url": "https://api.github.com/users/zyh3826/events{/privacy}",
"received_events_url": "https://api.github.com/users/zyh3826/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Arrow files/primitives (tables and arrays) are immutable, so re-generating them is the only option, I'm afraid.",
"> \r\n\r\nGot it, thanks for your reply"
] | 2023-05-16T03:45:42 | 2023-05-17T11:21:36 | 2023-05-17T11:21:36 | NONE | null | null | null | ### Feature request
Hi,
I have a huge cached file (over 500 GB) generated with `map`, and I want to change an attribute of each element. Is it possible to do this with some method instead of re-generating the cache, given that `map` takes over 24 hours?
### Motivation
For large datasets, I think this is very important, because we often need to change something in the original cache without re-generating it.
### Your contribution
For now, I can't help, sorry. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5868/timeline | null | completed | false |
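The row above (issue #5868) concludes that Arrow caches are immutable, so changing a field still requires a fresh `map` pass. A hedged sketch of making that pass cheaper using the real `batched` and `num_proc` parameters of `Dataset.map`; the dataset, field name, and worker count below are placeholders, not the reporter's setup:

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")  # stand-in for the user's cached dataset

def update_attribute(batch):
    # Rewrite one field per element; batching avoids per-row Python overhead.
    batch["text"] = [t.strip() for t in batch["text"]]
    return batch

# batched=True processes rows in chunks; num_proc shards the work across processes.
# The result is written to a new cache file -- the old Arrow file is never mutated.
ds = ds.map(update_attribute, batched=True, num_proc=8)
```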
https://api.github.com/repos/huggingface/datasets/issues/5861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5861/comments | https://api.github.com/repos/huggingface/datasets/issues/5861/events | https://github.com/huggingface/datasets/pull/5861 | 1,709,807,340 | PR_kwDODunzps5Qf55q | 5,861 | Better error message when combining dataset dicts instead of datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007167 / 0.011353 (-0.004185) | 0.004914 / 0.011008 (-0.006094) | 0.096858 / 0.038508 (0.058350) | 0.033468 / 0.023109 (0.010359) | 0.297276 / 0.275898 (0.021378) | 0.344289 / 0.323480 (0.020809) | 0.005703 / 0.007986 (-0.002282) | 0.003972 / 0.004328 (-0.000357) | 0.075191 / 0.004250 (0.070940) | 0.046247 / 0.037052 (0.009194) | 0.317857 / 0.258489 (0.059368) | 0.347263 / 0.293841 (0.053422) | 0.035017 / 0.128546 (-0.093529) | 0.012036 / 0.075646 (-0.063611) | 0.332522 / 0.419271 (-0.086750) | 0.050188 / 0.043533 (0.006655) | 0.296627 / 0.255139 (0.041488) | 0.319196 / 0.283200 (0.035997) | 0.101100 / 0.141683 (-0.040583) | 1.484536 / 1.452155 (0.032382) | 1.606364 / 1.492716 (0.113648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203954 / 0.018006 (0.185948) | 0.436505 / 0.000490 (0.436015) | 0.003853 / 0.000200 (0.003654) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025834 / 0.037411 (-0.011578) | 0.105759 / 0.014526 (0.091233) | 0.114289 / 0.176557 (-0.062268) | 0.174388 / 0.737135 (-0.562748) | 0.122248 / 0.296338 (-0.174090) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404218 / 0.215209 (0.189009) | 4.027900 / 2.077655 (1.950245) | 1.854757 / 1.504120 (0.350637) | 1.668882 / 1.541195 (0.127687) | 1.731451 / 1.468490 
(0.262961) | 0.707843 / 4.584777 (-3.876934) | 3.756386 / 3.745712 (0.010674) | 2.067751 / 5.269862 (-3.202110) | 1.313039 / 4.565676 (-3.252638) | 0.086442 / 0.424275 (-0.337833) | 0.012329 / 0.007607 (0.004722) | 0.505964 / 0.226044 (0.279919) | 5.050788 / 2.268929 (2.781860) | 2.353936 / 55.444624 (-53.090688) | 2.055560 / 6.876477 (-4.820917) | 2.162948 / 2.142072 (0.020876) | 0.850532 / 4.805227 (-3.954696) | 0.168560 / 6.500664 (-6.332104) | 0.063143 / 0.075469 (-0.012326) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.182723 / 1.841788 (-0.659065) | 14.779342 / 8.074308 (6.705034) | 14.461572 / 10.191392 (4.270180) | 0.163120 / 0.680424 (-0.517303) | 0.017978 / 0.534201 (-0.516223) | 0.419168 / 0.579283 (-0.160115) | 0.420955 / 0.434364 (-0.013409) | 0.509710 / 0.540337 (-0.030628) | 0.619586 / 1.386936 (-0.767350) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006804 / 0.011353 (-0.004549) | 0.005136 / 0.011008 (-0.005872) | 0.074910 / 0.038508 (0.036402) | 0.032552 / 0.023109 (0.009443) | 0.374998 / 0.275898 (0.099100) | 0.399219 / 0.323480 (0.075739) | 0.005615 / 0.007986 (-0.002371) | 0.004118 / 0.004328 (-0.000210) | 0.074219 / 0.004250 (0.069969) | 0.045924 / 0.037052 (0.008871) | 0.383228 / 0.258489 (0.124739) | 0.407195 / 0.293841 (0.113354) | 0.035460 / 0.128546 (-0.093086) | 0.012460 / 0.075646 (-0.063187) | 0.087077 / 0.419271 (-0.332195) | 0.050507 / 0.043533 (0.006974) | 0.369001 / 0.255139 (0.113862) | 0.385761 / 0.283200 (0.102561) | 0.106999 / 0.141683 (-0.034684) | 1.465456 / 1.452155 (0.013302) | 1.556962 / 1.492716 (0.064246) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214926 / 0.018006 (0.196920) | 0.436893 / 0.000490 (0.436403) | 0.003388 / 0.000200 (0.003188) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029919 / 0.037411 (-0.007492) | 0.110859 / 0.014526 (0.096333) | 0.120617 / 0.176557 (-0.055939) | 0.171781 / 0.737135 (-0.565355) | 0.125627 / 0.296338 (-0.170712) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436024 / 0.215209 (0.220815) | 4.359167 / 2.077655 (2.281512) | 2.188399 / 1.504120 (0.684279) | 2.001196 / 1.541195 (0.460001) | 2.023710 / 1.468490 (0.555220) | 0.713799 / 4.584777 (-3.870978) | 3.832217 / 3.745712 (0.086504) | 3.269351 / 5.269862 (-2.000510) | 1.534608 / 4.565676 (-3.031068) | 0.088505 / 0.424275 (-0.335770) | 0.012345 / 0.007607 (0.004738) | 0.542446 / 0.226044 (0.316401) | 5.377757 / 2.268929 (3.108828) | 2.659837 / 55.444624 (-52.784787) | 2.272356 / 6.876477 (-4.604120) | 2.297289 / 2.142072 (0.155217) | 0.855276 / 4.805227 (-3.949952) | 0.170666 / 6.500664 (-6.329998) | 0.064549 / 0.075469 (-0.010920) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255938 / 1.841788 (-0.585850) | 15.151471 / 8.074308 (7.077163) | 12.905762 / 10.191392 (2.714370) | 0.162425 / 0.680424 (-0.517999) | 0.017504 / 0.534201 (-0.516697) | 0.448671 / 0.579283 (-0.130612) | 0.422424 / 0.434364 (-0.011940) | 0.551772 / 0.540337 (0.011434) | 0.649115 / 1.386936 (-0.737821) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#be73d9f192149727c5542ff257df81b03024fa39 \"CML watermark\")\n",
"Having those different checks helps providing an appropriate error message.\r\n\r\nIf the input is a dict, we suggest to select a split. If the input lists is a mix of iterable and non-iterable, we mention that it must be one or the other.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006559 / 0.011353 (-0.004794) | 0.004569 / 0.011008 (-0.006439) | 0.104503 / 0.038508 (0.065995) | 0.028220 / 0.023109 (0.005111) | 0.365507 / 0.275898 (0.089609) | 0.400238 / 0.323480 (0.076758) | 0.004968 / 0.007986 (-0.003017) | 0.003271 / 0.004328 (-0.001057) | 0.082804 / 0.004250 (0.078554) | 0.036299 / 0.037052 (-0.000754) | 0.361201 / 0.258489 (0.102712) | 0.410962 / 0.293841 (0.117121) | 0.030423 / 0.128546 (-0.098123) | 0.011612 / 0.075646 (-0.064034) | 0.331820 / 0.419271 (-0.087452) | 0.043822 / 0.043533 (0.000289) | 0.356242 / 0.255139 (0.101103) | 0.393035 / 0.283200 (0.109836) | 0.088426 / 0.141683 (-0.053257) | 1.484139 / 1.452155 (0.031984) | 1.566712 / 1.492716 (0.073995) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195887 / 0.018006 (0.177880) | 0.402720 / 0.000490 (0.402231) | 0.003516 / 0.000200 (0.003316) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023270 / 0.037411 (-0.014141) | 0.095834 / 0.014526 (0.081308) | 0.102924 / 0.176557 (-0.073632) | 0.161397 / 0.737135 (-0.575738) | 0.105225 / 0.296338 (-0.191114) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451701 / 0.215209 (0.236491) | 4.495171 / 2.077655 (2.417517) | 2.223203 / 1.504120 (0.719083) | 2.035533 / 1.541195 (0.494338) | 2.076182 / 1.468490 
(0.607692) | 0.697317 / 4.584777 (-3.887460) | 3.406309 / 3.745712 (-0.339403) | 1.847179 / 5.269862 (-3.422683) | 1.158762 / 4.565676 (-3.406914) | 0.083067 / 0.424275 (-0.341208) | 0.012453 / 0.007607 (0.004846) | 0.546502 / 0.226044 (0.320458) | 5.455712 / 2.268929 (3.186784) | 2.654142 / 55.444624 (-52.790483) | 2.298722 / 6.876477 (-4.577755) | 2.383467 / 2.142072 (0.241395) | 0.805950 / 4.805227 (-3.999278) | 0.152479 / 6.500664 (-6.348185) | 0.066784 / 0.075469 (-0.008685) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.239129 / 1.841788 (-0.602659) | 13.603707 / 8.074308 (5.529398) | 14.062004 / 10.191392 (3.870612) | 0.130928 / 0.680424 (-0.549495) | 0.016907 / 0.534201 (-0.517294) | 0.381614 / 0.579283 (-0.197670) | 0.386770 / 0.434364 (-0.047594) | 0.455792 / 0.540337 (-0.084545) | 0.526092 / 1.386936 (-0.860844) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006202 / 0.011353 (-0.005151) | 0.004478 / 0.011008 (-0.006531) | 0.076492 / 0.038508 (0.037984) | 0.026703 / 0.023109 (0.003594) | 0.355134 / 0.275898 (0.079236) | 0.391207 / 0.323480 (0.067727) | 0.004852 / 0.007986 (-0.003133) | 0.003271 / 0.004328 (-0.001057) | 0.075080 / 0.004250 (0.070830) | 0.038803 / 0.037052 (0.001750) | 0.359530 / 0.258489 (0.101041) | 0.409044 / 0.293841 (0.115203) | 0.030366 / 0.128546 (-0.098180) | 0.011544 / 0.075646 (-0.064102) | 0.084849 / 0.419271 (-0.334423) | 0.040076 / 0.043533 (-0.003457) | 0.357359 / 0.255139 (0.102220) | 0.384075 / 0.283200 (0.100875) | 0.089130 / 0.141683 (-0.052552) | 1.520400 / 1.452155 (0.068246) | 1.604403 / 1.492716 (0.111687) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257127 / 0.018006 (0.239121) | 0.403691 / 0.000490 (0.403202) | 0.006894 / 0.000200 (0.006694) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024653 / 0.037411 (-0.012758) | 0.098834 / 0.014526 (0.084309) | 0.107276 / 0.176557 (-0.069281) | 0.158256 / 0.737135 (-0.578879) | 0.111339 / 0.296338 (-0.184999) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445006 / 0.215209 (0.229797) | 4.452953 / 2.077655 (2.375299) | 2.168291 / 1.504120 (0.664171) | 1.969457 / 1.541195 (0.428262) | 2.003505 / 1.468490 (0.535015) | 0.695857 / 4.584777 (-3.888920) | 3.433424 / 3.745712 (-0.312288) | 2.466977 / 5.269862 (-2.802885) | 1.528167 / 4.565676 (-3.037509) | 0.082425 / 0.424275 (-0.341850) | 0.012470 / 0.007607 (0.004863) | 0.559039 / 0.226044 (0.332995) | 5.609496 / 2.268929 (3.340568) | 2.602898 / 55.444624 (-52.841726) | 2.273971 / 6.876477 (-4.602506) | 2.303370 / 2.142072 (0.161298) | 0.803875 / 4.805227 (-4.001352) | 0.151069 / 6.500664 (-6.349595) | 0.067956 / 0.075469 (-0.007513) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.334443 / 1.841788 (-0.507345) | 13.773252 / 8.074308 (5.698944) | 13.007042 / 10.191392 (2.815650) | 0.127939 / 0.680424 (-0.552485) | 0.016412 / 0.534201 (-0.517789) | 0.374744 / 0.579283 (-0.204539) | 0.396912 / 0.434364 (-0.037452) | 0.443197 / 0.540337 (-0.097140) | 0.528338 / 1.386936 (-0.858598) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#51d9f2a3064aa89a780e3d02c6cc34000c51c4fb \"CML watermark\")\n",
"Just modified it to use only one loop. I think I managed to keep it readable as well",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007382 / 0.011353 (-0.003971) | 0.005143 / 0.011008 (-0.005865) | 0.097635 / 0.038508 (0.059127) | 0.034726 / 0.023109 (0.011616) | 0.315556 / 0.275898 (0.039658) | 0.355951 / 0.323480 (0.032472) | 0.006055 / 0.007986 (-0.001931) | 0.004264 / 0.004328 (-0.000065) | 0.073636 / 0.004250 (0.069386) | 0.050480 / 0.037052 (0.013428) | 0.316031 / 0.258489 (0.057542) | 0.363933 / 0.293841 (0.070092) | 0.035138 / 0.128546 (-0.093408) | 0.012407 / 0.075646 (-0.063239) | 0.333677 / 0.419271 (-0.085595) | 0.050586 / 0.043533 (0.007053) | 0.309507 / 0.255139 (0.054369) | 0.327043 / 0.283200 (0.043844) | 0.108975 / 0.141683 (-0.032708) | 1.447778 / 1.452155 (-0.004377) | 1.519971 / 1.492716 (0.027255) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248770 / 0.018006 (0.230764) | 0.603036 / 0.000490 (0.602546) | 0.000383 / 0.000200 (0.000183) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027094 / 0.037411 (-0.010317) | 0.104427 / 0.014526 (0.089901) | 0.120627 / 0.176557 (-0.055929) | 0.178790 / 0.737135 (-0.558346) | 0.124877 / 0.296338 (-0.171461) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414442 / 0.215209 (0.199233) | 4.138009 / 2.077655 (2.060355) | 1.964642 / 1.504120 (0.460523) | 1.775940 / 1.541195 (0.234745) | 1.899719 / 1.468490 
(0.431228) | 0.695406 / 4.584777 (-3.889371) | 3.760470 / 3.745712 (0.014758) | 3.906958 / 5.269862 (-1.362904) | 2.028164 / 4.565676 (-2.537513) | 0.086704 / 0.424275 (-0.337571) | 0.012465 / 0.007607 (0.004857) | 0.512336 / 0.226044 (0.286292) | 5.108587 / 2.268929 (2.839659) | 2.435273 / 55.444624 (-53.009352) | 2.142387 / 6.876477 (-4.734090) | 2.258234 / 2.142072 (0.116162) | 0.854035 / 4.805227 (-3.951193) | 0.170443 / 6.500664 (-6.330222) | 0.065762 / 0.075469 (-0.009707) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.187529 / 1.841788 (-0.654259) | 15.151164 / 8.074308 (7.076856) | 14.577545 / 10.191392 (4.386153) | 0.166973 / 0.680424 (-0.513450) | 0.017883 / 0.534201 (-0.516318) | 0.427607 / 0.579283 (-0.151676) | 0.417050 / 0.434364 (-0.017314) | 0.508116 / 0.540337 (-0.032221) | 0.590173 / 1.386936 (-0.796763) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007499 / 0.011353 (-0.003854) | 0.005195 / 0.011008 (-0.005813) | 0.073600 / 0.038508 (0.035091) | 0.033574 / 0.023109 (0.010464) | 0.377506 / 0.275898 (0.101608) | 0.432752 / 0.323480 (0.109272) | 0.006042 / 0.007986 (-0.001944) | 0.006427 / 0.004328 (0.002098) | 0.071666 / 0.004250 (0.067416) | 0.053243 / 0.037052 (0.016190) | 0.363972 / 0.258489 (0.105483) | 0.454988 / 0.293841 (0.161147) | 0.035118 / 0.128546 (-0.093428) | 0.012395 / 0.075646 (-0.063251) | 0.084308 / 0.419271 (-0.334963) | 0.048589 / 0.043533 (0.005057) | 0.368036 / 0.255139 (0.112897) | 0.399414 / 0.283200 (0.116215) | 0.109043 / 0.141683 (-0.032640) | 1.462972 / 1.452155 (0.010817) | 1.574443 / 1.492716 (0.081726) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215107 / 0.018006 (0.197101) | 0.550255 / 0.000490 (0.549765) | 0.004630 / 0.000200 (0.004430) | 0.000104 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029948 / 0.037411 (-0.007463) | 0.111866 / 0.014526 (0.097340) | 0.126559 / 0.176557 (-0.049997) | 0.181443 / 0.737135 (-0.555693) | 0.130559 / 0.296338 (-0.165779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441410 / 0.215209 (0.226201) | 4.403406 / 2.077655 (2.325752) | 2.180276 / 1.504120 (0.676156) | 2.003729 / 1.541195 (0.462534) | 2.079394 / 1.468490 (0.610904) | 0.706061 / 4.584777 (-3.878716) | 3.805668 / 3.745712 (0.059956) | 3.864941 / 5.269862 (-1.404921) | 1.970468 / 4.565676 (-2.595208) | 0.086033 / 0.424275 (-0.338242) | 0.012261 / 0.007607 (0.004654) | 0.550427 / 0.226044 (0.324383) | 5.542270 / 2.268929 (3.273342) | 2.717047 / 55.444624 (-52.727577) | 2.449022 / 6.876477 (-4.427455) | 2.549567 / 2.142072 (0.407495) | 0.854981 / 4.805227 (-3.950247) | 0.169756 / 6.500664 (-6.330908) | 0.067082 / 0.075469 (-0.008387) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281369 / 1.841788 (-0.560419) | 15.445090 / 8.074308 (7.370781) | 13.205652 / 10.191392 (3.014260) | 0.170070 / 0.680424 (-0.510354) | 0.017815 / 0.534201 (-0.516385) | 0.425193 / 0.579283 (-0.154090) | 0.425205 / 0.434364 (-0.009159) | 0.493561 / 0.540337 (-0.046776) | 0.588994 / 1.386936 (-0.797942) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e427105fc68fce04d0f3c74efb942cbf3a65d166 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006345 / 0.011353 (-0.005008) | 0.004330 / 0.011008 (-0.006678) | 0.096327 / 0.038508 (0.057819) | 0.032964 / 0.023109 (0.009855) | 0.335600 / 0.275898 (0.059702) | 0.365635 / 0.323480 (0.042155) | 0.005435 / 0.007986 (-0.002551) | 0.005005 / 0.004328 (0.000677) | 0.071107 / 0.004250 (0.066856) | 0.044363 / 0.037052 (0.007311) | 0.339988 / 0.258489 (0.081498) | 0.375575 / 0.293841 (0.081734) | 0.028343 / 0.128546 (-0.100203) | 0.008587 / 0.075646 (-0.067059) | 0.324349 / 0.419271 (-0.094922) | 0.050105 / 0.043533 (0.006573) | 0.327398 / 0.255139 (0.072259) | 0.348479 / 0.283200 (0.065279) | 0.102357 / 0.141683 (-0.039326) | 1.419905 / 1.452155 (-0.032250) | 1.534887 / 1.492716 (0.042171) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212418 / 0.018006 (0.194412) | 0.433183 / 0.000490 (0.432693) | 0.000595 / 0.000200 (0.000395) | 0.000062 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027520 / 0.037411 (-0.009891) | 0.109503 / 0.014526 (0.094977) | 0.118202 / 0.176557 (-0.058355) | 0.177236 / 0.737135 (-0.559899) | 0.123736 / 0.296338 (-0.172602) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405734 / 0.215209 (0.190525) | 4.039566 / 2.077655 (1.961911) | 1.838211 / 1.504120 (0.334091) | 1.652650 / 1.541195 (0.111456) | 1.753488 / 1.468490 
(0.284998) | 0.525258 / 4.584777 (-4.059519) | 3.704509 / 3.745712 (-0.041203) | 1.826794 / 5.269862 (-3.443067) | 1.236361 / 4.565676 (-3.329315) | 0.065619 / 0.424275 (-0.358656) | 0.011606 / 0.007607 (0.003999) | 0.505954 / 0.226044 (0.279910) | 5.054140 / 2.268929 (2.785211) | 2.352587 / 55.444624 (-53.092037) | 2.050601 / 6.876477 (-4.825875) | 2.097222 / 2.142072 (-0.044850) | 0.641044 / 4.805227 (-4.164183) | 0.140676 / 6.500664 (-6.359988) | 0.063217 / 0.075469 (-0.012253) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.177750 / 1.841788 (-0.664038) | 14.819346 / 8.074308 (6.745038) | 14.085937 / 10.191392 (3.894545) | 0.168618 / 0.680424 (-0.511806) | 0.017189 / 0.534201 (-0.517011) | 0.393415 / 0.579283 (-0.185868) | 0.422879 / 0.434364 (-0.011485) | 0.477289 / 0.540337 (-0.063048) | 0.569078 / 1.386936 (-0.817858) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006502 / 0.011353 (-0.004850) | 0.004640 / 0.011008 (-0.006368) | 0.073272 / 0.038508 (0.034764) | 0.033225 / 0.023109 (0.010116) | 0.359165 / 0.275898 (0.083267) | 0.391659 / 0.323480 (0.068179) | 0.005684 / 0.007986 (-0.002302) | 0.004045 / 0.004328 (-0.000284) | 0.072880 / 0.004250 (0.068629) | 0.046260 / 0.037052 (0.009208) | 0.361772 / 0.258489 (0.103283) | 0.402905 / 0.293841 (0.109064) | 0.027732 / 0.128546 (-0.100814) | 0.008864 / 0.075646 (-0.066783) | 0.081961 / 0.419271 (-0.337310) | 0.046170 / 0.043533 (0.002637) | 0.364198 / 0.255139 (0.109059) | 0.387468 / 0.283200 (0.104269) | 0.105456 / 0.141683 (-0.036227) | 1.457176 / 1.452155 (0.005021) | 1.564899 / 1.492716 (0.072183) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.179129 / 0.018006 (0.161123) | 0.439699 / 0.000490 (0.439209) | 0.002882 / 0.000200 (0.002682) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029123 / 0.037411 (-0.008288) | 0.112046 / 0.014526 (0.097520) | 0.122773 / 0.176557 (-0.053784) | 0.178404 / 0.737135 (-0.558732) | 0.127904 / 0.296338 (-0.168434) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440413 / 0.215209 (0.225204) | 4.407334 / 2.077655 (2.329680) | 2.112932 / 1.504120 (0.608812) | 1.911034 / 1.541195 (0.369840) | 2.057168 / 1.468490 (0.588677) | 0.525472 / 4.584777 (-4.059305) | 3.738894 / 3.745712 (-0.006818) | 1.807592 / 5.269862 (-3.462270) | 1.053837 / 4.565676 (-3.511839) | 0.066203 / 0.424275 (-0.358072) | 0.011965 / 0.007607 (0.004358) | 0.541137 / 0.226044 (0.315093) | 5.415040 / 2.268929 (3.146112) | 2.580476 / 55.444624 (-52.864148) | 2.234144 / 6.876477 (-4.642333) | 2.306014 / 2.142072 (0.163942) | 0.644221 / 4.805227 (-4.161006) | 0.142870 / 6.500664 (-6.357794) | 0.065015 / 0.075469 (-0.010454) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303465 / 1.841788 (-0.538323) | 14.949683 / 8.074308 (6.875375) | 14.370871 / 10.191392 (4.179478) | 0.142714 / 0.680424 (-0.537710) | 0.017372 / 0.534201 (-0.516829) | 0.403898 / 0.579283 (-0.175385) | 0.424781 / 0.434364 (-0.009583) | 0.465984 / 0.540337 (-0.074353) | 0.570863 / 1.386936 (-0.816074) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#22d1d533e8ab831b1aa1aab3e7d3c72ba42a83e8 \"CML watermark\")\n"
] | 2023-05-15T10:36:24 | 2023-05-23T10:40:13 | 2023-05-23T10:32:58 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5861",
"html_url": "https://github.com/huggingface/datasets/pull/5861",
"diff_url": "https://github.com/huggingface/datasets/pull/5861.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5861.patch",
"merged_at": "2023-05-23T10:32:58"
} | close https://github.com/huggingface/datasets/issues/5851 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5861/timeline | null | null | true |
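The row above (PR #5861) improves the error raised when a `DatasetDict` is passed where a list of `Dataset` objects is expected. A small sketch of the usage the new message steers users toward, i.e. selecting splits before combining; the dataset chosen is illustrative:

```python
from datasets import load_dataset, concatenate_datasets

dset_dict = load_dataset("glue", "mrpc")  # a DatasetDict with train/validation/test

# Passing the dict itself, e.g. concatenate_datasets([dset_dict]), is the mistake
# this PR detects; the error message now suggests selecting a split first.
combined = concatenate_datasets([dset_dict["train"], dset_dict["validation"]])
print(len(combined))
```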
https://api.github.com/repos/huggingface/datasets/issues/5860 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5860/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5860/comments | https://api.github.com/repos/huggingface/datasets/issues/5860/events | https://github.com/huggingface/datasets/pull/5860 | 1,709,727,460 | PR_kwDODunzps5QfojD | 5,860 | Minor tqdm optim | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006917 / 0.011353 (-0.004436) | 0.004803 / 0.011008 (-0.006205) | 0.097082 / 0.038508 (0.058574) | 0.035105 / 0.023109 (0.011996) | 0.325911 / 0.275898 (0.050013) | 0.371858 / 0.323480 (0.048378) | 0.006451 / 0.007986 (-0.001534) | 0.004421 / 0.004328 (0.000093) | 0.075738 / 0.004250 (0.071487) | 0.053624 / 0.037052 (0.016572) | 0.332661 / 0.258489 (0.074172) | 0.372729 / 0.293841 (0.078888) | 0.028279 / 0.128546 (-0.100267) | 0.009318 / 0.075646 (-0.066328) | 0.328505 / 0.419271 (-0.090766) | 0.066962 / 0.043533 (0.023429) | 0.316863 / 0.255139 (0.061724) | 0.344296 / 0.283200 (0.061096) | 0.120575 / 0.141683 (-0.021108) | 1.457867 / 1.452155 (0.005712) | 1.597361 / 1.492716 (0.104644) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296399 / 0.018006 (0.278392) | 0.507196 / 0.000490 (0.506706) | 0.003036 / 0.000200 (0.002836) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028535 / 0.037411 (-0.008876) | 0.110566 / 0.014526 (0.096040) | 0.122078 / 0.176557 (-0.054479) | 0.182926 / 0.737135 (-0.554210) | 0.125546 / 0.296338 (-0.170792) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426952 / 0.215209 (0.211742) | 4.255608 / 2.077655 (2.177953) | 2.063865 / 1.504120 (0.559745) | 1.867198 / 1.541195 (0.326004) | 2.058236 / 1.468490 
(0.589746) | 0.525885 / 4.584777 (-4.058892) | 3.723607 / 3.745712 (-0.022105) | 1.919144 / 5.269862 (-3.350718) | 1.235308 / 4.565676 (-3.330368) | 0.066423 / 0.424275 (-0.357852) | 0.012045 / 0.007607 (0.004438) | 0.528432 / 0.226044 (0.302388) | 5.268723 / 2.268929 (2.999794) | 2.504071 / 55.444624 (-52.940553) | 2.137999 / 6.876477 (-4.738477) | 2.229987 / 2.142072 (0.087914) | 0.641739 / 4.805227 (-4.163488) | 0.142635 / 6.500664 (-6.358029) | 0.065649 / 0.075469 (-0.009820) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.182710 / 1.841788 (-0.659078) | 15.339777 / 8.074308 (7.265469) | 14.722308 / 10.191392 (4.530916) | 0.145914 / 0.680424 (-0.534510) | 0.017861 / 0.534201 (-0.516340) | 0.393092 / 0.579283 (-0.186191) | 0.431179 / 0.434364 (-0.003185) | 0.485712 / 0.540337 (-0.054625) | 0.602634 / 1.386936 (-0.784302) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006792 / 0.011353 (-0.004561) | 0.005118 / 0.011008 (-0.005890) | 0.073440 / 0.038508 (0.034932) | 0.033751 / 0.023109 (0.010642) | 0.389243 / 0.275898 (0.113345) | 0.397083 / 0.323480 (0.073603) | 0.005989 / 0.007986 (-0.001997) | 0.004289 / 0.004328 (-0.000040) | 0.073228 / 0.004250 (0.068977) | 0.053490 / 0.037052 (0.016438) | 0.396070 / 0.258489 (0.137581) | 0.415134 / 0.293841 (0.121293) | 0.028649 / 0.128546 (-0.099897) | 0.009159 / 0.075646 (-0.066487) | 0.080813 / 0.419271 (-0.338458) | 0.048200 / 0.043533 (0.004667) | 0.388009 / 0.255139 (0.132870) | 0.382174 / 0.283200 (0.098975) | 0.107807 / 0.141683 (-0.033876) | 1.467276 / 1.452155 (0.015121) | 1.568091 / 1.492716 (0.075375) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.328030 / 0.018006 (0.310024) | 0.498058 / 0.000490 (0.497568) | 0.002513 / 0.000200 (0.002313) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029835 / 0.037411 (-0.007576) | 0.113859 / 0.014526 (0.099333) | 0.130813 / 0.176557 (-0.045743) | 0.183646 / 0.737135 (-0.553490) | 0.136561 / 0.296338 (-0.159777) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438901 / 0.215209 (0.223692) | 4.376426 / 2.077655 (2.298771) | 2.220932 / 1.504120 (0.716812) | 2.043585 / 1.541195 (0.502390) | 2.161383 / 1.468490 (0.692893) | 0.523224 / 4.584777 (-4.061553) | 3.730589 / 3.745712 (-0.015123) | 1.859602 / 5.269862 (-3.410260) | 1.073415 / 4.565676 (-3.492261) | 0.066363 / 0.424275 (-0.357912) | 0.012491 / 0.007607 (0.004884) | 0.542052 / 0.226044 (0.316008) | 5.426246 / 2.268929 (3.157318) | 2.673884 / 55.444624 (-52.770740) | 2.372611 / 6.876477 (-4.503865) | 2.482216 / 2.142072 (0.340143) | 0.705669 / 4.805227 (-4.099558) | 0.141075 / 6.500664 (-6.359589) | 0.065339 / 0.075469 (-0.010130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.316403 / 1.841788 (-0.525385) | 15.832870 / 8.074308 (7.758562) | 13.307045 / 10.191392 (3.115653) | 0.147258 / 0.680424 (-0.533166) | 0.017966 / 0.534201 (-0.516235) | 0.414396 / 0.579283 (-0.164887) | 0.431801 / 0.434364 (-0.002563) | 0.465483 / 0.540337 (-0.074855) | 0.577850 / 1.386936 (-0.809086) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c795c7e332a7c850c3e725f2034d4894b5e314f7 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006368 / 0.011353 (-0.004985) | 0.004274 / 0.011008 (-0.006734) | 0.098799 / 0.038508 (0.060291) | 0.029096 / 0.023109 (0.005986) | 0.308009 / 0.275898 (0.032111) | 0.345701 / 0.323480 (0.022221) | 0.005312 / 0.007986 (-0.002674) | 0.003435 / 0.004328 (-0.000894) | 0.075912 / 0.004250 (0.071662) | 0.041993 / 0.037052 (0.004941) | 0.320075 / 0.258489 (0.061586) | 0.347506 / 0.293841 (0.053665) | 0.025456 / 0.128546 (-0.103091) | 0.008461 / 0.075646 (-0.067185) | 0.322823 / 0.419271 (-0.096448) | 0.044650 / 0.043533 (0.001117) | 0.314118 / 0.255139 (0.058979) | 0.333436 / 0.283200 (0.050237) | 0.093811 / 0.141683 (-0.047871) | 1.464464 / 1.452155 (0.012310) | 1.548098 / 1.492716 (0.055382) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.015905 / 0.018006 (-0.002101) | 0.427847 / 0.000490 (0.427357) | 0.007600 / 0.000200 (0.007400) | 0.000421 / 0.000054 (0.000366) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024530 / 0.037411 (-0.012882) | 0.099907 / 0.014526 (0.085381) | 0.107282 / 0.176557 (-0.069275) | 0.168332 / 0.737135 (-0.568804) | 0.109875 / 0.296338 (-0.186464) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451064 / 0.215209 (0.235855) | 4.491434 / 2.077655 (2.413779) | 2.253251 / 1.504120 (0.749131) | 2.086740 / 1.541195 (0.545545) | 2.133288 / 1.468490 
(0.664798) | 0.558801 / 4.584777 (-4.025976) | 3.463525 / 3.745712 (-0.282187) | 1.747657 / 5.269862 (-3.522205) | 1.005465 / 4.565676 (-3.560211) | 0.068341 / 0.424275 (-0.355934) | 0.012521 / 0.007607 (0.004914) | 0.567002 / 0.226044 (0.340957) | 5.689529 / 2.268929 (3.420601) | 2.700562 / 55.444624 (-52.744062) | 2.384888 / 6.876477 (-4.491589) | 2.503160 / 2.142072 (0.361088) | 0.667107 / 4.805227 (-4.138120) | 0.137253 / 6.500664 (-6.363412) | 0.068300 / 0.075469 (-0.007170) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.202916 / 1.841788 (-0.638872) | 14.163393 / 8.074308 (6.089085) | 14.402463 / 10.191392 (4.211071) | 0.145273 / 0.680424 (-0.535151) | 0.016996 / 0.534201 (-0.517205) | 0.363520 / 0.579283 (-0.215763) | 0.421595 / 0.434364 (-0.012769) | 0.438413 / 0.540337 (-0.101925) | 0.508615 / 1.386936 (-0.878321) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006419 / 0.011353 (-0.004934) | 0.004346 / 0.011008 (-0.006662) | 0.076356 / 0.038508 (0.037848) | 0.029370 / 0.023109 (0.006260) | 0.371046 / 0.275898 (0.095148) | 0.398279 / 0.323480 (0.074799) | 0.005258 / 0.007986 (-0.002728) | 0.003528 / 0.004328 (-0.000800) | 0.076787 / 0.004250 (0.072537) | 0.041575 / 0.037052 (0.004522) | 0.362319 / 0.258489 (0.103830) | 0.402134 / 0.293841 (0.108293) | 0.025633 / 0.128546 (-0.102913) | 0.008826 / 0.075646 (-0.066820) | 0.082380 / 0.419271 (-0.336892) | 0.041655 / 0.043533 (-0.001878) | 0.357583 / 0.255139 (0.102444) | 0.383486 / 0.283200 (0.100287) | 0.093682 / 0.141683 (-0.048001) | 1.488522 / 1.452155 (0.036367) | 1.576090 / 1.492716 (0.083373) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185556 / 0.018006 (0.167550) | 0.431345 / 0.000490 (0.430855) | 0.002290 / 0.000200 (0.002090) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026030 / 0.037411 (-0.011382) | 0.102889 / 0.014526 (0.088364) | 0.109541 / 0.176557 (-0.067015) | 0.161050 / 0.737135 (-0.576085) | 0.113525 / 0.296338 (-0.182814) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445301 / 0.215209 (0.230092) | 4.437320 / 2.077655 (2.359666) | 2.174181 / 1.504120 (0.670061) | 1.977440 / 1.541195 (0.436245) | 2.036323 / 1.468490 (0.567832) | 0.554227 / 4.584777 (-4.030550) | 3.462746 / 3.745712 (-0.282966) | 1.765257 / 5.269862 (-3.504604) | 1.014515 / 4.565676 (-3.551161) | 0.068391 / 0.424275 (-0.355884) | 0.013154 / 0.007607 (0.005546) | 0.546696 / 0.226044 (0.320652) | 5.490628 / 2.268929 (3.221699) | 2.611947 / 55.444624 (-52.832677) | 2.282659 / 6.876477 (-4.593818) | 2.333972 / 2.142072 (0.191899) | 0.663140 / 4.805227 (-4.142087) | 0.137996 / 6.500664 (-6.362668) | 0.069063 / 0.075469 (-0.006407) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.332147 / 1.841788 (-0.509641) | 14.781592 / 8.074308 (6.707284) | 13.399190 / 10.191392 (3.207798) | 0.139370 / 0.680424 (-0.541054) | 0.016742 / 0.534201 (-0.517459) | 0.364138 / 0.579283 (-0.215146) | 0.402479 / 0.434364 (-0.031885) | 0.427591 / 0.540337 (-0.112746) | 0.520864 / 1.386936 (-0.866072) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a8279677b58b93f77995c7da67aea2a04b6a7395 \"CML watermark\")\n"
] | 2023-05-15T09:49:37 | 2023-05-17T18:46:46 | 2023-05-17T18:39:35 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5860",
"html_url": "https://github.com/huggingface/datasets/pull/5860",
"diff_url": "https://github.com/huggingface/datasets/pull/5860.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5860.patch",
"merged_at": "2023-05-17T18:39:35"
} | Don't create a tqdm progress bar when `disable_tqdm` is passed to `map_nested`.
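A minimal sketch of the idea (a hypothetical helper, not the actual `map_nested` internals): only wrap the iterable in `tqdm` when progress bars are enabled, so disabled calls skip the bar construction and per-item bookkeeping entirely.

```python
from tqdm.auto import tqdm

def map_over(function, iterable, disable_tqdm: bool = True):
    # Build a tqdm bar only when it will actually be shown; otherwise
    # iterate over the raw input and avoid tqdm's per-item overhead.
    items = iterable if disable_tqdm else tqdm(iterable, desc="Mapping")
    return [function(item) for item in items]
```

Constructing a bar with `disable=True` still pays object-creation and update costs, which can add up when the function is called once per example.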
On my side, this sped up some iterable datasets by ~30% when `map_nested` is used extensively to recursively tensorize Python dicts. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5860/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5853/comments | https://api.github.com/repos/huggingface/datasets/issues/5853/events | https://github.com/huggingface/datasets/pull/5853 | 1,708,092,786 | PR_kwDODunzps5QaZLP | 5,853 | [docs] Redirects, migrated from nginx | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@mishig25 note that it's not exactly the same behavior as in nginx as here it interacts a bit with the `version` and the `language`\r\n\r\nShould be close enough, though.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007212 / 0.011353 (-0.004141) | 0.005125 / 0.011008 (-0.005883) | 0.098460 / 0.038508 (0.059952) | 0.034040 / 0.023109 (0.010931) | 0.320203 / 0.275898 (0.044305) | 0.357787 / 0.323480 (0.034307) | 0.006000 / 0.007986 (-0.001986) | 0.005644 / 0.004328 (0.001316) | 0.072654 / 0.004250 (0.068403) | 0.049393 / 0.037052 (0.012341) | 0.345686 / 0.258489 (0.087196) | 0.362345 / 0.293841 (0.068504) | 0.036597 / 0.128546 (-0.091949) | 0.012303 / 0.075646 (-0.063343) | 0.334374 / 0.419271 (-0.084897) | 0.062010 / 0.043533 (0.018477) | 0.312547 / 0.255139 (0.057408) | 0.336021 / 0.283200 (0.052821) | 0.112304 / 0.141683 (-0.029378) | 1.446706 / 1.452155 (-0.005449) | 1.523256 / 1.492716 (0.030540) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217658 / 0.018006 (0.199652) | 0.449208 / 0.000490 (0.448718) | 0.002878 / 0.000200 (0.002679) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025735 / 0.037411 (-0.011676) | 0.105876 / 0.014526 (0.091350) | 0.114887 / 0.176557 (-0.061669) | 0.170984 / 0.737135 (-0.566152) | 0.121420 / 0.296338 (-0.174918) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419670 / 0.215209 (0.204461) | 4.189453 / 2.077655 (2.111798) | 1.938236 / 1.504120 (0.434116) | 1.769747 / 1.541195 (0.228553) | 1.910919 / 1.468490 
(0.442429) | 0.705046 / 4.584777 (-3.879730) | 3.783774 / 3.745712 (0.038062) | 2.096504 / 5.269862 (-3.173358) | 1.339265 / 4.565676 (-3.226412) | 0.086670 / 0.424275 (-0.337605) | 0.012243 / 0.007607 (0.004636) | 0.524701 / 0.226044 (0.298657) | 5.240689 / 2.268929 (2.971760) | 2.473622 / 55.444624 (-52.971003) | 2.170568 / 6.876477 (-4.705909) | 2.289653 / 2.142072 (0.147581) | 0.848913 / 4.805227 (-3.956314) | 0.168332 / 6.500664 (-6.332332) | 0.064926 / 0.075469 (-0.010543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.193614 / 1.841788 (-0.648173) | 14.920403 / 8.074308 (6.846095) | 14.475059 / 10.191392 (4.283667) | 0.164458 / 0.680424 (-0.515966) | 0.017613 / 0.534201 (-0.516588) | 0.426311 / 0.579283 (-0.152972) | 0.431478 / 0.434364 (-0.002886) | 0.520280 / 0.540337 (-0.020057) | 0.627738 / 1.386936 (-0.759198) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007458 / 0.011353 (-0.003895) | 0.005363 / 0.011008 (-0.005645) | 0.076713 / 0.038508 (0.038205) | 0.034189 / 0.023109 (0.011079) | 0.359938 / 0.275898 (0.084040) | 0.395532 / 0.323480 (0.072052) | 0.005977 / 0.007986 (-0.002008) | 0.004263 / 0.004328 (-0.000065) | 0.075971 / 0.004250 (0.071721) | 0.051924 / 0.037052 (0.014871) | 0.362818 / 0.258489 (0.104329) | 0.409897 / 0.293841 (0.116056) | 0.035494 / 0.128546 (-0.093053) | 0.012399 / 0.075646 (-0.063247) | 0.088335 / 0.419271 (-0.330937) | 0.047968 / 0.043533 (0.004435) | 0.355744 / 0.255139 (0.100606) | 0.376339 / 0.283200 (0.093139) | 0.104542 / 0.141683 (-0.037141) | 1.464826 / 1.452155 (0.012672) | 1.600665 / 1.492716 (0.107948) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220841 / 0.018006 (0.202834) | 0.446444 / 0.000490 (0.445954) | 0.000392 / 0.000200 (0.000192) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029402 / 0.037411 (-0.008009) | 0.116511 / 0.014526 (0.101986) | 0.122959 / 0.176557 (-0.053598) | 0.171674 / 0.737135 (-0.565462) | 0.129871 / 0.296338 (-0.166468) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.450411 / 0.215209 (0.235202) | 4.471859 / 2.077655 (2.394205) | 2.229439 / 1.504120 (0.725319) | 2.053308 / 1.541195 (0.512114) | 2.142476 / 1.468490 (0.673986) | 0.708299 / 4.584777 (-3.876478) | 3.797830 / 3.745712 (0.052118) | 2.142509 / 5.269862 (-3.127352) | 1.333357 / 4.565676 (-3.232320) | 0.086837 / 0.424275 (-0.337439) | 0.012102 / 0.007607 (0.004495) | 0.548428 / 0.226044 (0.322384) | 5.490611 / 2.268929 (3.221682) | 2.713882 / 55.444624 (-52.730742) | 2.399638 / 6.876477 (-4.476839) | 2.481549 / 2.142072 (0.339477) | 0.839812 / 4.805227 (-3.965415) | 0.168890 / 6.500664 (-6.331774) | 0.065564 / 0.075469 (-0.009906) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.275507 / 1.841788 (-0.566281) | 14.896343 / 8.074308 (6.822035) | 13.159701 / 10.191392 (2.968309) | 0.172065 / 0.680424 (-0.508359) | 0.017507 / 0.534201 (-0.516694) | 0.420031 / 0.579283 (-0.159252) | 0.438835 / 0.434364 (0.004471) | 0.490597 / 0.540337 (-0.049741) | 0.583952 / 1.386936 (-0.802984) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#48c9755d0ae9abe4c4d6cd8c1ce76eff849f0e5c \"CML watermark\")\n"
] | 2023-05-12T19:19:27 | 2023-05-15T10:37:19 | 2023-05-15T10:30:14 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5853",
"html_url": "https://github.com/huggingface/datasets/pull/5853",
"diff_url": "https://github.com/huggingface/datasets/pull/5853.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5853.patch",
"merged_at": "2023-05-15T10:30:14"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5853/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5848/comments | https://api.github.com/repos/huggingface/datasets/issues/5848/events | https://github.com/huggingface/datasets/pull/5848 | 1,707,506,734 | PR_kwDODunzps5QYa1B | 5,848 | Add `accelerate` as metric's test dependency to fix CI error | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007565 / 0.011353 (-0.003788) | 0.005361 / 0.011008 (-0.005647) | 0.098963 / 0.038508 (0.060455) | 0.034271 / 0.023109 (0.011162) | 0.323421 / 0.275898 (0.047523) | 0.348495 / 0.323480 (0.025015) | 0.006244 / 0.007986 (-0.001741) | 0.004215 / 0.004328 (-0.000113) | 0.073614 / 0.004250 (0.069364) | 0.049334 / 0.037052 (0.012282) | 0.315277 / 0.258489 (0.056788) | 0.354325 / 0.293841 (0.060484) | 0.035001 / 0.128546 (-0.093545) | 0.012149 / 0.075646 (-0.063497) | 0.335614 / 0.419271 (-0.083657) | 0.050532 / 0.043533 (0.006999) | 0.308500 / 0.255139 (0.053361) | 0.324620 / 0.283200 (0.041421) | 0.110241 / 0.141683 (-0.031442) | 1.443923 / 1.452155 (-0.008232) | 1.559289 / 1.492716 (0.066573) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207629 / 0.018006 (0.189622) | 0.433251 / 0.000490 (0.432762) | 0.003021 / 0.000200 (0.002821) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028312 / 0.037411 (-0.009100) | 0.111829 / 0.014526 (0.097303) | 0.127099 / 0.176557 (-0.049458) | 0.184702 / 0.737135 (-0.552433) | 0.125062 / 0.296338 (-0.171277) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399451 / 0.215209 (0.184242) | 3.966528 / 2.077655 (1.888874) | 1.826004 / 1.504120 (0.321884) | 1.669547 / 1.541195 (0.128353) | 1.751584 / 1.468490 
(0.283094) | 0.688308 / 4.584777 (-3.896469) | 3.813275 / 3.745712 (0.067562) | 3.181554 / 5.269862 (-2.088307) | 1.750566 / 4.565676 (-2.815111) | 0.085038 / 0.424275 (-0.339237) | 0.011992 / 0.007607 (0.004385) | 0.502374 / 0.226044 (0.276330) | 4.970614 / 2.268929 (2.701686) | 2.309617 / 55.444624 (-53.135007) | 2.012427 / 6.876477 (-4.864050) | 2.156348 / 2.142072 (0.014276) | 0.834415 / 4.805227 (-3.970812) | 0.167912 / 6.500664 (-6.332752) | 0.065711 / 0.075469 (-0.009758) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223132 / 1.841788 (-0.618656) | 15.126753 / 8.074308 (7.052445) | 14.829184 / 10.191392 (4.637792) | 0.142582 / 0.680424 (-0.537842) | 0.017483 / 0.534201 (-0.516718) | 0.429768 / 0.579283 (-0.149516) | 0.422745 / 0.434364 (-0.011619) | 0.508813 / 0.540337 (-0.031525) | 0.618716 / 1.386936 (-0.768220) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007749 / 0.011353 (-0.003604) | 0.005433 / 0.011008 (-0.005576) | 0.076223 / 0.038508 (0.037715) | 0.036334 / 0.023109 (0.013225) | 0.375339 / 0.275898 (0.099441) | 0.413674 / 0.323480 (0.090194) | 0.006207 / 0.007986 (-0.001778) | 0.004085 / 0.004328 (-0.000244) | 0.076154 / 0.004250 (0.071904) | 0.050324 / 0.037052 (0.013271) | 0.382919 / 0.258489 (0.124429) | 0.442508 / 0.293841 (0.148667) | 0.035951 / 0.128546 (-0.092595) | 0.012067 / 0.075646 (-0.063580) | 0.087649 / 0.419271 (-0.331623) | 0.048786 / 0.043533 (0.005253) | 0.373541 / 0.255139 (0.118402) | 0.400437 / 0.283200 (0.117237) | 0.102622 / 0.141683 (-0.039061) | 1.472443 / 1.452155 (0.020288) | 1.580178 / 1.492716 (0.087462) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222105 / 0.018006 (0.204098) | 0.445465 / 0.000490 (0.444975) | 0.003671 / 0.000200 (0.003471) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030808 / 0.037411 (-0.006603) | 0.116687 / 0.014526 (0.102161) | 0.124972 / 0.176557 (-0.051584) | 0.175621 / 0.737135 (-0.561514) | 0.129029 / 0.296338 (-0.167310) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434627 / 0.215209 (0.219418) | 4.330268 / 2.077655 (2.252613) | 2.140266 / 1.504120 (0.636146) | 1.960705 / 1.541195 (0.419510) | 2.035949 / 1.468490 (0.567459) | 0.696830 / 4.584777 (-3.887947) | 3.790468 / 3.745712 (0.044756) | 3.194112 / 5.269862 (-2.075750) | 1.577728 / 4.565676 (-2.987948) | 0.085445 / 0.424275 (-0.338830) | 0.012207 / 0.007607 (0.004600) | 0.555199 / 0.226044 (0.329154) | 5.551539 / 2.268929 (3.282610) | 2.630917 / 55.444624 (-52.813707) | 2.383362 / 6.876477 (-4.493114) | 2.476301 / 2.142072 (0.334229) | 0.845773 / 4.805227 (-3.959455) | 0.169229 / 6.500664 (-6.331435) | 0.066064 / 0.075469 (-0.009405) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.277543 / 1.841788 (-0.564245) | 15.775637 / 8.074308 (7.701329) | 13.528588 / 10.191392 (3.337196) | 0.167428 / 0.680424 (-0.512996) | 0.017581 / 0.534201 (-0.516620) | 0.454472 / 0.579283 (-0.124811) | 0.427987 / 0.434364 (-0.006377) | 0.551512 / 0.540337 (0.011175) | 0.650811 / 1.386936 (-0.736125) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#96a6f5f526cc90330df597ae0097274742d5b84f \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009800 / 0.011353 (-0.001552) | 0.006443 / 0.011008 (-0.004565) | 0.144137 / 0.038508 (0.105629) | 0.037493 / 0.023109 (0.014383) | 0.482306 / 0.275898 (0.206408) | 0.467625 / 0.323480 (0.144145) | 0.006812 / 0.007986 (-0.001174) | 0.004810 / 0.004328 (0.000481) | 0.109047 / 0.004250 (0.104796) | 0.047169 / 0.037052 (0.010116) | 0.451253 / 0.258489 (0.192764) | 0.511339 / 0.293841 (0.217498) | 0.055583 / 0.128546 (-0.072963) | 0.021810 / 0.075646 (-0.053836) | 0.426522 / 0.419271 (0.007250) | 0.070282 / 0.043533 (0.026749) | 0.469631 / 0.255139 (0.214492) | 0.484951 / 0.283200 (0.201751) | 0.117370 / 0.141683 (-0.024313) | 1.809917 / 1.452155 (0.357763) | 1.882659 / 1.492716 (0.389943) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223843 / 0.018006 (0.205837) | 0.549216 / 0.000490 (0.548726) | 0.007120 / 0.000200 (0.006920) | 0.000128 / 0.000054 (0.000074) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033057 / 0.037411 (-0.004354) | 0.128242 / 0.014526 (0.113716) | 0.140906 / 0.176557 (-0.035650) | 0.213122 / 0.737135 (-0.524013) | 0.148115 / 0.296338 (-0.148224) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.638712 / 0.215209 (0.423503) | 6.383684 / 2.077655 (4.306029) | 2.477020 / 1.504120 (0.972900) | 2.129190 / 1.541195 (0.587996) | 2.230503 / 1.468490 
(0.762013) | 1.367167 / 4.584777 (-3.217610) | 5.570586 / 3.745712 (1.824873) | 5.462857 / 5.269862 (0.192996) | 2.990604 / 4.565676 (-1.575073) | 0.146543 / 0.424275 (-0.277732) | 0.016060 / 0.007607 (0.008453) | 0.812691 / 0.226044 (0.586646) | 7.928041 / 2.268929 (5.659112) | 3.329494 / 55.444624 (-52.115130) | 2.523452 / 6.876477 (-4.353025) | 2.672374 / 2.142072 (0.530302) | 1.598554 / 4.805227 (-3.206673) | 0.284727 / 6.500664 (-6.215937) | 0.080359 / 0.075469 (0.004889) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.501112 / 1.841788 (-0.340675) | 17.553644 / 8.074308 (9.479335) | 22.704062 / 10.191392 (12.512670) | 0.225575 / 0.680424 (-0.454849) | 0.026531 / 0.534201 (-0.507670) | 0.520129 / 0.579283 (-0.059154) | 0.626220 / 0.434364 (0.191856) | 0.631740 / 0.540337 (0.091403) | 0.750611 / 1.386936 (-0.636325) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009866 / 0.011353 (-0.001487) | 0.005733 / 0.011008 (-0.005275) | 0.111529 / 0.038508 (0.073021) | 0.042001 / 0.023109 (0.018891) | 0.458578 / 0.275898 (0.182680) | 0.507796 / 0.323480 (0.184316) | 0.006547 / 0.007986 (-0.001438) | 0.005611 / 0.004328 (0.001282) | 0.115321 / 0.004250 (0.111070) | 0.048741 / 0.037052 (0.011689) | 0.447611 / 0.258489 (0.189122) | 0.531830 / 0.293841 (0.237989) | 0.052176 / 0.128546 (-0.076370) | 0.022431 / 0.075646 (-0.053216) | 0.120709 / 0.419271 (-0.298562) | 0.067301 / 0.043533 (0.023769) | 0.460577 / 0.255139 (0.205438) | 0.497805 / 0.283200 (0.214605) | 0.121830 / 0.141683 (-0.019853) | 1.876436 / 1.452155 (0.424281) | 1.983491 / 1.492716 (0.490775) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230982 / 0.018006 (0.212976) | 0.540643 / 0.000490 (0.540153) | 0.004646 / 0.000200 (0.004446) | 0.000131 / 0.000054 (0.000077) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034230 / 0.037411 (-0.003181) | 0.136454 / 0.014526 (0.121928) | 0.143370 / 0.176557 (-0.033187) | 0.206752 / 0.737135 (-0.530384) | 0.148722 / 0.296338 (-0.147617) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.704667 / 0.215209 (0.489458) | 7.112079 / 2.077655 (5.034424) | 3.083916 / 1.504120 (1.579797) | 2.606388 / 1.541195 (1.065193) | 2.738505 / 1.468490 (1.270015) | 1.314897 / 4.584777 (-3.269880) | 5.764442 / 3.745712 (2.018729) | 3.491890 / 5.269862 (-1.777972) | 2.299983 / 4.565676 (-2.265693) | 0.169655 / 0.424275 (-0.254620) | 0.015251 / 0.007607 (0.007643) | 0.977230 / 0.226044 (0.751186) | 9.697773 / 2.268929 (7.428844) | 3.826928 / 55.444624 (-51.617697) | 3.108238 / 6.876477 (-3.768239) | 3.103242 / 2.142072 (0.961169) | 1.586645 / 4.805227 (-3.218582) | 0.287181 / 6.500664 (-6.213483) | 0.107332 / 0.075469 (0.031863) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.712710 / 1.841788 (-0.129077) | 19.169403 / 8.074308 (11.095095) | 21.777301 / 10.191392 (11.585909) | 0.216918 / 0.680424 (-0.463506) | 0.026551 / 0.534201 (-0.507650) | 0.570383 / 0.579283 (-0.008900) | 0.643885 / 0.434364 (0.209521) | 0.673906 / 0.540337 (0.133568) | 0.824573 / 1.386936 (-0.562363) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4ead18b6921c9576a3078d2fb685c38f1e1a4b8a \"CML watermark\")\n"
] | 2023-05-12T12:01:01 | 2023-05-12T13:48:47 | 2023-05-12T13:39:06 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5848",
"html_url": "https://github.com/huggingface/datasets/pull/5848",
"diff_url": "https://github.com/huggingface/datasets/pull/5848.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5848.patch",
"merged_at": "2023-05-12T13:39:06"
} | The `frugalscore` metric uses Transformers' `Trainer`, which (as of a recent release) requires `accelerate`.
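A minimal sketch of the change — the exact file and variable name are assumptions, not necessarily how the repo lays out its test extras:

```python
# setup.py (hypothetical excerpt): make accelerate available to the metrics
# test suite, since transformers.Trainer now imports it at runtime.
TESTS_REQUIRE = [
    "pytest",
    "transformers",
    "accelerate",  # needed by Trainer-based metrics such as frugalscore
]
```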
Fixes the following [CI error](https://github.com/huggingface/datasets/actions/runs/4950900048/jobs/8855148703?pr=5845). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5848/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5851 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5851/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5851/comments | https://api.github.com/repos/huggingface/datasets/issues/5851/events | https://github.com/huggingface/datasets/issues/5851 | 1,707,907,048 | I_kwDODunzps5lzJfo | 5,851 | Error message not clear in interleaving datasets | {
"login": "surya-narayanan",
"id": 17240858,
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surya-narayanan",
"html_url": "https://github.com/surya-narayanan",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-05-11T20:52:13 | 2023-05-23T10:32:59 | 2023-05-23T10:32:59 | NONE | null | null | null | ### System Info
standard env
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm trying to interleave the 'sciq', 'wiki', and 'pile-enron' datasets. I think the mistake I made was loading the train split of one of them differently from the others, so the resulting objects had different types, but the error message is not very helpful (a possible workaround is sketched after the traceback):
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[/home/suryahari/Vornoi/save_model_ops.py](https://vscode-remote+ssh-002dremote-002bthomsonlab-002d2-002ejamesgornet-002ecom.vscode-resource.vscode-cdn.net/home/suryahari/Vornoi/save_model_ops.py) in line 3
[41](file:///home/suryahari/Vornoi/save_model_ops.py?line=40) # %%
----> [43](file:///home/suryahari/Vornoi/save_model_ops.py?line=42) dataset = interleave_datasets(datasets, stopping_strategy="all_exhausted")
File [~/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py:124](https://vscode-remote+ssh-002dremote-002bthomsonlab-002d2-002ejamesgornet-002ecom.vscode-resource.vscode-cdn.net/home/suryahari/~/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py:124), in interleave_datasets(datasets, probabilities, seed, info, split, stopping_strategy)
[122](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=121) for dataset in datasets[1:]:
[123](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=122) if (map_style and not isinstance(dataset, Dataset)) or (iterable and not isinstance(dataset, IterableDataset)):
--> [124](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=123) raise ValueError(
[125](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=124) f"Unable to interleave a {type(datasets[0])} with a {type(dataset)}. Expected a list of Dataset objects or a list of IterableDataset objects."
[126](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=125) )
[127](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=126) if stopping_strategy not in ["first_exhausted", "all_exhausted"]:
[128](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=127) raise ValueError(f"{stopping_strategy} is not supported. Please enter a valid stopping_strategy.")
ValueError: Unable to interleave a with a . Expected a list of Dataset objects or a list of IterableDataset objects.
```
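A minimal sketch of the kind of fix I would expect to need (toy data; this is my own illustration rather than my actual script, and it assumes `Dataset.to_iterable_dataset()` from recent `datasets` releases): normalize every dataset to the same type before interleaving.
```python
from datasets import Dataset, interleave_datasets

ds_map = Dataset.from_dict({"text": ["a", "b", "c"]})                     # map-style Dataset
ds_iter = Dataset.from_dict({"text": ["x", "y"]}).to_iterable_dataset()  # IterableDataset

# Passing [ds_map, ds_iter] directly raises the ValueError above,
# so convert the map-style dataset first:
dataset = interleave_datasets(
    [ds_map.to_iterable_dataset(), ds_iter],
    stopping_strategy="all_exhausted",
)
```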
### Expected behavior
The error message should be clearer about which dataset types were mixed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5851/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5845/comments | https://api.github.com/repos/huggingface/datasets/issues/5845/events | https://github.com/huggingface/datasets/pull/5845 | 1,706,253,251 | PR_kwDODunzps5QUMjS | 5,845 | Add `date_format` param to the CSV reader | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007592 / 0.011353 (-0.003761) | 0.005223 / 0.011008 (-0.005786) | 0.110218 / 0.038508 (0.071710) | 0.027644 / 0.023109 (0.004534) | 0.335063 / 0.275898 (0.059165) | 0.347102 / 0.323480 (0.023623) | 0.005107 / 0.007986 (-0.002878) | 0.003932 / 0.004328 (-0.000396) | 0.086095 / 0.004250 (0.081845) | 0.034735 / 0.037052 (-0.002317) | 0.329029 / 0.258489 (0.070540) | 0.370282 / 0.293841 (0.076441) | 0.043040 / 0.128546 (-0.085507) | 0.019626 / 0.075646 (-0.056021) | 0.336452 / 0.419271 (-0.082819) | 0.070365 / 0.043533 (0.026832) | 0.326881 / 0.255139 (0.071742) | 0.354984 / 0.283200 (0.071785) | 0.102605 / 0.141683 (-0.039077) | 1.459161 / 1.452155 (0.007007) | 1.453599 / 1.492716 (-0.039117) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201021 / 0.018006 (0.183015) | 0.456415 / 0.000490 (0.455926) | 0.012349 / 0.000200 (0.012149) | 0.000115 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025199 / 0.037411 (-0.012213) | 0.098536 / 0.014526 (0.084010) | 0.107528 / 0.176557 (-0.069028) | 0.160492 / 0.737135 (-0.576643) | 0.108660 / 0.296338 (-0.187679) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.527020 / 0.215209 (0.311811) | 5.357635 / 2.077655 (3.279980) | 2.062930 / 1.504120 (0.558811) | 1.783009 / 1.541195 (0.241815) | 1.840225 / 1.468490 
(0.371735) | 1.074278 / 4.584777 (-3.510499) | 4.710533 / 3.745712 (0.964821) | 2.611202 / 5.269862 (-2.658660) | 1.885487 / 4.565676 (-2.680189) | 0.123201 / 0.424275 (-0.301074) | 0.013880 / 0.007607 (0.006273) | 0.636511 / 0.226044 (0.410467) | 6.516075 / 2.268929 (4.247146) | 2.710138 / 55.444624 (-52.734486) | 2.046606 / 6.876477 (-4.829871) | 2.085907 / 2.142072 (-0.056166) | 1.199489 / 4.805227 (-3.605738) | 0.211668 / 6.500664 (-6.288996) | 0.075436 / 0.075469 (-0.000033) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.219771 / 1.841788 (-0.622016) | 14.276215 / 8.074308 (6.201907) | 16.611529 / 10.191392 (6.420137) | 0.221091 / 0.680424 (-0.459333) | 0.024922 / 0.534201 (-0.509279) | 0.431906 / 0.579283 (-0.147377) | 0.518863 / 0.434364 (0.084499) | 0.515366 / 0.540337 (-0.024971) | 0.640411 / 1.386936 (-0.746525) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007955 / 0.011353 (-0.003398) | 0.004813 / 0.011008 (-0.006196) | 0.076508 / 0.038508 (0.038000) | 0.028137 / 0.023109 (0.005028) | 0.349609 / 0.275898 (0.073711) | 0.403588 / 0.323480 (0.080109) | 0.005456 / 0.007986 (-0.002530) | 0.005677 / 0.004328 (0.001349) | 0.076882 / 0.004250 (0.072632) | 0.039832 / 0.037052 (0.002779) | 0.351930 / 0.258489 (0.093440) | 0.390492 / 0.293841 (0.096651) | 0.045199 / 0.128546 (-0.083347) | 0.023945 / 0.075646 (-0.051701) | 0.091140 / 0.419271 (-0.328132) | 0.057728 / 0.043533 (0.014195) | 0.370663 / 0.255139 (0.115524) | 0.380649 / 0.283200 (0.097449) | 0.097017 / 0.141683 (-0.044666) | 1.362248 / 1.452155 (-0.089907) | 1.445699 / 1.492716 (-0.047018) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204207 / 0.018006 (0.186201) | 0.474471 / 0.000490 (0.473981) | 0.012187 / 0.000200 (0.011987) | 0.000151 / 0.000054 (0.000096) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023123 / 0.037411 (-0.014288) | 0.097547 / 0.014526 (0.083021) | 0.113877 / 0.176557 (-0.062679) | 0.158307 / 0.737135 (-0.578828) | 0.113876 / 0.296338 (-0.182462) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.519920 / 0.215209 (0.304711) | 5.384371 / 2.077655 (3.306716) | 2.263276 / 1.504120 (0.759156) | 1.960604 / 1.541195 (0.419409) | 2.022864 / 1.468490 (0.554374) | 1.015430 / 4.584777 (-3.569347) | 4.774426 / 3.745712 (1.028714) | 4.549598 / 5.269862 (-0.720264) | 2.412638 / 4.565676 (-2.153039) | 0.117983 / 0.424275 (-0.306292) | 0.013340 / 0.007607 (0.005733) | 0.639826 / 0.226044 (0.413782) | 6.491622 / 2.268929 (4.222693) | 2.946892 / 55.444624 (-52.497732) | 2.376393 / 6.876477 (-4.500084) | 2.285592 / 2.142072 (0.143519) | 1.185049 / 4.805227 (-3.620178) | 0.204127 / 6.500664 (-6.296537) | 0.070285 / 0.075469 (-0.005184) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.439736 / 1.841788 (-0.402052) | 14.852087 / 8.074308 (6.777779) | 15.675742 / 10.191392 (5.484350) | 0.206577 / 0.680424 (-0.473846) | 0.031688 / 0.534201 (-0.502513) | 0.471003 / 0.579283 (-0.108280) | 0.505449 / 0.434364 (0.071085) | 0.506114 / 0.540337 (-0.034224) | 0.583752 / 1.386936 (-0.803184) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d6fcff8a031db39cb31079bc1fa62ded6e35218c \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012965 / 0.011353 (0.001612) | 0.006660 / 0.011008 (-0.004348) | 0.126060 / 0.038508 (0.087551) | 0.041154 / 0.023109 (0.018045) | 0.413428 / 0.275898 (0.137530) | 0.429035 / 0.323480 (0.105555) | 0.006680 / 0.007986 (-0.001305) | 0.005063 / 0.004328 (0.000734) | 0.092161 / 0.004250 (0.087911) | 0.056092 / 0.037052 (0.019039) | 0.421460 / 0.258489 (0.162971) | 0.450291 / 0.293841 (0.156450) | 0.050820 / 0.128546 (-0.077726) | 0.021392 / 0.075646 (-0.054255) | 0.426915 / 0.419271 (0.007643) | 0.064908 / 0.043533 (0.021375) | 0.406769 / 0.255139 (0.151630) | 0.434344 / 0.283200 (0.151144) | 0.127967 / 0.141683 (-0.013716) | 1.922414 / 1.452155 (0.470260) | 1.940717 / 1.492716 (0.448000) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288024 / 0.018006 (0.270017) | 0.615859 / 0.000490 (0.615369) | 0.007095 / 0.000200 (0.006895) | 0.000160 / 0.000054 (0.000106) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028182 / 0.037411 (-0.009230) | 0.126277 / 0.014526 (0.111752) | 0.131687 / 0.176557 (-0.044870) | 0.206191 / 0.737135 (-0.530944) | 0.141799 / 0.296338 (-0.154539) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.631580 / 0.215209 (0.416371) | 6.141942 / 2.077655 (4.064287) | 2.476721 / 1.504120 (0.972602) | 2.128850 / 1.541195 (0.587655) | 2.236468 / 1.468490 
(0.767978) | 1.188665 / 4.584777 (-3.396112) | 5.481179 / 3.745712 (1.735467) | 3.120333 / 5.269862 (-2.149529) | 2.365889 / 4.565676 (-2.199787) | 0.145081 / 0.424275 (-0.279194) | 0.015866 / 0.007607 (0.008259) | 0.795650 / 0.226044 (0.569605) | 7.595289 / 2.268929 (5.326361) | 3.174418 / 55.444624 (-52.270207) | 2.905207 / 6.876477 (-3.971270) | 2.428263 / 2.142072 (0.286191) | 1.408900 / 4.805227 (-3.396328) | 0.265485 / 6.500664 (-6.235179) | 0.083882 / 0.075469 (0.008413) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.517025 / 1.841788 (-0.324762) | 18.110288 / 8.074308 (10.035980) | 20.810003 / 10.191392 (10.618611) | 0.210380 / 0.680424 (-0.470044) | 0.030180 / 0.534201 (-0.504021) | 0.523453 / 0.579283 (-0.055830) | 0.603896 / 0.434364 (0.169532) | 0.622554 / 0.540337 (0.082216) | 0.737973 / 1.386936 (-0.648963) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009795 / 0.011353 (-0.001558) | 0.006269 / 0.011008 (-0.004739) | 0.099938 / 0.038508 (0.061430) | 0.035162 / 0.023109 (0.012052) | 0.506353 / 0.275898 (0.230455) | 0.527804 / 0.323480 (0.204324) | 0.007211 / 0.007986 (-0.000775) | 0.005498 / 0.004328 (0.001169) | 0.098325 / 0.004250 (0.094075) | 0.054513 / 0.037052 (0.017461) | 0.525764 / 0.258489 (0.267274) | 0.576699 / 0.293841 (0.282858) | 0.052800 / 0.128546 (-0.075747) | 0.021192 / 0.075646 (-0.054454) | 0.117676 / 0.419271 (-0.301596) | 0.055415 / 0.043533 (0.011882) | 0.516746 / 0.255139 (0.261607) | 0.528417 / 0.283200 (0.245217) | 0.116947 / 0.141683 (-0.024735) | 1.757864 / 1.452155 (0.305709) | 2.043632 / 1.492716 (0.550916) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284018 / 0.018006 (0.266011) | 0.595086 / 0.000490 (0.594596) | 0.001945 / 0.000200 (0.001745) | 0.000127 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032255 / 0.037411 (-0.005157) | 0.128201 / 0.014526 (0.113676) | 0.139189 / 0.176557 (-0.037367) | 0.199750 / 0.737135 (-0.537385) | 0.149406 / 0.296338 (-0.146933) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.652184 / 0.215209 (0.436975) | 6.453319 / 2.077655 (4.375664) | 2.831566 / 1.504120 (1.327446) | 2.453064 / 1.541195 (0.911869) | 2.622056 / 1.468490 (1.153566) | 1.191279 / 4.584777 (-3.393498) | 5.504720 / 3.745712 (1.759007) | 5.916900 / 5.269862 (0.647038) | 2.974400 / 4.565676 (-1.591277) | 0.142851 / 0.424275 (-0.281424) | 0.015241 / 0.007607 (0.007634) | 0.917537 / 0.226044 (0.691493) | 8.277645 / 2.268929 (6.008717) | 3.700495 / 55.444624 (-51.744130) | 3.047127 / 6.876477 (-3.829350) | 3.093216 / 2.142072 (0.951143) | 1.413529 / 4.805227 (-3.391698) | 0.259395 / 6.500664 (-6.241270) | 0.083144 / 0.075469 (0.007675) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.632240 / 1.841788 (-0.209548) | 18.687403 / 8.074308 (10.613095) | 20.134091 / 10.191392 (9.942699) | 0.238792 / 0.680424 (-0.441632) | 0.027645 / 0.534201 (-0.506556) | 0.518200 / 0.579283 (-0.061083) | 0.613535 / 0.434364 (0.179171) | 0.631414 / 0.540337 (0.091076) | 0.724658 / 1.386936 (-0.662278) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ac7caa5e195ad76c7e8ef98914813383f4f668cf \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006228 / 0.011353 (-0.005125) | 0.004517 / 0.011008 (-0.006492) | 0.097998 / 0.038508 (0.059490) | 0.027903 / 0.023109 (0.004793) | 0.309789 / 0.275898 (0.033891) | 0.332784 / 0.323480 (0.009304) | 0.004757 / 0.007986 (-0.003228) | 0.003348 / 0.004328 (-0.000981) | 0.075193 / 0.004250 (0.070942) | 0.037382 / 0.037052 (0.000330) | 0.306929 / 0.258489 (0.048440) | 0.347304 / 0.293841 (0.053463) | 0.030235 / 0.128546 (-0.098312) | 0.011516 / 0.075646 (-0.064131) | 0.322249 / 0.419271 (-0.097023) | 0.044125 / 0.043533 (0.000592) | 0.303874 / 0.255139 (0.048735) | 0.326808 / 0.283200 (0.043608) | 0.088137 / 0.141683 (-0.053546) | 1.521426 / 1.452155 (0.069272) | 1.573823 / 1.492716 (0.081107) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203204 / 0.018006 (0.185197) | 0.402247 / 0.000490 (0.401757) | 0.003146 / 0.000200 (0.002946) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022955 / 0.037411 (-0.014456) | 0.096059 / 0.014526 (0.081533) | 0.105552 / 0.176557 (-0.071004) | 0.167459 / 0.737135 (-0.569676) | 0.106723 / 0.296338 (-0.189615) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.454626 / 0.215209 (0.239417) | 4.556346 / 2.077655 (2.478691) | 2.220349 / 1.504120 (0.716229) | 2.011820 / 1.541195 (0.470625) | 2.048149 / 1.468490 
(0.579659) | 0.697583 / 4.584777 (-3.887194) | 3.428394 / 3.745712 (-0.317318) | 1.863872 / 5.269862 (-3.405989) | 1.159691 / 4.565676 (-3.405985) | 0.082598 / 0.424275 (-0.341677) | 0.012202 / 0.007607 (0.004594) | 0.555617 / 0.226044 (0.329572) | 5.545481 / 2.268929 (3.276553) | 2.650850 / 55.444624 (-52.793775) | 2.305864 / 6.876477 (-4.570613) | 2.392252 / 2.142072 (0.250179) | 0.808512 / 4.805227 (-3.996716) | 0.152086 / 6.500664 (-6.348578) | 0.066440 / 0.075469 (-0.009029) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211789 / 1.841788 (-0.629999) | 13.515546 / 8.074308 (5.441238) | 13.859870 / 10.191392 (3.668478) | 0.150335 / 0.680424 (-0.530088) | 0.016578 / 0.534201 (-0.517623) | 0.379145 / 0.579283 (-0.200138) | 0.393735 / 0.434364 (-0.040628) | 0.460219 / 0.540337 (-0.080118) | 0.555896 / 1.386936 (-0.831040) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006402 / 0.011353 (-0.004950) | 0.004558 / 0.011008 (-0.006450) | 0.077332 / 0.038508 (0.038824) | 0.027955 / 0.023109 (0.004846) | 0.407877 / 0.275898 (0.131979) | 0.432552 / 0.323480 (0.109072) | 0.004850 / 0.007986 (-0.003135) | 0.003329 / 0.004328 (-0.000999) | 0.075767 / 0.004250 (0.071517) | 0.035940 / 0.037052 (-0.001112) | 0.419544 / 0.258489 (0.161055) | 0.454672 / 0.293841 (0.160831) | 0.030461 / 0.128546 (-0.098085) | 0.011536 / 0.075646 (-0.064111) | 0.085774 / 0.419271 (-0.333498) | 0.039408 / 0.043533 (-0.004125) | 0.389909 / 0.255139 (0.134770) | 0.403287 / 0.283200 (0.120088) | 0.088385 / 0.141683 (-0.053298) | 1.596840 / 1.452155 (0.144686) | 1.659296 / 1.492716 (0.166580) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216349 / 0.018006 (0.198342) | 0.394969 / 0.000490 (0.394479) | 0.000408 / 0.000200 (0.000208) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024346 / 0.037411 (-0.013066) | 0.099609 / 0.014526 (0.085084) | 0.106779 / 0.176557 (-0.069778) | 0.156889 / 0.737135 (-0.580247) | 0.110625 / 0.296338 (-0.185714) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443809 / 0.215209 (0.228600) | 4.450524 / 2.077655 (2.372870) | 2.151694 / 1.504120 (0.647574) | 1.952521 / 1.541195 (0.411326) | 1.963320 / 1.468490 (0.494830) | 0.709291 / 4.584777 (-3.875486) | 3.415708 / 3.745712 (-0.330005) | 1.850498 / 5.269862 (-3.419363) | 1.164355 / 4.565676 (-3.401321) | 0.084977 / 0.424275 (-0.339298) | 0.013284 / 0.007607 (0.005677) | 0.555103 / 0.226044 (0.329059) | 5.583587 / 2.268929 (3.314658) | 2.608754 / 55.444624 (-52.835870) | 2.264079 / 6.876477 (-4.612398) | 2.272455 / 2.142072 (0.130382) | 0.820849 / 4.805227 (-3.984379) | 0.155063 / 6.500664 (-6.345601) | 0.069709 / 0.075469 (-0.005760) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.293285 / 1.841788 (-0.548503) | 14.181867 / 8.074308 (6.107559) | 13.021280 / 10.191392 (2.829888) | 0.130101 / 0.680424 (-0.550323) | 0.016461 / 0.534201 (-0.517740) | 0.383651 / 0.579283 (-0.195632) | 0.387353 / 0.434364 (-0.047011) | 0.443351 / 0.540337 (-0.096986) | 0.529448 / 1.386936 (-0.857488) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#05145d50b5bb1b7b42b76516cd6492d4868c46ba \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007513 / 0.011353 (-0.003840) | 0.005328 / 0.011008 (-0.005680) | 0.096937 / 0.038508 (0.058429) | 0.036230 / 0.023109 (0.013121) | 0.325808 / 0.275898 (0.049910) | 0.363601 / 0.323480 (0.040121) | 0.006130 / 0.007986 (-0.001855) | 0.004352 / 0.004328 (0.000023) | 0.073543 / 0.004250 (0.069293) | 0.054114 / 0.037052 (0.017062) | 0.328952 / 0.258489 (0.070463) | 0.366943 / 0.293841 (0.073102) | 0.035768 / 0.128546 (-0.092778) | 0.012505 / 0.075646 (-0.063142) | 0.332260 / 0.419271 (-0.087012) | 0.066673 / 0.043533 (0.023140) | 0.323866 / 0.255139 (0.068727) | 0.341311 / 0.283200 (0.058112) | 0.129898 / 0.141683 (-0.011785) | 1.456890 / 1.452155 (0.004735) | 1.546933 / 1.492716 (0.054217) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.299236 / 0.018006 (0.281229) | 0.496134 / 0.000490 (0.495645) | 0.004233 / 0.000200 (0.004033) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028089 / 0.037411 (-0.009322) | 0.104723 / 0.014526 (0.090197) | 0.121032 / 0.176557 (-0.055525) | 0.179916 / 0.737135 (-0.557220) | 0.126628 / 0.296338 (-0.169711) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403497 / 0.215209 (0.188288) | 4.052481 / 2.077655 (1.974827) | 1.804419 / 1.504120 (0.300299) | 1.619833 / 1.541195 (0.078638) | 1.732438 / 1.468490 
(0.263948) | 0.702474 / 4.584777 (-3.882303) | 3.808973 / 3.745712 (0.063261) | 3.682764 / 5.269862 (-1.587098) | 1.919184 / 4.565676 (-2.646493) | 0.086638 / 0.424275 (-0.337637) | 0.012265 / 0.007607 (0.004658) | 0.501273 / 0.226044 (0.275229) | 5.010918 / 2.268929 (2.741989) | 2.278114 / 55.444624 (-53.166510) | 1.942266 / 6.876477 (-4.934211) | 2.101982 / 2.142072 (-0.040091) | 0.847622 / 4.805227 (-3.957606) | 0.172973 / 6.500664 (-6.327691) | 0.066884 / 0.075469 (-0.008586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.187609 / 1.841788 (-0.654179) | 15.089485 / 8.074308 (7.015177) | 14.787398 / 10.191392 (4.596006) | 0.168254 / 0.680424 (-0.512170) | 0.018266 / 0.534201 (-0.515935) | 0.423204 / 0.579283 (-0.156079) | 0.435238 / 0.434364 (0.000874) | 0.512473 / 0.540337 (-0.027864) | 0.618091 / 1.386936 (-0.768845) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007249 / 0.011353 (-0.004104) | 0.005297 / 0.011008 (-0.005711) | 0.076428 / 0.038508 (0.037920) | 0.033565 / 0.023109 (0.010456) | 0.373756 / 0.275898 (0.097858) | 0.407405 / 0.323480 (0.083925) | 0.006100 / 0.007986 (-0.001886) | 0.006482 / 0.004328 (0.002153) | 0.075884 / 0.004250 (0.071633) | 0.055338 / 0.037052 (0.018286) | 0.378721 / 0.258489 (0.120232) | 0.427065 / 0.293841 (0.133224) | 0.036285 / 0.128546 (-0.092261) | 0.012460 / 0.075646 (-0.063186) | 0.087641 / 0.419271 (-0.331630) | 0.048199 / 0.043533 (0.004666) | 0.386785 / 0.255139 (0.131646) | 0.386702 / 0.283200 (0.103503) | 0.110087 / 0.141683 (-0.031596) | 1.511204 / 1.452155 (0.059050) | 1.585671 / 1.492716 (0.092954) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313558 / 0.018006 (0.295552) | 0.496991 / 0.000490 (0.496501) | 0.001492 / 0.000200 (0.001292) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031814 / 0.037411 (-0.005597) | 0.113486 / 0.014526 (0.098960) | 0.125208 / 0.176557 (-0.051348) | 0.174469 / 0.737135 (-0.562666) | 0.131095 / 0.296338 (-0.165244) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439282 / 0.215209 (0.224073) | 4.362286 / 2.077655 (2.284631) | 2.153271 / 1.504120 (0.649151) | 1.990482 / 1.541195 (0.449288) | 2.103322 / 1.468490 (0.634831) | 0.692522 / 4.584777 (-3.892254) | 3.861931 / 3.745712 (0.116219) | 3.686294 / 5.269862 (-1.583567) | 1.734525 / 4.565676 (-2.831152) | 0.085057 / 0.424275 (-0.339218) | 0.012116 / 0.007607 (0.004509) | 0.547996 / 0.226044 (0.321952) | 5.513835 / 2.268929 (3.244906) | 2.723829 / 55.444624 (-52.720795) | 2.404715 / 6.876477 (-4.471761) | 2.514768 / 2.142072 (0.372696) | 0.834972 / 4.805227 (-3.970255) | 0.168261 / 6.500664 (-6.332403) | 0.066464 / 0.075469 (-0.009005) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259923 / 1.841788 (-0.581865) | 15.646277 / 8.074308 (7.571969) | 13.097598 / 10.191392 (2.906206) | 0.187991 / 0.680424 (-0.492433) | 0.017358 / 0.534201 (-0.516843) | 0.427979 / 0.579283 (-0.151304) | 0.425747 / 0.434364 (-0.008617) | 0.501907 / 0.540337 (-0.038431) | 0.595106 / 1.386936 (-0.791830) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#db56f7f0d2f0b99af4da17d388c205152504c7d9 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009378 / 0.011353 (-0.001975) | 0.006434 / 0.011008 (-0.004574) | 0.120603 / 0.038508 (0.082095) | 0.042929 / 0.023109 (0.019820) | 0.366853 / 0.275898 (0.090955) | 0.436795 / 0.323480 (0.113315) | 0.007730 / 0.007986 (-0.000256) | 0.004842 / 0.004328 (0.000513) | 0.091058 / 0.004250 (0.086808) | 0.058256 / 0.037052 (0.021203) | 0.378692 / 0.258489 (0.120203) | 0.467384 / 0.293841 (0.173543) | 0.042948 / 0.128546 (-0.085598) | 0.015172 / 0.075646 (-0.060475) | 0.409225 / 0.419271 (-0.010046) | 0.083672 / 0.043533 (0.040140) | 0.390088 / 0.255139 (0.134949) | 0.406965 / 0.283200 (0.123765) | 0.142132 / 0.141683 (0.000449) | 1.765737 / 1.452155 (0.313582) | 1.895419 / 1.492716 (0.402703) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244052 / 0.018006 (0.226046) | 0.553383 / 0.000490 (0.552893) | 0.006798 / 0.000200 (0.006598) | 0.000227 / 0.000054 (0.000173) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032032 / 0.037411 (-0.005380) | 0.129990 / 0.014526 (0.115464) | 0.140338 / 0.176557 (-0.036219) | 0.212155 / 0.737135 (-0.524980) | 0.147395 / 0.296338 (-0.148943) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478760 / 0.215209 (0.263551) | 4.751335 / 2.077655 (2.673680) | 2.164755 / 1.504120 (0.660635) | 1.944288 / 1.541195 (0.403094) | 2.077657 / 1.468490 
(0.609167) | 0.818519 / 4.584777 (-3.766258) | 4.689013 / 3.745712 (0.943301) | 2.484079 / 5.269862 (-2.785782) | 1.788632 / 4.565676 (-2.777044) | 0.100484 / 0.424275 (-0.323791) | 0.013838 / 0.007607 (0.006231) | 0.589650 / 0.226044 (0.363605) | 5.859461 / 2.268929 (3.590533) | 2.670025 / 55.444624 (-52.774599) | 2.688709 / 6.876477 (-4.187768) | 2.408060 / 2.142072 (0.265988) | 0.972107 / 4.805227 (-3.833120) | 0.194425 / 6.500664 (-6.306239) | 0.076077 / 0.075469 (0.000608) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.430150 / 1.841788 (-0.411638) | 17.710507 / 8.074308 (9.636199) | 16.210789 / 10.191392 (6.019397) | 0.163940 / 0.680424 (-0.516484) | 0.020295 / 0.534201 (-0.513906) | 0.472596 / 0.579283 (-0.106687) | 0.483107 / 0.434364 (0.048743) | 0.585269 / 0.540337 (0.044931) | 0.705526 / 1.386936 (-0.681410) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008864 / 0.011353 (-0.002489) | 0.006095 / 0.011008 (-0.004913) | 0.088702 / 0.038508 (0.050194) | 0.041596 / 0.023109 (0.018486) | 0.453515 / 0.275898 (0.177617) | 0.476217 / 0.323480 (0.152737) | 0.007574 / 0.007986 (-0.000412) | 0.004727 / 0.004328 (0.000398) | 0.087271 / 0.004250 (0.083021) | 0.059631 / 0.037052 (0.022578) | 0.449379 / 0.258489 (0.190890) | 0.494436 / 0.293841 (0.200595) | 0.043448 / 0.128546 (-0.085098) | 0.014580 / 0.075646 (-0.061067) | 0.103836 / 0.419271 (-0.315435) | 0.057537 / 0.043533 (0.014004) | 0.449359 / 0.255139 (0.194220) | 0.447577 / 0.283200 (0.164377) | 0.123600 / 0.141683 (-0.018083) | 1.748448 / 1.452155 (0.296294) | 1.902116 / 1.492716 (0.409399) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237214 / 0.018006 (0.219207) | 0.497648 / 0.000490 (0.497158) | 0.003519 / 0.000200 (0.003319) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034477 / 0.037411 (-0.002934) | 0.132627 / 0.014526 (0.118101) | 0.139721 / 0.176557 (-0.036836) | 0.195705 / 0.737135 (-0.541430) | 0.150762 / 0.296338 (-0.145577) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.521306 / 0.215209 (0.306097) | 5.184982 / 2.077655 (3.107328) | 2.503979 / 1.504120 (0.999859) | 2.301054 / 1.541195 (0.759860) | 2.352713 / 1.468490 (0.884222) | 0.819804 / 4.584777 (-3.764973) | 4.584011 / 3.745712 (0.838299) | 2.497311 / 5.269862 (-2.772550) | 1.561262 / 4.565676 (-3.004414) | 0.101814 / 0.424275 (-0.322461) | 0.014078 / 0.007607 (0.006471) | 0.666564 / 0.226044 (0.440520) | 6.616379 / 2.268929 (4.347450) | 3.263892 / 55.444624 (-52.180732) | 2.891774 / 6.876477 (-3.984703) | 2.945260 / 2.142072 (0.803188) | 1.014379 / 4.805227 (-3.790848) | 0.201762 / 6.500664 (-6.298902) | 0.078012 / 0.075469 (0.002543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.567808 / 1.841788 (-0.273980) | 19.096552 / 8.074308 (11.022244) | 15.522285 / 10.191392 (5.330893) | 0.226568 / 0.680424 (-0.453856) | 0.021078 / 0.534201 (-0.513123) | 0.501686 / 0.579283 (-0.077597) | 0.517575 / 0.434364 (0.083211) | 0.589685 / 0.540337 (0.049348) | 0.705053 / 1.386936 (-0.681883) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#db56f7f0d2f0b99af4da17d388c205152504c7d9 \"CML watermark\")\n"
] | 2023-05-11T17:29:57 | 2023-05-15T07:39:13 | 2023-05-12T15:14:48 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5845",
"html_url": "https://github.com/huggingface/datasets/pull/5845",
"diff_url": "https://github.com/huggingface/datasets/pull/5845.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5845.patch",
"merged_at": "2023-05-12T15:14:48"
} | Adds the `date_format` param introduced in Pandas 2.0 to the CSV reader and improves its type hints. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5845/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5841/comments | https://api.github.com/repos/huggingface/datasets/issues/5841/events | https://github.com/huggingface/datasets/issues/5841 | 1,705,286,639 | I_kwDODunzps5lpJvv | 5,841 | Absurdly slow on iteration | {
"login": "fecet",
"id": 41792945,
"node_id": "MDQ6VXNlcjQxNzkyOTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/41792945?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fecet",
"html_url": "https://github.com/fecet",
"followers_url": "https://api.github.com/users/fecet/followers",
"following_url": "https://api.github.com/users/fecet/following{/other_user}",
"gists_url": "https://api.github.com/users/fecet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fecet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fecet/subscriptions",
"organizations_url": "https://api.github.com/users/fecet/orgs",
"repos_url": "https://api.github.com/users/fecet/repos",
"events_url": "https://api.github.com/users/fecet/events{/privacy}",
"received_events_url": "https://api.github.com/users/fecet/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! You can try to use the [Image](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Image) type which [decodes images on-the-fly](https://huggingface.co/docs/datasets/v2.12.0/en/about_dataset_features#image-feature) into pytorch tensors :)\r\n\r\n```python\r\nds = Dataset.from_dict({\"tensor\":a}).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 5.04 s, sys: 96.5 ms, total: 5.14 s\r\n# Wall time: 5.14 s\r\n# 10000\r\n```\r\n\r\n```python\r\nfeatures = Features({\"tensor\": Image()})\r\nds = Dataset.from_dict({\"tensor\":a}, features=features).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 1.86 s, sys: 49 ms, total: 1.91 s\r\n# Wall time: 1.9 s\r\n# 10000\r\n```\r\n\r\n-> Speed x2.7\r\n\r\nAnd if you want to keep using arrays of integers, consider using the [Array2D](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Array2D) or [Array3D](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Array3D) types which are even faster (since it doesn't decode images):\r\n\r\n```python\r\nfeatures = Features({\"tensor\": Array2D(shape=(100, 224), dtype=\"float32\")})\r\nds = Dataset.from_dict({\"tensor\":a}, features=features).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 828 ms, sys: 68.4 ms, total: 896 ms\r\n# Wall time: 897 ms\r\n# 10000\r\n```\r\n\r\n-> Speed x5.7\r\n\r\nBatching also speeds up a lot\r\n\r\n```python\r\nfrom torch.utils.data import DataLoader\r\ndl = DataLoader(ds, batch_size=100)\r\n%time sum(1 for _ in dl)\r\n# CPU times: user 564 ms, sys: 83.5 ms, total: 648 ms\r\n# Wall time: 579 ms\r\n# 100\r\n```\r\n\r\n-> Speed x8.9\r\n\r\n```python\r\n%time sum(1 for _ in ds.iter(batch_size=100))\r\n# CPU times: user 119 ms, sys: 96.8 ms, total: 215 ms\r\n# Wall time: 117 ms\r\n# 100\r\n```\r\n\r\n-> Speed x46",
"Anyway, regarding the speed difference between numpy and pytorch, I think the issue is that we first convert numpy sub-arrays to pytorch and then consolidate into one tensor, while we should to the opposite. Indeed converting a numpy array to pytorch has a fix cost that seems to cause a slow down. The current pipeline is\r\n\r\n```\r\narrow -> nested numpy arrays -> lists of torch tensors -> one torch tensor\r\n```\r\n\r\nand we should do\r\n\r\n```\r\narrow -> nested numpy arrays -> one numpy array -> one torch tensor\r\n```",
"I have a similar issue: iterating over a dataset takes 5s without applying any transform, but takes ~30s after applying a transform.\r\nHere is the minimum code to reproduce the problem\r\n\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset, DatasetDict, load_dataset, Array3D, Image, Features\r\nfrom torch.utils.data import DataLoader\r\nfrom tqdm import tqdm\r\nimport torchvision \r\nfrom torchvision.transforms import ToTensor, Normalize\r\n\r\n\r\n#################################\r\n# Without transform\r\n#################################\r\n \r\ntrain_dataset = load_dataset(\r\n 'cifar100',\r\n split='train',\r\n use_auth_token=True,\r\n)\r\n\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"img\", \"fine_label\"])\r\n\r\ntrain_loader= DataLoader(\r\n train_dataset,\r\n batch_size=100,\r\n pin_memory=False,\r\n shuffle=True,\r\n num_workers=8,\r\n)\r\n\r\nfor batch in tqdm(train_loader, desc=\"Loading data, no transform\"):\r\n pass\r\n\r\n\r\n#################################\r\n# With transform\r\n#################################\r\n\r\ntransform_func = torchvision.transforms.Compose([\r\n ToTensor(), \r\n Normalize(mean=[0.485, 0.456, 0.406], std= [0.229, 0.224, 0.225]),] \r\n)\r\n \r\ntrain_dataset = train_dataset.map(\r\n desc=f\"Preprocessing samples\",\r\n function=lambda x: {\"img\": transform_func(x[\"img\"])},\r\n)\r\n\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"img\", \"fine_label\"])\r\n\r\n\r\ntrain_loader= DataLoader(\r\n train_dataset,\r\n batch_size=100,\r\n pin_memory=False,\r\n shuffle=True,\r\n num_workers=8,\r\n)\r\n\r\n\r\nfor batch in tqdm(train_loader, desc=\"Loading data after transform\"):\r\n pass \r\n```\r\n\r\nI have also tried converting the Image column to an Array3D\r\n```python\r\nimg_shape = train_dataset[0][\"img\"].shape\r\n\r\nfeatures = train_dataset.features.copy()\r\nfeatures[\"x\"] = Array3D(shape=img_shape, dtype=\"float32\")\r\n\r\ntrain_dataset = train_dataset.map(\r\n desc=f\"Preprocessing samples\",\r\n function=lambda x: {\"x\": np.array(x[\"img\"], dtype=np.uint8)},\r\n features=features,\r\n)\r\ntrain_dataset.cast_column(\"x\", Array3D(shape=img_shape, dtype=\"float32\"))\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"x\", \"fine_label\"])\r\n```\r\nbut to no avail. Any clue?",
"Thanks! I convert my dataset feature to Array3D and this speed became awesome!"
] | 2023-05-11T08:04:09 | 2023-05-15T15:38:13 | 2023-05-15T15:38:13 | NONE | null | null | null | ### Describe the bug
I am attempting to iterate through an image dataset, but I am encountering a significant slowdown in the iteration speed. In order to investigate this issue, I conducted the following experiment:
```python
import torch
from datasets import Dataset
from tqdm import tqdm

a = torch.randn(100, 224)
a = torch.stack([a] * 10000)
a.shape
# %%
ds = Dataset.from_dict({"tensor": a})
for i in tqdm(ds.with_format("numpy")):
    pass
for i in tqdm(ds.with_format("torch")):
    pass
```
I noticed that the dataset in numpy format performs significantly faster than the one in torch format. My hypothesis is that the dataset undergoes a transformation process of torch->python->numpy(torch) in the background, which might be causing the slowdown. Is there any way to expedite the process by bypassing such transformations?
Furthermore, if I increase the size of `a` to an image shape, for example:
```python
a = torch.randn(3, 224, 224)
```
the iteration speed becomes absurdly slow, around 100 iterations per second, whereas the speed with the numpy format is approximately 250 iterations per second. This level of speed would be unacceptable for large image datasets, as it could take several hours just to iterate through a single epoch.
### Steps to reproduce the bug
```python
import torch
from datasets import Dataset
from tqdm import tqdm

a = torch.randn(100, 224)
a = torch.stack([a] * 10000)
a.shape
# %%
ds = Dataset.from_dict({"tensor": a})
for i in tqdm(ds.with_format("numpy")):
    pass
for i in tqdm(ds.with_format("torch")):
    pass
```
### Expected behavior
Iteration should be faster.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.10
- Python version: 3.8.16
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5841/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5840/comments | https://api.github.com/repos/huggingface/datasets/issues/5840/events | https://github.com/huggingface/datasets/issues/5840 | 1,705,212,085 | I_kwDODunzps5lo3i1 | 5,840 | load model error. | {
"login": "LanShanPi",
"id": 58167546,
"node_id": "MDQ6VXNlcjU4MTY3NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/58167546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LanShanPi",
"html_url": "https://github.com/LanShanPi",
"followers_url": "https://api.github.com/users/LanShanPi/followers",
"following_url": "https://api.github.com/users/LanShanPi/following{/other_user}",
"gists_url": "https://api.github.com/users/LanShanPi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LanShanPi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LanShanPi/subscriptions",
"organizations_url": "https://api.github.com/users/LanShanPi/orgs",
"repos_url": "https://api.github.com/users/LanShanPi/repos",
"events_url": "https://api.github.com/users/LanShanPi/events{/privacy}",
"received_events_url": "https://api.github.com/users/LanShanPi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Please report this in the `transformers` repo, as it's not related to `datasets`"
] | 2023-05-11T07:12:38 | 2023-05-12T13:44:07 | 2023-05-12T13:44:06 | NONE | null | null | null | ### Describe the bug
I trained a model using DeepSpeed. When I load the final model, I get the following error:
OSError: Can't load tokenizer for '/XXX/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '/home/fm001/hzl/Project/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor' is the correct path to a directory containing all relevant files for a BloomTokenizerFast tokenizer.
My load command is: `python chat.py --path /XXX/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor/`
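A quick diagnostic sketch for this kind of error (the path is the one from the report; using `AutoTokenizer` here is an assumption about the loading script): the message usually means no tokenizer files were saved next to the model weights.

```python
# Hedged diagnostic: check whether tokenizer files actually exist in the
# checkpoint directory before calling from_pretrained.
import os

from transformers import AutoTokenizer

actor_dir = "/XXX/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor"
print(os.listdir(actor_dir))  # expect tokenizer_config.json and tokenizer files here

tokenizer = AutoTokenizer.from_pretrained(actor_dir)  # fails if those files are missing
```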
### Steps to reproduce the bug
...
### Expected behavior
...
### Environment info
...
"url": "https://api.github.com/repos/huggingface/datasets/issues/5840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5840/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5838/comments | https://api.github.com/repos/huggingface/datasets/issues/5838/events | https://github.com/huggingface/datasets/issues/5838 | 1,703,210,848 | I_kwDODunzps5lhO9g | 5,838 | Streaming support for `load_from_disk` | {
"login": "Nilabhra",
"id": 5437792,
"node_id": "MDQ6VXNlcjU0Mzc3OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5437792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nilabhra",
"html_url": "https://github.com/Nilabhra",
"followers_url": "https://api.github.com/users/Nilabhra/followers",
"following_url": "https://api.github.com/users/Nilabhra/following{/other_user}",
"gists_url": "https://api.github.com/users/Nilabhra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nilabhra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nilabhra/subscriptions",
"organizations_url": "https://api.github.com/users/Nilabhra/orgs",
"repos_url": "https://api.github.com/users/Nilabhra/repos",
"events_url": "https://api.github.com/users/Nilabhra/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nilabhra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"As the name says, `load_from_disk` load the data from your disk. If the data is hosted on S3, it is first downloaded locally and then loaded from your disk.\r\n\r\nThere is a discussion on streaming data from S3 here though: #5281 ",
"@lhoestq \r\nThanks for your comment. I have checked out the discussion before and attempted at replicating the mentioned changes in the main branch (#5580). What I found was that if a dataset is saved using `save_to_disk`, it cannot be read by `load_dataset`. The error message asks me to to use `load_from_disk` instead. What would be the correct way of saving the data in this scenario?",
"Using `push_to_hub` you can save the dataset on the HF Hub as parquet files, and reload it / stream it using `load_dataset` :)\r\n\r\nIf you want to save your dataset somewhere else you can use `.to_parquet` to get a parquet file. If your dataset is big it's usually recommended to shard it into multi parquet files (around 1GB each).",
"@lhoestq \r\nThanks for the explanation. Appreciate it. I'll try this out.",
"@lhoestq\r\nI tried the method you mentioned. This the current scenario I'm facing:\r\n\r\n- The parquet file can be read from disk and streaming can be enabled.\r\n- The parquet file can be read from `s3` (local MinIO).\r\n- When `streaming=True` is enabled for `s3`, I get the error mentioned below:\r\n\r\n```\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:502, in S3FileSystem.set_session(self, refresh, kwargs)\r\n 500 conf = AioConfig(**config_kwargs)\r\n 501 if self.session is None:\r\n--> 502 self.session = aiobotocore.session.AioSession(**self.kwargs)\r\n 504 for parameters in (config_kwargs, self.kwargs, init_kwargs, client_kwargs):\r\n 505 for option in (\"region_name\", \"endpoint_url\"):\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'headers'\r\n```\r\n\r\nDoes this mean there is a bug in the main branch?",
"Streaming from S3 is still experimental, there might be a few bugs unfortunately.\r\n\r\nCan you share the full stack trace ?",
"@lhoestq \r\nSure, here you go:\r\n\r\n```python\r\nTypeError Traceback (most recent call last)\r\nCell In[8], line 1\r\n----> 1 dataset = load_dataset(\"parquet\", data_files=[\"s3://<bucket name>/<data folder>/data-parquet\"], storage_options=fs.storage_options, streaming=True)\r\n\r\nFile ~/.../datasets/src/datasets/load.py:1790, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1788 # Return iterable dataset in case of streaming\r\n 1789 if streaming:\r\n-> 1790 return builder_instance.as_streaming_dataset(split=split)\r\n 1792 # Some datasets are already processed on the HF google storage\r\n 1793 # Don't try downloading from Google storage for the packaged datasets as text, json, csv or pandas\r\n 1794 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n\r\nFile ~/.../datasets/src/datasets/builder.py:1264, in DatasetBuilder.as_streaming_dataset(self, split, base_path)\r\n 1257 dl_manager = StreamingDownloadManager(\r\n 1258 base_path=base_path or self.base_path,\r\n 1259 download_config=DownloadConfig(use_auth_token=self.use_auth_token, storage_options=self.storage_options),\r\n 1260 dataset_name=self.name,\r\n 1261 data_dir=self.config.data_dir,\r\n 1262 )\r\n 1263 self._check_manual_download(dl_manager)\r\n-> 1264 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 1265 # By default, return all splits\r\n 1266 if split is None:\r\n\r\nFile ~/.../datasets/src/datasets/packaged_modules/parquet/parquet.py:34, in Parquet._split_generators(self, dl_manager)\r\n 32 if not self.config.data_files:\r\n 33 raise ValueError(f\"At least one data file must be specified, but got data_files={self.config.data_files}\")\r\n---> 34 data_files = dl_manager.download_and_extract(self.config.data_files)\r\n 35 if isinstance(data_files, (str, list, tuple)):\r\n 36 files = data_files\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:1087, in StreamingDownloadManager.download_and_extract(self, url_or_urls)\r\n 1069 def download_and_extract(self, url_or_urls):\r\n 1070 \"\"\"Prepare given `url_or_urls` for streaming (add extraction protocol).\r\n 1071 \r\n 1072 This is the lazy version of `DownloadManager.download_and_extract` for streaming.\r\n (...)\r\n 1085 url(s): (`str` or `list` or `dict`), URL(s) to stream data from matching the given input `url_or_urls`.\r\n 1086 \"\"\"\r\n-> 1087 return self.extract(self.download(url_or_urls))\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:1039, in StreamingDownloadManager.extract(self, url_or_urls)\r\n 1020 def extract(self, url_or_urls):\r\n 1021 \"\"\"Add extraction protocol for given url(s) for streaming.\r\n 1022 \r\n 1023 This is the lazy version of `DownloadManager.extract` for streaming.\r\n (...)\r\n 1037 ```\r\n 1038 \"\"\"\r\n-> 1039 urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)\r\n 1040 return urlpaths\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:443, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)\r\n 441 num_proc = 1\r\n 442 if num_proc <= 1 or len(iterable) < parallel_min_length:\r\n--> 443 mapped = [\r\n 444 _single_map_nested((function, obj, types, None, True, None))\r\n 445 for obj in logging.tqdm(iterable, 
disable=disable_tqdm, desc=desc)\r\n 446 ]\r\n 447 else:\r\n 448 num_proc = num_proc if num_proc <= len(iterable) else len(iterable)\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:444, in <listcomp>(.0)\r\n 441 num_proc = 1\r\n 442 if num_proc <= 1 or len(iterable) < parallel_min_length:\r\n 443 mapped = [\r\n--> 444 _single_map_nested((function, obj, types, None, True, None))\r\n 445 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 446 ]\r\n 447 else:\r\n 448 num_proc = num_proc if num_proc <= len(iterable) else len(iterable)\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:363, in _single_map_nested(args)\r\n 361 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 362 else:\r\n--> 363 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n 364 if isinstance(data_struct, list):\r\n 365 return mapped\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:363, in <listcomp>(.0)\r\n 361 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 362 else:\r\n--> 363 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n 364 if isinstance(data_struct, list):\r\n 365 return mapped\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:346, in _single_map_nested(args)\r\n 344 # Singleton first to spare some computation\r\n 345 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 346 return function(data_struct)\r\n 348 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n 349 if rank is not None and logging.get_verbosity() < logging.WARNING:\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:1044, in StreamingDownloadManager._extract(self, urlpath)\r\n 1042 def _extract(self, urlpath: str) -> str:\r\n 1043 urlpath = str(urlpath)\r\n-> 1044 protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n 1045 # get inner file: zip://train-00000.json.gz::https://foo.bar/data.zip -> zip://train-00000.json.gz\r\n 1046 path = urlpath.split(\"::\")[0]\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:433, in _get_extraction_protocol(urlpath, use_auth_token)\r\n 431 else:\r\n 432 urlpath, kwargs = urlpath, {}\r\n--> 433 with fsspec.open(urlpath, **kwargs) as f:\r\n 434 return _get_extraction_protocol_with_magic_number(f)\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/core.py:102, in OpenFile.__enter__(self)\r\n 99 def __enter__(self):\r\n 100 mode = self.mode.replace(\"t\", \"\").replace(\"b\", \"\") + \"b\"\r\n--> 102 f = self.fs.open(self.path, mode=mode)\r\n 104 self.fobjects = [f]\r\n 106 if self.compression is not None:\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/spec.py:1199, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)\r\n 1197 else:\r\n 1198 ac = kwargs.pop(\"autocommit\", not self._intrans)\r\n-> 1199 f = self._open(\r\n 1200 path,\r\n 1201 mode=mode,\r\n 1202 block_size=block_size,\r\n 1203 autocommit=ac,\r\n 1204 cache_options=cache_options,\r\n 1205 **kwargs,\r\n 1206 )\r\n 1207 if compression is not None:\r\n 1208 from fsspec.compression import compr\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:659, in S3FileSystem._open(self, path, mode, block_size, acl, version_id, fill_cache, cache_type, autocommit, requester_pays, cache_options, **kwargs)\r\n 656 if cache_type is None:\r\n 657 
cache_type = self.default_cache_type\r\n--> 659 return S3File(\r\n 660 self,\r\n 661 path,\r\n 662 mode,\r\n 663 block_size=block_size,\r\n 664 acl=acl,\r\n 665 version_id=version_id,\r\n 666 fill_cache=fill_cache,\r\n 667 s3_additional_kwargs=kw,\r\n 668 cache_type=cache_type,\r\n 669 autocommit=autocommit,\r\n 670 requester_pays=requester_pays,\r\n 671 cache_options=cache_options,\r\n 672 )\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:2043, in S3File.__init__(self, s3, path, mode, block_size, acl, version_id, fill_cache, s3_additional_kwargs, autocommit, cache_type, requester_pays, cache_options)\r\n 2041 self.details = s3.info(path)\r\n 2042 self.version_id = self.details.get(\"VersionId\")\r\n-> 2043 super().__init__(\r\n 2044 s3,\r\n 2045 path,\r\n 2046 mode,\r\n 2047 block_size,\r\n 2048 autocommit=autocommit,\r\n 2049 cache_type=cache_type,\r\n 2050 cache_options=cache_options,\r\n 2051 )\r\n 2052 self.s3 = self.fs # compatibility\r\n 2054 # when not using autocommit we want to have transactional state to manage\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/spec.py:1555, in AbstractBufferedFile.__init__(self, fs, path, mode, block_size, autocommit, cache_type, cache_options, size, **kwargs)\r\n 1553 self.size = size\r\n 1554 else:\r\n-> 1555 self.size = self.details[\"size\"]\r\n 1556 self.cache = caches[cache_type](\r\n 1557 self.blocksize, self._fetch_range, self.size, **cache_options\r\n 1558 )\r\n 1559 else:\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/spec.py:1568, in AbstractBufferedFile.details(self)\r\n 1565 @property\r\n 1566 def details(self):\r\n 1567 if self._details is None:\r\n-> 1568 self._details = self.fs.info(self.path)\r\n 1569 return self._details\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/asyn.py:115, in sync_wrapper.<locals>.wrapper(*args, **kwargs)\r\n 112 @functools.wraps(func)\r\n 113 def wrapper(*args, **kwargs):\r\n 114 self = obj or args[0]\r\n--> 115 return sync(self.loop, func, *args, **kwargs)\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/asyn.py:100, in sync(loop, func, timeout, *args, **kwargs)\r\n 98 raise FSTimeoutError from return_result\r\n 99 elif isinstance(return_result, BaseException):\r\n--> 100 raise return_result\r\n 101 else:\r\n 102 return return_result\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/asyn.py:55, in _runner(event, coro, result, timeout)\r\n 53 coro = asyncio.wait_for(coro, timeout=timeout)\r\n 54 try:\r\n---> 55 result[0] = await coro\r\n 56 except Exception as ex:\r\n 57 result[0] = ex\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:1248, in S3FileSystem._info(self, path, bucket, key, refresh, version_id)\r\n 1246 if key:\r\n 1247 try:\r\n-> 1248 out = await self._call_s3(\r\n 1249 \"head_object\",\r\n 1250 self.kwargs,\r\n 1251 Bucket=bucket,\r\n 1252 Key=key,\r\n 1253 **version_id_kw(version_id),\r\n 1254 **self.req_kw,\r\n 1255 )\r\n 1256 return {\r\n 1257 \"ETag\": out.get(\"ETag\", \"\"),\r\n 1258 \"LastModified\": out[\"LastModified\"],\r\n (...)\r\n 1264 \"ContentType\": out.get(\"ContentType\"),\r\n 1265 }\r\n 1266 except FileNotFoundError:\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:341, in S3FileSystem._call_s3(self, method, *akwarglist, **kwargs)\r\n 340 async def _call_s3(self, method, *akwarglist, **kwargs):\r\n--> 341 await self.set_session()\r\n 342 s3 = await self.get_s3(kwargs.get(\"Bucket\"))\r\n 343 method = getattr(s3, method)\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:502, in S3FileSystem.set_session(self, 
refresh, kwargs)\r\n 500 conf = AioConfig(**config_kwargs)\r\n 501 if self.session is None:\r\n--> 502 self.session = aiobotocore.session.AioSession(**self.kwargs)\r\n 504 for parameters in (config_kwargs, self.kwargs, init_kwargs, client_kwargs):\r\n 505 for option in (\"region_name\", \"endpoint_url\"):\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'headers'\r\n```",
"Is `\"data-parquet\"` a file ? In `data_files` you should pass the paths to the parquet files (not to a directory). Glob patterns are not supported yet for S3 URLs.\r\n\r\nThe bug seems to happen because your provided data file has no extension. Because of that it tries to infer it from the file content, but fails because `_get_extraction_protocol` doesn't support S3 URLs yet.\r\n\r\n",
"@lhoestq \r\nThank you for your answer. Saving the file with `.parquet` extension solved the issue! This is really great! Really appreciate all the help! \r\n\r\nLet me know if I should close the issue or feel free to close it if you want.",
"Cool ! I'm glad it worked out :)\r\n\r\nSure feel free to close the issue, since the original question about streaming with load_from_disk has been answered anyway"
] | 2023-05-10T06:25:22 | 2023-05-12T09:37:45 | 2023-05-12T09:37:45 | NONE | null | null | null | ### Feature request
Support for streaming datasets stored in object stores in `load_from_disk`.
### Motivation
The `load_from_disk` function supports fetching datasets stored in object stores such as `s3`. In many cases, datasets stored in object stores are very large, so being able to stream the data from the buckets becomes essential.
### Your contribution
I'd be happy to contribute this feature if I could get some guidance on how to do so.
"url": "https://api.github.com/repos/huggingface/datasets/issues/5838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5838/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5836 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5836/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5836/comments | https://api.github.com/repos/huggingface/datasets/issues/5836/events | https://github.com/huggingface/datasets/pull/5836 | 1,702,773,316 | PR_kwDODunzps5QIgzu | 5,836 | [docs] Custom decoding transforms | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5836). All of your documentation changes will be reflected on that endpoint.",
"The error seems unrelated to the changes, so feel free to merge.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006562 / 0.011353 (-0.004791) | 0.004568 / 0.011008 (-0.006440) | 0.098151 / 0.038508 (0.059643) | 0.028117 / 0.023109 (0.005008) | 0.305442 / 0.275898 (0.029544) | 0.338288 / 0.323480 (0.014808) | 0.005012 / 0.007986 (-0.002973) | 0.003415 / 0.004328 (-0.000913) | 0.075022 / 0.004250 (0.070771) | 0.036869 / 0.037052 (-0.000183) | 0.301427 / 0.258489 (0.042937) | 0.348485 / 0.293841 (0.054644) | 0.030761 / 0.128546 (-0.097785) | 0.011461 / 0.075646 (-0.064185) | 0.321987 / 0.419271 (-0.097285) | 0.042885 / 0.043533 (-0.000648) | 0.300691 / 0.255139 (0.045552) | 0.333208 / 0.283200 (0.050008) | 0.090203 / 0.141683 (-0.051480) | 1.459744 / 1.452155 (0.007590) | 1.522960 / 1.492716 (0.030243) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213219 / 0.018006 (0.195213) | 0.408118 / 0.000490 (0.407629) | 0.003716 / 0.000200 (0.003516) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023060 / 0.037411 (-0.014351) | 0.097423 / 0.014526 (0.082897) | 0.103988 / 0.176557 (-0.072568) | 0.162793 / 0.737135 (-0.574343) | 0.108282 / 0.296338 (-0.188056) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431628 / 0.215209 (0.216419) | 4.300881 / 2.077655 (2.223226) | 2.058853 / 1.504120 (0.554733) | 1.897910 / 1.541195 (0.356715) | 1.991723 / 1.468490 
(0.523233) | 0.699686 / 4.584777 (-3.885091) | 3.395004 / 3.745712 (-0.350708) | 1.841613 / 5.269862 (-3.428248) | 1.152347 / 4.565676 (-3.413330) | 0.082517 / 0.424275 (-0.341758) | 0.012323 / 0.007607 (0.004715) | 0.535812 / 0.226044 (0.309767) | 5.374103 / 2.268929 (3.105174) | 2.429662 / 55.444624 (-53.014962) | 2.097199 / 6.876477 (-4.779277) | 2.172625 / 2.142072 (0.030552) | 0.810156 / 4.805227 (-3.995071) | 0.151629 / 6.500664 (-6.349035) | 0.066528 / 0.075469 (-0.008941) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.220667 / 1.841788 (-0.621121) | 13.696976 / 8.074308 (5.622668) | 14.042916 / 10.191392 (3.851524) | 0.129626 / 0.680424 (-0.550798) | 0.016593 / 0.534201 (-0.517607) | 0.383747 / 0.579283 (-0.195536) | 0.386872 / 0.434364 (-0.047492) | 0.456524 / 0.540337 (-0.083813) | 0.545033 / 1.386936 (-0.841903) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006361 / 0.011353 (-0.004992) | 0.004516 / 0.011008 (-0.006493) | 0.077155 / 0.038508 (0.038647) | 0.027239 / 0.023109 (0.004130) | 0.359892 / 0.275898 (0.083994) | 0.391994 / 0.323480 (0.068514) | 0.004950 / 0.007986 (-0.003036) | 0.003379 / 0.004328 (-0.000949) | 0.077057 / 0.004250 (0.072806) | 0.039562 / 0.037052 (0.002509) | 0.364244 / 0.258489 (0.105755) | 0.416033 / 0.293841 (0.122192) | 0.031049 / 0.128546 (-0.097497) | 0.011479 / 0.075646 (-0.064167) | 0.086479 / 0.419271 (-0.332793) | 0.039381 / 0.043533 (-0.004151) | 0.372143 / 0.255139 (0.117004) | 0.388569 / 0.283200 (0.105369) | 0.090954 / 0.141683 (-0.050728) | 1.540957 / 1.452155 (0.088802) | 1.596841 / 1.492716 (0.104125) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221130 / 0.018006 (0.203123) | 0.403728 / 0.000490 (0.403238) | 0.003172 / 0.000200 (0.002972) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024963 / 0.037411 (-0.012449) | 0.101065 / 0.014526 (0.086539) | 0.110846 / 0.176557 (-0.065710) | 0.158578 / 0.737135 (-0.578557) | 0.112235 / 0.296338 (-0.184104) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457320 / 0.215209 (0.242111) | 4.548094 / 2.077655 (2.470439) | 2.175376 / 1.504120 (0.671256) | 1.964755 / 1.541195 (0.423561) | 2.008128 / 1.468490 (0.539638) | 0.702448 / 4.584777 (-3.882329) | 3.437595 / 3.745712 (-0.308117) | 3.009871 / 5.269862 (-2.259990) | 1.558181 / 4.565676 (-3.007496) | 0.082568 / 0.424275 (-0.341707) | 0.012371 / 0.007607 (0.004764) | 0.550688 / 0.226044 (0.324644) | 5.534210 / 2.268929 (3.265282) | 2.649605 / 55.444624 (-52.795020) | 2.317293 / 6.876477 (-4.559184) | 2.351525 / 2.142072 (0.209453) | 0.808971 / 4.805227 (-3.996256) | 0.152737 / 6.500664 (-6.347927) | 0.068416 / 0.075469 (-0.007053) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.340219 / 1.841788 (-0.501569) | 13.903388 / 8.074308 (5.829080) | 13.063477 / 10.191392 (2.872085) | 0.130216 / 0.680424 (-0.550208) | 0.016522 / 0.534201 (-0.517679) | 0.398946 / 0.579283 (-0.180337) | 0.382450 / 0.434364 (-0.051914) | 0.491007 / 0.540337 (-0.049330) | 0.577747 / 1.386936 (-0.809189) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#15c37ed142e4fbcb8c00ae62d4c71c84ce41959a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007812 / 0.011353 (-0.003541) | 0.005563 / 0.011008 (-0.005446) | 0.099372 / 0.038508 (0.060864) | 0.035629 / 0.023109 (0.012520) | 0.301457 / 0.275898 (0.025559) | 0.339136 / 0.323480 (0.015656) | 0.006152 / 0.007986 (-0.001834) | 0.005843 / 0.004328 (0.001515) | 0.075280 / 0.004250 (0.071030) | 0.052789 / 0.037052 (0.015736) | 0.301805 / 0.258489 (0.043316) | 0.347918 / 0.293841 (0.054078) | 0.036182 / 0.128546 (-0.092364) | 0.012655 / 0.075646 (-0.062991) | 0.334428 / 0.419271 (-0.084844) | 0.062746 / 0.043533 (0.019213) | 0.296932 / 0.255139 (0.041793) | 0.314115 / 0.283200 (0.030916) | 0.121291 / 0.141683 (-0.020392) | 1.453252 / 1.452155 (0.001097) | 1.564714 / 1.492716 (0.071997) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243810 / 0.018006 (0.225804) | 0.547129 / 0.000490 (0.546640) | 0.004666 / 0.000200 (0.004466) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028214 / 0.037411 (-0.009197) | 0.108878 / 0.014526 (0.094352) | 0.122313 / 0.176557 (-0.054243) | 0.182412 / 0.737135 (-0.554723) | 0.127014 / 0.296338 (-0.169324) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423946 / 0.215209 (0.208737) | 4.207112 / 2.077655 (2.129457) | 2.048658 / 1.504120 (0.544538) | 1.843593 / 1.541195 (0.302398) | 1.952426 / 1.468490 
(0.483936) | 0.712098 / 4.584777 (-3.872679) | 3.824971 / 3.745712 (0.079258) | 3.507141 / 5.269862 (-1.762721) | 1.868866 / 4.565676 (-2.696810) | 0.087895 / 0.424275 (-0.336380) | 0.012783 / 0.007607 (0.005176) | 0.524087 / 0.226044 (0.298042) | 5.246498 / 2.268929 (2.977570) | 2.495944 / 55.444624 (-52.948680) | 2.126779 / 6.876477 (-4.749698) | 2.315545 / 2.142072 (0.173472) | 0.859546 / 4.805227 (-3.945681) | 0.173457 / 6.500664 (-6.327208) | 0.067483 / 0.075469 (-0.007986) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.173851 / 1.841788 (-0.667937) | 15.091913 / 8.074308 (7.017605) | 14.640035 / 10.191392 (4.448643) | 0.168498 / 0.680424 (-0.511926) | 0.017513 / 0.534201 (-0.516688) | 0.425770 / 0.579283 (-0.153513) | 0.434248 / 0.434364 (-0.000116) | 0.504204 / 0.540337 (-0.036134) | 0.616885 / 1.386936 (-0.770051) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007775 / 0.011353 (-0.003578) | 0.005153 / 0.011008 (-0.005855) | 0.075461 / 0.038508 (0.036953) | 0.034994 / 0.023109 (0.011885) | 0.372389 / 0.275898 (0.096491) | 0.397911 / 0.323480 (0.074431) | 0.006572 / 0.007986 (-0.001413) | 0.005549 / 0.004328 (0.001220) | 0.075101 / 0.004250 (0.070851) | 0.054014 / 0.037052 (0.016962) | 0.368964 / 0.258489 (0.110475) | 0.425353 / 0.293841 (0.131512) | 0.035546 / 0.128546 (-0.093001) | 0.012707 / 0.075646 (-0.062939) | 0.087418 / 0.419271 (-0.331853) | 0.046425 / 0.043533 (0.002893) | 0.363982 / 0.255139 (0.108843) | 0.376421 / 0.283200 (0.093221) | 0.105369 / 0.141683 (-0.036314) | 1.494408 / 1.452155 (0.042253) | 1.596783 / 1.492716 (0.104067) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.258780 / 0.018006 (0.240773) | 0.533373 / 0.000490 (0.532883) | 0.000432 / 0.000200 (0.000232) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030687 / 0.037411 (-0.006725) | 0.110231 / 0.014526 (0.095705) | 0.123738 / 0.176557 (-0.052819) | 0.171999 / 0.737135 (-0.565137) | 0.127673 / 0.296338 (-0.168665) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448058 / 0.215209 (0.232849) | 4.459381 / 2.077655 (2.381726) | 2.234020 / 1.504120 (0.729900) | 2.038616 / 1.541195 (0.497421) | 2.123795 / 1.468490 (0.655305) | 0.702664 / 4.584777 (-3.882113) | 3.837133 / 3.745712 (0.091420) | 2.138574 / 5.269862 (-3.131287) | 1.375955 / 4.565676 (-3.189722) | 0.086996 / 0.424275 (-0.337280) | 0.012461 / 0.007607 (0.004854) | 0.557978 / 0.226044 (0.331934) | 5.648613 / 2.268929 (3.379685) | 2.777829 / 55.444624 (-52.666796) | 2.392424 / 6.876477 (-4.484052) | 2.482823 / 2.142072 (0.340750) | 0.851891 / 4.805227 (-3.953336) | 0.171335 / 6.500664 (-6.329329) | 0.065041 / 0.075469 (-0.010428) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.319697 / 1.841788 (-0.522091) | 15.748688 / 8.074308 (7.674380) | 13.397042 / 10.191392 (3.205650) | 0.166424 / 0.680424 (-0.514000) | 0.017755 / 0.534201 (-0.516446) | 0.424989 / 0.579283 (-0.154294) | 0.424705 / 0.434364 (-0.009659) | 0.494190 / 0.540337 (-0.046147) | 0.588315 / 1.386936 (-0.798622) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#15c37ed142e4fbcb8c00ae62d4c71c84ce41959a \"CML watermark\")\n"
] | 2023-05-09T21:21:41 | 2023-05-15T07:36:12 | 2023-05-10T20:23:03 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5836",
"html_url": "https://github.com/huggingface/datasets/pull/5836",
"diff_url": "https://github.com/huggingface/datasets/pull/5836.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5836.patch",
"merged_at": "2023-05-10T20:23:03"
} | Adds custom decoding transform solution to the docs to fix #5782. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5836/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5836/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5835/comments | https://api.github.com/repos/huggingface/datasets/issues/5835/events | https://github.com/huggingface/datasets/pull/5835 | 1,702,522,620 | PR_kwDODunzps5QHquR | 5,835 | Always set nullable fields in the writer | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006640 / 0.011353 (-0.004713) | 0.004606 / 0.011008 (-0.006402) | 0.098870 / 0.038508 (0.060362) | 0.028201 / 0.023109 (0.005092) | 0.304396 / 0.275898 (0.028498) | 0.339804 / 0.323480 (0.016324) | 0.005011 / 0.007986 (-0.002974) | 0.003530 / 0.004328 (-0.000799) | 0.075223 / 0.004250 (0.070973) | 0.037922 / 0.037052 (0.000870) | 0.310273 / 0.258489 (0.051784) | 0.348324 / 0.293841 (0.054483) | 0.030181 / 0.128546 (-0.098365) | 0.011584 / 0.075646 (-0.064062) | 0.322637 / 0.419271 (-0.096635) | 0.043119 / 0.043533 (-0.000414) | 0.314514 / 0.255139 (0.059375) | 0.334384 / 0.283200 (0.051185) | 0.092551 / 0.141683 (-0.049132) | 1.496694 / 1.452155 (0.044539) | 1.555426 / 1.492716 (0.062710) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205078 / 0.018006 (0.187072) | 0.399200 / 0.000490 (0.398710) | 0.004881 / 0.000200 (0.004681) | 0.000200 / 0.000054 (0.000146) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025042 / 0.037411 (-0.012369) | 0.101501 / 0.014526 (0.086975) | 0.107430 / 0.176557 (-0.069127) | 0.170107 / 0.737135 (-0.567028) | 0.111253 / 0.296338 (-0.185086) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.460358 / 0.215209 (0.245149) | 4.592037 / 2.077655 (2.514383) | 2.222612 / 1.504120 (0.718493) | 2.022804 / 1.541195 (0.481610) | 2.040824 / 1.468490 
(0.572334) | 0.700485 / 4.584777 (-3.884292) | 3.427847 / 3.745712 (-0.317866) | 2.836916 / 5.269862 (-2.432946) | 1.505055 / 4.565676 (-3.060621) | 0.083206 / 0.424275 (-0.341069) | 0.046492 / 0.007607 (0.038885) | 0.555562 / 0.226044 (0.329518) | 5.563574 / 2.268929 (3.294645) | 2.635273 / 55.444624 (-52.809351) | 2.299377 / 6.876477 (-4.577100) | 2.394512 / 2.142072 (0.252440) | 0.809541 / 4.805227 (-3.995686) | 0.151814 / 6.500664 (-6.348850) | 0.067241 / 0.075469 (-0.008228) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188396 / 1.841788 (-0.653392) | 13.714596 / 8.074308 (5.640288) | 14.076906 / 10.191392 (3.885514) | 0.143447 / 0.680424 (-0.536977) | 0.016514 / 0.534201 (-0.517687) | 0.383075 / 0.579283 (-0.196209) | 0.386997 / 0.434364 (-0.047367) | 0.441941 / 0.540337 (-0.098396) | 0.522145 / 1.386936 (-0.864791) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006266 / 0.011353 (-0.005086) | 0.004562 / 0.011008 (-0.006446) | 0.077472 / 0.038508 (0.038964) | 0.027596 / 0.023109 (0.004486) | 0.400498 / 0.275898 (0.124600) | 0.406728 / 0.323480 (0.083248) | 0.004745 / 0.007986 (-0.003241) | 0.003375 / 0.004328 (-0.000954) | 0.076645 / 0.004250 (0.072394) | 0.037756 / 0.037052 (0.000703) | 0.415183 / 0.258489 (0.156694) | 0.413758 / 0.293841 (0.119917) | 0.030624 / 0.128546 (-0.097922) | 0.011525 / 0.075646 (-0.064121) | 0.086033 / 0.419271 (-0.333238) | 0.039307 / 0.043533 (-0.004226) | 0.418192 / 0.255139 (0.163053) | 0.403152 / 0.283200 (0.119952) | 0.094141 / 0.141683 (-0.047542) | 1.459012 / 1.452155 (0.006857) | 1.546493 / 1.492716 (0.053777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239494 / 0.018006 (0.221488) | 0.420918 / 0.000490 (0.420428) | 0.000411 / 0.000200 (0.000211) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024525 / 0.037411 (-0.012886) | 0.099793 / 0.014526 (0.085267) | 0.105888 / 0.176557 (-0.070669) | 0.155912 / 0.737135 (-0.581223) | 0.109937 / 0.296338 (-0.186401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.470108 / 0.215209 (0.254899) | 4.696390 / 2.077655 (2.618735) | 2.467841 / 1.504120 (0.963721) | 2.275012 / 1.541195 (0.733818) | 2.430736 / 1.468490 (0.962245) | 0.700442 / 4.584777 (-3.884335) | 3.458451 / 3.745712 (-0.287261) | 1.921120 / 5.269862 (-3.348742) | 1.183292 / 4.565676 (-3.382384) | 0.083985 / 0.424275 (-0.340290) | 0.012510 / 0.007607 (0.004903) | 0.589066 / 0.226044 (0.363022) | 5.896070 / 2.268929 (3.627141) | 2.935379 / 55.444624 (-52.509245) | 2.599524 / 6.876477 (-4.276953) | 2.663426 / 2.142072 (0.521354) | 0.812096 / 4.805227 (-3.993131) | 0.152559 / 6.500664 (-6.348105) | 0.066906 / 0.075469 (-0.008563) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.333341 / 1.841788 (-0.508446) | 14.441667 / 8.074308 (6.367359) | 14.754069 / 10.191392 (4.562677) | 0.155707 / 0.680424 (-0.524716) | 0.016983 / 0.534201 (-0.517218) | 0.389386 / 0.579283 (-0.189897) | 0.394106 / 0.434364 (-0.040258) | 0.447355 / 0.540337 (-0.092982) | 0.533142 / 1.386936 (-0.853794) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#99ee4467ce77f8f718159a535e237dd8790b5bed \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007801 / 0.011353 (-0.003552) | 0.004884 / 0.011008 (-0.006124) | 0.114754 / 0.038508 (0.076245) | 0.040427 / 0.023109 (0.017318) | 0.402064 / 0.275898 (0.126166) | 0.428830 / 0.323480 (0.105350) | 0.006429 / 0.007986 (-0.001556) | 0.004394 / 0.004328 (0.000066) | 0.087681 / 0.004250 (0.083431) | 0.053684 / 0.037052 (0.016632) | 0.399967 / 0.258489 (0.141478) | 0.445298 / 0.293841 (0.151457) | 0.033194 / 0.128546 (-0.095352) | 0.010288 / 0.075646 (-0.065359) | 0.390719 / 0.419271 (-0.028552) | 0.059311 / 0.043533 (0.015778) | 0.393651 / 0.255139 (0.138512) | 0.418395 / 0.283200 (0.135196) | 0.121494 / 0.141683 (-0.020189) | 1.735470 / 1.452155 (0.283315) | 1.820485 / 1.492716 (0.327769) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012887 / 0.018006 (-0.005119) | 0.491652 / 0.000490 (0.491162) | 0.005481 / 0.000200 (0.005281) | 0.000127 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030931 / 0.037411 (-0.006480) | 0.125212 / 0.014526 (0.110686) | 0.136004 / 0.176557 (-0.040552) | 0.201686 / 0.737135 (-0.535449) | 0.140181 / 0.296338 (-0.156157) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.475003 / 0.215209 (0.259794) | 4.743918 / 2.077655 (2.666263) | 2.149422 / 1.504120 (0.645302) | 1.925016 / 1.541195 (0.383821) | 2.061441 / 1.468490 
(0.592951) | 0.619845 / 4.584777 (-3.964932) | 4.534691 / 3.745712 (0.788979) | 2.248198 / 5.269862 (-3.021664) | 1.409868 / 4.565676 (-3.155808) | 0.080265 / 0.424275 (-0.344010) | 0.014455 / 0.007607 (0.006848) | 0.597810 / 0.226044 (0.371765) | 5.845492 / 2.268929 (3.576564) | 2.729139 / 55.444624 (-52.715486) | 2.313879 / 6.876477 (-4.562598) | 2.418763 / 2.142072 (0.276690) | 0.748687 / 4.805227 (-4.056540) | 0.165278 / 6.500664 (-6.335387) | 0.076848 / 0.075469 (0.001379) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.416349 / 1.841788 (-0.425439) | 17.440903 / 8.074308 (9.366595) | 17.025733 / 10.191392 (6.834341) | 0.167428 / 0.680424 (-0.512995) | 0.020484 / 0.534201 (-0.513717) | 0.470273 / 0.579283 (-0.109010) | 0.494380 / 0.434364 (0.060016) | 0.566131 / 0.540337 (0.025794) | 0.690444 / 1.386936 (-0.696492) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007695 / 0.011353 (-0.003657) | 0.005551 / 0.011008 (-0.005457) | 0.087812 / 0.038508 (0.049304) | 0.039107 / 0.023109 (0.015998) | 0.436461 / 0.275898 (0.160563) | 0.465116 / 0.323480 (0.141636) | 0.006590 / 0.007986 (-0.001396) | 0.004672 / 0.004328 (0.000343) | 0.087109 / 0.004250 (0.082858) | 0.054227 / 0.037052 (0.017175) | 0.442660 / 0.258489 (0.184171) | 0.484296 / 0.293841 (0.190455) | 0.033308 / 0.128546 (-0.095238) | 0.010780 / 0.075646 (-0.064866) | 0.095255 / 0.419271 (-0.324016) | 0.054399 / 0.043533 (0.010866) | 0.431734 / 0.255139 (0.176595) | 0.453583 / 0.283200 (0.170383) | 0.116067 / 0.141683 (-0.025616) | 1.780701 / 1.452155 (0.328546) | 1.851077 / 1.492716 (0.358360) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228000 / 0.018006 (0.209994) | 0.485733 / 0.000490 (0.485243) | 0.003955 / 0.000200 (0.003755) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033974 / 0.037411 (-0.003437) | 0.134504 / 0.014526 (0.119978) | 0.144421 / 0.176557 (-0.032135) | 0.202171 / 0.737135 (-0.534964) | 0.152015 / 0.296338 (-0.144323) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.520462 / 0.215209 (0.305253) | 5.233339 / 2.077655 (3.155684) | 2.575013 / 1.504120 (1.070893) | 2.384119 / 1.541195 (0.842924) | 2.403856 / 1.468490 (0.935366) | 0.618656 / 4.584777 (-3.966121) | 4.663582 / 3.745712 (0.917870) | 3.738594 / 5.269862 (-1.531268) | 1.794903 / 4.565676 (-2.770773) | 0.077903 / 0.424275 (-0.346372) | 0.014681 / 0.007607 (0.007074) | 0.648615 / 0.226044 (0.422570) | 6.503721 / 2.268929 (4.234792) | 3.326239 / 55.444624 (-52.118386) | 2.989791 / 6.876477 (-3.886685) | 2.995479 / 2.142072 (0.853407) | 0.765483 / 4.805227 (-4.039744) | 0.169783 / 6.500664 (-6.330882) | 0.077533 / 0.075469 (0.002064) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.518736 / 1.841788 (-0.323051) | 17.989119 / 8.074308 (9.914811) | 15.484365 / 10.191392 (5.292973) | 0.168507 / 0.680424 (-0.511917) | 0.020289 / 0.534201 (-0.513912) | 0.467491 / 0.579283 (-0.111793) | 0.501714 / 0.434364 (0.067350) | 0.553418 / 0.540337 (0.013081) | 0.662199 / 1.386936 (-0.724737) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5ebda17e4362bd5f6123543a14fa526a3b54481a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007044 / 0.011353 (-0.004309) | 0.004750 / 0.011008 (-0.006258) | 0.096694 / 0.038508 (0.058186) | 0.035682 / 0.023109 (0.012573) | 0.300613 / 0.275898 (0.024715) | 0.334831 / 0.323480 (0.011351) | 0.006428 / 0.007986 (-0.001558) | 0.004456 / 0.004328 (0.000128) | 0.075060 / 0.004250 (0.070810) | 0.053166 / 0.037052 (0.016114) | 0.299601 / 0.258489 (0.041112) | 0.359521 / 0.293841 (0.065680) | 0.028072 / 0.128546 (-0.100474) | 0.009216 / 0.075646 (-0.066430) | 0.328895 / 0.419271 (-0.090377) | 0.050881 / 0.043533 (0.007349) | 0.298265 / 0.255139 (0.043126) | 0.318095 / 0.283200 (0.034896) | 0.116046 / 0.141683 (-0.025637) | 1.491312 / 1.452155 (0.039157) | 1.556053 / 1.492716 (0.063337) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.014248 / 0.018006 (-0.003758) | 0.551455 / 0.000490 (0.550965) | 0.006096 / 0.000200 (0.005897) | 0.000145 / 0.000054 (0.000091) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030598 / 0.037411 (-0.006813) | 0.109549 / 0.014526 (0.095023) | 0.123207 / 0.176557 (-0.053350) | 0.181940 / 0.737135 (-0.555195) | 0.128965 / 0.296338 (-0.167374) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404552 / 0.215209 (0.189343) | 4.030674 / 2.077655 (1.953020) | 1.841819 / 1.504120 (0.337699) | 1.650055 / 1.541195 (0.108860) | 1.763208 / 1.468490 
(0.294718) | 0.532715 / 4.584777 (-4.052062) | 3.774810 / 3.745712 (0.029098) | 3.221927 / 5.269862 (-2.047934) | 1.607974 / 4.565676 (-2.957702) | 0.067160 / 0.424275 (-0.357116) | 0.012479 / 0.007607 (0.004872) | 0.498801 / 0.226044 (0.272757) | 4.980567 / 2.268929 (2.711638) | 2.356017 / 55.444624 (-53.088608) | 2.018975 / 6.876477 (-4.857502) | 2.218343 / 2.142072 (0.076270) | 0.645714 / 4.805227 (-4.159514) | 0.145470 / 6.500664 (-6.355195) | 0.065666 / 0.075469 (-0.009803) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.205756 / 1.841788 (-0.636031) | 15.682779 / 8.074308 (7.608470) | 14.748987 / 10.191392 (4.557595) | 0.167105 / 0.680424 (-0.513319) | 0.017554 / 0.534201 (-0.516647) | 0.393924 / 0.579283 (-0.185359) | 0.432659 / 0.434364 (-0.001705) | 0.502033 / 0.540337 (-0.038304) | 0.602244 / 1.386936 (-0.784692) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007077 / 0.011353 (-0.004276) | 0.004911 / 0.011008 (-0.006097) | 0.075120 / 0.038508 (0.036612) | 0.035460 / 0.023109 (0.012351) | 0.362569 / 0.275898 (0.086671) | 0.398995 / 0.323480 (0.075515) | 0.006587 / 0.007986 (-0.001398) | 0.004571 / 0.004328 (0.000242) | 0.074647 / 0.004250 (0.070397) | 0.057331 / 0.037052 (0.020279) | 0.365123 / 0.258489 (0.106634) | 0.408617 / 0.293841 (0.114776) | 0.028911 / 0.128546 (-0.099635) | 0.009533 / 0.075646 (-0.066113) | 0.081566 / 0.419271 (-0.337705) | 0.048841 / 0.043533 (0.005308) | 0.367245 / 0.255139 (0.112106) | 0.375975 / 0.283200 (0.092776) | 0.123211 / 0.141683 (-0.018472) | 1.471588 / 1.452155 (0.019433) | 1.569342 / 1.492716 (0.076625) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.328443 / 0.018006 (0.310436) | 0.541402 / 0.000490 (0.540912) | 0.000440 / 0.000200 (0.000240) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030772 / 0.037411 (-0.006639) | 0.115833 / 0.014526 (0.101307) | 0.127837 / 0.176557 (-0.048719) | 0.180897 / 0.737135 (-0.556238) | 0.132458 / 0.296338 (-0.163881) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445979 / 0.215209 (0.230770) | 4.453101 / 2.077655 (2.375447) | 2.276625 / 1.504120 (0.772505) | 2.102167 / 1.541195 (0.560972) | 2.181583 / 1.468490 (0.713093) | 0.525069 / 4.584777 (-4.059708) | 3.803446 / 3.745712 (0.057734) | 1.954173 / 5.269862 (-3.315688) | 1.088734 / 4.565676 (-3.476942) | 0.066020 / 0.424275 (-0.358255) | 0.012158 / 0.007607 (0.004551) | 0.546828 / 0.226044 (0.320783) | 5.454060 / 2.268929 (3.185132) | 2.756154 / 55.444624 (-52.688470) | 2.476501 / 6.876477 (-4.399976) | 2.525875 / 2.142072 (0.383803) | 0.647515 / 4.805227 (-4.157712) | 0.144511 / 6.500664 (-6.356153) | 0.067060 / 0.075469 (-0.008409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306456 / 1.841788 (-0.535332) | 15.822623 / 8.074308 (7.748315) | 14.929114 / 10.191392 (4.737721) | 0.168650 / 0.680424 (-0.511773) | 0.018043 / 0.534201 (-0.516158) | 0.396712 / 0.579283 (-0.182572) | 0.425800 / 0.434364 (-0.008564) | 0.466452 / 0.540337 (-0.073885) | 0.564370 / 1.386936 (-0.822566) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5ebda17e4362bd5f6123543a14fa526a3b54481a \"CML watermark\")\n"
] | 2023-05-09T18:16:59 | 2023-05-23T16:10:29 | 2023-05-19T13:04:30 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5835",
"html_url": "https://github.com/huggingface/datasets/pull/5835",
"diff_url": "https://github.com/huggingface/datasets/pull/5835.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5835.patch",
"merged_at": "2023-05-19T13:04:30"
} | This fixes loading of e.g. parquet data with non-nullable fields.
Indeed `datasets.Features` doesn't support non-nullable fields, which can lead to data that is not concatenable due to an Arrow schema mismatch. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5835/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5834/comments | https://api.github.com/repos/huggingface/datasets/issues/5834/events | https://github.com/huggingface/datasets/issues/5834 | 1,702,448,892 | I_kwDODunzps5leU78 | 5,834 | Is uint8 supported? | {
"login": "Ryou0634",
"id": 17979572,
"node_id": "MDQ6VXNlcjE3OTc5NTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/17979572?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ryou0634",
"html_url": "https://github.com/Ryou0634",
"followers_url": "https://api.github.com/users/Ryou0634/followers",
"following_url": "https://api.github.com/users/Ryou0634/following{/other_user}",
"gists_url": "https://api.github.com/users/Ryou0634/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ryou0634/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ryou0634/subscriptions",
"organizations_url": "https://api.github.com/users/Ryou0634/orgs",
"repos_url": "https://api.github.com/users/Ryou0634/repos",
"events_url": "https://api.github.com/users/Ryou0634/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ryou0634/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! The numpy formatting detaults to int64 and float32 - but you can use uint8 using\r\n```python\r\nds = ds.with_format(\"numpy\", dtype=np.uint8)\r\n```",
"Related to https://github.com/huggingface/datasets/issues/5517.",
"Thank you!\r\nBy setting `ds.with_format(\"numpy\", dtype=np.uint8)`, the dataset returns the data in `uint8`.\r\n\r\nHowever, `with_format` and `set_format` seem to cast the data on-the-fly.\r\nI want to reduce the dataset size by using `uint8` instead of `int64` and I observe no difference between using `int64` and `uint8` for the vector.\r\nIs there any way to actually store the data in `uint8` and save the disk space and the downloading time when loaded from the hub?\r\n",
"If the feature type is `Value(\"uint8\")` then it's written an uint8 on disk using the uint8 Arrow dtype.\r\n\r\ne.g.\r\n```python\r\nds = Dataset.from_dict({\"a\": range(10)}, features=Features({\"a\": Value(\"uint8\")}))\r\nds.data.nbytes\r\n# 10\r\n```",
"Oh, I understand now.\r\nThe data was stored in `uint8` from the beginning (when the dataset returns `int64`).\r\n\r\nThank you for your time!\r\nMy question is fully resolved."
] | 2023-05-09T17:31:13 | 2023-05-13T05:04:21 | 2023-05-13T05:04:21 | NONE | null | null | null | ### Describe the bug
I expect the dataset to store the data in the `uint8` data type, but it's returning `int64` instead.
While I've found that `datasets` doesn't yet support float16 (https://github.com/huggingface/datasets/issues/4981), I'm wondering if this is the case for other data types as well.
Is there a way to store vector data as `uint8` and then upload it to the hub?
### Steps to reproduce the bug
```python
from datasets import Features, Dataset, Sequence, Value
import numpy as np
dataset = Dataset.from_dict(
{"vector": [np.array([0, 1, 2], dtype=np.uint8)]}, features=Features({"vector": Sequence(Value("uint8"))})
).with_format("numpy")
print(dataset[0]["vector"].dtype)
```
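For context, here is a minimal sketch based on the maintainers' comments above (not part of the original report): the Arrow storage dtype is `uint8`, while the numpy formatting defaults to `int64` unless a `dtype` is requested explicitly.
```python
import numpy as np
from datasets import Dataset, Features, Sequence, Value

# Build the same tiny dataset; the Arrow column is stored as uint8.
ds = Dataset.from_dict(
    {"vector": [np.array([0, 1, 2], dtype=np.uint8)]},
    features=Features({"vector": Sequence(Value("uint8"))}),
)
print(ds.data.nbytes)                                              # small: uint8 storage
print(ds.with_format("numpy")[0]["vector"].dtype)                  # int64 (formatting default)
print(ds.with_format("numpy", dtype=np.uint8)[0]["vector"].dtype)  # uint8
```
A dataset saved or pushed to the Hub with this schema should keep the `uint8` Arrow/Parquet dtype, so the storage savings are preserved even though plain numpy formatting returns `int64`.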
### Expected behavior
Expected: `uint8`
Actual: `int64`
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-12.1-x86_64-i386-64bit
- Python version: 3.8.12
- Huggingface_hub version: 0.12.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5834/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5832 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5832/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5832/comments | https://api.github.com/repos/huggingface/datasets/issues/5832/events | https://github.com/huggingface/datasets/issues/5832 | 1,702,135,336 | I_kwDODunzps5ldIYo | 5,832 | 404 Client Error: Not Found for url: https://huggingface.co/api/models/bert-large-cased | {
"login": "varungupta31",
"id": 51288316,
"node_id": "MDQ6VXNlcjUxMjg4MzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/51288316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/varungupta31",
"html_url": "https://github.com/varungupta31",
"followers_url": "https://api.github.com/users/varungupta31/followers",
"following_url": "https://api.github.com/users/varungupta31/following{/other_user}",
"gists_url": "https://api.github.com/users/varungupta31/gists{/gist_id}",
"starred_url": "https://api.github.com/users/varungupta31/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varungupta31/subscriptions",
"organizations_url": "https://api.github.com/users/varungupta31/orgs",
"repos_url": "https://api.github.com/users/varungupta31/repos",
"events_url": "https://api.github.com/users/varungupta31/events{/privacy}",
"received_events_url": "https://api.github.com/users/varungupta31/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"moved to https://github.com/huggingface/transformers/issues/23233"
] | 2023-05-09T14:14:59 | 2023-05-09T14:25:59 | 2023-05-09T14:25:59 | NONE | null | null | null | ### Describe the bug
Running the [Bert-Large-Cased](https://huggingface.co/bert-large-cased) model causes an `HTTPError`, with the following traceback:
```
HTTPError Traceback (most recent call last)
<ipython-input-6-5c580443a1ad> in <module>
----> 1 tokenizer = BertTokenizer.from_pretrained('bert-large-cased')
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1646 # At this point pretrained_model_name_or_path is either a directory or a model identifier name
1647 fast_tokenizer_file = get_fast_tokenizer_file(
-> 1648 pretrained_model_name_or_path, revision=revision, use_auth_token=use_auth_token
1649 )
1650 additional_files_names = {
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in get_fast_tokenizer_file(path_or_repo, revision, use_auth_token)
3406 """
3407 # Inspect all files from the repo/folder.
-> 3408 all_files = get_list_of_files(path_or_repo, revision=revision, use_auth_token=use_auth_token)
3409 tokenizer_files_map = {}
3410 for file_name in all_files:
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/file_utils.py in get_list_of_files(path_or_repo, revision, use_auth_token)
1685 token = None
1686 model_info = HfApi(endpoint=HUGGINGFACE_CO_RESOLVE_ENDPOINT).model_info(
-> 1687 path_or_repo, revision=revision, token=token
1688 )
1689 return [f.rfilename for f in model_info.siblings]
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/huggingface_hub/hf_api.py in model_info(self, repo_id, revision, token)
246 )
247 r = requests.get(path, headers=headers)
--> 248 r.raise_for_status()
249 d = r.json()
250 return ModelInfo(**d)
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/requests/models.py in raise_for_status(self)
951
952 if http_error_msg:
--> 953 raise HTTPError(http_error_msg, response=self)
954
955 def close(self):
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/bert-large-cased
```
I have also tried running in offline mode, as [discussed here](https://huggingface.co/docs/transformers/installation#offline-mode)
```
HF_DATASETS_OFFLINE=1
TRANSFORMERS_OFFLINE=1
```
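As a side note (an assumption on my part, not something from the original report): these flags are read when the libraries are imported, so they have to be exported before the process starts, or set before any `transformers`/`datasets` import, e.g.:
```python
# Hedged sketch: set the offline flags before importing transformers/datasets.
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import BertTokenizer

# With the flags set, this should resolve from the local cache without HTTP calls.
tokenizer = BertTokenizer.from_pretrained("bert-large-cased")
```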
### Steps to reproduce the bug
1. `from transformers import BertTokenizer, BertModel`
2. `tokenizer = BertTokenizer.from_pretrained('bert-large-cased')`
### Expected behavior
Run without the HTTP error.
### Environment info
| # Name | Version | Build | Channel | |
|--------------------|------------|-----------------------------|---------|---|
| _libgcc_mutex | 0.1 | main | | |
| _openmp_mutex | 4.5 | 1_gnu | | |
| _pytorch_select | 0.1 | cpu_0 | | |
| appdirs | 1.4.4 | pypi_0 | pypi | |
| backcall | 0.2.0 | pypi_0 | pypi | |
| blas | 1.0 | mkl | | |
| bzip2 | 1.0.8 | h7b6447c_0 | | |
| ca-certificates | 2021.7.5 | h06a4308_1 | | |
| certifi | 2021.5.30 | py37h06a4308_0 | | |
| cffi | 1.14.6 | py37h400218f_0 | | |
| charset-normalizer | 2.0.3 | pypi_0 | pypi | |
| click | 8.0.1 | pypi_0 | pypi | |
| colorama | 0.4.4 | pypi_0 | pypi | |
| cudatoolkit | 11.1.74 | h6bb024c_0 | nvidia | |
| cycler | 0.11.0 | pypi_0 | pypi | |
| decorator | 5.0.9 | pypi_0 | pypi | |
| docker-pycreds | 0.4.0 | pypi_0 | pypi | |
| docopt | 0.6.2 | pypi_0 | pypi | |
| dominate | 2.6.0 | pypi_0 | pypi | |
| ffmpeg | 4.3 | hf484d3e_0 | pytorch | |
| filelock | 3.0.12 | pypi_0 | pypi | |
| fonttools | 4.38.0 | pypi_0 | pypi | |
| freetype | 2.10.4 | h5ab3b9f_0 | | |
| gitdb | 4.0.7 | pypi_0 | pypi | |
| gitpython | 3.1.18 | pypi_0 | pypi | |
| gmp | 6.2.1 | h2531618_2 | | |
| gnutls | 3.6.15 | he1e5248_0 | | |
| huggingface-hub | 0.0.12 | pypi_0 | pypi | |
| humanize | 3.10.0 | pypi_0 | pypi | |
| idna | 3.2 | pypi_0 | pypi | |
| importlib-metadata | 4.6.1 | pypi_0 | pypi | |
| intel-openmp | 2019.4 | 243 | | |
| ipdb | 0.13.9 | pypi_0 | pypi | |
| ipython | 7.25.0 | pypi_0 | pypi | |
| ipython-genutils | 0.2.0 | pypi_0 | pypi | |
| jedi | 0.18.0 | pypi_0 | pypi | |
| joblib | 1.0.1 | pypi_0 | pypi | |
| jpeg | 9b | h024ee3a_2 | | |
| jsonpickle | 1.5.2 | pypi_0 | pypi | |
| kiwisolver | 1.4.4 | pypi_0 | pypi | |
| lame | 3.100 | h7b6447c_0 | | |
| lcms2 | 2.12 | h3be6417_0 | | |
| ld_impl_linux-64 | 2.35.1 | h7274673_9 | | |
| libffi | 3.3 | he6710b0_2 | | |
| libgcc-ng | 9.3.0 | h5101ec6_17 | | |
| libgomp | 9.3.0 | h5101ec6_17 | | |
| libiconv | 1.15 | h63c8f33_5 | | |
| libidn2 | 2.3.2 | h7f8727e_0 | | |
| libmklml | 2019.0.5 | 0 | | |
| libpng | 1.6.37 | hbc83047_0 | | |
| libstdcxx-ng | 9.3.0 | hd4cf53a_17 | | |
| libtasn1 | 4.16.0 | h27cfd23_0 | | |
| libtiff | 4.2.0 | h85742a9_0 | | |
| libunistring | 0.9.10 | h27cfd23_0 | | |
| libuv | 1.40.0 | h7b6447c_0 | | |
| libwebp-base | 1.2.0 | h27cfd23_0 | | |
| lz4-c | 1.9.3 | h2531618_0 | | |
| matplotlib | 3.5.3 | pypi_0 | pypi | |
| matplotlib-inline | 0.1.2 | pypi_0 | pypi | |
| mergedeep | 1.3.4 | pypi_0 | pypi | |
| mkl | 2020.2 | 256 | | |
| mkl-service | 2.3.0 | py37he8ac12f_0 | | |
| mkl_fft | 1.3.0 | py37h54f3939_0 | | |
| mkl_random | 1.1.1 | py37h0573a6f_0 | | |
| msgpack | 1.0.2 | pypi_0 | pypi | |
| munch | 2.5.0 | pypi_0 | pypi | |
| ncurses | 6.2 | he6710b0_1 | | |
| nettle | 3.7.3 | hbbd107a_1 | | |
| ninja | 1.10.2 | hff7bd54_1 | | |
| nltk | 3.8.1 | pypi_0 | pypi | |
| numpy | 1.19.2 | py37h54aff64_0 | | |
| numpy-base | 1.19.2 | py37hfa32c7d_0 | | |
| olefile | 0.46 | py37_0 | | |
| openh264 | 2.1.0 | hd408876_0 | | |
| openjpeg | 2.3.0 | h05c96fa_1 | | |
| openssl | 1.1.1k | h27cfd23_0 | | |
| packaging | 21.0 | pypi_0 | pypi | |
| pandas | 1.3.1 | pypi_0 | pypi | |
| parso | 0.8.2 | pypi_0 | pypi | |
| pathtools | 0.1.2 | pypi_0 | pypi | |
| pexpect | 4.8.0 | pypi_0 | pypi | |
| pickleshare | 0.7.5 | pypi_0 | pypi | |
| pillow | 8.3.1 | py37h2c7a002_0 | | |
| pip | 21.1.3 | py37h06a4308_0 | | |
| prompt-toolkit | 3.0.19 | pypi_0 | pypi | |
| protobuf | 4.21.12 | pypi_0 | pypi | |
| psutil | 5.8.0 | pypi_0 | pypi | |
| ptyprocess | 0.7.0 | pypi_0 | pypi | |
| py-cpuinfo | 8.0.0 | pypi_0 | pypi | |
| pycparser | 2.20 | py_2 | | |
| pygments | 2.9.0 | pypi_0 | pypi | |
| pyparsing | 2.4.7 | pypi_0 | pypi | |
| python | 3.7.10 | h12debd9_4 | | |
| python-dateutil | 2.8.2 | pypi_0 | pypi | |
| pytorch | 1.9.0 | py3.7_cuda11.1_cudnn8.0.5_0 | pytorch | |
| pytz | 2021.1 | pypi_0 | pypi | |
| pyyaml | 5.4.1 | pypi_0 | pypi | |
| readline | 8.1 | h27cfd23_0 | | |
| regex | 2022.10.31 | pypi_0 | pypi | |
| requests | 2.26.0 | pypi_0 | pypi | |
| sacred | 0.8.2 | pypi_0 | pypi | |
| sacremoses | 0.0.45 | pypi_0 | pypi | |
| scikit-learn | 0.24.2 | pypi_0 | pypi | |
| scipy | 1.7.0 | pypi_0 | pypi | |
| sentry-sdk | 1.15.0 | pypi_0 | pypi | |
| setproctitle | 1.3.2 | pypi_0 | pypi | |
| setuptools | 52.0.0 | py37h06a4308_0 | | |
| six | 1.16.0 | pyhd3eb1b0_0 | | |
| smmap | 4.0.0 | pypi_0 | pypi | |
| sqlite | 3.36.0 | hc218d9a_0 | | |
| threadpoolctl | 2.2.0 | pypi_0 | pypi | |
| tk | 8.6.10 | hbc83047_0 | | |
| tokenizers | 0.10.3 | pypi_0 | pypi | |
| toml | 0.10.2 | pypi_0 | pypi | |
| torchaudio | 0.9.0 | py37 | pytorch | |
| torchvision | 0.10.0 | py37_cu111 | pytorch | |
| tqdm | 4.61.2 | pypi_0 | pypi | |
| traitlets | 5.0.5 | pypi_0 | pypi | |
| transformers | 4.9.1 | pypi_0 | pypi | |
| typing-extensions | 3.10.0.0 | hd3eb1b0_0 | | |
| typing_extensions | 3.10.0.0 | pyh06a4308_0 | | |
| urllib3 | 1.26.14 | pypi_0 | pypi | |
| wandb | 0.13.10 | pypi_0 | pypi | |
| wcwidth | 0.2.5 | pypi_0 | pypi | |
| wheel | 0.36.2 | pyhd3eb1b0_0 | | |
| wrapt | 1.12.1 | pypi_0 | pypi | |
| xz | 5.2.5 | h7b6447c_0 | | |
| zipp | 3.5.0 | pypi_0 | pypi | |
| zlib | 1.2.11 | h7b6447c_3 | | |
| zstd | 1.4.9 | haebb681_0 | | | | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5832/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5830/comments | https://api.github.com/repos/huggingface/datasets/issues/5830/events | https://github.com/huggingface/datasets/pull/5830 | 1,701,451,399 | PR_kwDODunzps5QEFEi | 5,830 | Debug windows #2 | {
"login": "HyukjinKwon",
"id": 6477701,
"node_id": "MDQ6VXNlcjY0Nzc3MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6477701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HyukjinKwon",
"html_url": "https://github.com/HyukjinKwon",
"followers_url": "https://api.github.com/users/HyukjinKwon/followers",
"following_url": "https://api.github.com/users/HyukjinKwon/following{/other_user}",
"gists_url": "https://api.github.com/users/HyukjinKwon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HyukjinKwon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HyukjinKwon/subscriptions",
"organizations_url": "https://api.github.com/users/HyukjinKwon/orgs",
"repos_url": "https://api.github.com/users/HyukjinKwon/repos",
"events_url": "https://api.github.com/users/HyukjinKwon/events{/privacy}",
"received_events_url": "https://api.github.com/users/HyukjinKwon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2023-05-09T06:40:34 | 2023-05-09T06:40:47 | 2023-05-09T06:40:47 | NONE | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5830",
"html_url": "https://github.com/huggingface/datasets/pull/5830",
"diff_url": "https://github.com/huggingface/datasets/pull/5830.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5830.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5830/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5829 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5829/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5829/comments | https://api.github.com/repos/huggingface/datasets/issues/5829/events | https://github.com/huggingface/datasets/issues/5829 | 1,699,958,189 | I_kwDODunzps5lU02t | 5,829 | (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64')) | {
"login": "elcolie",
"id": 18206728,
"node_id": "MDQ6VXNlcjE4MjA2NzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/18206728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elcolie",
"html_url": "https://github.com/elcolie",
"followers_url": "https://api.github.com/users/elcolie/followers",
"following_url": "https://api.github.com/users/elcolie/following{/other_user}",
"gists_url": "https://api.github.com/users/elcolie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elcolie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elcolie/subscriptions",
"organizations_url": "https://api.github.com/users/elcolie/orgs",
"repos_url": "https://api.github.com/users/elcolie/repos",
"events_url": "https://api.github.com/users/elcolie/events{/privacy}",
"received_events_url": "https://api.github.com/users/elcolie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Can you paste the error stack trace?",
"That is weird. I can't reproduce it again after reboot.\r\n```python\r\nIn [2]: import platform\r\n\r\nIn [3]: platform.platform()\r\nOut[3]: 'macOS-13.2-arm64-arm-64bit'\r\n\r\nIn [4]: from datasets import load_dataset\r\n ...:\r\n ...: jazzy = load_dataset(\"nomic-ai/gpt4all-j-prompt-generations\", revision='v1.2-jazzy')\r\nFound cached dataset parquet (/Users/sarit/.cache/huggingface/datasets/nomic-ai___parquet/nomic-ai--gpt4all-j-prompt-generations-a3b62015e2e52043/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec)\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 63.25it/s]\r\n```"
] | 2023-05-08T10:07:14 | 2023-05-09T00:46:42 | 2023-05-09T00:46:42 | NONE | null | null | null | ### Describe the bug
My M2 MacBook Pro can't run
```python
from datasets import load_dataset
jazzy = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision='v1.2-jazzy')
```
### Steps to reproduce the bug
1. Use M2 MBP
2. Python 3.10.10 from pyenv
3. Run
```
from datasets import load_dataset
jazzy = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision='v1.2-jazzy')
```
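As an aside (not from the original report), this kind of mach-o error usually means the Python interpreter and a compiled wheel were built for different CPU architectures (e.g. an x86_64 interpreter running under Rosetta trying to load arm64 native extensions). A quick sanity check:
```python
import platform

# 'arm64' for a native Apple Silicon build, 'x86_64' for a Rosetta/Intel build
print(platform.machine())
print(platform.platform())  # e.g. 'macOS-13.2-arm64-arm-64bit'
```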
### Expected behavior
Be able to run normally
### Environment info
```
from datasets import load_dataset
jazzy = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision='v1.2-jazzy')
```
OSX: 13.2
CPU: M2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5829/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5828/comments | https://api.github.com/repos/huggingface/datasets/issues/5828/events | https://github.com/huggingface/datasets/issues/5828 | 1,699,235,739 | I_kwDODunzps5lSEeb | 5,828 | Stream data concatenation issue | {
"login": "krishnapriya-18",
"id": 48817796,
"node_id": "MDQ6VXNlcjQ4ODE3Nzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/48817796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krishnapriya-18",
"html_url": "https://github.com/krishnapriya-18",
"followers_url": "https://api.github.com/users/krishnapriya-18/followers",
"following_url": "https://api.github.com/users/krishnapriya-18/following{/other_user}",
"gists_url": "https://api.github.com/users/krishnapriya-18/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krishnapriya-18/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krishnapriya-18/subscriptions",
"organizations_url": "https://api.github.com/users/krishnapriya-18/orgs",
"repos_url": "https://api.github.com/users/krishnapriya-18/repos",
"events_url": "https://api.github.com/users/krishnapriya-18/events{/privacy}",
"received_events_url": "https://api.github.com/users/krishnapriya-18/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! \r\n\r\nYou can call `map` as follows to avoid the error:\r\n```python\r\naugmented_dataset_cln = dataset_cln['train'].map(augment_dataset, features=dataset_cln['train'].features)\r\n```",
"Thanks it is solved"
] | 2023-05-07T21:02:54 | 2023-05-10T05:06:58 | 2023-05-10T05:05:47 | NONE | null | null | null | ### Describe the bug
I am not able to concatenate the augmented streamed data with the original stream. I am using the latest version of `datasets`.
ValueError: The features can't be aligned because the key audio of features {'audio_id': Value(dtype='string', id=None), 'audio': {'array': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'path': Value(dtype='null', id=None), 'sampling_rate': Value(dtype='int64', id=None)}, 'transcript': Value(dtype='string', id=None)} has unexpected type - {'array': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'path': Value(dtype='null', id=None), 'sampling_rate': Value(dtype='int64', id=None)} (expected either Audio(sampling_rate=16000, mono=True, decode=True, id=None) or Value("null").
### Steps to reproduce the bug
from datasets import load_dataset, Audio, interleave_datasets

dataset = load_dataset("tobiolatunji/afrispeech-200", "all", streaming=True).shuffle(seed=42)
dataset_cln = dataset.remove_columns(['speaker_id', 'path', 'age_group', 'gender', 'accent', 'domain', 'country', 'duration'])
dataset_cln = dataset_cln.cast_column("audio", Audio(sampling_rate=16000))
from audiomentations import AddGaussianNoise, Compose, Gain, OneOf, PitchShift, PolarityInversion, TimeStretch
augmentation = Compose([
AddGaussianNoise(min_amplitude=0.005, max_amplitude=0.015, p=0.2)
])
def augment_dataset(batch):
audio = batch["audio"]
audio["array"] = augmentation(audio["array"], sample_rate=audio["sampling_rate"])
return batch
augmented_dataset_cln = dataset_cln['train'].map(augment_dataset)
dataset_cln['train'] = interleave_datasets([dataset_cln['train'], augmented_dataset_cln])
dataset_cln['train'] = dataset_cln['train'].shuffle(seed=42)
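Based on the resolution in the comments, a sketch of the fix is below — passing the original features to `map` so the `Audio` feature type survives the augmentation:
```python
# Without `features=...`, map re-infers a plain struct for the "audio" column,
# which then cannot be aligned with the Audio feature of the original split.
augmented_dataset_cln = dataset_cln["train"].map(
    augment_dataset, features=dataset_cln["train"].features
)
dataset_cln["train"] = interleave_datasets([dataset_cln["train"], augmented_dataset_cln])
```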
### Expected behavior
I should be able to merge as sampling rate is same.
### Environment info
import datasets
import transformers
import accelerate
import torch
import evaluate
print(datasets.__version__)
print(transformers.__version__)
print(torch.__version__)
print(evaluate.__version__)
print(accelerate.__version__)
2.12.0
4.28.1
2.0.0
0.4.0
0.18.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5828/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5824 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5824/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5824/comments | https://api.github.com/repos/huggingface/datasets/issues/5824/events | https://github.com/huggingface/datasets/pull/5824 | 1,697,152,148 | PR_kwDODunzps5P1rIZ | 5,824 | Fix incomplete docstring for `BuilderConfig` | {
"login": "Laurent2916",
"id": 21087104,
"node_id": "MDQ6VXNlcjIxMDg3MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Laurent2916",
"html_url": "https://github.com/Laurent2916",
"followers_url": "https://api.github.com/users/Laurent2916/followers",
"following_url": "https://api.github.com/users/Laurent2916/following{/other_user}",
"gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions",
"organizations_url": "https://api.github.com/users/Laurent2916/orgs",
"repos_url": "https://api.github.com/users/Laurent2916/repos",
"events_url": "https://api.github.com/users/Laurent2916/events{/privacy}",
"received_events_url": "https://api.github.com/users/Laurent2916/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007658 / 0.011353 (-0.003695) | 0.005497 / 0.011008 (-0.005511) | 0.097142 / 0.038508 (0.058633) | 0.034602 / 0.023109 (0.011493) | 0.304191 / 0.275898 (0.028293) | 0.329103 / 0.323480 (0.005624) | 0.005936 / 0.007986 (-0.002049) | 0.004324 / 0.004328 (-0.000004) | 0.073387 / 0.004250 (0.069137) | 0.049657 / 0.037052 (0.012604) | 0.301352 / 0.258489 (0.042863) | 0.343095 / 0.293841 (0.049254) | 0.036767 / 0.128546 (-0.091779) | 0.012438 / 0.075646 (-0.063208) | 0.333804 / 0.419271 (-0.085468) | 0.064557 / 0.043533 (0.021024) | 0.302397 / 0.255139 (0.047258) | 0.319739 / 0.283200 (0.036540) | 0.119264 / 0.141683 (-0.022418) | 1.465309 / 1.452155 (0.013155) | 1.578194 / 1.492716 (0.085478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256552 / 0.018006 (0.238545) | 0.555344 / 0.000490 (0.554854) | 0.004845 / 0.000200 (0.004645) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027215 / 0.037411 (-0.010197) | 0.107071 / 0.014526 (0.092545) | 0.116343 / 0.176557 (-0.060213) | 0.172646 / 0.737135 (-0.564490) | 0.123366 / 0.296338 (-0.172973) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411421 / 0.215209 (0.196212) | 4.126028 / 2.077655 (2.048373) | 1.975826 / 1.504120 (0.471706) | 1.784404 / 1.541195 (0.243210) | 1.848697 / 1.468490 
(0.380207) | 0.686400 / 4.584777 (-3.898377) | 3.677649 / 3.745712 (-0.068063) | 2.077787 / 5.269862 (-3.192075) | 1.310912 / 4.565676 (-3.254764) | 0.083980 / 0.424275 (-0.340295) | 0.012183 / 0.007607 (0.004575) | 0.506969 / 0.226044 (0.280924) | 5.094730 / 2.268929 (2.825802) | 2.419790 / 55.444624 (-53.024834) | 2.106592 / 6.876477 (-4.769884) | 2.244309 / 2.142072 (0.102237) | 0.814312 / 4.805227 (-3.990915) | 0.167872 / 6.500664 (-6.332792) | 0.065339 / 0.075469 (-0.010130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.193314 / 1.841788 (-0.648474) | 14.980621 / 8.074308 (6.906313) | 14.352452 / 10.191392 (4.161060) | 0.164531 / 0.680424 (-0.515893) | 0.017432 / 0.534201 (-0.516769) | 0.422193 / 0.579283 (-0.157090) | 0.410047 / 0.434364 (-0.024317) | 0.497011 / 0.540337 (-0.043326) | 0.581395 / 1.386936 (-0.805541) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007214 / 0.011353 (-0.004139) | 0.005449 / 0.011008 (-0.005559) | 0.074320 / 0.038508 (0.035812) | 0.034261 / 0.023109 (0.011152) | 0.378265 / 0.275898 (0.102367) | 0.414419 / 0.323480 (0.090939) | 0.005804 / 0.007986 (-0.002182) | 0.004205 / 0.004328 (-0.000124) | 0.073266 / 0.004250 (0.069015) | 0.050444 / 0.037052 (0.013392) | 0.372999 / 0.258489 (0.114510) | 0.436032 / 0.293841 (0.142191) | 0.035432 / 0.128546 (-0.093114) | 0.012581 / 0.075646 (-0.063065) | 0.085777 / 0.419271 (-0.333495) | 0.046902 / 0.043533 (0.003369) | 0.378732 / 0.255139 (0.123593) | 0.401746 / 0.283200 (0.118547) | 0.113398 / 0.141683 (-0.028285) | 1.463851 / 1.452155 (0.011696) | 1.566387 / 1.492716 (0.073670) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261246 / 0.018006 (0.243240) | 0.546730 / 0.000490 (0.546241) | 0.005245 / 0.000200 (0.005045) | 0.000103 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029441 / 0.037411 (-0.007970) | 0.111834 / 0.014526 (0.097308) | 0.122411 / 0.176557 (-0.054145) | 0.171288 / 0.737135 (-0.565847) | 0.130338 / 0.296338 (-0.166001) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433405 / 0.215209 (0.218196) | 4.315790 / 2.077655 (2.238135) | 2.121934 / 1.504120 (0.617814) | 1.924123 / 1.541195 (0.382928) | 2.029077 / 1.468490 (0.560587) | 0.710245 / 4.584777 (-3.874532) | 3.844393 / 3.745712 (0.098681) | 3.576580 / 5.269862 (-1.693281) | 1.930985 / 4.565676 (-2.634691) | 0.092186 / 0.424275 (-0.332090) | 0.012307 / 0.007607 (0.004700) | 0.533722 / 0.226044 (0.307677) | 5.324447 / 2.268929 (3.055519) | 2.615451 / 55.444624 (-52.829174) | 2.282310 / 6.876477 (-4.594167) | 2.319847 / 2.142072 (0.177774) | 0.849364 / 4.805227 (-3.955864) | 0.172722 / 6.500664 (-6.327942) | 0.064721 / 0.075469 (-0.010748) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.289942 / 1.841788 (-0.551846) | 15.875062 / 8.074308 (7.800754) | 14.784682 / 10.191392 (4.593290) | 0.144432 / 0.680424 (-0.535991) | 0.017703 / 0.534201 (-0.516498) | 0.424357 / 0.579283 (-0.154926) | 0.419078 / 0.434364 (-0.015286) | 0.489331 / 0.540337 (-0.051006) | 0.585284 / 1.386936 (-0.801652) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e3f4f124a1b118a5bfff5bae76b25a68aedbebbc \"CML watermark\")\n"
] | 2023-05-05T07:34:28 | 2023-05-05T12:39:14 | 2023-05-05T12:31:54 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5824",
"html_url": "https://github.com/huggingface/datasets/pull/5824",
"diff_url": "https://github.com/huggingface/datasets/pull/5824.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5824.patch",
"merged_at": "2023-05-05T12:31:54"
} | Fixes #5820
Also fixed a couple of typos I spotted | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5824/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5823 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5823/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5823/comments | https://api.github.com/repos/huggingface/datasets/issues/5823/events | https://github.com/huggingface/datasets/issues/5823 | 1,697,024,789 | I_kwDODunzps5lJosV | 5,823 | [2.12.0] DatasetDict.save_to_disk not saving to S3 | {
"login": "thejamesmarq",
"id": 5233185,
"node_id": "MDQ6VXNlcjUyMzMxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5233185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thejamesmarq",
"html_url": "https://github.com/thejamesmarq",
"followers_url": "https://api.github.com/users/thejamesmarq/followers",
"following_url": "https://api.github.com/users/thejamesmarq/following{/other_user}",
"gists_url": "https://api.github.com/users/thejamesmarq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thejamesmarq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thejamesmarq/subscriptions",
"organizations_url": "https://api.github.com/users/thejamesmarq/orgs",
"repos_url": "https://api.github.com/users/thejamesmarq/repos",
"events_url": "https://api.github.com/users/thejamesmarq/events{/privacy}",
"received_events_url": "https://api.github.com/users/thejamesmarq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Can you try adding the `s3://` prefix ?\r\n```python\r\nf\"s3://{s3_bucket}/{s3_dir}/{dataset_name}\"\r\n```",
"Ugh, yeah that was it. Thank you!"
] | 2023-05-05T05:22:59 | 2023-05-05T15:01:18 | 2023-05-05T15:01:17 | NONE | null | null | null | ### Describe the bug
When trying to save a `DatasetDict` to a private S3 bucket using `save_to_disk`, the artifacts are instead saved locally, and not in the S3 bucket.
I have tried using the deprecated `fs` as well as the `storage_options` arguments and I get the same results.
### Steps to reproduce the bug
1. Create a `DatasetDict` `dataset_dict`
2. Create a S3FileSystem object
`s3 = datasets.filesystems.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key)`
3. Save using `dataset_dict.save_to_disk(f"{s3_bucket}/{s3_dir}/{dataset_name}", storage_options=s3.storage_options)` or `dataset_dict.save_to_disk(f"{s3_bucket}/{s3_dir}/{dataset_name}", fs=s3)` (a corrected call is sketched after this list)
4. Check the corresponding S3 bucket and verify nothing has been uploaded
5. Check the local path f"{s3_bucket}/{s3_dir}/{dataset_name}" and verify that the files have been saved there instead
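Based on the resolution in the comments, the corrected call referenced in step 3 is sketched below — the path needs the `s3://` scheme so it is routed to the S3 filesystem rather than the local one:
```python
# Fix from the comments: include the "s3://" prefix in the remote path.
dataset_dict.save_to_disk(
    f"s3://{s3_bucket}/{s3_dir}/{dataset_name}",
    storage_options=s3.storage_options,
)
```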
### Expected behavior
Artifacts are uploaded at the f"{s3_bucket}/{s3_dir}/{dataset_name}" S3 location.
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-13.3.1-x86_64-i386-64bit
- Python version: 3.11.2
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5823/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5822 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5822/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5822/comments | https://api.github.com/repos/huggingface/datasets/issues/5822/events | https://github.com/huggingface/datasets/issues/5822 | 1,696,627,308 | I_kwDODunzps5lIHps | 5,822 | Audio Dataset with_format torch problem | {
"login": "paulbauriegel",
"id": 20282916,
"node_id": "MDQ6VXNlcjIwMjgyOTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/20282916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paulbauriegel",
"html_url": "https://github.com/paulbauriegel",
"followers_url": "https://api.github.com/users/paulbauriegel/followers",
"following_url": "https://api.github.com/users/paulbauriegel/following{/other_user}",
"gists_url": "https://api.github.com/users/paulbauriegel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paulbauriegel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paulbauriegel/subscriptions",
"organizations_url": "https://api.github.com/users/paulbauriegel/orgs",
"repos_url": "https://api.github.com/users/paulbauriegel/repos",
"events_url": "https://api.github.com/users/paulbauriegel/events{/privacy}",
"received_events_url": "https://api.github.com/users/paulbauriegel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Can you try with a more recent version of `datasets` ?",
"Ok, yes it worked with the most recent version. Thanks"
] | 2023-05-04T20:07:51 | 2023-05-11T20:45:53 | 2023-05-11T20:45:53 | NONE | null | null | null | ### Describe the bug
Common Voice v10 Delta (German) Dataset from here https://commonvoice.mozilla.org/de/datasets
```
audio_dataset = \
(Dataset
.from_dict({"audio": ('/tmp/cv-corpus-10.0-delta-2022-07-04/de/clips/' + df.path).to_list()})
.cast_column("audio", Audio(sampling_rate=16_000))
.with_format('numpy'))
audio_dataset[0]["audio"]
```
works, but
```
audio_dataset = \
(Dataset
.from_dict({"audio": ('/tmp/cv-corpus-10.0-delta-2022-07-04/de/clips/' + df.path).to_list()})
.cast_column("audio", Audio(sampling_rate=16_000))
.with_format('torch'))
audio_dataset[0]["audio"]
```
does not instead I get
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[54], line 1
----> 1 audio_dataset[0]["audio"]
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/arrow_dataset.py:2154, in Dataset.__getitem__(self, key)
2152 def __getitem__(self, key): # noqa: F811
2153 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2154 return self._getitem(
2155 key,
2156 )
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/arrow_dataset.py:2139, in Dataset._getitem(self, key, decoded, **kwargs)
2137 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)
2138 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2139 formatted_output = format_table(
2140 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2141 )
2142 return formatted_output
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:58, in TorchFormatter.format_row(self, pa_table)
56 def format_row(self, pa_table: pa.Table) -> dict:
57 row = self.numpy_arrow_extractor().extract_row(pa_table)
---> 58 return self.recursive_tensorize(row)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:54, in TorchFormatter.recursive_tensorize(self, data_struct)
53 def recursive_tensorize(self, data_struct: dict):
---> 54 return map_nested(self._recursive_tensorize, data_struct, map_list=False)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:356, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)
354 num_proc = 1
355 if num_proc <= 1 or len(iterable) <= num_proc:
--> 356 mapped = [
357 _single_map_nested((function, obj, types, None, True, None))
358 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
359 ]
360 else:
361 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:357, in <listcomp>(.0)
354 num_proc = 1
355 if num_proc <= 1 or len(iterable) <= num_proc:
356 mapped = [
--> 357 _single_map_nested((function, obj, types, None, True, None))
358 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
359 ]
360 else:
361 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:309, in _single_map_nested(args)
306 pbar = logging.tqdm(pbar_iterable, disable=disable_tqdm, position=rank, unit="obj", desc=pbar_desc)
308 if isinstance(data_struct, dict):
--> 309 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
310 else:
311 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:309, in <dictcomp>(.0)
306 pbar = logging.tqdm(pbar_iterable, disable=disable_tqdm, position=rank, unit="obj", desc=pbar_desc)
308 if isinstance(data_struct, dict):
--> 309 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
310 else:
311 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:293, in _single_map_nested(args)
291 # Singleton first to spare some computation
292 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 293 return function(data_struct)
295 # Reduce logging to keep things readable in multiprocessing with tqdm
296 if rank is not None and logging.get_verbosity() < logging.WARNING:
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:51, in TorchFormatter._recursive_tensorize(self, data_struct)
49 if data_struct.dtype == np.object: # pytorch tensors cannot be instantied from an array of objects
50 return [self.recursive_tensorize(substruct) for substruct in data_struct]
---> 51 return self._tensorize(data_struct)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:38, in TorchFormatter._tensorize(self, value)
35 import torch
37 default_dtype = {}
---> 38 if np.issubdtype(value.dtype, np.integer):
39 default_dtype = {"dtype": torch.int64}
40 elif np.issubdtype(value.dtype, np.floating):
AttributeError: 'NoneType' object has no attribute 'dtype'
```
### Steps to reproduce the bug
1. Download an audio dataset; in this case I used the Common Voice v10 Delta (German) dataset from https://commonvoice.mozilla.org/de/datasets
2. Run the code from above
### Expected behavior
Calling `.with_format('torch')` should return PyTorch tensors, just as `.with_format('numpy')` returns NumPy arrays.
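A minimal sketch of the expected access pattern (hedged: per the comments above this works on recent `datasets` versions, and the exact layout of the decoded audio dict is an assumption):
```
import torch

sample = audio_dataset[0]["audio"]  # decoded audio dict
assert isinstance(sample["array"], torch.Tensor)  # instead of the AttributeError above
```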
### Environment info
pytorch: 2.0.0
datasets: 2.3.2
numpy: 1.21.6
Python: 3.8
Linux | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5822/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5820 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5820/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5820/comments | https://api.github.com/repos/huggingface/datasets/issues/5820/events | https://github.com/huggingface/datasets/issues/5820 | 1,695,892,811 | I_kwDODunzps5lFUVL | 5,820 | Incomplete docstring for `BuilderConfig` | {
"login": "Laurent2916",
"id": 21087104,
"node_id": "MDQ6VXNlcjIxMDg3MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Laurent2916",
"html_url": "https://github.com/Laurent2916",
"followers_url": "https://api.github.com/users/Laurent2916/followers",
"following_url": "https://api.github.com/users/Laurent2916/following{/other_user}",
"gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions",
"organizations_url": "https://api.github.com/users/Laurent2916/orgs",
"repos_url": "https://api.github.com/users/Laurent2916/repos",
"events_url": "https://api.github.com/users/Laurent2916/events{/privacy}",
"received_events_url": "https://api.github.com/users/Laurent2916/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Thanks for reporting! You are more than welcome to improve `BuilderConfig`'s docstring.\r\n\r\nThis class serves an identical purpose as `tensorflow_datasets`'s `BuilderConfig`, and its docstring is [here](https://github.com/tensorflow/datasets/blob/a95e38b5bb018312c3d3720619c2a8ef83ebf57f/tensorflow_datasets/core/dataset_builder.py#L81), so feel free to re-use parts of it."
] | 2023-05-04T12:14:34 | 2023-05-05T12:31:56 | 2023-05-05T12:31:56 | CONTRIBUTOR | null | null | null | Hi guys !
I stumbled upon this docstring while working on a project.
Some of the attributes have missing descriptions.
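A hedged sketch of what a completed docstring for the class at the permalink below might look like (attribute descriptions inferred from usage, type hints simplified, not authoritative):
```py
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class BuilderConfig:
    """Base class for `DatasetBuilder` data configuration.

    Attributes:
        name (str): Name of the configuration, used to tell apart different
            configurations of the same dataset.
        version (str, optional): Version of the configuration.
        data_dir (str, optional): Path to the directory containing the source data.
        data_files (str or list or dict, optional): Path(s) to the source data file(s).
        description (str, optional): Human-readable description of the configuration.
    """

    name: str = "default"
    version: Optional[str] = "0.0.0"
    data_dir: Optional[str] = None
    data_files: Optional[Union[str, list, dict]] = None
    description: Optional[str] = None
```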
https://github.com/huggingface/datasets/blob/bc5fef5b6d91f009e4101684adcb374df2c170f6/src/datasets/builder.py#L104-L117 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5820/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5819/comments | https://api.github.com/repos/huggingface/datasets/issues/5819/events | https://github.com/huggingface/datasets/issues/5819 | 1,695,536,738 | I_kwDODunzps5lD9Zi | 5,819 | Cannot pickle error in Dataset.from_generator() | {
"login": "xinghaow99",
"id": 50691954,
"node_id": "MDQ6VXNlcjUwNjkxOTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/50691954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinghaow99",
"html_url": "https://github.com/xinghaow99",
"followers_url": "https://api.github.com/users/xinghaow99/followers",
"following_url": "https://api.github.com/users/xinghaow99/following{/other_user}",
"gists_url": "https://api.github.com/users/xinghaow99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xinghaow99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinghaow99/subscriptions",
"organizations_url": "https://api.github.com/users/xinghaow99/orgs",
"repos_url": "https://api.github.com/users/xinghaow99/repos",
"events_url": "https://api.github.com/users/xinghaow99/events{/privacy}",
"received_events_url": "https://api.github.com/users/xinghaow99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! It should work if you put `model = torch.compile(model)` inside the `generate_data` function. If a referenced object is outside, it needs to be pickable, and that's not the case for the compiled models (or functions). ",
"> Hi! It should work if you put `model = torch.compile(model)` inside the `generate_data` function. If a referenced object is outside, it needs to be pickable, and that's not the case for the compiled models (or functions).\r\n\r\nHi! Thank you for your reply! Everything works perfectly with your suggestion!\r\n\r\nClosing the issue.\r\n"
] | 2023-05-04T08:39:09 | 2023-05-05T19:20:59 | 2023-05-05T19:20:58 | NONE | null | null | null | ### Describe the bug
I'm trying to use `Dataset.from_generator()` to generate a large dataset, but it fails with a pickling error while hashing the generator's kwargs.
### Steps to reproduce the bug
Code to reproduce:
```
from transformers import T5Tokenizer, T5ForConditionalGeneration, GenerationConfig
import torch
from tqdm import tqdm
from datasets import Dataset, load_dataset
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small", device_map="auto")
model = torch.compile(model)  # compiled at module level; the generator below references this compiled object
def generate_data(data_loader):
model.eval()
for batch in tqdm(data_loader):
input_ids = tokenizer(batch['instruction'], return_tensors='pt', padding=True, truncation=True).input_ids.to("cuda:0")
with torch.no_grad():
outputs = model.generate(input_ids, generation_config=generation_config)
decoder_hidden_states = outputs.decoder_hidden_states
for i, h in zip(batch['instruction'], decoder_hidden_states):
yield {"instruction": i, "decoder_hidden_states": h}
generation_config = GenerationConfig(
temperature=1,
max_new_tokens=1024,
do_sample=False,
num_return_sequences=1,
return_dict_in_generate=True,
output_scores=True,
output_hidden_states=True,
)
from torch.utils.data import DataLoader
dataset = load_dataset("HuggingFaceH4/databricks_dolly_15k")
train_loader = DataLoader(dataset['train'], batch_size=2, shuffle=True)
dataset = Dataset.from_generator(generator=generate_data, gen_kwargs={"data_loader": train_loader})
dataset.save_to_disk("data/flant5_small_generation")
```
### Expected behavior
The dataset should be generated and saved.
But the following error occurred:
```
Traceback (most recent call last):
File "/remote-home/xhwang/alpaca-lora/data_collection_t5.py", line 46, in <module>
dataset = Dataset.from_generator(generator=generate_data, gen_kwargs={"data_loader": train_loader})
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1035, in from_generator
return GeneratorDatasetInputStream(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/io/generator.py", line 28, in __init__
self.builder = Generator(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 336, in __init__
self.config, self.config_id = self._create_builder_config(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 505, in _create_builder_config
config_id = builder_config.create_config_id(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 179, in create_config_id
suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/fingerprint.py", line 236, in hash
return cls.hash_default(value)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/fingerprint.py", line 229, in hash_default
return cls.hash_bytes(dumps(value))
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 726, in dumps
dump(obj, file)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 701, in dump
Pickler(file, recurse=True).dump(obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 394, in dump
StockPickler.dump(self, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 487, in dump
self.save(obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1084, in _save_with_postproc
pickler._batch_setitems(iter(source.items()))
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1070, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 887, in save_tuple
save(element)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1070, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 887, in save_tuple
save(element)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 1003, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1084, in _save_with_postproc
pickler._batch_setitems(iter(source.items()))
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 578, in save
rv = reduce(self.proto)
TypeError: cannot pickle 'ConfigModuleInstance' object
```
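Per the maintainer's comment above, the fix is to call `torch.compile` inside the generator so that it only references picklable objects. A minimal sketch of the restructured generator (hedged; the rest of the script is unchanged):
```
def generate_data(data_loader):
    compiled_model = torch.compile(model)  # compile here; the plain `model` outside stays picklable
    compiled_model.eval()
    for batch in tqdm(data_loader):
        input_ids = tokenizer(batch['instruction'], return_tensors='pt', padding=True, truncation=True).input_ids.to("cuda:0")
        with torch.no_grad():
            outputs = compiled_model.generate(input_ids, generation_config=generation_config)
        for i, h in zip(batch['instruction'], outputs.decoder_hidden_states):
            yield {"instruction": i, "decoder_hidden_states": h}
```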
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-4.15.0-156-generic-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.13.2
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5819/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5817/comments | https://api.github.com/repos/huggingface/datasets/issues/5817/events | https://github.com/huggingface/datasets/issues/5817 | 1,694,891,866 | I_kwDODunzps5lBf9a | 5,817 | Setting `num_proc` errors when `.map` returns additional items. | {
"login": "davidgilbertson",
"id": 4443482,
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidgilbertson",
"html_url": "https://github.com/davidgilbertson",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Unfortunately I couldn't reproduce on my side locally and with datasets 2.11 and python 3.10.11 on colab.\r\nWhat version of `multiprocess` are you using ?",
"I've got `multiprocess` version `0.70.14`.\r\n\r\nI've done some more testing and the error only occurs in PyCharm's Python Console. It seems to be [this PyCharm bug](https://youtrack.jetbrains.com/issue/PY-51922/Multiprocessing-bug.-Can-only-run-in-debugger.), I'll close this.",
"For other users facing this, my workaround is to conditionally set `num_proc` so I can work interactively in the PyCharm Python Console while developing, then when I'm ready to run on the whole dataset, run it as a script and use multiprocessing.\r\n\r\n```py\r\nmapped_ds = ds.map(\r\n my_map_function,\r\n batched=True,\r\n remove_columns=ds.column_names,\r\n num_proc=1 if \"PYCHARM_HOSTED\" in os.environ else 8,\r\n)\r\n```"
] | 2023-05-03T21:46:53 | 2023-05-04T21:14:21 | 2023-05-04T20:22:25 | NONE | null | null | null | ### Describe the bug
I'm using a map function that returns more rows than are passed in.
If I try to use `num_proc` I get:
```
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 563, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 528, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3097, in map
for rank, done, content in iflatmap_unordered(
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1372, in iflatmap_unordered
yield queue.get(timeout=0.05)
File "<string>", line 2, in get
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/managers.py", line 818, in _callmethod
kind, result = conn.recv()
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/connection.py", line 258, in recv
buf = self._recv_bytes()
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/connection.py", line 422, in _recv_bytes
buf = self._recv(4)
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/connection.py", line 391, in _recv
raise EOFError
EOFError
```
### Steps to reproduce the bug
This is copied from the [Datasets docs](https://huggingface.co/docs/datasets/v2.12.0/en/process#batch-processing), with `num_proc` added, and will error.
```py
import datasets
dataset = ... # any old dataset
def chunk_examples(examples):
chunks = []
for sentence in examples["text"]:
chunks += [sentence[i : i + 50] for i in range(0, len(sentence), 50)]
return {"chunks": chunks}
chunked_dataset = dataset.map(
chunk_examples,
batched=True,
remove_columns=dataset.column_names,
num_proc=2, # Remove and it works
)
```
### Expected behavior
Should work fine. On a related note, multiprocessing also fails if there is a metaclass anywhere in scope (and there are plenty in the standard library). This is the fault of `dill` and is a long-standing issue.
Have you considered using Loky for multiprocessing? I've found that the built-in `datasets` multiprocessing breaks more often than it works, so I have written my own function using `loky`, for reference:
```py
import datasets
import loky
def fast_loop(dataset: datasets.Dataset, func, num_proc=None):
    if num_proc is None:
        import os
        num_proc = len(os.sched_getaffinity(0))  # number of CPUs usable by this process
    # split the dataset into contiguous shards, one per worker
    shards = [
        dataset.shard(num_shards=num_proc, index=i, contiguous=True)
        for i in range(num_proc)
    ]
    executor = loky.get_reusable_executor(max_workers=num_proc)
    results = executor.map(func, shards)  # loky serializes with cloudpickle rather than dill
    return datasets.combine.concatenate_datasets(list(results))
```
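A hypothetical call, reusing `chunk_examples` from above (`my_shard_map` is a name invented for this sketch):
```py
def my_shard_map(shard):
    return shard.map(chunk_examples, batched=True, remove_columns=shard.column_names)

chunked_dataset = fast_loop(dataset, my_shard_map, num_proc=8)
```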
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- Huggingface_hub version: 0.12.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5817/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5816/comments | https://api.github.com/repos/huggingface/datasets/issues/5816/events | https://github.com/huggingface/datasets/pull/5816 | 1,694,590,856 | PR_kwDODunzps5Ps4t9 | 5,816 | Preserve `stopping_strategy` of shuffled interleaved dataset (random cycling case) | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007862 / 0.011353 (-0.003491) | 0.005747 / 0.011008 (-0.005261) | 0.106818 / 0.038508 (0.068310) | 0.036630 / 0.023109 (0.013521) | 0.344218 / 0.275898 (0.068320) | 0.398803 / 0.323480 (0.075324) | 0.006187 / 0.007986 (-0.001799) | 0.005686 / 0.004328 (0.001358) | 0.078568 / 0.004250 (0.074318) | 0.051786 / 0.037052 (0.014734) | 0.361736 / 0.258489 (0.103247) | 0.396323 / 0.293841 (0.102482) | 0.037943 / 0.128546 (-0.090603) | 0.013957 / 0.075646 (-0.061689) | 0.366782 / 0.419271 (-0.052490) | 0.054700 / 0.043533 (0.011167) | 0.349692 / 0.255139 (0.094553) | 0.366481 / 0.283200 (0.083281) | 0.117394 / 0.141683 (-0.024289) | 1.593156 / 1.452155 (0.141001) | 1.708864 / 1.492716 (0.216148) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229529 / 0.018006 (0.211523) | 0.490531 / 0.000490 (0.490042) | 0.002934 / 0.000200 (0.002734) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028074 / 0.037411 (-0.009337) | 0.122321 / 0.014526 (0.107795) | 0.129120 / 0.176557 (-0.047436) | 0.188413 / 0.737135 (-0.548722) | 0.138983 / 0.296338 (-0.157355) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.479350 / 0.215209 (0.264141) | 4.926201 / 2.077655 (2.848546) | 2.265557 / 1.504120 (0.761437) | 2.014580 / 1.541195 (0.473386) | 2.120517 / 1.468490 
(0.652027) | 0.795334 / 4.584777 (-3.789443) | 4.509754 / 3.745712 (0.764042) | 4.328313 / 5.269862 (-0.941548) | 2.153304 / 4.565676 (-2.412373) | 0.102942 / 0.424275 (-0.321333) | 0.053504 / 0.007607 (0.045896) | 0.609392 / 0.226044 (0.383347) | 6.114048 / 2.268929 (3.845119) | 2.773306 / 55.444624 (-52.671318) | 2.443434 / 6.876477 (-4.433042) | 2.612005 / 2.142072 (0.469932) | 0.950435 / 4.805227 (-3.854792) | 0.194081 / 6.500664 (-6.306583) | 0.074513 / 0.075469 (-0.000956) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.402897 / 1.841788 (-0.438891) | 18.263033 / 8.074308 (10.188724) | 16.579809 / 10.191392 (6.388417) | 0.212319 / 0.680424 (-0.468104) | 0.020468 / 0.534201 (-0.513733) | 0.494850 / 0.579283 (-0.084433) | 0.483790 / 0.434364 (0.049426) | 0.572073 / 0.540337 (0.031735) | 0.684353 / 1.386936 (-0.702583) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009732 / 0.011353 (-0.001621) | 0.005901 / 0.011008 (-0.005107) | 0.084568 / 0.038508 (0.046060) | 0.038743 / 0.023109 (0.015634) | 0.431323 / 0.275898 (0.155425) | 0.472124 / 0.323480 (0.148644) | 0.006255 / 0.007986 (-0.001731) | 0.005892 / 0.004328 (0.001563) | 0.081913 / 0.004250 (0.077662) | 0.055560 / 0.037052 (0.018507) | 0.442857 / 0.258489 (0.184368) | 0.481887 / 0.293841 (0.188046) | 0.040730 / 0.128546 (-0.087816) | 0.014339 / 0.075646 (-0.061307) | 0.099258 / 0.419271 (-0.320013) | 0.054692 / 0.043533 (0.011159) | 0.436323 / 0.255139 (0.181184) | 0.461046 / 0.283200 (0.177846) | 0.125972 / 0.141683 (-0.015710) | 1.673173 / 1.452155 (0.221018) | 1.781364 / 1.492716 (0.288648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271450 / 0.018006 (0.253444) | 0.514484 / 0.000490 (0.513994) | 0.000455 / 0.000200 (0.000255) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036104 / 0.037411 (-0.001308) | 0.143306 / 0.014526 (0.128780) | 0.151105 / 0.176557 (-0.025451) | 0.210737 / 0.737135 (-0.526399) | 0.151404 / 0.296338 (-0.144934) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.573613 / 0.215209 (0.358404) | 5.828222 / 2.077655 (3.750567) | 2.993028 / 1.504120 (1.488908) | 2.617900 / 1.541195 (1.076706) | 2.754673 / 1.468490 (1.286183) | 1.010624 / 4.584777 (-3.574152) | 4.971261 / 3.745712 (1.225549) | 4.382017 / 5.269862 (-0.887845) | 1.971894 / 4.565676 (-2.593782) | 0.104404 / 0.424275 (-0.319871) | 0.014595 / 0.007607 (0.006988) | 0.657684 / 0.226044 (0.431639) | 6.566151 / 2.268929 (4.297222) | 3.221378 / 55.444624 (-52.223246) | 2.809402 / 6.876477 (-4.067075) | 2.882426 / 2.142072 (0.740354) | 1.006134 / 4.805227 (-3.799093) | 0.204469 / 6.500664 (-6.296196) | 0.078147 / 0.075469 (0.002678) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.574768 / 1.841788 (-0.267020) | 18.193335 / 8.074308 (10.119027) | 17.275353 / 10.191392 (7.083961) | 0.166890 / 0.680424 (-0.513534) | 0.020612 / 0.534201 (-0.513589) | 0.496179 / 0.579283 (-0.083104) | 0.507824 / 0.434364 (0.073460) | 0.620984 / 0.540337 (0.080647) | 0.749727 / 1.386936 (-0.637209) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#06988d3e01820b93ebcdc76158339fd6f67329dc \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006534 / 0.011353 (-0.004819) | 0.004456 / 0.011008 (-0.006553) | 0.097978 / 0.038508 (0.059470) | 0.027614 / 0.023109 (0.004505) | 0.309833 / 0.275898 (0.033935) | 0.337006 / 0.323480 (0.013526) | 0.004986 / 0.007986 (-0.002999) | 0.004521 / 0.004328 (0.000193) | 0.075053 / 0.004250 (0.070803) | 0.037095 / 0.037052 (0.000043) | 0.305430 / 0.258489 (0.046941) | 0.345298 / 0.293841 (0.051457) | 0.029784 / 0.128546 (-0.098762) | 0.011449 / 0.075646 (-0.064197) | 0.323346 / 0.419271 (-0.095925) | 0.042188 / 0.043533 (-0.001345) | 0.318653 / 0.255139 (0.063514) | 0.333799 / 0.283200 (0.050599) | 0.088194 / 0.141683 (-0.053488) | 1.511012 / 1.452155 (0.058857) | 1.578205 / 1.492716 (0.085489) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229695 / 0.018006 (0.211689) | 0.413276 / 0.000490 (0.412786) | 0.009142 / 0.000200 (0.008942) | 0.000537 / 0.000054 (0.000482) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024327 / 0.037411 (-0.013084) | 0.097953 / 0.014526 (0.083427) | 0.105551 / 0.176557 (-0.071005) | 0.169397 / 0.737135 (-0.567738) | 0.109784 / 0.296338 (-0.186554) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417713 / 0.215209 (0.202504) | 4.190703 / 2.077655 (2.113048) | 1.873504 / 1.504120 (0.369384) | 1.664540 / 1.541195 (0.123346) | 1.704539 / 1.468490 
(0.236049) | 0.699840 / 4.584777 (-3.884937) | 3.480605 / 3.745712 (-0.265107) | 1.844229 / 5.269862 (-3.425633) | 1.155793 / 4.565676 (-3.409883) | 0.083013 / 0.424275 (-0.341262) | 0.012414 / 0.007607 (0.004807) | 0.518357 / 0.226044 (0.292313) | 5.186136 / 2.268929 (2.917207) | 2.329263 / 55.444624 (-53.115361) | 1.991395 / 6.876477 (-4.885081) | 2.074563 / 2.142072 (-0.067509) | 0.801388 / 4.805227 (-4.003839) | 0.152236 / 6.500664 (-6.348428) | 0.067414 / 0.075469 (-0.008055) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197290 / 1.841788 (-0.644497) | 13.666537 / 8.074308 (5.592229) | 13.017190 / 10.191392 (2.825798) | 0.142109 / 0.680424 (-0.538314) | 0.016321 / 0.534201 (-0.517880) | 0.378434 / 0.579283 (-0.200849) | 0.381101 / 0.434364 (-0.053263) | 0.444113 / 0.540337 (-0.096225) | 0.521448 / 1.386936 (-0.865488) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006273 / 0.011353 (-0.005080) | 0.004408 / 0.011008 (-0.006600) | 0.077100 / 0.038508 (0.038592) | 0.027361 / 0.023109 (0.004251) | 0.358170 / 0.275898 (0.082272) | 0.390125 / 0.323480 (0.066646) | 0.004736 / 0.007986 (-0.003250) | 0.004663 / 0.004328 (0.000334) | 0.077626 / 0.004250 (0.073376) | 0.037103 / 0.037052 (0.000051) | 0.360044 / 0.258489 (0.101555) | 0.411539 / 0.293841 (0.117698) | 0.030173 / 0.128546 (-0.098373) | 0.011618 / 0.075646 (-0.064028) | 0.086036 / 0.419271 (-0.333235) | 0.039077 / 0.043533 (-0.004456) | 0.382223 / 0.255139 (0.127084) | 0.384817 / 0.283200 (0.101618) | 0.094591 / 0.141683 (-0.047092) | 1.494961 / 1.452155 (0.042807) | 1.583769 / 1.492716 (0.091053) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227467 / 0.018006 (0.209460) | 0.396648 / 0.000490 (0.396159) | 0.000382 / 0.000200 (0.000182) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025346 / 0.037411 (-0.012065) | 0.102086 / 0.014526 (0.087560) | 0.108570 / 0.176557 (-0.067986) | 0.158777 / 0.737135 (-0.578359) | 0.112885 / 0.296338 (-0.183453) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.460731 / 0.215209 (0.245522) | 4.556450 / 2.077655 (2.478795) | 2.258185 / 1.504120 (0.754065) | 2.122584 / 1.541195 (0.581389) | 2.224638 / 1.468490 (0.756148) | 0.691909 / 4.584777 (-3.892868) | 3.482634 / 3.745712 (-0.263078) | 2.772837 / 5.269862 (-2.497024) | 1.533897 / 4.565676 (-3.031780) | 0.083025 / 0.424275 (-0.341250) | 0.012629 / 0.007607 (0.005022) | 0.548397 / 0.226044 (0.322352) | 5.492005 / 2.268929 (3.223077) | 2.669841 / 55.444624 (-52.774784) | 2.366947 / 6.876477 (-4.509529) | 2.496795 / 2.142072 (0.354722) | 0.804868 / 4.805227 (-4.000359) | 0.151686 / 6.500664 (-6.348978) | 0.068333 / 0.075469 (-0.007136) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.320414 / 1.841788 (-0.521374) | 14.367567 / 8.074308 (6.293258) | 14.047702 / 10.191392 (3.856310) | 0.129087 / 0.680424 (-0.551337) | 0.016658 / 0.534201 (-0.517543) | 0.381949 / 0.579283 (-0.197335) | 0.390105 / 0.434364 (-0.044258) | 0.445947 / 0.540337 (-0.094390) | 0.531074 / 1.386936 (-0.855862) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c67c9f3797ecc231b34d87ddef489c1238ec4046 \"CML watermark\")\n"
] | 2023-05-03T18:34:18 | 2023-05-04T14:31:55 | 2023-05-04T14:24:49 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5816",
"html_url": "https://github.com/huggingface/datasets/pull/5816",
"diff_url": "https://github.com/huggingface/datasets/pull/5816.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5816.patch",
"merged_at": "2023-05-04T14:24:49"
} | Preserve the `stopping_strategy` in `RandomlyCyclingMultiSourcesExamplesIterable.shard_data_sources` to fix shuffling a dataset interleaved (from multiple sources) with probabilities.
Fix #5812
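A minimal usage sketch of the behavior this preserves (hedged: toy data, and the exact example order depends on the seeds):
```py
from datasets import Dataset, interleave_datasets

ds1 = Dataset.from_dict({"x": [0, 1]}).to_iterable_dataset()
ds2 = Dataset.from_dict({"x": [10, 11, 12, 13]}).to_iterable_dataset()

mixed = interleave_datasets(
    [ds1, ds2], probabilities=[0.5, 0.5], seed=42, stopping_strategy="all_exhausted"
)
shuffled = mixed.shuffle(seed=0, buffer_size=4)
print([ex["x"] for ex in shuffled])  # with the fix, every example from both sources still appears
```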
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5816/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5813/comments | https://api.github.com/repos/huggingface/datasets/issues/5813/events | https://github.com/huggingface/datasets/pull/5813 | 1,691,908,535 | PR_kwDODunzps5Pj0_E | 5,813 | [DO-NOT-MERGE] Debug Windows issue at #3 | {
"login": "HyukjinKwon",
"id": 6477701,
"node_id": "MDQ6VXNlcjY0Nzc3MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6477701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HyukjinKwon",
"html_url": "https://github.com/HyukjinKwon",
"followers_url": "https://api.github.com/users/HyukjinKwon/followers",
"following_url": "https://api.github.com/users/HyukjinKwon/following{/other_user}",
"gists_url": "https://api.github.com/users/HyukjinKwon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HyukjinKwon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HyukjinKwon/subscriptions",
"organizations_url": "https://api.github.com/users/HyukjinKwon/orgs",
"repos_url": "https://api.github.com/users/HyukjinKwon/repos",
"events_url": "https://api.github.com/users/HyukjinKwon/events{/privacy}",
"received_events_url": "https://api.github.com/users/HyukjinKwon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2023-05-02T07:19:34 | 2023-05-02T07:21:30 | 2023-05-02T07:21:30 | NONE | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5813",
"html_url": "https://github.com/huggingface/datasets/pull/5813",
"diff_url": "https://github.com/huggingface/datasets/pull/5813.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5813.patch",
"merged_at": null
} | TBD | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5813/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5812 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5812/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5812/comments | https://api.github.com/repos/huggingface/datasets/issues/5812/events | https://github.com/huggingface/datasets/issues/5812 | 1,691,798,169 | I_kwDODunzps5k1sqZ | 5,812 | Cannot shuffle interleaved IterableDataset with "all_exhausted" stopping strategy | {
"login": "off99555",
"id": 15215732,
"node_id": "MDQ6VXNlcjE1MjE1NzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/15215732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/off99555",
"html_url": "https://github.com/off99555",
"followers_url": "https://api.github.com/users/off99555/followers",
"following_url": "https://api.github.com/users/off99555/following{/other_user}",
"gists_url": "https://api.github.com/users/off99555/gists{/gist_id}",
"starred_url": "https://api.github.com/users/off99555/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/off99555/subscriptions",
"organizations_url": "https://api.github.com/users/off99555/orgs",
"repos_url": "https://api.github.com/users/off99555/repos",
"events_url": "https://api.github.com/users/off99555/events{/privacy}",
"received_events_url": "https://api.github.com/users/off99555/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-05-02T05:26:17 | 2023-05-04T14:24:51 | 2023-05-04T14:24:51 | NONE | null | null | null | ### Describe the bug
Shuffling an interleaved `IterableDataset` with the "all_exhausted" stopping strategy yields non-exhaustive sampling.
### Steps to reproduce the bug
```py
from datasets import IterableDataset, interleave_datasets
def gen(bias, length):
    for i in range(length):
        yield dict(a=bias + i)
seed = 42
probabilities = [0.2, 0.6, 0.2]
d1 = IterableDataset.from_generator(lambda: gen(0, 3))
d2 = IterableDataset.from_generator(lambda: gen(10, 4))
d3 = IterableDataset.from_generator(lambda: gen(20, 3))
ds = interleave_datasets([d1, d2, d3], probabilities=probabilities, seed=seed, stopping_strategy='all_exhausted')
ds = ds.shuffle(buffer_size=1000)
for x in ds:
print(x)
```
This code produces
```
{'a': 0}
{'a': 22}
{'a': 20}
{'a': 21}
{'a': 10}
{'a': 1}
```
### Expected behavior
It should produce a longer list of examples that exhausts all the datasets: with `all_exhausted`, sources that run out are restarted, so iteration continues until every source has been fully consumed at least once.
If you comment out the `shuffle` line, all the datasets are exhausted properly.
Here is the output (18 examples in this run) if you comment out shuffling:
```
{'a': 10}
{'a': 11}
{'a': 20}
{'a': 12}
{'a': 0}
{'a': 21}
{'a': 13}
{'a': 10}
{'a': 1}
{'a': 11}
{'a': 12}
{'a': 22}
{'a': 13}
{'a': 20}
{'a': 10}
{'a': 11}
{'a': 12}
{'a': 2}
```
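A quick sanity check of the two runs above (values copied verbatim from the outputs; a minimal sketch, not part of the original report):
```py
# Values emitted by the shuffled (buggy) run.
seen_shuffled = {0, 22, 20, 21, 10, 1}
# Every value each source should eventually yield:
# d1 -> 0..2, d2 -> 10..13, d3 -> 20..22.
expected = set(range(0, 3)) | set(range(10, 14)) | set(range(20, 23))
print(sorted(expected - seen_shuffled))  # [2, 11, 12, 13] never appear with the bug
```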
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
This was run on Google Colab. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5812/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5810 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5810/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5810/comments | https://api.github.com/repos/huggingface/datasets/issues/5810/events | https://github.com/huggingface/datasets/pull/5810 | 1,689,917,822 | PR_kwDODunzps5PdJHI | 5,810 | Add `fn_kwargs` to `map` and `filter` of `IterableDataset` and `IterableDatasetDict` | {
"login": "yuukicammy",
"id": 3927621,
"node_id": "MDQ6VXNlcjM5Mjc2MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3927621?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuukicammy",
"html_url": "https://github.com/yuukicammy",
"followers_url": "https://api.github.com/users/yuukicammy/followers",
"following_url": "https://api.github.com/users/yuukicammy/following{/other_user}",
"gists_url": "https://api.github.com/users/yuukicammy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuukicammy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuukicammy/subscriptions",
"organizations_url": "https://api.github.com/users/yuukicammy/orgs",
"repos_url": "https://api.github.com/users/yuukicammy/repos",
"events_url": "https://api.github.com/users/yuukicammy/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuukicammy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Sorry, the local test passed because it was inadvertently testing the main branch. I am currently fixing where the test failed.",
"- I have fixed the bug and addressed the above two points.\r\n- I have tested locally and confirmed that the test passes.\r\n\r\nPlease check the contents. @lhoestq \r\n\r\n5715a7e64bdd2951e6705aee58d592392e1538d6",
"Cool ! You can run `make style` to fix code formatting to fix the ci",
"I had forgotten about it. I did it. @lhoestq \r\n00248926a37c6f1387614aa388c36fdc105a59f5",
"Thanks for putting this together @yuukicammy ! Looking forward to using this new addition ASAP. \r\n@lhoestq - sorry to bother you with this, but if this looks good to you, any chance we could get this merged in? \r\n\r\nThanks again to you both! ",
"Yup there's just one test to remove and we can merge",
"Sorry for my understanding wrong! Correspondence has been addressed. @lhoestq \r\n ca511b7b29fdde51ffd69b58bda79220472e9e94\r\n\r\nThanks for your comment! @brianhill11 ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006788 / 0.011353 (-0.004564) | 0.004372 / 0.011008 (-0.006636) | 0.097746 / 0.038508 (0.059238) | 0.034858 / 0.023109 (0.011749) | 0.298122 / 0.275898 (0.022224) | 0.335272 / 0.323480 (0.011792) | 0.005810 / 0.007986 (-0.002175) | 0.004944 / 0.004328 (0.000616) | 0.072352 / 0.004250 (0.068101) | 0.041730 / 0.037052 (0.004678) | 0.316482 / 0.258489 (0.057992) | 0.338710 / 0.293841 (0.044869) | 0.027975 / 0.128546 (-0.100571) | 0.008746 / 0.075646 (-0.066901) | 0.329336 / 0.419271 (-0.089935) | 0.051327 / 0.043533 (0.007794) | 0.300695 / 0.255139 (0.045556) | 0.322813 / 0.283200 (0.039613) | 0.101133 / 0.141683 (-0.040550) | 1.422767 / 1.452155 (-0.029388) | 1.538364 / 1.492716 (0.045648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.016698 / 0.018006 (-0.001308) | 0.447042 / 0.000490 (0.446552) | 0.007609 / 0.000200 (0.007409) | 0.000277 / 0.000054 (0.000223) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026732 / 0.037411 (-0.010679) | 0.108295 / 0.014526 (0.093769) | 0.116905 / 0.176557 (-0.059652) | 0.173166 / 0.737135 (-0.563969) | 0.122560 / 0.296338 (-0.173779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.394893 / 0.215209 (0.179683) | 3.950314 / 2.077655 (1.872659) | 1.780576 / 1.504120 (0.276456) | 1.579855 / 1.541195 (0.038660) | 1.711197 / 1.468490 
(0.242707) | 0.521469 / 4.584777 (-4.063308) | 3.838850 / 3.745712 (0.093138) | 3.101095 / 5.269862 (-2.168767) | 1.531574 / 4.565676 (-3.034102) | 0.065291 / 0.424275 (-0.358984) | 0.011979 / 0.007607 (0.004372) | 0.496543 / 0.226044 (0.270498) | 4.965446 / 2.268929 (2.696517) | 2.250788 / 55.444624 (-53.193837) | 1.923231 / 6.876477 (-4.953245) | 2.075372 / 2.142072 (-0.066700) | 0.638708 / 4.805227 (-4.166519) | 0.142048 / 6.500664 (-6.358616) | 0.064225 / 0.075469 (-0.011244) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211799 / 1.841788 (-0.629989) | 14.791822 / 8.074308 (6.717514) | 14.274993 / 10.191392 (4.083601) | 0.163942 / 0.680424 (-0.516482) | 0.017541 / 0.534201 (-0.516660) | 0.396440 / 0.579283 (-0.182843) | 0.427502 / 0.434364 (-0.006861) | 0.494273 / 0.540337 (-0.046064) | 0.586877 / 1.386936 (-0.800059) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006846 / 0.011353 (-0.004506) | 0.004854 / 0.011008 (-0.006154) | 0.075654 / 0.038508 (0.037146) | 0.034295 / 0.023109 (0.011186) | 0.378095 / 0.275898 (0.102197) | 0.407833 / 0.323480 (0.084353) | 0.006155 / 0.007986 (-0.001830) | 0.004259 / 0.004328 (-0.000070) | 0.076195 / 0.004250 (0.071944) | 0.051901 / 0.037052 (0.014849) | 0.375027 / 0.258489 (0.116538) | 0.428189 / 0.293841 (0.134348) | 0.028814 / 0.128546 (-0.099733) | 0.009209 / 0.075646 (-0.066438) | 0.083681 / 0.419271 (-0.335591) | 0.049158 / 0.043533 (0.005625) | 0.366669 / 0.255139 (0.111530) | 0.388767 / 0.283200 (0.105568) | 0.107837 / 0.141683 (-0.033845) | 1.476354 / 1.452155 (0.024199) | 1.580160 / 1.492716 (0.087443) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218900 / 0.018006 (0.200894) | 0.445475 / 0.000490 (0.444985) | 0.000423 / 0.000200 (0.000223) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029740 / 0.037411 (-0.007671) | 0.115192 / 0.014526 (0.100666) | 0.122439 / 0.176557 (-0.054118) | 0.170639 / 0.737135 (-0.566496) | 0.128085 / 0.296338 (-0.168254) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437745 / 0.215209 (0.222536) | 4.385695 / 2.077655 (2.308040) | 2.189893 / 1.504120 (0.685773) | 2.023160 / 1.541195 (0.481965) | 2.112798 / 1.468490 (0.644308) | 0.522497 / 4.584777 (-4.062280) | 3.881356 / 3.745712 (0.135644) | 3.206090 / 5.269862 (-2.063772) | 1.308241 / 4.565676 (-3.257435) | 0.065635 / 0.424275 (-0.358640) | 0.012288 / 0.007607 (0.004681) | 0.537265 / 0.226044 (0.311220) | 5.361641 / 2.268929 (3.092712) | 2.638941 / 55.444624 (-52.805684) | 2.344717 / 6.876477 (-4.531759) | 2.437619 / 2.142072 (0.295546) | 0.645079 / 4.805227 (-4.160149) | 0.143852 / 6.500664 (-6.356812) | 0.065796 / 0.075469 (-0.009673) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276588 / 1.841788 (-0.565200) | 15.239396 / 8.074308 (7.165088) | 13.150591 / 10.191392 (2.959199) | 0.163635 / 0.680424 (-0.516789) | 0.017533 / 0.534201 (-0.516668) | 0.397659 / 0.579283 (-0.181624) | 0.425589 / 0.434364 (-0.008774) | 0.466570 / 0.540337 (-0.073768) | 0.563953 / 1.386936 (-0.822983) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#807d5c5ed4f8db7761b92bed498b2193acce8fb7 \"CML watermark\")\n"
] | 2023-04-30T13:23:01 | 2023-05-22T08:12:39 | 2023-05-22T08:05:31 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5810",
"html_url": "https://github.com/huggingface/datasets/pull/5810",
"diff_url": "https://github.com/huggingface/datasets/pull/5810.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5810.patch",
"merged_at": "2023-05-22T08:05:31"
} | # Overview
I've added an argument `fn_kwargs` to the `map` and `filter` methods of the `IterableDataset` and `IterableDatasetDict` classes.
# Details
Currently, the `map` and `filter` methods of some classes related to `IterableDataset` do not allow specifying the arguments passed to the function. This pull request adds `fn_kwargs` to pass arguments to the mapping function, which lets users preprocess data more flexibly.
Added `fn_kwargs` to the following classes and methods (a description of the argument is also added):
1. class `FilteredExamplesIterable`
2. method `filter` of class `IterableDataset`
3. method `map` of class `IterableDatasetDict`
4. method `filter` of class `IterableDatasetDict`
# Example of changes
Here's an example of how to use the new functionality:
```python
from datasets import IterableDatasetDict
def preprocess_function(example, a=None, b=None):
    # do something
    return example
dataset = IterableDatasetDict(...)
dataset = dataset.map(preprocess_function, fn_kwargs={"a": 1, "b": 2})
```
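A similar hedged sketch for `filter` (hypothetical data and column name, assuming a `datasets` version that includes this change):
```python
from datasets import IterableDataset

def keep_long(example, min_length=0):
    return len(example["text"]) >= min_length

def gen():
    yield {"text": "hi"}
    yield {"text": "hello world"}

ds = IterableDataset.from_generator(gen)
ds = ds.filter(keep_long, fn_kwargs={"min_length": 5})
print(list(ds))  # expected: only the longer example remains
```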
# Related Issues
This pull request is related to the following issue:
https://github.com/huggingface/datasets/issues/3444 .
# Testing
I have added unit tests for the new functionality.
In `test_iterable_dataset.py`:
- Added `test_filtered_examples_iterable_with_fn_kwargs` for [1](#details).
- Added `test_iterable_dataset_filter` for [2](#details).
- Added `test_iterable_dataset_map_with_fn_kwargs`. `fn_kwargs` in `IterableDataset.map` is not a newly added feature, but this test was added because it was previously untested.
In `test_dataset_dict.py`:
- Added `_create_dummy_iterable_dataset` for [3](#details) and [4](#details).
- Added `_create_dummy_iterable_dataset_dict` for [3](#details) and [4](#details).
- Added `test_iterable_map` for [3](#details).
- Added `test_iterable_filter` for [4](#details).
Note that there are no tests for `IterableDatasetDict` on the current main branch. I thought about writing tests for `IterableDatasetDict` in a new file, but decided to add them to the test file for `DatasetDict` (`test_dataset_dict.py`).
# Checklist
- [x] Format the code.
- [x] Added tests.
- [x] Passed tests locally. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5810/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5804/comments | https://api.github.com/repos/huggingface/datasets/issues/5804/events | https://github.com/huggingface/datasets/pull/5804 | 1,688,285,666 | PR_kwDODunzps5PX0Dk | 5,804 | Set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5804). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006448 / 0.011353 (-0.004905) | 0.004440 / 0.011008 (-0.006568) | 0.097837 / 0.038508 (0.059328) | 0.027754 / 0.023109 (0.004645) | 0.306462 / 0.275898 (0.030564) | 0.332454 / 0.323480 (0.008975) | 0.004984 / 0.007986 (-0.003001) | 0.004703 / 0.004328 (0.000375) | 0.075213 / 0.004250 (0.070962) | 0.036524 / 0.037052 (-0.000529) | 0.310149 / 0.258489 (0.051659) | 0.346392 / 0.293841 (0.052552) | 0.031012 / 0.128546 (-0.097534) | 0.011598 / 0.075646 (-0.064049) | 0.323066 / 0.419271 (-0.096206) | 0.042945 / 0.043533 (-0.000588) | 0.302286 / 0.255139 (0.047147) | 0.327813 / 0.283200 (0.044614) | 0.092540 / 0.141683 (-0.049143) | 1.532893 / 1.452155 (0.080739) | 1.556676 / 1.492716 (0.063960) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195126 / 0.018006 (0.177120) | 0.399623 / 0.000490 (0.399133) | 0.003176 / 0.000200 (0.002976) | 0.000068 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023612 / 0.037411 (-0.013799) | 0.097794 / 0.014526 (0.083268) | 0.104665 / 0.176557 (-0.071891) | 0.167145 / 0.737135 (-0.569990) | 0.108769 / 0.296338 (-0.187570) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437818 / 0.215209 (0.222608) | 4.354896 / 2.077655 (2.277242) | 2.092832 / 1.504120 (0.588712) | 1.957630 / 1.541195 (0.416435) | 2.033135 / 1.468490 
(0.564645) | 0.702316 / 4.584777 (-3.882461) | 3.448035 / 3.745712 (-0.297678) | 1.906762 / 5.269862 (-3.363100) | 1.253274 / 4.565676 (-3.312402) | 0.082486 / 0.424275 (-0.341789) | 0.012442 / 0.007607 (0.004835) | 0.532096 / 0.226044 (0.306052) | 5.366580 / 2.268929 (3.097652) | 2.441904 / 55.444624 (-53.002720) | 2.112116 / 6.876477 (-4.764361) | 2.185471 / 2.142072 (0.043398) | 0.797905 / 4.805227 (-4.007322) | 0.149811 / 6.500664 (-6.350853) | 0.066507 / 0.075469 (-0.008962) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206300 / 1.841788 (-0.635487) | 13.620851 / 8.074308 (5.546543) | 14.190666 / 10.191392 (3.999274) | 0.142343 / 0.680424 (-0.538081) | 0.016867 / 0.534201 (-0.517334) | 0.381557 / 0.579283 (-0.197726) | 0.373935 / 0.434364 (-0.060429) | 0.437856 / 0.540337 (-0.102481) | 0.525235 / 1.386936 (-0.861701) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006598 / 0.011353 (-0.004755) | 0.004487 / 0.011008 (-0.006522) | 0.077582 / 0.038508 (0.039073) | 0.028008 / 0.023109 (0.004899) | 0.341602 / 0.275898 (0.065704) | 0.377105 / 0.323480 (0.053625) | 0.004999 / 0.007986 (-0.002986) | 0.004791 / 0.004328 (0.000462) | 0.076418 / 0.004250 (0.072167) | 0.038347 / 0.037052 (0.001295) | 0.343196 / 0.258489 (0.084707) | 0.382459 / 0.293841 (0.088618) | 0.030597 / 0.128546 (-0.097950) | 0.011579 / 0.075646 (-0.064067) | 0.085876 / 0.419271 (-0.333396) | 0.043241 / 0.043533 (-0.000292) | 0.343754 / 0.255139 (0.088615) | 0.380689 / 0.283200 (0.097489) | 0.096015 / 0.141683 (-0.045668) | 1.464419 / 1.452155 (0.012264) | 1.574010 / 1.492716 (0.081294) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.156433 / 0.018006 (0.138427) | 0.403179 / 0.000490 (0.402690) | 0.002415 / 0.000200 (0.002215) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024946 / 0.037411 (-0.012465) | 0.100568 / 0.014526 (0.086042) | 0.106440 / 0.176557 (-0.070117) | 0.158457 / 0.737135 (-0.578678) | 0.110774 / 0.296338 (-0.185564) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434734 / 0.215209 (0.219525) | 4.343874 / 2.077655 (2.266220) | 2.059759 / 1.504120 (0.555639) | 1.855124 / 1.541195 (0.313930) | 1.908567 / 1.468490 (0.440077) | 0.695283 / 4.584777 (-3.889494) | 3.347724 / 3.745712 (-0.397988) | 2.979498 / 5.269862 (-2.290364) | 1.532040 / 4.565676 (-3.033636) | 0.083021 / 0.424275 (-0.341254) | 0.012522 / 0.007607 (0.004915) | 0.540934 / 0.226044 (0.314890) | 5.385690 / 2.268929 (3.116762) | 2.507409 / 55.444624 (-52.937216) | 2.160537 / 6.876477 (-4.715939) | 2.269195 / 2.142072 (0.127123) | 0.804718 / 4.805227 (-4.000509) | 0.152432 / 6.500664 (-6.348232) | 0.068783 / 0.075469 (-0.006686) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294698 / 1.841788 (-0.547090) | 14.152792 / 8.074308 (6.078484) | 14.233132 / 10.191392 (4.041740) | 0.143655 / 0.680424 (-0.536768) | 0.016844 / 0.534201 (-0.517357) | 0.380246 / 0.579283 (-0.199037) | 0.381730 / 0.434364 (-0.052633) | 0.456838 / 0.540337 (-0.083499) | 0.543677 / 1.386936 (-0.843259) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b28d5610887f2e107765f5f1557679184db08214 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008586 / 0.011353 (-0.002767) | 0.005886 / 0.011008 (-0.005122) | 0.114522 / 0.038508 (0.076014) | 0.040966 / 0.023109 (0.017857) | 0.366655 / 0.275898 (0.090757) | 0.408765 / 0.323480 (0.085285) | 0.006822 / 0.007986 (-0.001164) | 0.004508 / 0.004328 (0.000180) | 0.084715 / 0.004250 (0.080465) | 0.054007 / 0.037052 (0.016954) | 0.380500 / 0.258489 (0.122011) | 0.410377 / 0.293841 (0.116536) | 0.041040 / 0.128546 (-0.087507) | 0.013940 / 0.075646 (-0.061707) | 0.398456 / 0.419271 (-0.020816) | 0.059315 / 0.043533 (0.015782) | 0.353640 / 0.255139 (0.098501) | 0.388682 / 0.283200 (0.105482) | 0.121744 / 0.141683 (-0.019939) | 1.729306 / 1.452155 (0.277151) | 1.824768 / 1.492716 (0.332052) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228806 / 0.018006 (0.210800) | 0.492790 / 0.000490 (0.492300) | 0.010815 / 0.000200 (0.010615) | 0.000372 / 0.000054 (0.000318) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031750 / 0.037411 (-0.005662) | 0.127160 / 0.014526 (0.112635) | 0.136717 / 0.176557 (-0.039839) | 0.205590 / 0.737135 (-0.531545) | 0.142596 / 0.296338 (-0.153742) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.486419 / 0.215209 (0.271210) | 4.858572 / 2.077655 (2.780918) | 2.173867 / 1.504120 (0.669747) | 1.934619 / 1.541195 (0.393424) | 2.104185 / 1.468490 
(0.635695) | 0.837913 / 4.584777 (-3.746864) | 4.552192 / 3.745712 (0.806480) | 2.565040 / 5.269862 (-2.704822) | 1.808499 / 4.565676 (-2.757178) | 0.103283 / 0.424275 (-0.320993) | 0.015040 / 0.007607 (0.007433) | 0.602325 / 0.226044 (0.376281) | 6.038655 / 2.268929 (3.769727) | 2.759789 / 55.444624 (-52.684835) | 2.330990 / 6.876477 (-4.545487) | 2.404111 / 2.142072 (0.262038) | 1.011637 / 4.805227 (-3.793590) | 0.202142 / 6.500664 (-6.298522) | 0.079496 / 0.075469 (0.004026) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.429543 / 1.841788 (-0.412245) | 18.052409 / 8.074308 (9.978101) | 16.989154 / 10.191392 (6.797762) | 0.208981 / 0.680424 (-0.471443) | 0.020490 / 0.534201 (-0.513711) | 0.502746 / 0.579283 (-0.076537) | 0.491769 / 0.434364 (0.057405) | 0.581970 / 0.540337 (0.041632) | 0.695816 / 1.386936 (-0.691120) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008449 / 0.011353 (-0.002904) | 0.006633 / 0.011008 (-0.004375) | 0.088638 / 0.038508 (0.050130) | 0.040013 / 0.023109 (0.016904) | 0.413108 / 0.275898 (0.137210) | 0.446310 / 0.323480 (0.122830) | 0.006515 / 0.007986 (-0.001471) | 0.006223 / 0.004328 (0.001894) | 0.089823 / 0.004250 (0.085573) | 0.052029 / 0.037052 (0.014977) | 0.407263 / 0.258489 (0.148774) | 0.449416 / 0.293841 (0.155576) | 0.041810 / 0.128546 (-0.086736) | 0.014604 / 0.075646 (-0.061042) | 0.103728 / 0.419271 (-0.315543) | 0.058212 / 0.043533 (0.014679) | 0.408936 / 0.255139 (0.153797) | 0.436727 / 0.283200 (0.153528) | 0.124344 / 0.141683 (-0.017339) | 1.752112 / 1.452155 (0.299957) | 1.859104 / 1.492716 (0.366387) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231172 / 0.018006 (0.213166) | 0.502974 / 0.000490 (0.502485) | 0.005586 / 0.000200 (0.005386) | 0.000137 / 0.000054 (0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034097 / 0.037411 (-0.003314) | 0.133780 / 0.014526 (0.119254) | 0.142321 / 0.176557 (-0.034236) | 0.199807 / 0.737135 (-0.537329) | 0.150073 / 0.296338 (-0.146266) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.515658 / 0.215209 (0.300449) | 5.129783 / 2.077655 (3.052129) | 2.534767 / 1.504120 (1.030648) | 2.352468 / 1.541195 (0.811274) | 2.430708 / 1.468490 (0.962218) | 0.850087 / 4.584777 (-3.734690) | 4.529622 / 3.745712 (0.783910) | 2.451986 / 5.269862 (-2.817876) | 1.569568 / 4.565676 (-2.996109) | 0.102907 / 0.424275 (-0.321368) | 0.014420 / 0.007607 (0.006813) | 0.635124 / 0.226044 (0.409080) | 6.260496 / 2.268929 (3.991568) | 3.094984 / 55.444624 (-52.349640) | 2.780629 / 6.876477 (-4.095847) | 2.947620 / 2.142072 (0.805548) | 1.002397 / 4.805227 (-3.802830) | 0.200502 / 6.500664 (-6.300162) | 0.076577 / 0.075469 (0.001107) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.505958 / 1.841788 (-0.335829) | 18.364986 / 8.074308 (10.290678) | 16.707214 / 10.191392 (6.515822) | 0.210976 / 0.680424 (-0.469447) | 0.022077 / 0.534201 (-0.512124) | 0.516174 / 0.579283 (-0.063109) | 0.502469 / 0.434364 (0.068105) | 0.626790 / 0.540337 (0.086453) | 0.747230 / 1.386936 (-0.639706) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bc5fef5b6d91f009e4101684adcb374df2c170f6 \"CML watermark\")\n"
] | 2023-04-28T10:10:01 | 2023-04-28T10:18:51 | 2023-04-28T10:10:29 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5804",
"html_url": "https://github.com/huggingface/datasets/pull/5804",
"diff_url": "https://github.com/huggingface/datasets/pull/5804.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5804.patch",
"merged_at": "2023-04-28T10:10:29"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5804/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5803 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5803/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5803/comments | https://api.github.com/repos/huggingface/datasets/issues/5803/events | https://github.com/huggingface/datasets/pull/5803 | 1,688,256,290 | PR_kwDODunzps5PXtte | 5,803 | Release: 2.12.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5803). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008303 / 0.011353 (-0.003050) | 0.005681 / 0.011008 (-0.005327) | 0.111830 / 0.038508 (0.073322) | 0.039222 / 0.023109 (0.016112) | 0.336773 / 0.275898 (0.060875) | 0.376673 / 0.323480 (0.053193) | 0.006756 / 0.007986 (-0.001230) | 0.006078 / 0.004328 (0.001749) | 0.083552 / 0.004250 (0.079301) | 0.054430 / 0.037052 (0.017377) | 0.337310 / 0.258489 (0.078821) | 0.386138 / 0.293841 (0.092297) | 0.040068 / 0.128546 (-0.088478) | 0.013895 / 0.075646 (-0.061751) | 0.384174 / 0.419271 (-0.035097) | 0.058244 / 0.043533 (0.014711) | 0.342410 / 0.255139 (0.087271) | 0.362417 / 0.283200 (0.079217) | 0.123470 / 0.141683 (-0.018213) | 1.662938 / 1.452155 (0.210784) | 1.786488 / 1.492716 (0.293771) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232629 / 0.018006 (0.214622) | 0.478252 / 0.000490 (0.477762) | 0.008519 / 0.000200 (0.008319) | 0.000111 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031222 / 0.037411 (-0.006190) | 0.125875 / 0.014526 (0.111350) | 0.138995 / 0.176557 (-0.037562) | 0.213073 / 0.737135 (-0.524062) | 0.141848 / 0.296338 (-0.154490) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463648 / 0.215209 (0.248439) | 4.582969 / 2.077655 (2.505314) | 2.104622 / 1.504120 (0.600502) | 1.887697 / 1.541195 (0.346502) | 1.946096 / 1.468490 
(0.477606) | 0.809008 / 4.584777 (-3.775769) | 4.527871 / 3.745712 (0.782159) | 4.862721 / 5.269862 (-0.407141) | 2.423257 / 4.565676 (-2.142419) | 0.101080 / 0.424275 (-0.323196) | 0.014767 / 0.007607 (0.007160) | 0.574471 / 0.226044 (0.348427) | 5.746445 / 2.268929 (3.477516) | 2.682584 / 55.444624 (-52.762040) | 2.320113 / 6.876477 (-4.556364) | 2.474530 / 2.142072 (0.332458) | 0.992979 / 4.805227 (-3.812249) | 0.200812 / 6.500664 (-6.299852) | 0.076291 / 0.075469 (0.000822) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.395533 / 1.841788 (-0.446254) | 17.418803 / 8.074308 (9.344495) | 16.584875 / 10.191392 (6.393483) | 0.167739 / 0.680424 (-0.512685) | 0.020923 / 0.534201 (-0.513278) | 0.500788 / 0.579283 (-0.078496) | 0.510270 / 0.434364 (0.075906) | 0.589608 / 0.540337 (0.049270) | 0.694233 / 1.386936 (-0.692703) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008440 / 0.011353 (-0.002913) | 0.005871 / 0.011008 (-0.005137) | 0.085805 / 0.038508 (0.047297) | 0.039324 / 0.023109 (0.016215) | 0.400587 / 0.275898 (0.124689) | 0.431729 / 0.323480 (0.108249) | 0.006557 / 0.007986 (-0.001429) | 0.005778 / 0.004328 (0.001450) | 0.084394 / 0.004250 (0.080144) | 0.055274 / 0.037052 (0.018222) | 0.410568 / 0.258489 (0.152079) | 0.439952 / 0.293841 (0.146111) | 0.040335 / 0.128546 (-0.088211) | 0.013968 / 0.075646 (-0.061679) | 0.098765 / 0.419271 (-0.320507) | 0.055897 / 0.043533 (0.012364) | 0.387584 / 0.255139 (0.132445) | 0.412568 / 0.283200 (0.129368) | 0.120393 / 0.141683 (-0.021290) | 1.730996 / 1.452155 (0.278841) | 1.821538 / 1.492716 (0.328822) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245688 / 0.018006 (0.227682) | 0.484888 / 0.000490 (0.484398) | 0.000485 / 0.000200 (0.000285) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032340 / 0.037411 (-0.005072) | 0.130819 / 0.014526 (0.116293) | 0.138491 / 0.176557 (-0.038065) | 0.196902 / 0.737135 (-0.540233) | 0.145404 / 0.296338 (-0.150935) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.487643 / 0.215209 (0.272434) | 4.818956 / 2.077655 (2.741301) | 2.332316 / 1.504120 (0.828196) | 2.102018 / 1.541195 (0.560823) | 2.156743 / 1.468490 (0.688253) | 0.803365 / 4.584777 (-3.781412) | 4.308561 / 3.745712 (0.562849) | 2.373331 / 5.269862 (-2.896530) | 1.539474 / 4.565676 (-3.026202) | 0.099081 / 0.424275 (-0.325194) | 0.014627 / 0.007607 (0.007020) | 0.609883 / 0.226044 (0.383838) | 6.092402 / 2.268929 (3.823474) | 2.858137 / 55.444624 (-52.586488) | 2.463256 / 6.876477 (-4.413220) | 2.637048 / 2.142072 (0.494976) | 0.959552 / 4.805227 (-3.845676) | 0.194170 / 6.500664 (-6.306495) | 0.075231 / 0.075469 (-0.000238) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.516502 / 1.841788 (-0.325285) | 18.077893 / 8.074308 (10.003585) | 16.507961 / 10.191392 (6.316569) | 0.171643 / 0.680424 (-0.508780) | 0.020378 / 0.534201 (-0.513823) | 0.491508 / 0.579283 (-0.087775) | 0.492136 / 0.434364 (0.057772) | 0.602258 / 0.540337 (0.061920) | 0.719882 / 1.386936 (-0.667054) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#330ac3e95fd3f2d61bac31b5b9c24399a5b54723 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006572 / 0.011353 (-0.004781) | 0.004647 / 0.011008 (-0.006362) | 0.098277 / 0.038508 (0.059769) | 0.027937 / 0.023109 (0.004828) | 0.339833 / 0.275898 (0.063935) | 0.398305 / 0.323480 (0.074825) | 0.005093 / 0.007986 (-0.002893) | 0.003374 / 0.004328 (-0.000954) | 0.075287 / 0.004250 (0.071037) | 0.037355 / 0.037052 (0.000303) | 0.339779 / 0.258489 (0.081290) | 0.403756 / 0.293841 (0.109915) | 0.030705 / 0.128546 (-0.097841) | 0.011596 / 0.075646 (-0.064050) | 0.323809 / 0.419271 (-0.095463) | 0.043357 / 0.043533 (-0.000176) | 0.342817 / 0.255139 (0.087678) | 0.386330 / 0.283200 (0.103130) | 0.088229 / 0.141683 (-0.053454) | 1.466017 / 1.452155 (0.013862) | 1.566551 / 1.492716 (0.073835) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196276 / 0.018006 (0.178269) | 0.420321 / 0.000490 (0.419831) | 0.002234 / 0.000200 (0.002034) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023999 / 0.037411 (-0.013412) | 0.095117 / 0.014526 (0.080592) | 0.102544 / 0.176557 (-0.074013) | 0.164796 / 0.737135 (-0.572340) | 0.107030 / 0.296338 (-0.189309) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429299 / 0.215209 (0.214089) | 4.272503 / 2.077655 (2.194849) | 2.101890 / 1.504120 (0.597771) | 1.978907 / 1.541195 (0.437713) | 2.008993 / 1.468490 
(0.540503) | 0.695171 / 4.584777 (-3.889606) | 3.427050 / 3.745712 (-0.318662) | 1.892945 / 5.269862 (-3.376917) | 1.247156 / 4.565676 (-3.318521) | 0.082576 / 0.424275 (-0.341699) | 0.012526 / 0.007607 (0.004918) | 0.526338 / 0.226044 (0.300293) | 5.313855 / 2.268929 (3.044927) | 2.421134 / 55.444624 (-53.023490) | 2.072026 / 6.876477 (-4.804451) | 2.159846 / 2.142072 (0.017773) | 0.800753 / 4.805227 (-4.004474) | 0.150507 / 6.500664 (-6.350157) | 0.066378 / 0.075469 (-0.009091) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218709 / 1.841788 (-0.623079) | 13.649239 / 8.074308 (5.574931) | 13.952762 / 10.191392 (3.761370) | 0.141967 / 0.680424 (-0.538457) | 0.016443 / 0.534201 (-0.517758) | 0.380408 / 0.579283 (-0.198875) | 0.377693 / 0.434364 (-0.056671) | 0.439819 / 0.540337 (-0.100518) | 0.529667 / 1.386936 (-0.857269) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006722 / 0.011353 (-0.004630) | 0.004495 / 0.011008 (-0.006513) | 0.075459 / 0.038508 (0.036951) | 0.028135 / 0.023109 (0.005026) | 0.349904 / 0.275898 (0.074006) | 0.390620 / 0.323480 (0.067140) | 0.005175 / 0.007986 (-0.002810) | 0.004720 / 0.004328 (0.000392) | 0.074243 / 0.004250 (0.069993) | 0.039084 / 0.037052 (0.002032) | 0.352486 / 0.258489 (0.093997) | 0.397549 / 0.293841 (0.103708) | 0.030596 / 0.128546 (-0.097950) | 0.011627 / 0.075646 (-0.064020) | 0.083394 / 0.419271 (-0.335878) | 0.042155 / 0.043533 (-0.001378) | 0.345668 / 0.255139 (0.090529) | 0.383474 / 0.283200 (0.100275) | 0.096530 / 0.141683 (-0.045153) | 1.493360 / 1.452155 (0.041206) | 1.572259 / 1.492716 (0.079543) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.162605 / 0.018006 (0.144599) | 0.409513 / 0.000490 (0.409023) | 0.002029 / 0.000200 (0.001829) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025824 / 0.037411 (-0.011588) | 0.102439 / 0.014526 (0.087913) | 0.109515 / 0.176557 (-0.067041) | 0.160650 / 0.737135 (-0.576486) | 0.112971 / 0.296338 (-0.183367) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433293 / 0.215209 (0.218084) | 4.340286 / 2.077655 (2.262631) | 2.055857 / 1.504120 (0.551737) | 1.854451 / 1.541195 (0.313256) | 1.912752 / 1.468490 (0.444261) | 0.700076 / 4.584777 (-3.884701) | 3.361542 / 3.745712 (-0.384170) | 2.760204 / 5.269862 (-2.509658) | 1.477395 / 4.565676 (-3.088282) | 0.082868 / 0.424275 (-0.341407) | 0.012479 / 0.007607 (0.004872) | 0.532749 / 0.226044 (0.306704) | 5.323701 / 2.268929 (3.054772) | 2.509524 / 55.444624 (-52.935100) | 2.168668 / 6.876477 (-4.707809) | 2.259112 / 2.142072 (0.117040) | 0.806686 / 4.805227 (-3.998542) | 0.154620 / 6.500664 (-6.346044) | 0.068348 / 0.075469 (-0.007121) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.316512 / 1.841788 (-0.525276) | 14.158143 / 8.074308 (6.083835) | 14.110643 / 10.191392 (3.919251) | 0.143760 / 0.680424 (-0.536664) | 0.016851 / 0.534201 (-0.517350) | 0.376594 / 0.579283 (-0.202689) | 0.386957 / 0.434364 (-0.047407) | 0.466185 / 0.540337 (-0.074152) | 0.550269 / 1.386936 (-0.836667) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8e1af7b30c94ce77abd9de732f19198e197d900c \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009457 / 0.011353 (-0.001896) | 0.006453 / 0.011008 (-0.004555) | 0.136392 / 0.038508 (0.097884) | 0.038378 / 0.023109 (0.015269) | 0.413171 / 0.275898 (0.137273) | 0.451605 / 0.323480 (0.128126) | 0.007123 / 0.007986 (-0.000863) | 0.006316 / 0.004328 (0.001987) | 0.103009 / 0.004250 (0.098758) | 0.049182 / 0.037052 (0.012130) | 0.398635 / 0.258489 (0.140146) | 0.463146 / 0.293841 (0.169305) | 0.056247 / 0.128546 (-0.072299) | 0.019589 / 0.075646 (-0.056058) | 0.475882 / 0.419271 (0.056610) | 0.094918 / 0.043533 (0.051385) | 0.416502 / 0.255139 (0.161363) | 0.447129 / 0.283200 (0.163929) | 0.133314 / 0.141683 (-0.008369) | 2.132888 / 1.452155 (0.680733) | 2.073383 / 1.492716 (0.580667) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273037 / 0.018006 (0.255030) | 0.625675 / 0.000490 (0.625185) | 0.003449 / 0.000200 (0.003249) | 0.000185 / 0.000054 (0.000130) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031889 / 0.037411 (-0.005523) | 0.131673 / 0.014526 (0.117148) | 0.141575 / 0.176557 (-0.034982) | 0.214978 / 0.737135 (-0.522158) | 0.145586 / 0.296338 (-0.150752) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.711135 / 0.215209 (0.495926) | 7.162492 / 2.077655 (5.084837) | 2.906028 / 1.504120 (1.401908) | 2.488855 / 1.541195 (0.947660) | 2.574628 / 1.468490 
(1.106138) | 1.587824 / 4.584777 (-2.996953) | 6.332962 / 3.745712 (2.587250) | 5.419578 / 5.269862 (0.149717) | 2.935413 / 4.565676 (-1.630263) | 0.169159 / 0.424275 (-0.255116) | 0.015358 / 0.007607 (0.007751) | 0.862036 / 0.226044 (0.635992) | 8.559256 / 2.268929 (6.290328) | 3.530756 / 55.444624 (-51.913868) | 2.626288 / 6.876477 (-4.250188) | 2.770063 / 2.142072 (0.627990) | 1.500116 / 4.805227 (-3.305112) | 0.265109 / 6.500664 (-6.235555) | 0.084944 / 0.075469 (0.009475) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631060 / 1.841788 (-0.210728) | 19.022827 / 8.074308 (10.948519) | 22.973632 / 10.191392 (12.782240) | 0.296265 / 0.680424 (-0.384158) | 0.032317 / 0.534201 (-0.501884) | 0.624171 / 0.579283 (0.044888) | 0.690643 / 0.434364 (0.256279) | 0.691206 / 0.540337 (0.150869) | 0.758855 / 1.386936 (-0.628081) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009441 / 0.011353 (-0.001912) | 0.006270 / 0.011008 (-0.004739) | 0.110284 / 0.038508 (0.071776) | 0.035952 / 0.023109 (0.012842) | 0.521894 / 0.275898 (0.245996) | 0.582624 / 0.323480 (0.259144) | 0.011400 / 0.007986 (0.003414) | 0.004677 / 0.004328 (0.000348) | 0.115721 / 0.004250 (0.111470) | 0.048521 / 0.037052 (0.011469) | 0.497142 / 0.258489 (0.238653) | 0.573733 / 0.293841 (0.279892) | 0.055788 / 0.128546 (-0.072759) | 0.020949 / 0.075646 (-0.054697) | 0.132968 / 0.419271 (-0.286303) | 0.063045 / 0.043533 (0.019512) | 0.537769 / 0.255139 (0.282630) | 0.527560 / 0.283200 (0.244361) | 0.123756 / 0.141683 (-0.017927) | 1.994111 / 1.452155 (0.541956) | 2.104623 / 1.492716 (0.611907) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.279057 / 0.018006 (0.261051) | 0.537342 / 0.000490 (0.536852) | 0.007782 / 0.000200 (0.007582) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032018 / 0.037411 (-0.005394) | 0.133456 / 0.014526 (0.118930) | 0.142039 / 0.176557 (-0.034517) | 0.213769 / 0.737135 (-0.523366) | 0.143811 / 0.296338 (-0.152527) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.680142 / 0.215209 (0.464933) | 6.450439 / 2.077655 (4.372784) | 2.820724 / 1.504120 (1.316604) | 2.520407 / 1.541195 (0.979212) | 2.568972 / 1.468490 (1.100482) | 1.250584 / 4.584777 (-3.334193) | 6.108222 / 3.745712 (2.362509) | 3.065965 / 5.269862 (-2.203897) | 2.108675 / 4.565676 (-2.457002) | 0.167870 / 0.424275 (-0.256405) | 0.015127 / 0.007607 (0.007520) | 0.849645 / 0.226044 (0.623600) | 8.508727 / 2.268929 (6.239799) | 3.707897 / 55.444624 (-51.736727) | 3.009279 / 6.876477 (-3.867198) | 3.067179 / 2.142072 (0.925106) | 1.516370 / 4.805227 (-3.288858) | 0.264845 / 6.500664 (-6.235819) | 0.095137 / 0.075469 (0.019668) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.826306 / 1.841788 (-0.015481) | 20.119641 / 8.074308 (12.045333) | 21.532158 / 10.191392 (11.340766) | 0.278631 / 0.680424 (-0.401793) | 0.029494 / 0.534201 (-0.504707) | 0.621887 / 0.579283 (0.042604) | 0.686864 / 0.434364 (0.252500) | 0.695412 / 0.540337 (0.155074) | 0.864829 / 1.386936 (-0.522108) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8e1af7b30c94ce77abd9de732f19198e197d900c \"CML watermark\")\n"
] | 2023-04-28T09:52:11 | 2023-04-28T10:18:56 | 2023-04-28T09:54:43 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5803",
"html_url": "https://github.com/huggingface/datasets/pull/5803",
"diff_url": "https://github.com/huggingface/datasets/pull/5803.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5803.patch",
"merged_at": "2023-04-28T09:54:43"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5803/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5802/comments | https://api.github.com/repos/huggingface/datasets/issues/5802/events | https://github.com/huggingface/datasets/pull/5802 | 1,686,509,799 | PR_kwDODunzps5PR199 | 5,802 | Validate non-empty data_files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007818 / 0.011353 (-0.003535) | 0.005456 / 0.011008 (-0.005552) | 0.114685 / 0.038508 (0.076177) | 0.038398 / 0.023109 (0.015289) | 0.351289 / 0.275898 (0.075391) | 0.389170 / 0.323480 (0.065690) | 0.006213 / 0.007986 (-0.001773) | 0.005796 / 0.004328 (0.001467) | 0.085315 / 0.004250 (0.081065) | 0.049251 / 0.037052 (0.012198) | 0.368119 / 0.258489 (0.109630) | 0.394725 / 0.293841 (0.100884) | 0.040390 / 0.128546 (-0.088157) | 0.014076 / 0.075646 (-0.061570) | 0.393771 / 0.419271 (-0.025500) | 0.058929 / 0.043533 (0.015397) | 0.349526 / 0.255139 (0.094387) | 0.378409 / 0.283200 (0.095210) | 0.114354 / 0.141683 (-0.027329) | 1.749244 / 1.452155 (0.297089) | 1.847946 / 1.492716 (0.355229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241648 / 0.018006 (0.223641) | 0.468419 / 0.000490 (0.467929) | 0.004311 / 0.000200 (0.004111) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029978 / 0.037411 (-0.007433) | 0.121832 / 0.014526 (0.107306) | 0.133516 / 0.176557 (-0.043041) | 0.199174 / 0.737135 (-0.537961) | 0.138181 / 0.296338 (-0.158158) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478346 / 0.215209 (0.263137) | 4.723967 / 2.077655 (2.646312) | 2.107724 / 1.504120 (0.603604) | 1.874810 / 1.541195 (0.333615) | 1.911568 / 1.468490 
(0.443078) | 0.800966 / 4.584777 (-3.783811) | 4.399032 / 3.745712 (0.653320) | 2.346160 / 5.269862 (-2.923702) | 1.506673 / 4.565676 (-3.059004) | 0.099119 / 0.424275 (-0.325156) | 0.014055 / 0.007607 (0.006448) | 0.582419 / 0.226044 (0.356375) | 5.789147 / 2.268929 (3.520218) | 2.632443 / 55.444624 (-52.812182) | 2.217630 / 6.876477 (-4.658846) | 2.337709 / 2.142072 (0.195637) | 0.995345 / 4.805227 (-3.809882) | 0.200040 / 6.500664 (-6.300624) | 0.076855 / 0.075469 (0.001386) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.386104 / 1.841788 (-0.455683) | 17.109772 / 8.074308 (9.035464) | 16.147612 / 10.191392 (5.956220) | 0.162846 / 0.680424 (-0.517577) | 0.020692 / 0.534201 (-0.513509) | 0.495752 / 0.579283 (-0.083531) | 0.475715 / 0.434364 (0.041351) | 0.619826 / 0.540337 (0.079488) | 0.720745 / 1.386936 (-0.666191) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008255 / 0.011353 (-0.003098) | 0.006118 / 0.011008 (-0.004890) | 0.088004 / 0.038508 (0.049496) | 0.039225 / 0.023109 (0.016116) | 0.399290 / 0.275898 (0.123392) | 0.432272 / 0.323480 (0.108792) | 0.007382 / 0.007986 (-0.000603) | 0.004576 / 0.004328 (0.000248) | 0.086511 / 0.004250 (0.082260) | 0.050472 / 0.037052 (0.013420) | 0.404160 / 0.258489 (0.145671) | 0.445356 / 0.293841 (0.151515) | 0.041549 / 0.128546 (-0.086997) | 0.014148 / 0.075646 (-0.061498) | 0.101697 / 0.419271 (-0.317574) | 0.057474 / 0.043533 (0.013941) | 0.395093 / 0.255139 (0.139954) | 0.418613 / 0.283200 (0.135414) | 0.123217 / 0.141683 (-0.018466) | 1.726146 / 1.452155 (0.273991) | 1.852746 / 1.492716 (0.360029) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256876 / 0.018006 (0.238870) | 0.476336 / 0.000490 (0.475846) | 0.000465 / 0.000200 (0.000265) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034304 / 0.037411 (-0.003107) | 0.132617 / 0.014526 (0.118091) | 0.141712 / 0.176557 (-0.034845) | 0.198101 / 0.737135 (-0.539034) | 0.150877 / 0.296338 (-0.145461) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.504717 / 0.215209 (0.289508) | 5.035060 / 2.077655 (2.957405) | 2.494812 / 1.504120 (0.990692) | 2.306601 / 1.541195 (0.765406) | 2.481860 / 1.468490 (1.013370) | 0.826041 / 4.584777 (-3.758736) | 4.414748 / 3.745712 (0.669036) | 2.417899 / 5.269862 (-2.851963) | 1.574548 / 4.565676 (-2.991128) | 0.101712 / 0.424275 (-0.322563) | 0.014388 / 0.007607 (0.006781) | 0.616674 / 0.226044 (0.390630) | 6.180382 / 2.268929 (3.911453) | 2.969110 / 55.444624 (-52.475514) | 2.574383 / 6.876477 (-4.302094) | 2.711008 / 2.142072 (0.568935) | 0.997679 / 4.805227 (-3.807548) | 0.201241 / 6.500664 (-6.299423) | 0.076132 / 0.075469 (0.000663) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.542704 / 1.841788 (-0.299084) | 17.610700 / 8.074308 (9.536392) | 16.152973 / 10.191392 (5.961581) | 0.166040 / 0.680424 (-0.514384) | 0.020286 / 0.534201 (-0.513915) | 0.506724 / 0.579283 (-0.072559) | 0.484348 / 0.434364 (0.049984) | 0.606524 / 0.540337 (0.066187) | 0.734997 / 1.386936 (-0.651939) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a200ec9126a0879f3d38d4e9e3787633a23af42e \"CML watermark\")\n"
] | 2023-04-27T09:51:36 | 2023-04-27T14:59:47 | 2023-04-27T14:51:40 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5802",
"html_url": "https://github.com/huggingface/datasets/pull/5802",
"diff_url": "https://github.com/huggingface/datasets/pull/5802.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5802.patch",
"merged_at": "2023-04-27T14:51:40"
} | This PR adds validation of `data_files`, so that it must be either a non-empty `str`, `list`, or `dict`, or `None` (the default).
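A minimal sketch of the validation described above (the helper name and error message below are illustrative, not the exact code from the PR):
```python
def _check_non_empty_data_files(data_files):
    # Hypothetical helper: accept None (the default) or a non-empty
    # str / list / dict, and reject empty values explicitly.
    if data_files is not None and not data_files:
        raise ValueError(
            f"Expected non-empty data_files, but got an empty {type(data_files).__name__}"
        )
```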
See: https://github.com/huggingface/datasets/pull/5787#discussion_r1178862327 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5802/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5800 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5800/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5800/comments | https://api.github.com/repos/huggingface/datasets/issues/5800/events | https://github.com/huggingface/datasets/pull/5800 | 1,686,348,096 | PR_kwDODunzps5PRTRh | 5,800 | Change downloaded file permission based on umask | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2023-04-27T08:13:30 | 2023-04-27T09:33:05 | 2023-04-27T09:30:16 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5800",
"html_url": "https://github.com/huggingface/datasets/pull/5800",
"diff_url": "https://github.com/huggingface/datasets/pull/5800.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5800.patch",
"merged_at": "2023-04-27T09:30:16"
} | This PR changes the permissions of files downloaded to the cache, so that the umask is taken into account.
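A minimal sketch of the idea, assuming the standard Python pattern for reading the process umask (`os.umask` has no getter, so it is set and immediately restored):
```python
import os

def chmod_with_umask(path):
    # Read the current umask without permanently changing it.
    umask = os.umask(0o666)
    os.umask(umask)
    # Grant the default rw permissions minus whatever the umask forbids,
    # e.g. umask 0o022 yields mode 0o644.
    os.chmod(path, 0o666 & ~umask)
```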
Related to:
- #2157
Fix #5799.
CC: @stas00 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5800/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5799 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5799/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5799/comments | https://api.github.com/repos/huggingface/datasets/issues/5799/events | https://github.com/huggingface/datasets/issues/5799 | 1,686,334,572 | I_kwDODunzps5kg2xs | 5,799 | Files downloaded to cache do not respect umask | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-04-27T08:06:05 | 2023-04-27T09:30:17 | 2023-04-27T09:30:17 | MEMBER | null | null | null | As reported by @stas00, files downloaded to the cache do not respect umask:
```bash
$ ls -l /path/to/cache/datasets/downloads/
-rw------- 1 username username 150M Apr 25 16:41 5e646c1d600f065adaeb134e536f6f2f296a6d804bd1f0e1fdcd20ee28c185c6
```
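One way to check a cached file's mode against the current umask (an illustrative snippet; the path is a placeholder):
```python
import os
import stat

path = "/path/to/cache/datasets/downloads/<file>"  # placeholder path

# Read the umask without permanently changing it (set, then restore).
umask = os.umask(0)
os.umask(umask)

actual = stat.S_IMODE(os.stat(path).st_mode)
expected = 0o666 & ~umask  # what the mode should be if the umask were respected
print(oct(actual), "vs expected", oct(expected))
```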
Related to:
- #2065 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5799/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5796 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5796/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5796/comments | https://api.github.com/repos/huggingface/datasets/issues/5796/events | https://github.com/huggingface/datasets/pull/5796 | 1,685,451,919 | PR_kwDODunzps5PORm- | 5,796 | Spark docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010480 / 0.011353 (-0.000872) | 0.006743 / 0.011008 (-0.004265) | 0.126503 / 0.038508 (0.087995) | 0.036918 / 0.023109 (0.013808) | 0.387372 / 0.275898 (0.111474) | 0.456930 / 0.323480 (0.133450) | 0.008038 / 0.007986 (0.000052) | 0.005082 / 0.004328 (0.000753) | 0.093312 / 0.004250 (0.089062) | 0.065440 / 0.037052 (0.028387) | 0.378172 / 0.258489 (0.119683) | 0.430049 / 0.293841 (0.136208) | 0.054372 / 0.128546 (-0.074174) | 0.021875 / 0.075646 (-0.053772) | 0.441722 / 0.419271 (0.022450) | 0.063716 / 0.043533 (0.020183) | 0.375718 / 0.255139 (0.120579) | 0.413688 / 0.283200 (0.130488) | 0.122583 / 0.141683 (-0.019100) | 1.835992 / 1.452155 (0.383838) | 1.915862 / 1.492716 (0.423145) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275305 / 0.018006 (0.257299) | 0.617170 / 0.000490 (0.616680) | 0.006467 / 0.000200 (0.006267) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031057 / 0.037411 (-0.006354) | 0.135178 / 0.014526 (0.120653) | 0.139265 / 0.176557 (-0.037292) | 0.221597 / 0.737135 (-0.515538) | 0.147632 / 0.296338 (-0.148706) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.640621 / 0.215209 (0.425411) | 6.354359 / 2.077655 (4.276704) | 2.748945 / 1.504120 (1.244825) | 2.396637 / 1.541195 (0.855442) | 2.395193 / 1.468490 
(0.926703) | 1.209604 / 4.584777 (-3.375173) | 5.626901 / 3.745712 (1.881189) | 3.300941 / 5.269862 (-1.968920) | 2.123598 / 4.565676 (-2.442078) | 0.144270 / 0.424275 (-0.280005) | 0.015114 / 0.007607 (0.007507) | 0.812352 / 0.226044 (0.586307) | 8.024250 / 2.268929 (5.755322) | 3.557589 / 55.444624 (-51.887036) | 2.840632 / 6.876477 (-4.035845) | 3.152319 / 2.142072 (1.010246) | 1.447232 / 4.805227 (-3.357995) | 0.251740 / 6.500664 (-6.248924) | 0.083725 / 0.075469 (0.008256) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.568032 / 1.841788 (-0.273755) | 18.463860 / 8.074308 (10.389552) | 21.217395 / 10.191392 (11.026003) | 0.228457 / 0.680424 (-0.451967) | 0.031398 / 0.534201 (-0.502803) | 0.547627 / 0.579283 (-0.031656) | 0.642921 / 0.434364 (0.208557) | 0.687857 / 0.540337 (0.147520) | 0.800940 / 1.386936 (-0.585996) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009933 / 0.011353 (-0.001420) | 0.006065 / 0.011008 (-0.004943) | 0.102556 / 0.038508 (0.064048) | 0.034646 / 0.023109 (0.011537) | 0.437951 / 0.275898 (0.162053) | 0.482439 / 0.323480 (0.158959) | 0.007715 / 0.007986 (-0.000271) | 0.007426 / 0.004328 (0.003098) | 0.096427 / 0.004250 (0.092177) | 0.052983 / 0.037052 (0.015930) | 0.464533 / 0.258489 (0.206044) | 0.484848 / 0.293841 (0.191007) | 0.050415 / 0.128546 (-0.078131) | 0.021001 / 0.075646 (-0.054645) | 0.121214 / 0.419271 (-0.298058) | 0.061658 / 0.043533 (0.018125) | 0.431898 / 0.255139 (0.176759) | 0.482106 / 0.283200 (0.198907) | 0.128524 / 0.141683 (-0.013159) | 1.775714 / 1.452155 (0.323559) | 1.904738 / 1.492716 (0.412021) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287641 / 0.018006 (0.269635) | 0.600667 / 0.000490 (0.600178) | 0.005097 / 0.000200 (0.004897) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032836 / 0.037411 (-0.004575) | 0.133114 / 0.014526 (0.118588) | 0.150874 / 0.176557 (-0.025683) | 0.217069 / 0.737135 (-0.520066) | 0.160387 / 0.296338 (-0.135951) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.668444 / 0.215209 (0.453235) | 6.240015 / 2.077655 (4.162360) | 2.808661 / 1.504120 (1.304542) | 2.336550 / 1.541195 (0.795356) | 2.538973 / 1.468490 (1.070483) | 1.189292 / 4.584777 (-3.395485) | 5.781028 / 3.745712 (2.035315) | 3.149895 / 5.269862 (-2.119967) | 2.130646 / 4.565676 (-2.435030) | 0.144944 / 0.424275 (-0.279331) | 0.014650 / 0.007607 (0.007043) | 0.792313 / 0.226044 (0.566269) | 7.933108 / 2.268929 (5.664180) | 3.527527 / 55.444624 (-51.917098) | 2.864271 / 6.876477 (-4.012205) | 3.098330 / 2.142072 (0.956258) | 1.421208 / 4.805227 (-3.384019) | 0.255638 / 6.500664 (-6.245026) | 0.086971 / 0.075469 (0.011502) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.585317 / 1.841788 (-0.256471) | 18.643133 / 8.074308 (10.568825) | 21.921256 / 10.191392 (11.729864) | 0.215493 / 0.680424 (-0.464931) | 0.028348 / 0.534201 (-0.505853) | 0.556925 / 0.579283 (-0.022358) | 0.631480 / 0.434364 (0.197116) | 0.654026 / 0.540337 (0.113689) | 0.799727 / 1.386936 (-0.587209) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#62520514b524b5904c7e4f0beddab1971212a96a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006516 / 0.011353 (-0.004837) | 0.004500 / 0.011008 (-0.006509) | 0.097639 / 0.038508 (0.059131) | 0.028336 / 0.023109 (0.005227) | 0.377263 / 0.275898 (0.101365) | 0.409209 / 0.323480 (0.085729) | 0.004832 / 0.007986 (-0.003154) | 0.004629 / 0.004328 (0.000301) | 0.075046 / 0.004250 (0.070795) | 0.034080 / 0.037052 (-0.002972) | 0.377565 / 0.258489 (0.119076) | 0.419204 / 0.293841 (0.125363) | 0.030343 / 0.128546 (-0.098203) | 0.011465 / 0.075646 (-0.064182) | 0.322777 / 0.419271 (-0.096494) | 0.043774 / 0.043533 (0.000241) | 0.375808 / 0.255139 (0.120669) | 0.402665 / 0.283200 (0.119465) | 0.086811 / 0.141683 (-0.054872) | 1.518686 / 1.452155 (0.066531) | 1.540381 / 1.492716 (0.047664) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197730 / 0.018006 (0.179724) | 0.409285 / 0.000490 (0.408795) | 0.004739 / 0.000200 (0.004539) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022974 / 0.037411 (-0.014437) | 0.096843 / 0.014526 (0.082317) | 0.103241 / 0.176557 (-0.073316) | 0.163691 / 0.737135 (-0.573444) | 0.107905 / 0.296338 (-0.188433) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.449408 / 0.215209 (0.234199) | 4.501375 / 2.077655 (2.423720) | 2.181491 / 1.504120 (0.677371) | 1.986153 / 1.541195 (0.444958) | 2.024735 / 1.468490 
(0.556245) | 0.695368 / 4.584777 (-3.889409) | 3.416912 / 3.745712 (-0.328800) | 1.893343 / 5.269862 (-3.376519) | 1.275535 / 4.565676 (-3.290142) | 0.082772 / 0.424275 (-0.341503) | 0.012365 / 0.007607 (0.004758) | 0.553859 / 0.226044 (0.327814) | 5.540014 / 2.268929 (3.271085) | 2.634298 / 55.444624 (-52.810326) | 2.286686 / 6.876477 (-4.589790) | 2.384402 / 2.142072 (0.242330) | 0.806413 / 4.805227 (-3.998814) | 0.151757 / 6.500664 (-6.348907) | 0.067155 / 0.075469 (-0.008314) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198776 / 1.841788 (-0.643012) | 13.517434 / 8.074308 (5.443126) | 13.926300 / 10.191392 (3.734908) | 0.141887 / 0.680424 (-0.538537) | 0.016571 / 0.534201 (-0.517630) | 0.383179 / 0.579283 (-0.196104) | 0.395189 / 0.434364 (-0.039175) | 0.479635 / 0.540337 (-0.060702) | 0.570576 / 1.386936 (-0.816360) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006691 / 0.011353 (-0.004662) | 0.004634 / 0.011008 (-0.006375) | 0.077087 / 0.038508 (0.038579) | 0.028281 / 0.023109 (0.005172) | 0.340108 / 0.275898 (0.064210) | 0.370611 / 0.323480 (0.047131) | 0.004997 / 0.007986 (-0.002988) | 0.003336 / 0.004328 (-0.000992) | 0.074814 / 0.004250 (0.070563) | 0.039001 / 0.037052 (0.001948) | 0.344225 / 0.258489 (0.085736) | 0.380621 / 0.293841 (0.086780) | 0.030858 / 0.128546 (-0.097689) | 0.011623 / 0.075646 (-0.064023) | 0.085016 / 0.419271 (-0.334256) | 0.042378 / 0.043533 (-0.001155) | 0.341428 / 0.255139 (0.086289) | 0.364823 / 0.283200 (0.081624) | 0.096695 / 0.141683 (-0.044988) | 1.527683 / 1.452155 (0.075528) | 1.585361 / 1.492716 (0.092645) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184280 / 0.018006 (0.166274) | 0.397845 / 0.000490 (0.397355) | 0.004415 / 0.000200 (0.004215) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024296 / 0.037411 (-0.013115) | 0.101053 / 0.014526 (0.086527) | 0.108968 / 0.176557 (-0.067589) | 0.155732 / 0.737135 (-0.581403) | 0.112604 / 0.296338 (-0.183735) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440819 / 0.215209 (0.225609) | 4.394017 / 2.077655 (2.316363) | 2.092456 / 1.504120 (0.588336) | 1.880186 / 1.541195 (0.338991) | 1.918035 / 1.468490 (0.449545) | 0.698059 / 4.584777 (-3.886718) | 3.422598 / 3.745712 (-0.323114) | 1.860465 / 5.269862 (-3.409396) | 1.157788 / 4.565676 (-3.407889) | 0.083566 / 0.424275 (-0.340709) | 0.012440 / 0.007607 (0.004832) | 0.549526 / 0.226044 (0.323481) | 5.500623 / 2.268929 (3.231694) | 2.546980 / 55.444624 (-52.897644) | 2.199527 / 6.876477 (-4.676949) | 2.297276 / 2.142072 (0.155203) | 0.801580 / 4.805227 (-4.003648) | 0.151842 / 6.500664 (-6.348822) | 0.067165 / 0.075469 (-0.008305) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.329097 / 1.841788 (-0.512691) | 13.830354 / 8.074308 (5.756046) | 14.155250 / 10.191392 (3.963858) | 0.144517 / 0.680424 (-0.535907) | 0.016738 / 0.534201 (-0.517463) | 0.379337 / 0.579283 (-0.199946) | 0.391382 / 0.434364 (-0.042982) | 0.459153 / 0.540337 (-0.081184) | 0.547287 / 1.386936 (-0.839649) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2efb0289c887ec60d54e0715cd85c111cb45f9ee \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007176 / 0.011353 (-0.004177) | 0.005125 / 0.011008 (-0.005883) | 0.096060 / 0.038508 (0.057552) | 0.033262 / 0.023109 (0.010152) | 0.311461 / 0.275898 (0.035563) | 0.340673 / 0.323480 (0.017193) | 0.005700 / 0.007986 (-0.002286) | 0.005223 / 0.004328 (0.000894) | 0.072812 / 0.004250 (0.068561) | 0.042078 / 0.037052 (0.005025) | 0.320042 / 0.258489 (0.061553) | 0.346539 / 0.293841 (0.052698) | 0.035284 / 0.128546 (-0.093262) | 0.012021 / 0.075646 (-0.063625) | 0.331555 / 0.419271 (-0.087717) | 0.051058 / 0.043533 (0.007525) | 0.303001 / 0.255139 (0.047862) | 0.328431 / 0.283200 (0.045231) | 0.100954 / 0.141683 (-0.040729) | 1.407445 / 1.452155 (-0.044710) | 1.512826 / 1.492716 (0.020110) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216442 / 0.018006 (0.198436) | 0.446298 / 0.000490 (0.445809) | 0.004701 / 0.000200 (0.004501) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028088 / 0.037411 (-0.009324) | 0.108669 / 0.014526 (0.094144) | 0.119597 / 0.176557 (-0.056960) | 0.178249 / 0.737135 (-0.558886) | 0.123914 / 0.296338 (-0.172424) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413437 / 0.215209 (0.198228) | 4.136602 / 2.077655 (2.058947) | 1.875872 / 1.504120 (0.371752) | 1.680783 / 1.541195 (0.139588) | 1.757059 / 1.468490 
(0.288569) | 0.711080 / 4.584777 (-3.873697) | 3.791701 / 3.745712 (0.045989) | 2.111612 / 5.269862 (-3.158250) | 1.351204 / 4.565676 (-3.214473) | 0.086477 / 0.424275 (-0.337798) | 0.012359 / 0.007607 (0.004752) | 0.504984 / 0.226044 (0.278940) | 5.040456 / 2.268929 (2.771527) | 2.266946 / 55.444624 (-53.177679) | 1.957827 / 6.876477 (-4.918650) | 2.120490 / 2.142072 (-0.021583) | 0.856148 / 4.805227 (-3.949079) | 0.172414 / 6.500664 (-6.328250) | 0.066833 / 0.075469 (-0.008636) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198163 / 1.841788 (-0.643625) | 14.944930 / 8.074308 (6.870622) | 14.317196 / 10.191392 (4.125804) | 0.166104 / 0.680424 (-0.514320) | 0.017443 / 0.534201 (-0.516758) | 0.423025 / 0.579283 (-0.156258) | 0.437476 / 0.434364 (0.003112) | 0.500156 / 0.540337 (-0.040181) | 0.606226 / 1.386936 (-0.780710) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007417 / 0.011353 (-0.003936) | 0.005143 / 0.011008 (-0.005865) | 0.076401 / 0.038508 (0.037893) | 0.034818 / 0.023109 (0.011709) | 0.339633 / 0.275898 (0.063735) | 0.373839 / 0.323480 (0.050359) | 0.006004 / 0.007986 (-0.001982) | 0.005403 / 0.004328 (0.001075) | 0.074150 / 0.004250 (0.069899) | 0.050489 / 0.037052 (0.013436) | 0.343357 / 0.258489 (0.084868) | 0.377009 / 0.293841 (0.083168) | 0.035921 / 0.128546 (-0.092625) | 0.012197 / 0.075646 (-0.063449) | 0.087992 / 0.419271 (-0.331279) | 0.049452 / 0.043533 (0.005919) | 0.340495 / 0.255139 (0.085356) | 0.360277 / 0.283200 (0.077077) | 0.111114 / 0.141683 (-0.030569) | 1.463888 / 1.452155 (0.011734) | 1.548320 / 1.492716 (0.055604) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228437 / 0.018006 (0.210431) | 0.445120 / 0.000490 (0.444631) | 0.000392 / 0.000200 (0.000192) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029965 / 0.037411 (-0.007446) | 0.113484 / 0.014526 (0.098958) | 0.125249 / 0.176557 (-0.051308) | 0.177201 / 0.737135 (-0.559934) | 0.128750 / 0.296338 (-0.167589) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420089 / 0.215209 (0.204880) | 4.195772 / 2.077655 (2.118117) | 2.021539 / 1.504120 (0.517419) | 1.825118 / 1.541195 (0.283924) | 1.904090 / 1.468490 (0.435600) | 0.716276 / 4.584777 (-3.868501) | 3.742257 / 3.745712 (-0.003455) | 3.368880 / 5.269862 (-1.900981) | 1.728285 / 4.565676 (-2.837392) | 0.087656 / 0.424275 (-0.336619) | 0.012263 / 0.007607 (0.004656) | 0.524321 / 0.226044 (0.298277) | 5.217610 / 2.268929 (2.948682) | 2.474670 / 55.444624 (-52.969955) | 2.135452 / 6.876477 (-4.741025) | 2.292578 / 2.142072 (0.150505) | 0.852109 / 4.805227 (-3.953119) | 0.172031 / 6.500664 (-6.328633) | 0.065230 / 0.075469 (-0.010240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260494 / 1.841788 (-0.581293) | 15.019167 / 8.074308 (6.944859) | 14.647586 / 10.191392 (4.456193) | 0.170578 / 0.680424 (-0.509846) | 0.017619 / 0.534201 (-0.516582) | 0.423116 / 0.579283 (-0.156167) | 0.426680 / 0.434364 (-0.007684) | 0.519563 / 0.540337 (-0.020775) | 0.619335 / 1.386936 (-0.767601) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e210dc20c19b5e6af05df9ca6e82984dfb42465f \"CML watermark\")\n"
] | 2023-04-26T17:39:43 | 2023-04-27T16:41:50 | 2023-04-27T16:34:45 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5796",
"html_url": "https://github.com/huggingface/datasets/pull/5796",
"diff_url": "https://github.com/huggingface/datasets/pull/5796.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5796.patch",
"merged_at": "2023-04-27T16:34:45"
} | Added a "Use with Spark" doc page to document `Dataset.from_spark` following https://github.com/huggingface/datasets/pull/5701
cc @maddiedawson | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5796/timeline | null | null | true |
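The record above mentions documenting `Dataset.from_spark`. As a minimal usage sketch (assuming `datasets>=2.12` with `pyspark` installed, a local Spark session, and an illustrative DataFrame; this is not taken from the doc page itself):

```python
# Minimal sketch: load a Spark DataFrame into a Hugging Face Dataset.
from pyspark.sql import SparkSession
from datasets import Dataset

# Start (or reuse) a local Spark session; the rows below are illustrative.
spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.createDataFrame([("good movie", 1), ("bad movie", 0)], ["text", "label"])

# Materialize the Spark DataFrame as an Arrow-backed Dataset.
ds = Dataset.from_spark(df)
print(ds)  # Dataset({features: ['text', 'label'], num_rows: 2})
```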
https://api.github.com/repos/huggingface/datasets/issues/5795 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5795/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5795/comments | https://api.github.com/repos/huggingface/datasets/issues/5795/events | https://github.com/huggingface/datasets/pull/5795 | 1,685,414,505 | PR_kwDODunzps5POJo8 | 5,795 | Fix spark imports | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010844 / 0.011353 (-0.000509) | 0.007329 / 0.011008 (-0.003680) | 0.133764 / 0.038508 (0.095256) | 0.040213 / 0.023109 (0.017103) | 0.413466 / 0.275898 (0.137568) | 0.452860 / 0.323480 (0.129380) | 0.008109 / 0.007986 (0.000123) | 0.005773 / 0.004328 (0.001444) | 0.109969 / 0.004250 (0.105718) | 0.053001 / 0.037052 (0.015949) | 0.416377 / 0.258489 (0.157888) | 0.477486 / 0.293841 (0.183645) | 0.056556 / 0.128546 (-0.071990) | 0.024322 / 0.075646 (-0.051324) | 0.437750 / 0.419271 (0.018479) | 0.087732 / 0.043533 (0.044199) | 0.421540 / 0.255139 (0.166401) | 0.429143 / 0.283200 (0.145944) | 0.144864 / 0.141683 (0.003181) | 1.882785 / 1.452155 (0.430631) | 1.980721 / 1.492716 (0.488005) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.285497 / 0.018006 (0.267491) | 0.601820 / 0.000490 (0.601331) | 0.005003 / 0.000200 (0.004804) | 0.000122 / 0.000054 (0.000067) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030673 / 0.037411 (-0.006739) | 0.126883 / 0.014526 (0.112357) | 0.137677 / 0.176557 (-0.038880) | 0.211504 / 0.737135 (-0.525632) | 0.144752 / 0.296338 (-0.151587) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.665845 / 0.215209 (0.450636) | 6.369040 / 2.077655 (4.291385) | 2.708979 / 1.504120 (1.204859) | 2.370842 / 1.541195 (0.829647) | 2.445987 / 1.468490 
(0.977497) | 1.260806 / 4.584777 (-3.323971) | 5.979216 / 3.745712 (2.233504) | 3.334350 / 5.269862 (-1.935512) | 2.187298 / 4.565676 (-2.378379) | 0.155494 / 0.424275 (-0.268781) | 0.017351 / 0.007607 (0.009744) | 0.853626 / 0.226044 (0.627581) | 8.375001 / 2.268929 (6.106072) | 3.528312 / 55.444624 (-51.916313) | 2.890509 / 6.876477 (-3.985968) | 3.051016 / 2.142072 (0.908944) | 1.529811 / 4.805227 (-3.275416) | 0.273883 / 6.500664 (-6.226781) | 0.086617 / 0.075469 (0.011148) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.648231 / 1.841788 (-0.193557) | 19.487109 / 8.074308 (11.412801) | 23.474621 / 10.191392 (13.283229) | 0.221392 / 0.680424 (-0.459032) | 0.028878 / 0.534201 (-0.505323) | 0.582302 / 0.579283 (0.003019) | 0.615059 / 0.434364 (0.180695) | 0.656082 / 0.540337 (0.115745) | 0.740544 / 1.386936 (-0.646392) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010687 / 0.011353 (-0.000665) | 0.007114 / 0.011008 (-0.003894) | 0.135426 / 0.038508 (0.096918) | 0.041027 / 0.023109 (0.017918) | 0.466441 / 0.275898 (0.190543) | 0.503545 / 0.323480 (0.180065) | 0.009418 / 0.007986 (0.001432) | 0.004976 / 0.004328 (0.000647) | 0.101342 / 0.004250 (0.097092) | 0.058289 / 0.037052 (0.021237) | 0.473715 / 0.258489 (0.215226) | 0.539556 / 0.293841 (0.245715) | 0.063138 / 0.128546 (-0.065408) | 0.020429 / 0.075646 (-0.055217) | 0.124179 / 0.419271 (-0.295093) | 0.066400 / 0.043533 (0.022867) | 0.450793 / 0.255139 (0.195654) | 0.494163 / 0.283200 (0.210964) | 0.131179 / 0.141683 (-0.010504) | 1.876396 / 1.452155 (0.424241) | 1.974148 / 1.492716 (0.481432) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313362 / 0.018006 (0.295356) | 0.602618 / 0.000490 (0.602129) | 0.008279 / 0.000200 (0.008079) | 0.000155 / 0.000054 (0.000101) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037250 / 0.037411 (-0.000161) | 0.144151 / 0.014526 (0.129625) | 0.155733 / 0.176557 (-0.020824) | 0.214334 / 0.737135 (-0.522801) | 0.167124 / 0.296338 (-0.129214) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.686471 / 0.215209 (0.471262) | 6.749174 / 2.077655 (4.671520) | 3.024941 / 1.504120 (1.520821) | 2.553363 / 1.541195 (1.012168) | 2.679107 / 1.468490 (1.210617) | 1.317212 / 4.584777 (-3.267565) | 5.917575 / 3.745712 (2.171862) | 3.412715 / 5.269862 (-1.857146) | 2.203478 / 4.565676 (-2.362198) | 0.150387 / 0.424275 (-0.273888) | 0.015977 / 0.007607 (0.008370) | 0.862999 / 0.226044 (0.636954) | 8.706459 / 2.268929 (6.437530) | 3.762648 / 55.444624 (-51.681977) | 2.992544 / 6.876477 (-3.883933) | 3.135796 / 2.142072 (0.993724) | 1.504140 / 4.805227 (-3.301088) | 0.268265 / 6.500664 (-6.232399) | 0.083297 / 0.075469 (0.007828) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.690193 / 1.841788 (-0.151594) | 19.912854 / 8.074308 (11.838546) | 23.568217 / 10.191392 (13.376825) | 0.285125 / 0.680424 (-0.395299) | 0.030593 / 0.534201 (-0.503608) | 0.565305 / 0.579283 (-0.013978) | 0.659283 / 0.434364 (0.224919) | 0.678864 / 0.540337 (0.138527) | 0.793634 / 1.386936 (-0.593302) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9d0edbe3f3258b7e580d1b58c0eea6637b5e22b2 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011615 / 0.011353 (0.000262) | 0.006716 / 0.011008 (-0.004292) | 0.146868 / 0.038508 (0.108360) | 0.037621 / 0.023109 (0.014512) | 0.425563 / 0.275898 (0.149664) | 0.483217 / 0.323480 (0.159737) | 0.007830 / 0.007986 (-0.000156) | 0.005940 / 0.004328 (0.001612) | 0.100771 / 0.004250 (0.096521) | 0.063907 / 0.037052 (0.026854) | 0.422993 / 0.258489 (0.164503) | 0.496514 / 0.293841 (0.202673) | 0.056004 / 0.128546 (-0.072542) | 0.021441 / 0.075646 (-0.054206) | 0.453589 / 0.419271 (0.034317) | 0.067555 / 0.043533 (0.024022) | 0.442490 / 0.255139 (0.187351) | 0.503941 / 0.283200 (0.220742) | 0.134023 / 0.141683 (-0.007660) | 1.886329 / 1.452155 (0.434175) | 2.030867 / 1.492716 (0.538150) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288063 / 0.018006 (0.270057) | 0.627177 / 0.000490 (0.626687) | 0.006335 / 0.000200 (0.006135) | 0.000171 / 0.000054 (0.000116) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032424 / 0.037411 (-0.004987) | 0.132749 / 0.014526 (0.118223) | 0.144727 / 0.176557 (-0.031829) | 0.232577 / 0.737135 (-0.504558) | 0.157315 / 0.296338 (-0.139024) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.623058 / 0.215209 (0.407849) | 6.272447 / 2.077655 (4.194792) | 2.506778 / 1.504120 (1.002658) | 2.203094 / 1.541195 (0.661899) | 2.346972 / 1.468490 
(0.878482) | 1.358498 / 4.584777 (-3.226279) | 5.879670 / 3.745712 (2.133958) | 5.818406 / 5.269862 (0.548545) | 3.231936 / 4.565676 (-1.333741) | 0.154013 / 0.424275 (-0.270263) | 0.021541 / 0.007607 (0.013934) | 0.823746 / 0.226044 (0.597702) | 8.140304 / 2.268929 (5.871375) | 3.366911 / 55.444624 (-52.077714) | 2.696856 / 6.876477 (-4.179621) | 2.845743 / 2.142072 (0.703671) | 1.522363 / 4.805227 (-3.282864) | 0.278938 / 6.500664 (-6.221726) | 0.085044 / 0.075469 (0.009575) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.681348 / 1.841788 (-0.160440) | 19.686703 / 8.074308 (11.612395) | 22.995655 / 10.191392 (12.804263) | 0.218876 / 0.680424 (-0.461548) | 0.029334 / 0.534201 (-0.504867) | 0.560846 / 0.579283 (-0.018438) | 0.645210 / 0.434364 (0.210846) | 0.697842 / 0.540337 (0.157505) | 0.832875 / 1.386936 (-0.554061) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009509 / 0.011353 (-0.001844) | 0.006471 / 0.011008 (-0.004537) | 0.101477 / 0.038508 (0.062969) | 0.035281 / 0.023109 (0.012171) | 0.470032 / 0.275898 (0.194134) | 0.501475 / 0.323480 (0.177995) | 0.007641 / 0.007986 (-0.000344) | 0.006784 / 0.004328 (0.002455) | 0.096111 / 0.004250 (0.091861) | 0.055199 / 0.037052 (0.018146) | 0.470095 / 0.258489 (0.211606) | 0.530955 / 0.293841 (0.237114) | 0.056161 / 0.128546 (-0.072385) | 0.022055 / 0.075646 (-0.053591) | 0.121585 / 0.419271 (-0.297686) | 0.063736 / 0.043533 (0.020203) | 0.470771 / 0.255139 (0.215632) | 0.490546 / 0.283200 (0.207346) | 0.128825 / 0.141683 (-0.012858) | 1.898639 / 1.452155 (0.446484) | 2.052305 / 1.492716 (0.559589) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.322526 / 0.018006 (0.304520) | 0.628096 / 0.000490 (0.627607) | 0.006837 / 0.000200 (0.006637) | 0.000199 / 0.000054 (0.000145) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033830 / 0.037411 (-0.003581) | 0.136217 / 0.014526 (0.121691) | 0.147006 / 0.176557 (-0.029551) | 0.203950 / 0.737135 (-0.533185) | 0.150327 / 0.296338 (-0.146011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.654287 / 0.215209 (0.439078) | 6.430306 / 2.077655 (4.352651) | 2.881750 / 1.504120 (1.377630) | 2.489505 / 1.541195 (0.948310) | 2.543037 / 1.468490 (1.074547) | 1.226682 / 4.584777 (-3.358094) | 5.902076 / 3.745712 (2.156364) | 3.335344 / 5.269862 (-1.934518) | 2.156738 / 4.565676 (-2.408939) | 0.151804 / 0.424275 (-0.272472) | 0.015238 / 0.007607 (0.007631) | 0.816364 / 0.226044 (0.590319) | 8.126367 / 2.268929 (5.857438) | 3.653222 / 55.444624 (-51.791402) | 2.886667 / 6.876477 (-3.989809) | 3.120852 / 2.142072 (0.978779) | 1.421423 / 4.805227 (-3.383804) | 0.264590 / 6.500664 (-6.236074) | 0.085716 / 0.075469 (0.010247) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.745258 / 1.841788 (-0.096530) | 19.379253 / 8.074308 (11.304945) | 23.827046 / 10.191392 (13.635654) | 0.267702 / 0.680424 (-0.412722) | 0.030253 / 0.534201 (-0.503948) | 0.542037 / 0.579283 (-0.037246) | 0.655946 / 0.434364 (0.221582) | 0.683525 / 0.540337 (0.143188) | 0.831333 / 1.386936 (-0.555603) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5b011a258329375aa4dc7b414bd4e7b6363c5357 \"CML watermark\")\n"
] | 2023-04-26T17:09:32 | 2023-04-26T17:49:03 | 2023-04-26T17:39:12 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5795",
"html_url": "https://github.com/huggingface/datasets/pull/5795",
"diff_url": "https://github.com/huggingface/datasets/pull/5795.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5795.patch",
"merged_at": "2023-04-26T17:39:12"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5795/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5790/comments | https://api.github.com/repos/huggingface/datasets/issues/5790/events | https://github.com/huggingface/datasets/pull/5790 | 1,683,229,126 | PR_kwDODunzps5PG0mJ | 5,790 | Allow to run CI on push to ci-branch | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007852 / 0.011353 (-0.003500) | 0.005804 / 0.011008 (-0.005204) | 0.098268 / 0.038508 (0.059760) | 0.036440 / 0.023109 (0.013331) | 0.299952 / 0.275898 (0.024054) | 0.335590 / 0.323480 (0.012111) | 0.006332 / 0.007986 (-0.001653) | 0.004218 / 0.004328 (-0.000110) | 0.074733 / 0.004250 (0.070483) | 0.055252 / 0.037052 (0.018200) | 0.300854 / 0.258489 (0.042365) | 0.353442 / 0.293841 (0.059601) | 0.036447 / 0.128546 (-0.092099) | 0.012638 / 0.075646 (-0.063009) | 0.336680 / 0.419271 (-0.082591) | 0.052436 / 0.043533 (0.008903) | 0.292606 / 0.255139 (0.037467) | 0.319676 / 0.283200 (0.036476) | 0.111137 / 0.141683 (-0.030546) | 1.449569 / 1.452155 (-0.002586) | 1.558110 / 1.492716 (0.065394) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.306043 / 0.018006 (0.288037) | 0.563174 / 0.000490 (0.562684) | 0.032227 / 0.000200 (0.032027) | 0.000491 / 0.000054 (0.000436) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029874 / 0.037411 (-0.007537) | 0.109330 / 0.014526 (0.094805) | 0.122579 / 0.176557 (-0.053978) | 0.181398 / 0.737135 (-0.555737) | 0.127124 / 0.296338 (-0.169215) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417950 / 0.215209 (0.202741) | 4.163883 / 2.077655 (2.086228) | 1.985209 / 1.504120 (0.481089) | 1.793660 / 1.541195 (0.252465) | 1.895193 / 1.468490 
(0.426703) | 0.694331 / 4.584777 (-3.890446) | 3.820170 / 3.745712 (0.074458) | 2.180556 / 5.269862 (-3.089305) | 1.490671 / 4.565676 (-3.075006) | 0.086132 / 0.424275 (-0.338143) | 0.012289 / 0.007607 (0.004682) | 0.511182 / 0.226044 (0.285137) | 5.117855 / 2.268929 (2.848927) | 2.403914 / 55.444624 (-53.040710) | 2.071107 / 6.876477 (-4.805369) | 2.184108 / 2.142072 (0.042036) | 0.835028 / 4.805227 (-3.970199) | 0.167707 / 6.500664 (-6.332957) | 0.066724 / 0.075469 (-0.008746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.203921 / 1.841788 (-0.637867) | 15.214676 / 8.074308 (7.140368) | 14.971337 / 10.191392 (4.779945) | 0.170225 / 0.680424 (-0.510199) | 0.017924 / 0.534201 (-0.516277) | 0.428532 / 0.579283 (-0.150751) | 0.449157 / 0.434364 (0.014793) | 0.507723 / 0.540337 (-0.032614) | 0.615331 / 1.386936 (-0.771605) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008172 / 0.011353 (-0.003181) | 0.005405 / 0.011008 (-0.005603) | 0.074684 / 0.038508 (0.036176) | 0.039133 / 0.023109 (0.016024) | 0.342598 / 0.275898 (0.066700) | 0.377752 / 0.323480 (0.054272) | 0.006655 / 0.007986 (-0.001331) | 0.005788 / 0.004328 (0.001459) | 0.074014 / 0.004250 (0.069763) | 0.056225 / 0.037052 (0.019173) | 0.342330 / 0.258489 (0.083841) | 0.381052 / 0.293841 (0.087211) | 0.036574 / 0.128546 (-0.091973) | 0.012472 / 0.075646 (-0.063174) | 0.087574 / 0.419271 (-0.331698) | 0.050178 / 0.043533 (0.006646) | 0.351116 / 0.255139 (0.095977) | 0.363772 / 0.283200 (0.080572) | 0.118313 / 0.141683 (-0.023370) | 1.436691 / 1.452155 (-0.015463) | 1.551397 / 1.492716 (0.058680) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265201 / 0.018006 (0.247195) | 0.561855 / 0.000490 (0.561366) | 0.000463 / 0.000200 (0.000263) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030540 / 0.037411 (-0.006871) | 0.118815 / 0.014526 (0.104289) | 0.127689 / 0.176557 (-0.048868) | 0.176211 / 0.737135 (-0.560924) | 0.133130 / 0.296338 (-0.163208) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416318 / 0.215209 (0.201109) | 4.146806 / 2.077655 (2.069151) | 1.983437 / 1.504120 (0.479317) | 1.799733 / 1.541195 (0.258539) | 1.889026 / 1.468490 (0.420536) | 0.723330 / 4.584777 (-3.861447) | 3.817795 / 3.745712 (0.072083) | 2.158449 / 5.269862 (-3.111413) | 1.377348 / 4.565676 (-3.188328) | 0.088504 / 0.424275 (-0.335771) | 0.012560 / 0.007607 (0.004953) | 0.530382 / 0.226044 (0.304337) | 5.308529 / 2.268929 (3.039600) | 2.469655 / 55.444624 (-52.974970) | 2.136209 / 6.876477 (-4.740267) | 2.322997 / 2.142072 (0.180924) | 0.861396 / 4.805227 (-3.943831) | 0.172747 / 6.500664 (-6.327917) | 0.067617 / 0.075469 (-0.007852) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263225 / 1.841788 (-0.578563) | 15.878025 / 8.074308 (7.803717) | 14.815627 / 10.191392 (4.624235) | 0.148722 / 0.680424 (-0.531702) | 0.018071 / 0.534201 (-0.516130) | 0.428389 / 0.579283 (-0.150894) | 0.428635 / 0.434364 (-0.005729) | 0.496953 / 0.540337 (-0.043385) | 0.592783 / 1.386936 (-0.794153) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d2e5568dc7a47f9a99678d2889bd2e3c33afdd00 \"CML watermark\")\n"
] | 2023-04-25T13:57:26 | 2023-04-26T13:43:08 | 2023-04-26T13:35:47 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5790",
"html_url": "https://github.com/huggingface/datasets/pull/5790",
"diff_url": "https://github.com/huggingface/datasets/pull/5790.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5790.patch",
"merged_at": "2023-04-26T13:35:47"
} | This PR allows running the CI on push to a branch named "ci-*", without needing to open a PR.
- This will allow running CI tests without opening a PR, e.g., for future `huggingface-hub` releases and future dependency releases (like `fsspec`, `pandas`,...)
Note that to build the documentation, we already allow this on push to a branch named "doc-builder*".
See:
- #5788
CC: @Wauplin | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5790/timeline | null | null | true |
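The record above describes triggering CI on pushes to "ci-*" branches. As a rough sketch of that kind of trigger (a hypothetical GitHub Actions workflow, not the repository's actual CI file; the job body is a placeholder):

```yaml
# Hypothetical workflow trigger: run CI on pushes to main or any ci-* branch.
name: CI
on:
  push:
    branches:
      - main
      - ci-*
  pull_request:

jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: echo "placeholder for the real test suite"
```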
https://api.github.com/repos/huggingface/datasets/issues/5788 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5788/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5788/comments | https://api.github.com/repos/huggingface/datasets/issues/5788/events | https://github.com/huggingface/datasets/pull/5788 | 1,681,136,256 | PR_kwDODunzps5O_v4B | 5,788 | Prepare tests for hfh 0.14 | {
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007343 / 0.011353 (-0.004010) | 0.005145 / 0.011008 (-0.005863) | 0.099820 / 0.038508 (0.061312) | 0.033487 / 0.023109 (0.010378) | 0.313069 / 0.275898 (0.037171) | 0.335420 / 0.323480 (0.011940) | 0.005959 / 0.007986 (-0.002027) | 0.005373 / 0.004328 (0.001044) | 0.076568 / 0.004250 (0.072317) | 0.048702 / 0.037052 (0.011650) | 0.322957 / 0.258489 (0.064468) | 0.363044 / 0.293841 (0.069203) | 0.035070 / 0.128546 (-0.093476) | 0.012029 / 0.075646 (-0.063618) | 0.334664 / 0.419271 (-0.084607) | 0.050549 / 0.043533 (0.007017) | 0.310113 / 0.255139 (0.054974) | 0.324405 / 0.283200 (0.041205) | 0.097596 / 0.141683 (-0.044087) | 1.440741 / 1.452155 (-0.011414) | 1.531194 / 1.492716 (0.038478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220799 / 0.018006 (0.202793) | 0.438158 / 0.000490 (0.437668) | 0.007737 / 0.000200 (0.007537) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026888 / 0.037411 (-0.010523) | 0.106281 / 0.014526 (0.091755) | 0.117419 / 0.176557 (-0.059138) | 0.179144 / 0.737135 (-0.557992) | 0.122477 / 0.296338 (-0.173861) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412667 / 0.215209 (0.197458) | 4.108784 / 2.077655 (2.031129) | 1.834300 / 1.504120 (0.330180) | 1.627256 / 1.541195 (0.086061) | 1.691036 / 1.468490 
(0.222546) | 0.713405 / 4.584777 (-3.871372) | 3.839262 / 3.745712 (0.093550) | 2.108453 / 5.269862 (-3.161408) | 1.340740 / 4.565676 (-3.224936) | 0.087776 / 0.424275 (-0.336499) | 0.012730 / 0.007607 (0.005123) | 0.505323 / 0.226044 (0.279279) | 5.085176 / 2.268929 (2.816247) | 2.307165 / 55.444624 (-53.137459) | 1.936771 / 6.876477 (-4.939706) | 2.097391 / 2.142072 (-0.044681) | 0.856215 / 4.805227 (-3.949012) | 0.171826 / 6.500664 (-6.328838) | 0.066603 / 0.075469 (-0.008866) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.202126 / 1.841788 (-0.639661) | 15.173598 / 8.074308 (7.099290) | 15.012645 / 10.191392 (4.821253) | 0.162187 / 0.680424 (-0.518237) | 0.017462 / 0.534201 (-0.516739) | 0.423895 / 0.579283 (-0.155388) | 0.432010 / 0.434364 (-0.002354) | 0.503234 / 0.540337 (-0.037104) | 0.598948 / 1.386936 (-0.787988) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007099 / 0.011353 (-0.004254) | 0.005167 / 0.011008 (-0.005841) | 0.075551 / 0.038508 (0.037043) | 0.033050 / 0.023109 (0.009940) | 0.339629 / 0.275898 (0.063731) | 0.380486 / 0.323480 (0.057006) | 0.005776 / 0.007986 (-0.002209) | 0.004029 / 0.004328 (-0.000299) | 0.075074 / 0.004250 (0.070823) | 0.046709 / 0.037052 (0.009656) | 0.340203 / 0.258489 (0.081714) | 0.380849 / 0.293841 (0.087008) | 0.035027 / 0.128546 (-0.093519) | 0.012226 / 0.075646 (-0.063420) | 0.087525 / 0.419271 (-0.331747) | 0.049361 / 0.043533 (0.005828) | 0.341854 / 0.255139 (0.086715) | 0.359590 / 0.283200 (0.076390) | 0.100102 / 0.141683 (-0.041581) | 1.482759 / 1.452155 (0.030605) | 1.569905 / 1.492716 (0.077189) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213615 / 0.018006 (0.195609) | 0.441117 / 0.000490 (0.440628) | 0.004932 / 0.000200 (0.004732) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031313 / 0.037411 (-0.006098) | 0.110191 / 0.014526 (0.095665) | 0.125320 / 0.176557 (-0.051237) | 0.177658 / 0.737135 (-0.559477) | 0.127928 / 0.296338 (-0.168410) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426952 / 0.215209 (0.211743) | 4.247731 / 2.077655 (2.170076) | 2.107318 / 1.504120 (0.603198) | 1.843845 / 1.541195 (0.302650) | 1.894822 / 1.468490 (0.426332) | 0.696232 / 4.584777 (-3.888545) | 3.826516 / 3.745712 (0.080804) | 2.126688 / 5.269862 (-3.143174) | 1.327062 / 4.565676 (-3.238615) | 0.085693 / 0.424275 (-0.338582) | 0.012226 / 0.007607 (0.004619) | 0.521904 / 0.226044 (0.295859) | 5.219798 / 2.268929 (2.950869) | 2.524908 / 55.444624 (-52.919716) | 2.212078 / 6.876477 (-4.664399) | 2.373944 / 2.142072 (0.231871) | 0.833846 / 4.805227 (-3.971381) | 0.169639 / 6.500664 (-6.331025) | 0.064538 / 0.075469 (-0.010931) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254930 / 1.841788 (-0.586858) | 15.585277 / 8.074308 (7.510969) | 14.762857 / 10.191392 (4.571465) | 0.146959 / 0.680424 (-0.533465) | 0.017451 / 0.534201 (-0.516750) | 0.424469 / 0.579283 (-0.154814) | 0.422359 / 0.434364 (-0.012004) | 0.489930 / 0.540337 (-0.050408) | 0.595856 / 1.386936 (-0.791080) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#213c72f52ae52b662f967d3218f66c70a3043048 \"CML watermark\")\n",
"@albertvillanova thanks for the review. As you prefer for the github CI config. I just took it from @lhoestq's branch when testing hfh==0.14.0. I think it's still relevant for next releases. In any case, I let you handle merging the PR :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008371 / 0.011353 (-0.002982) | 0.005210 / 0.011008 (-0.005798) | 0.105639 / 0.038508 (0.067131) | 0.045903 / 0.023109 (0.022794) | 0.391231 / 0.275898 (0.115333) | 0.438824 / 0.323480 (0.115345) | 0.006270 / 0.007986 (-0.001715) | 0.005950 / 0.004328 (0.001621) | 0.079685 / 0.004250 (0.075434) | 0.052121 / 0.037052 (0.015069) | 0.387787 / 0.258489 (0.129298) | 0.434322 / 0.293841 (0.140481) | 0.032598 / 0.128546 (-0.095948) | 0.012126 / 0.075646 (-0.063520) | 0.359658 / 0.419271 (-0.059613) | 0.046686 / 0.043533 (0.003154) | 0.391973 / 0.255139 (0.136834) | 0.421149 / 0.283200 (0.137949) | 0.105920 / 0.141683 (-0.035763) | 1.483008 / 1.452155 (0.030854) | 1.617010 / 1.492716 (0.124294) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199111 / 0.018006 (0.181105) | 0.407995 / 0.000490 (0.407505) | 0.006706 / 0.000200 (0.006506) | 0.000229 / 0.000054 (0.000175) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030247 / 0.037411 (-0.007164) | 0.115977 / 0.014526 (0.101451) | 0.118112 / 0.176557 (-0.058444) | 0.182710 / 0.737135 (-0.554426) | 0.122483 / 0.296338 (-0.173855) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430455 / 0.215209 (0.215246) | 4.314298 / 2.077655 (2.236643) | 1.898124 / 1.504120 (0.394005) | 1.734909 / 1.541195 (0.193715) | 1.802400 / 1.468490 
(0.333910) | 0.717237 / 4.584777 (-3.867539) | 4.004705 / 3.745712 (0.258993) | 2.138901 / 5.269862 (-3.130960) | 1.254037 / 4.565676 (-3.311640) | 0.085594 / 0.424275 (-0.338681) | 0.013774 / 0.007607 (0.006166) | 0.535218 / 0.226044 (0.309174) | 5.373730 / 2.268929 (3.104801) | 2.371194 / 55.444624 (-53.073430) | 2.111206 / 6.876477 (-4.765270) | 2.225137 / 2.142072 (0.083064) | 0.838325 / 4.805227 (-3.966902) | 0.159176 / 6.500664 (-6.341488) | 0.072285 / 0.075469 (-0.003184) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.352232 / 1.841788 (-0.489555) | 16.926722 / 8.074308 (8.852414) | 16.709531 / 10.191392 (6.518139) | 0.159249 / 0.680424 (-0.521175) | 0.017667 / 0.534201 (-0.516534) | 0.426894 / 0.579283 (-0.152390) | 0.539903 / 0.434364 (0.105539) | 0.537471 / 0.540337 (-0.002866) | 0.619592 / 1.386936 (-0.767344) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008354 / 0.011353 (-0.002999) | 0.005366 / 0.011008 (-0.005642) | 0.080961 / 0.038508 (0.042453) | 0.046574 / 0.023109 (0.023465) | 0.345949 / 0.275898 (0.070051) | 0.394041 / 0.323480 (0.070562) | 0.006209 / 0.007986 (-0.001777) | 0.005980 / 0.004328 (0.001651) | 0.076235 / 0.004250 (0.071984) | 0.051833 / 0.037052 (0.014780) | 0.348786 / 0.258489 (0.090297) | 0.397421 / 0.293841 (0.103580) | 0.033026 / 0.128546 (-0.095520) | 0.012217 / 0.075646 (-0.063429) | 0.087439 / 0.419271 (-0.331832) | 0.045488 / 0.043533 (0.001955) | 0.352160 / 0.255139 (0.097021) | 0.379079 / 0.283200 (0.095879) | 0.116111 / 0.141683 (-0.025572) | 1.470177 / 1.452155 (0.018022) | 1.587499 / 1.492716 (0.094783) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296149 / 0.018006 (0.278143) | 0.592362 / 0.000490 (0.591872) | 0.000492 / 0.000200 (0.000292) | 0.000064 / 0.000054 (0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036599 / 0.037411 (-0.000813) | 0.113768 / 0.014526 (0.099242) | 0.116198 / 0.176557 (-0.060358) | 0.180329 / 0.737135 (-0.556806) | 0.123942 / 0.296338 (-0.172396) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452445 / 0.215209 (0.237236) | 4.504330 / 2.077655 (2.426675) | 2.275645 / 1.504120 (0.771525) | 2.107765 / 1.541195 (0.566571) | 2.086363 / 1.468490 (0.617873) | 0.723721 / 4.584777 (-3.861056) | 3.825330 / 3.745712 (0.079618) | 2.162743 / 5.269862 (-3.107119) | 1.255953 / 4.565676 (-3.309724) | 0.085860 / 0.424275 (-0.338415) | 0.013790 / 0.007607 (0.006183) | 0.560257 / 0.226044 (0.334213) | 5.618180 / 2.268929 (3.349251) | 2.625423 / 55.444624 (-52.819202) | 2.374381 / 6.876477 (-4.502095) | 2.496560 / 2.142072 (0.354488) | 0.841120 / 4.805227 (-3.964107) | 0.161541 / 6.500664 (-6.339123) | 0.075270 / 0.075469 (-0.000199) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.432916 / 1.841788 (-0.408872) | 14.858534 / 8.074308 (6.784226) | 14.973521 / 10.191392 (4.782129) | 0.148312 / 0.680424 (-0.532112) | 0.016811 / 0.534201 (-0.517390) | 0.382623 / 0.579283 (-0.196660) | 0.389767 / 0.434364 (-0.044596) | 0.449657 / 0.540337 (-0.090680) | 0.533723 / 1.386936 (-0.853214) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f8344350f15265a585188ac986ae49a8ed8289fe \"CML watermark\")\n",
"I agree it is good to have a way to run the CI on push, without needing to open a PR.\r\n\r\nBut I think the branch name should be more generic (and this is not specific to this PR). See:\r\n- #5790 ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007208 / 0.011353 (-0.004145) | 0.005600 / 0.011008 (-0.005408) | 0.096129 / 0.038508 (0.057621) | 0.027834 / 0.023109 (0.004725) | 0.295106 / 0.275898 (0.019208) | 0.323983 / 0.323480 (0.000503) | 0.005164 / 0.007986 (-0.002822) | 0.003962 / 0.004328 (-0.000366) | 0.078339 / 0.004250 (0.074089) | 0.036974 / 0.037052 (-0.000078) | 0.310315 / 0.258489 (0.051826) | 0.338036 / 0.293841 (0.044195) | 0.042124 / 0.128546 (-0.086422) | 0.015886 / 0.075646 (-0.059760) | 0.337961 / 0.419271 (-0.081310) | 0.051507 / 0.043533 (0.007974) | 0.297505 / 0.255139 (0.042366) | 0.310728 / 0.283200 (0.027528) | 0.086312 / 0.141683 (-0.055371) | 1.356923 / 1.452155 (-0.095232) | 1.429366 / 1.492716 (-0.063350) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205495 / 0.018006 (0.187489) | 0.460639 / 0.000490 (0.460149) | 0.003996 / 0.000200 (0.003796) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021970 / 0.037411 (-0.015442) | 0.090283 / 0.014526 (0.075757) | 0.098579 / 0.176557 (-0.077978) | 0.160437 / 0.737135 (-0.576699) | 0.102738 / 0.296338 (-0.193600) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.494474 / 0.215209 (0.279265) | 4.967453 / 2.077655 (2.889799) | 2.045852 / 1.504120 (0.541732) | 1.858022 / 1.541195 (0.316827) | 1.771874 / 1.468490 
(0.303384) | 1.186368 / 4.584777 (-3.398408) | 4.974762 / 3.745712 (1.229050) | 2.616225 / 5.269862 (-2.653636) | 1.702971 / 4.565676 (-2.862705) | 0.124929 / 0.424275 (-0.299346) | 0.011774 / 0.007607 (0.004167) | 0.569643 / 0.226044 (0.343598) | 5.793114 / 2.268929 (3.524186) | 2.441561 / 55.444624 (-53.003064) | 1.862233 / 6.876477 (-5.014243) | 1.931142 / 2.142072 (-0.210931) | 1.148915 / 4.805227 (-3.656313) | 0.203914 / 6.500664 (-6.296750) | 0.062468 / 0.075469 (-0.013001) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188708 / 1.841788 (-0.653080) | 13.710830 / 8.074308 (5.636522) | 15.695153 / 10.191392 (5.503761) | 0.171467 / 0.680424 (-0.508957) | 0.024509 / 0.534201 (-0.509692) | 0.450270 / 0.579283 (-0.129014) | 0.500712 / 0.434364 (0.066348) | 0.488632 / 0.540337 (-0.051706) | 0.574893 / 1.386936 (-0.812043) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007254 / 0.011353 (-0.004099) | 0.006199 / 0.011008 (-0.004809) | 0.072079 / 0.038508 (0.033571) | 0.026909 / 0.023109 (0.003800) | 0.355538 / 0.275898 (0.079640) | 0.358625 / 0.323480 (0.035145) | 0.005564 / 0.007986 (-0.002421) | 0.005278 / 0.004328 (0.000950) | 0.076469 / 0.004250 (0.072219) | 0.038269 / 0.037052 (0.001216) | 0.355214 / 0.258489 (0.096725) | 0.383219 / 0.293841 (0.089378) | 0.046516 / 0.128546 (-0.082030) | 0.015393 / 0.075646 (-0.060254) | 0.088506 / 0.419271 (-0.330765) | 0.050326 / 0.043533 (0.006793) | 0.327265 / 0.255139 (0.072126) | 0.370176 / 0.283200 (0.086976) | 0.102438 / 0.141683 (-0.039245) | 1.378969 / 1.452155 (-0.073186) | 1.441998 / 1.492716 (-0.050719) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209044 / 0.018006 (0.191038) | 0.455733 / 0.000490 (0.455243) | 0.005856 / 0.000200 (0.005656) | 0.000116 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025336 / 0.037411 (-0.012075) | 0.097449 / 0.014526 (0.082923) | 0.106301 / 0.176557 (-0.070255) | 0.153053 / 0.737135 (-0.584082) | 0.107938 / 0.296338 (-0.188401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.491070 / 0.215209 (0.275861) | 5.049637 / 2.077655 (2.971982) | 2.064709 / 1.504120 (0.560589) | 1.782266 / 1.541195 (0.241072) | 1.798570 / 1.468490 (0.330080) | 0.988886 / 4.584777 (-3.595891) | 4.690324 / 3.745712 (0.944612) | 4.317355 / 5.269862 (-0.952507) | 2.347596 / 4.565676 (-2.218081) | 0.117249 / 0.424275 (-0.307026) | 0.011614 / 0.007607 (0.004007) | 0.630033 / 0.226044 (0.403988) | 6.140108 / 2.268929 (3.871180) | 2.638080 / 55.444624 (-52.806545) | 2.133017 / 6.876477 (-4.743459) | 2.123392 / 2.142072 (-0.018680) | 1.178056 / 4.805227 (-3.627171) | 0.209465 / 6.500664 (-6.291199) | 0.063234 / 0.075469 (-0.012235) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.238089 / 1.841788 (-0.603699) | 14.066866 / 8.074308 (5.992558) | 16.225480 / 10.191392 (6.034088) | 0.206466 / 0.680424 (-0.473958) | 0.027279 / 0.534201 (-0.506922) | 0.443006 / 0.579283 (-0.136277) | 0.509512 / 0.434364 (0.075148) | 0.479075 / 0.540337 (-0.061263) | 0.573546 / 1.386936 (-0.813390) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c6015a070c66a5bbd84603d415ccc57cb668b44b \"CML watermark\")\n"
] | 2023-04-24T12:13:03 | 2023-04-25T14:32:56 | 2023-04-25T14:25:30 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5788",
"html_url": "https://github.com/huggingface/datasets/pull/5788",
"diff_url": "https://github.com/huggingface/datasets/pull/5788.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5788.patch",
"merged_at": "2023-04-25T14:25:30"
} | Related to the coming release of `huggingface_hub==0.14.0`. It will break some internal tests. This PR fixes those tests. Let's double-check the CI, but I expect the fixed tests to run fine with both `hfh<=0.13.4` and `hfh==0.14`. Worst-case scenario, existing PRs will have to be rebased once this fix is merged.
See related [discussion](https://huggingface.slack.com/archives/C02V5EA0A95/p1682337463368609?thread_ts=1681994202.635609&cid=C02V5EA0A95) (private slack).
cc @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5788/timeline | null | null | true |
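One common way to keep a test suite green across the two `huggingface_hub` versions mentioned in the body above is a version gate. A minimal sketch follows; it assumes nothing about the actual changes made in the PR, and the helper name `is_hfh_0_14_or_newer` is hypothetical rather than part of the datasets test suite.

```python
# Hypothetical version gate for running the same tests against both
# hfh<=0.13.4 and hfh==0.14; only the version comparison itself is shown.
from packaging import version

import huggingface_hub


def is_hfh_0_14_or_newer() -> bool:
    """Return True when the installed huggingface_hub is at least 0.14.0."""
    return version.parse(huggingface_hub.__version__) >= version.parse("0.14.0")
```

Tests can then take the appropriate branch wherever the two versions behave differently, which is what lets one suite pass both before and after the release.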
https://api.github.com/repos/huggingface/datasets/issues/5872 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5872/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5872/comments | https://api.github.com/repos/huggingface/datasets/issues/5872/events | https://github.com/huggingface/datasets/pull/5872 | 1,713,174,662 | PR_kwDODunzps5QrQ5o | 5,872 | Fix infer module for uppercase extensions | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007049 / 0.011353 (-0.004304) | 0.005034 / 0.011008 (-0.005974) | 0.097737 / 0.038508 (0.059229) | 0.033280 / 0.023109 (0.010170) | 0.301017 / 0.275898 (0.025119) | 0.336593 / 0.323480 (0.013113) | 0.005567 / 0.007986 (-0.002419) | 0.005384 / 0.004328 (0.001056) | 0.072980 / 0.004250 (0.068730) | 0.045030 / 0.037052 (0.007978) | 0.303280 / 0.258489 (0.044791) | 0.367528 / 0.293841 (0.073687) | 0.034131 / 0.128546 (-0.094415) | 0.012118 / 0.075646 (-0.063528) | 0.331677 / 0.419271 (-0.087594) | 0.049211 / 0.043533 (0.005678) | 0.297535 / 0.255139 (0.042396) | 0.318136 / 0.283200 (0.034936) | 0.101574 / 0.141683 (-0.040109) | 1.472769 / 1.452155 (0.020615) | 1.541724 / 1.492716 (0.049007) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.014646 / 0.018006 (-0.003360) | 0.439050 / 0.000490 (0.438560) | 0.008575 / 0.000200 (0.008375) | 0.000297 / 0.000054 (0.000242) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027591 / 0.037411 (-0.009820) | 0.111639 / 0.014526 (0.097113) | 0.117098 / 0.176557 (-0.059458) | 0.173281 / 0.737135 (-0.563855) | 0.123197 / 0.296338 (-0.173141) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397507 / 0.215209 (0.182298) | 3.971457 / 2.077655 (1.893803) | 1.781158 / 1.504120 (0.277038) | 1.590419 / 1.541195 (0.049224) | 1.716374 / 1.468490 
(0.247884) | 0.687150 / 4.584777 (-3.897627) | 3.691009 / 3.745712 (-0.054703) | 2.050900 / 5.269862 (-3.218961) | 1.304893 / 4.565676 (-3.260784) | 0.084507 / 0.424275 (-0.339768) | 0.012231 / 0.007607 (0.004624) | 0.493033 / 0.226044 (0.266988) | 4.929957 / 2.268929 (2.661028) | 2.209069 / 55.444624 (-53.235555) | 1.885992 / 6.876477 (-4.990485) | 2.007004 / 2.142072 (-0.135069) | 0.827265 / 4.805227 (-3.977963) | 0.168225 / 6.500664 (-6.332439) | 0.064988 / 0.075469 (-0.010481) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.182341 / 1.841788 (-0.659447) | 14.691983 / 8.074308 (6.617674) | 14.350720 / 10.191392 (4.159328) | 0.164307 / 0.680424 (-0.516117) | 0.017480 / 0.534201 (-0.516720) | 0.421843 / 0.579283 (-0.157441) | 0.417481 / 0.434364 (-0.016883) | 0.496587 / 0.540337 (-0.043751) | 0.581208 / 1.386936 (-0.805728) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007070 / 0.011353 (-0.004283) | 0.005083 / 0.011008 (-0.005926) | 0.075009 / 0.038508 (0.036500) | 0.032343 / 0.023109 (0.009234) | 0.366788 / 0.275898 (0.090890) | 0.392273 / 0.323480 (0.068794) | 0.005512 / 0.007986 (-0.002474) | 0.003999 / 0.004328 (-0.000329) | 0.073743 / 0.004250 (0.069492) | 0.046203 / 0.037052 (0.009151) | 0.367874 / 0.258489 (0.109385) | 0.409154 / 0.293841 (0.115313) | 0.035227 / 0.128546 (-0.093319) | 0.012223 / 0.075646 (-0.063424) | 0.087149 / 0.419271 (-0.332122) | 0.045648 / 0.043533 (0.002115) | 0.362414 / 0.255139 (0.107275) | 0.379970 / 0.283200 (0.096770) | 0.100631 / 0.141683 (-0.041052) | 1.439733 / 1.452155 (-0.012422) | 1.506266 / 1.492716 (0.013550) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227071 / 0.018006 (0.209065) | 0.451243 / 0.000490 (0.450753) | 0.000406 / 0.000200 (0.000206) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028952 / 0.037411 (-0.008459) | 0.111934 / 0.014526 (0.097408) | 0.124080 / 0.176557 (-0.052477) | 0.174022 / 0.737135 (-0.563113) | 0.126811 / 0.296338 (-0.169527) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436423 / 0.215209 (0.221214) | 4.331959 / 2.077655 (2.254304) | 2.111914 / 1.504120 (0.607794) | 1.921338 / 1.541195 (0.380143) | 1.994425 / 1.468490 (0.525935) | 0.699164 / 4.584777 (-3.885613) | 3.722143 / 3.745712 (-0.023569) | 3.516538 / 5.269862 (-1.753323) | 1.867245 / 4.565676 (-2.698431) | 0.085923 / 0.424275 (-0.338352) | 0.012059 / 0.007607 (0.004452) | 0.586147 / 0.226044 (0.360102) | 5.395823 / 2.268929 (3.126894) | 2.594430 / 55.444624 (-52.850194) | 2.275021 / 6.876477 (-4.601456) | 2.347810 / 2.142072 (0.205737) | 0.835118 / 4.805227 (-3.970109) | 0.167089 / 6.500664 (-6.333575) | 0.064893 / 0.075469 (-0.010576) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291423 / 1.841788 (-0.550365) | 14.992696 / 8.074308 (6.918388) | 13.307842 / 10.191392 (3.116450) | 0.163799 / 0.680424 (-0.516625) | 0.017315 / 0.534201 (-0.516886) | 0.461319 / 0.579283 (-0.117965) | 0.430474 / 0.434364 (-0.003889) | 0.568115 / 0.540337 (0.027777) | 0.647909 / 1.386936 (-0.739027) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a5161c9ecdcdde9cc99c7f212da13523d5ba6bdb \"CML watermark\")\n"
] | 2023-05-17T05:56:45 | 2023-05-17T14:26:59 | 2023-05-17T14:19:18 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5872",
"html_url": "https://github.com/huggingface/datasets/pull/5872",
"diff_url": "https://github.com/huggingface/datasets/pull/5872.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5872.patch",
"merged_at": "2023-05-17T14:19:18"
} | Fix the `infer_module_for_data_files` and `infer_module_for_data_files_in_archives` functions when passed a data file name with an uppercase extension, e.g. `filename.TXT`.
Previously, `None` was returned as the inferred module. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5872/timeline | null | null | true |
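The record above describes the bug concisely: a file name with an uppercase extension such as `filename.TXT` produced no inferred module. A minimal sketch of the idea behind the fix follows; the extension-to-module mapping and the helper name `infer_module` are illustrative assumptions, not the real internals of `infer_module_for_data_files`.

```python
# Case-insensitive module inference: lower-case the suffix before the lookup,
# so "filename.TXT" resolves the same way as "filename.txt".
from pathlib import Path
from typing import Optional

_EXTENSION_TO_MODULE = {"csv": "csv", "json": "json", "txt": "text", "parquet": "parquet"}


def infer_module(filename: str) -> Optional[str]:
    suffix = Path(filename).suffix[1:]  # extension without the leading dot
    return _EXTENSION_TO_MODULE.get(suffix.lower())  # case-insensitive lookup


assert infer_module("filename.TXT") == "text"  # previously this would be None
assert infer_module("data.csv") == "csv"
```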
https://api.github.com/repos/huggingface/datasets/issues/5868 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5868/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5868/comments | https://api.github.com/repos/huggingface/datasets/issues/5868/events | https://github.com/huggingface/datasets/issues/5868 | 1,711,173,098 | I_kwDODunzps5l_m3q | 5,868 | Is it possible to change a cached file and 're-cache' it instead of re-generating? | {
"login": "zyh3826",
"id": 31238754,
"node_id": "MDQ6VXNlcjMxMjM4NzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/31238754?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zyh3826",
"html_url": "https://github.com/zyh3826",
"followers_url": "https://api.github.com/users/zyh3826/followers",
"following_url": "https://api.github.com/users/zyh3826/following{/other_user}",
"gists_url": "https://api.github.com/users/zyh3826/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zyh3826/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zyh3826/subscriptions",
"organizations_url": "https://api.github.com/users/zyh3826/orgs",
"repos_url": "https://api.github.com/users/zyh3826/repos",
"events_url": "https://api.github.com/users/zyh3826/events{/privacy}",
"received_events_url": "https://api.github.com/users/zyh3826/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Arrow files/primitives (tables and arrays) are immutable, so re-generating them is the only option, I'm afraid.",
"> \r\n\r\nGot it, thanks for your reply"
] | 2023-05-16T03:45:42 | 2023-05-17T11:21:36 | 2023-05-17T11:21:36 | NONE | null | null | null | ### Feature request
Hi,
I have a huge file cached with `map` (over 500 GB), and I want to change an attribute of each element. Is it possible to do this with some method instead of re-generating, because `map` takes over 24 hours?
### Motivation
For large datasets, I think this is very important, because we often face the problem of needing to change something in the original cache without re-generating it.
### Your contribution
For now, I can't help, sorry. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5868/timeline | null | completed | false |
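Given the answer in the record above (Arrow tables are immutable, so re-generation is unavoidable), the practical mitigation is to make the re-generation itself cheaper. A minimal sketch follows; the toy dataset and the `score` column are illustrative assumptions, and `num_proc` is the standard `Dataset.map` knob for parallelizing the pass.

```python
# Re-generate the cache with the modified attribute; several worker processes
# mitigate the cost of rewriting the (immutable) Arrow data.
from datasets import Dataset

ds = Dataset.from_dict({"score": list(range(1_000))})

# This necessarily writes a new cache file rather than editing the old one.
ds = ds.map(lambda example: {"score": example["score"] * 2}, num_proc=4)
print(ds[0])  # {'score': 0}
```

Passing `batched=True` (with a function that operates on column slices) usually speeds this up further.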
https://api.github.com/repos/huggingface/datasets/issues/5861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5861/comments | https://api.github.com/repos/huggingface/datasets/issues/5861/events | https://github.com/huggingface/datasets/pull/5861 | 1,709,807,340 | PR_kwDODunzps5Qf55q | 5,861 | Better error message when combining dataset dicts instead of datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007167 / 0.011353 (-0.004185) | 0.004914 / 0.011008 (-0.006094) | 0.096858 / 0.038508 (0.058350) | 0.033468 / 0.023109 (0.010359) | 0.297276 / 0.275898 (0.021378) | 0.344289 / 0.323480 (0.020809) | 0.005703 / 0.007986 (-0.002282) | 0.003972 / 0.004328 (-0.000357) | 0.075191 / 0.004250 (0.070940) | 0.046247 / 0.037052 (0.009194) | 0.317857 / 0.258489 (0.059368) | 0.347263 / 0.293841 (0.053422) | 0.035017 / 0.128546 (-0.093529) | 0.012036 / 0.075646 (-0.063611) | 0.332522 / 0.419271 (-0.086750) | 0.050188 / 0.043533 (0.006655) | 0.296627 / 0.255139 (0.041488) | 0.319196 / 0.283200 (0.035997) | 0.101100 / 0.141683 (-0.040583) | 1.484536 / 1.452155 (0.032382) | 1.606364 / 1.492716 (0.113648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203954 / 0.018006 (0.185948) | 0.436505 / 0.000490 (0.436015) | 0.003853 / 0.000200 (0.003654) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025834 / 0.037411 (-0.011578) | 0.105759 / 0.014526 (0.091233) | 0.114289 / 0.176557 (-0.062268) | 0.174388 / 0.737135 (-0.562748) | 0.122248 / 0.296338 (-0.174090) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404218 / 0.215209 (0.189009) | 4.027900 / 2.077655 (1.950245) | 1.854757 / 1.504120 (0.350637) | 1.668882 / 1.541195 (0.127687) | 1.731451 / 1.468490 
(0.262961) | 0.707843 / 4.584777 (-3.876934) | 3.756386 / 3.745712 (0.010674) | 2.067751 / 5.269862 (-3.202110) | 1.313039 / 4.565676 (-3.252638) | 0.086442 / 0.424275 (-0.337833) | 0.012329 / 0.007607 (0.004722) | 0.505964 / 0.226044 (0.279919) | 5.050788 / 2.268929 (2.781860) | 2.353936 / 55.444624 (-53.090688) | 2.055560 / 6.876477 (-4.820917) | 2.162948 / 2.142072 (0.020876) | 0.850532 / 4.805227 (-3.954696) | 0.168560 / 6.500664 (-6.332104) | 0.063143 / 0.075469 (-0.012326) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.182723 / 1.841788 (-0.659065) | 14.779342 / 8.074308 (6.705034) | 14.461572 / 10.191392 (4.270180) | 0.163120 / 0.680424 (-0.517303) | 0.017978 / 0.534201 (-0.516223) | 0.419168 / 0.579283 (-0.160115) | 0.420955 / 0.434364 (-0.013409) | 0.509710 / 0.540337 (-0.030628) | 0.619586 / 1.386936 (-0.767350) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006804 / 0.011353 (-0.004549) | 0.005136 / 0.011008 (-0.005872) | 0.074910 / 0.038508 (0.036402) | 0.032552 / 0.023109 (0.009443) | 0.374998 / 0.275898 (0.099100) | 0.399219 / 0.323480 (0.075739) | 0.005615 / 0.007986 (-0.002371) | 0.004118 / 0.004328 (-0.000210) | 0.074219 / 0.004250 (0.069969) | 0.045924 / 0.037052 (0.008871) | 0.383228 / 0.258489 (0.124739) | 0.407195 / 0.293841 (0.113354) | 0.035460 / 0.128546 (-0.093086) | 0.012460 / 0.075646 (-0.063187) | 0.087077 / 0.419271 (-0.332195) | 0.050507 / 0.043533 (0.006974) | 0.369001 / 0.255139 (0.113862) | 0.385761 / 0.283200 (0.102561) | 0.106999 / 0.141683 (-0.034684) | 1.465456 / 1.452155 (0.013302) | 1.556962 / 1.492716 (0.064246) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214926 / 0.018006 (0.196920) | 0.436893 / 0.000490 (0.436403) | 0.003388 / 0.000200 (0.003188) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029919 / 0.037411 (-0.007492) | 0.110859 / 0.014526 (0.096333) | 0.120617 / 0.176557 (-0.055939) | 0.171781 / 0.737135 (-0.565355) | 0.125627 / 0.296338 (-0.170712) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436024 / 0.215209 (0.220815) | 4.359167 / 2.077655 (2.281512) | 2.188399 / 1.504120 (0.684279) | 2.001196 / 1.541195 (0.460001) | 2.023710 / 1.468490 (0.555220) | 0.713799 / 4.584777 (-3.870978) | 3.832217 / 3.745712 (0.086504) | 3.269351 / 5.269862 (-2.000510) | 1.534608 / 4.565676 (-3.031068) | 0.088505 / 0.424275 (-0.335770) | 0.012345 / 0.007607 (0.004738) | 0.542446 / 0.226044 (0.316401) | 5.377757 / 2.268929 (3.108828) | 2.659837 / 55.444624 (-52.784787) | 2.272356 / 6.876477 (-4.604120) | 2.297289 / 2.142072 (0.155217) | 0.855276 / 4.805227 (-3.949952) | 0.170666 / 6.500664 (-6.329998) | 0.064549 / 0.075469 (-0.010920) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255938 / 1.841788 (-0.585850) | 15.151471 / 8.074308 (7.077163) | 12.905762 / 10.191392 (2.714370) | 0.162425 / 0.680424 (-0.517999) | 0.017504 / 0.534201 (-0.516697) | 0.448671 / 0.579283 (-0.130612) | 0.422424 / 0.434364 (-0.011940) | 0.551772 / 0.540337 (0.011434) | 0.649115 / 1.386936 (-0.737821) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#be73d9f192149727c5542ff257df81b03024fa39 \"CML watermark\")\n",
"Having those different checks helps providing an appropriate error message.\r\n\r\nIf the input is a dict, we suggest to select a split. If the input lists is a mix of iterable and non-iterable, we mention that it must be one or the other.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006559 / 0.011353 (-0.004794) | 0.004569 / 0.011008 (-0.006439) | 0.104503 / 0.038508 (0.065995) | 0.028220 / 0.023109 (0.005111) | 0.365507 / 0.275898 (0.089609) | 0.400238 / 0.323480 (0.076758) | 0.004968 / 0.007986 (-0.003017) | 0.003271 / 0.004328 (-0.001057) | 0.082804 / 0.004250 (0.078554) | 0.036299 / 0.037052 (-0.000754) | 0.361201 / 0.258489 (0.102712) | 0.410962 / 0.293841 (0.117121) | 0.030423 / 0.128546 (-0.098123) | 0.011612 / 0.075646 (-0.064034) | 0.331820 / 0.419271 (-0.087452) | 0.043822 / 0.043533 (0.000289) | 0.356242 / 0.255139 (0.101103) | 0.393035 / 0.283200 (0.109836) | 0.088426 / 0.141683 (-0.053257) | 1.484139 / 1.452155 (0.031984) | 1.566712 / 1.492716 (0.073995) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195887 / 0.018006 (0.177880) | 0.402720 / 0.000490 (0.402231) | 0.003516 / 0.000200 (0.003316) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023270 / 0.037411 (-0.014141) | 0.095834 / 0.014526 (0.081308) | 0.102924 / 0.176557 (-0.073632) | 0.161397 / 0.737135 (-0.575738) | 0.105225 / 0.296338 (-0.191114) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451701 / 0.215209 (0.236491) | 4.495171 / 2.077655 (2.417517) | 2.223203 / 1.504120 (0.719083) | 2.035533 / 1.541195 (0.494338) | 2.076182 / 1.468490 
(0.607692) | 0.697317 / 4.584777 (-3.887460) | 3.406309 / 3.745712 (-0.339403) | 1.847179 / 5.269862 (-3.422683) | 1.158762 / 4.565676 (-3.406914) | 0.083067 / 0.424275 (-0.341208) | 0.012453 / 0.007607 (0.004846) | 0.546502 / 0.226044 (0.320458) | 5.455712 / 2.268929 (3.186784) | 2.654142 / 55.444624 (-52.790483) | 2.298722 / 6.876477 (-4.577755) | 2.383467 / 2.142072 (0.241395) | 0.805950 / 4.805227 (-3.999278) | 0.152479 / 6.500664 (-6.348185) | 0.066784 / 0.075469 (-0.008685) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.239129 / 1.841788 (-0.602659) | 13.603707 / 8.074308 (5.529398) | 14.062004 / 10.191392 (3.870612) | 0.130928 / 0.680424 (-0.549495) | 0.016907 / 0.534201 (-0.517294) | 0.381614 / 0.579283 (-0.197670) | 0.386770 / 0.434364 (-0.047594) | 0.455792 / 0.540337 (-0.084545) | 0.526092 / 1.386936 (-0.860844) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006202 / 0.011353 (-0.005151) | 0.004478 / 0.011008 (-0.006531) | 0.076492 / 0.038508 (0.037984) | 0.026703 / 0.023109 (0.003594) | 0.355134 / 0.275898 (0.079236) | 0.391207 / 0.323480 (0.067727) | 0.004852 / 0.007986 (-0.003133) | 0.003271 / 0.004328 (-0.001057) | 0.075080 / 0.004250 (0.070830) | 0.038803 / 0.037052 (0.001750) | 0.359530 / 0.258489 (0.101041) | 0.409044 / 0.293841 (0.115203) | 0.030366 / 0.128546 (-0.098180) | 0.011544 / 0.075646 (-0.064102) | 0.084849 / 0.419271 (-0.334423) | 0.040076 / 0.043533 (-0.003457) | 0.357359 / 0.255139 (0.102220) | 0.384075 / 0.283200 (0.100875) | 0.089130 / 0.141683 (-0.052552) | 1.520400 / 1.452155 (0.068246) | 1.604403 / 1.492716 (0.111687) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257127 / 0.018006 (0.239121) | 0.403691 / 0.000490 (0.403202) | 0.006894 / 0.000200 (0.006694) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024653 / 0.037411 (-0.012758) | 0.098834 / 0.014526 (0.084309) | 0.107276 / 0.176557 (-0.069281) | 0.158256 / 0.737135 (-0.578879) | 0.111339 / 0.296338 (-0.184999) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445006 / 0.215209 (0.229797) | 4.452953 / 2.077655 (2.375299) | 2.168291 / 1.504120 (0.664171) | 1.969457 / 1.541195 (0.428262) | 2.003505 / 1.468490 (0.535015) | 0.695857 / 4.584777 (-3.888920) | 3.433424 / 3.745712 (-0.312288) | 2.466977 / 5.269862 (-2.802885) | 1.528167 / 4.565676 (-3.037509) | 0.082425 / 0.424275 (-0.341850) | 0.012470 / 0.007607 (0.004863) | 0.559039 / 0.226044 (0.332995) | 5.609496 / 2.268929 (3.340568) | 2.602898 / 55.444624 (-52.841726) | 2.273971 / 6.876477 (-4.602506) | 2.303370 / 2.142072 (0.161298) | 0.803875 / 4.805227 (-4.001352) | 0.151069 / 6.500664 (-6.349595) | 0.067956 / 0.075469 (-0.007513) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.334443 / 1.841788 (-0.507345) | 13.773252 / 8.074308 (5.698944) | 13.007042 / 10.191392 (2.815650) | 0.127939 / 0.680424 (-0.552485) | 0.016412 / 0.534201 (-0.517789) | 0.374744 / 0.579283 (-0.204539) | 0.396912 / 0.434364 (-0.037452) | 0.443197 / 0.540337 (-0.097140) | 0.528338 / 1.386936 (-0.858598) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#51d9f2a3064aa89a780e3d02c6cc34000c51c4fb \"CML watermark\")\n",
"Just modified it to use only one loop. I think I managed to keep it readable as well",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007382 / 0.011353 (-0.003971) | 0.005143 / 0.011008 (-0.005865) | 0.097635 / 0.038508 (0.059127) | 0.034726 / 0.023109 (0.011616) | 0.315556 / 0.275898 (0.039658) | 0.355951 / 0.323480 (0.032472) | 0.006055 / 0.007986 (-0.001931) | 0.004264 / 0.004328 (-0.000065) | 0.073636 / 0.004250 (0.069386) | 0.050480 / 0.037052 (0.013428) | 0.316031 / 0.258489 (0.057542) | 0.363933 / 0.293841 (0.070092) | 0.035138 / 0.128546 (-0.093408) | 0.012407 / 0.075646 (-0.063239) | 0.333677 / 0.419271 (-0.085595) | 0.050586 / 0.043533 (0.007053) | 0.309507 / 0.255139 (0.054369) | 0.327043 / 0.283200 (0.043844) | 0.108975 / 0.141683 (-0.032708) | 1.447778 / 1.452155 (-0.004377) | 1.519971 / 1.492716 (0.027255) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248770 / 0.018006 (0.230764) | 0.603036 / 0.000490 (0.602546) | 0.000383 / 0.000200 (0.000183) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027094 / 0.037411 (-0.010317) | 0.104427 / 0.014526 (0.089901) | 0.120627 / 0.176557 (-0.055929) | 0.178790 / 0.737135 (-0.558346) | 0.124877 / 0.296338 (-0.171461) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414442 / 0.215209 (0.199233) | 4.138009 / 2.077655 (2.060355) | 1.964642 / 1.504120 (0.460523) | 1.775940 / 1.541195 (0.234745) | 1.899719 / 1.468490 
(0.431228) | 0.695406 / 4.584777 (-3.889371) | 3.760470 / 3.745712 (0.014758) | 3.906958 / 5.269862 (-1.362904) | 2.028164 / 4.565676 (-2.537513) | 0.086704 / 0.424275 (-0.337571) | 0.012465 / 0.007607 (0.004857) | 0.512336 / 0.226044 (0.286292) | 5.108587 / 2.268929 (2.839659) | 2.435273 / 55.444624 (-53.009352) | 2.142387 / 6.876477 (-4.734090) | 2.258234 / 2.142072 (0.116162) | 0.854035 / 4.805227 (-3.951193) | 0.170443 / 6.500664 (-6.330222) | 0.065762 / 0.075469 (-0.009707) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.187529 / 1.841788 (-0.654259) | 15.151164 / 8.074308 (7.076856) | 14.577545 / 10.191392 (4.386153) | 0.166973 / 0.680424 (-0.513450) | 0.017883 / 0.534201 (-0.516318) | 0.427607 / 0.579283 (-0.151676) | 0.417050 / 0.434364 (-0.017314) | 0.508116 / 0.540337 (-0.032221) | 0.590173 / 1.386936 (-0.796763) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007499 / 0.011353 (-0.003854) | 0.005195 / 0.011008 (-0.005813) | 0.073600 / 0.038508 (0.035091) | 0.033574 / 0.023109 (0.010464) | 0.377506 / 0.275898 (0.101608) | 0.432752 / 0.323480 (0.109272) | 0.006042 / 0.007986 (-0.001944) | 0.006427 / 0.004328 (0.002098) | 0.071666 / 0.004250 (0.067416) | 0.053243 / 0.037052 (0.016190) | 0.363972 / 0.258489 (0.105483) | 0.454988 / 0.293841 (0.161147) | 0.035118 / 0.128546 (-0.093428) | 0.012395 / 0.075646 (-0.063251) | 0.084308 / 0.419271 (-0.334963) | 0.048589 / 0.043533 (0.005057) | 0.368036 / 0.255139 (0.112897) | 0.399414 / 0.283200 (0.116215) | 0.109043 / 0.141683 (-0.032640) | 1.462972 / 1.452155 (0.010817) | 1.574443 / 1.492716 (0.081726) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215107 / 0.018006 (0.197101) | 0.550255 / 0.000490 (0.549765) | 0.004630 / 0.000200 (0.004430) | 0.000104 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029948 / 0.037411 (-0.007463) | 0.111866 / 0.014526 (0.097340) | 0.126559 / 0.176557 (-0.049997) | 0.181443 / 0.737135 (-0.555693) | 0.130559 / 0.296338 (-0.165779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441410 / 0.215209 (0.226201) | 4.403406 / 2.077655 (2.325752) | 2.180276 / 1.504120 (0.676156) | 2.003729 / 1.541195 (0.462534) | 2.079394 / 1.468490 (0.610904) | 0.706061 / 4.584777 (-3.878716) | 3.805668 / 3.745712 (0.059956) | 3.864941 / 5.269862 (-1.404921) | 1.970468 / 4.565676 (-2.595208) | 0.086033 / 0.424275 (-0.338242) | 0.012261 / 0.007607 (0.004654) | 0.550427 / 0.226044 (0.324383) | 5.542270 / 2.268929 (3.273342) | 2.717047 / 55.444624 (-52.727577) | 2.449022 / 6.876477 (-4.427455) | 2.549567 / 2.142072 (0.407495) | 0.854981 / 4.805227 (-3.950247) | 0.169756 / 6.500664 (-6.330908) | 0.067082 / 0.075469 (-0.008387) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281369 / 1.841788 (-0.560419) | 15.445090 / 8.074308 (7.370781) | 13.205652 / 10.191392 (3.014260) | 0.170070 / 0.680424 (-0.510354) | 0.017815 / 0.534201 (-0.516385) | 0.425193 / 0.579283 (-0.154090) | 0.425205 / 0.434364 (-0.009159) | 0.493561 / 0.540337 (-0.046776) | 0.588994 / 1.386936 (-0.797942) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e427105fc68fce04d0f3c74efb942cbf3a65d166 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006345 / 0.011353 (-0.005008) | 0.004330 / 0.011008 (-0.006678) | 0.096327 / 0.038508 (0.057819) | 0.032964 / 0.023109 (0.009855) | 0.335600 / 0.275898 (0.059702) | 0.365635 / 0.323480 (0.042155) | 0.005435 / 0.007986 (-0.002551) | 0.005005 / 0.004328 (0.000677) | 0.071107 / 0.004250 (0.066856) | 0.044363 / 0.037052 (0.007311) | 0.339988 / 0.258489 (0.081498) | 0.375575 / 0.293841 (0.081734) | 0.028343 / 0.128546 (-0.100203) | 0.008587 / 0.075646 (-0.067059) | 0.324349 / 0.419271 (-0.094922) | 0.050105 / 0.043533 (0.006573) | 0.327398 / 0.255139 (0.072259) | 0.348479 / 0.283200 (0.065279) | 0.102357 / 0.141683 (-0.039326) | 1.419905 / 1.452155 (-0.032250) | 1.534887 / 1.492716 (0.042171) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212418 / 0.018006 (0.194412) | 0.433183 / 0.000490 (0.432693) | 0.000595 / 0.000200 (0.000395) | 0.000062 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027520 / 0.037411 (-0.009891) | 0.109503 / 0.014526 (0.094977) | 0.118202 / 0.176557 (-0.058355) | 0.177236 / 0.737135 (-0.559899) | 0.123736 / 0.296338 (-0.172602) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405734 / 0.215209 (0.190525) | 4.039566 / 2.077655 (1.961911) | 1.838211 / 1.504120 (0.334091) | 1.652650 / 1.541195 (0.111456) | 1.753488 / 1.468490 
(0.284998) | 0.525258 / 4.584777 (-4.059519) | 3.704509 / 3.745712 (-0.041203) | 1.826794 / 5.269862 (-3.443067) | 1.236361 / 4.565676 (-3.329315) | 0.065619 / 0.424275 (-0.358656) | 0.011606 / 0.007607 (0.003999) | 0.505954 / 0.226044 (0.279910) | 5.054140 / 2.268929 (2.785211) | 2.352587 / 55.444624 (-53.092037) | 2.050601 / 6.876477 (-4.825875) | 2.097222 / 2.142072 (-0.044850) | 0.641044 / 4.805227 (-4.164183) | 0.140676 / 6.500664 (-6.359988) | 0.063217 / 0.075469 (-0.012253) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.177750 / 1.841788 (-0.664038) | 14.819346 / 8.074308 (6.745038) | 14.085937 / 10.191392 (3.894545) | 0.168618 / 0.680424 (-0.511806) | 0.017189 / 0.534201 (-0.517011) | 0.393415 / 0.579283 (-0.185868) | 0.422879 / 0.434364 (-0.011485) | 0.477289 / 0.540337 (-0.063048) | 0.569078 / 1.386936 (-0.817858) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006502 / 0.011353 (-0.004850) | 0.004640 / 0.011008 (-0.006368) | 0.073272 / 0.038508 (0.034764) | 0.033225 / 0.023109 (0.010116) | 0.359165 / 0.275898 (0.083267) | 0.391659 / 0.323480 (0.068179) | 0.005684 / 0.007986 (-0.002302) | 0.004045 / 0.004328 (-0.000284) | 0.072880 / 0.004250 (0.068629) | 0.046260 / 0.037052 (0.009208) | 0.361772 / 0.258489 (0.103283) | 0.402905 / 0.293841 (0.109064) | 0.027732 / 0.128546 (-0.100814) | 0.008864 / 0.075646 (-0.066783) | 0.081961 / 0.419271 (-0.337310) | 0.046170 / 0.043533 (0.002637) | 0.364198 / 0.255139 (0.109059) | 0.387468 / 0.283200 (0.104269) | 0.105456 / 0.141683 (-0.036227) | 1.457176 / 1.452155 (0.005021) | 1.564899 / 1.492716 (0.072183) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.179129 / 0.018006 (0.161123) | 0.439699 / 0.000490 (0.439209) | 0.002882 / 0.000200 (0.002682) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029123 / 0.037411 (-0.008288) | 0.112046 / 0.014526 (0.097520) | 0.122773 / 0.176557 (-0.053784) | 0.178404 / 0.737135 (-0.558732) | 0.127904 / 0.296338 (-0.168434) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440413 / 0.215209 (0.225204) | 4.407334 / 2.077655 (2.329680) | 2.112932 / 1.504120 (0.608812) | 1.911034 / 1.541195 (0.369840) | 2.057168 / 1.468490 (0.588677) | 0.525472 / 4.584777 (-4.059305) | 3.738894 / 3.745712 (-0.006818) | 1.807592 / 5.269862 (-3.462270) | 1.053837 / 4.565676 (-3.511839) | 0.066203 / 0.424275 (-0.358072) | 0.011965 / 0.007607 (0.004358) | 0.541137 / 0.226044 (0.315093) | 5.415040 / 2.268929 (3.146112) | 2.580476 / 55.444624 (-52.864148) | 2.234144 / 6.876477 (-4.642333) | 2.306014 / 2.142072 (0.163942) | 0.644221 / 4.805227 (-4.161006) | 0.142870 / 6.500664 (-6.357794) | 0.065015 / 0.075469 (-0.010454) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303465 / 1.841788 (-0.538323) | 14.949683 / 8.074308 (6.875375) | 14.370871 / 10.191392 (4.179478) | 0.142714 / 0.680424 (-0.537710) | 0.017372 / 0.534201 (-0.516829) | 0.403898 / 0.579283 (-0.175385) | 0.424781 / 0.434364 (-0.009583) | 0.465984 / 0.540337 (-0.074353) | 0.570863 / 1.386936 (-0.816074) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#22d1d533e8ab831b1aa1aab3e7d3c72ba42a83e8 \"CML watermark\")\n"
] | 2023-05-15T10:36:24 | 2023-05-23T10:40:13 | 2023-05-23T10:32:58 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5861",
"html_url": "https://github.com/huggingface/datasets/pull/5861",
"diff_url": "https://github.com/huggingface/datasets/pull/5861.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5861.patch",
"merged_at": "2023-05-23T10:32:58"
} | close https://github.com/huggingface/datasets/issues/5851 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5861/timeline | null | null | true |
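The record above tracks the fix for inferring a dataset builder module when data files use uppercase extensions. A minimal sketch of the idea (lowercase the file suffix before looking it up) is shown below; the `_EXTENSION_TO_MODULE` mapping and the function name are illustrative stand-ins, not the library's exact internals.

```python
# Sketch: case-insensitive inference of a builder module from data file names.
# Mapping contents and function name are illustrative, not `datasets` internals.
from collections import Counter
from typing import List, Optional

_EXTENSION_TO_MODULE = {
    ".csv": "csv",
    ".tsv": "csv",
    ".json": "json",
    ".jsonl": "json",
    ".parquet": "parquet",
    ".txt": "text",
}

def infer_module_for_data_files(data_files: List[str]) -> Optional[str]:
    """Return the module matching the most common file extension."""
    # Lowercasing the suffix is the actual fix: it lets uppercase
    # extensions such as ".CSV" or ".JSON" resolve to a module.
    extensions = Counter(
        "." + path.rsplit(".", 1)[-1].lower() for path in data_files if "." in path
    )
    for extension, _count in extensions.most_common():
        if extension in _EXTENSION_TO_MODULE:
            return _EXTENSION_TO_MODULE[extension]
    return None

assert infer_module_for_data_files(["train.CSV", "test.csv"]) == "csv"
```

Without the `.lower()` call, a file named `train.CSV` would fall through the mapping and no module would be inferred.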
https://api.github.com/repos/huggingface/datasets/issues/5860 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5860/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5860/comments | https://api.github.com/repos/huggingface/datasets/issues/5860/events | https://github.com/huggingface/datasets/pull/5860 | 1,709,727,460 | PR_kwDODunzps5QfojD | 5,860 | Minor tqdm optim | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006917 / 0.011353 (-0.004436) | 0.004803 / 0.011008 (-0.006205) | 0.097082 / 0.038508 (0.058574) | 0.035105 / 0.023109 (0.011996) | 0.325911 / 0.275898 (0.050013) | 0.371858 / 0.323480 (0.048378) | 0.006451 / 0.007986 (-0.001534) | 0.004421 / 0.004328 (0.000093) | 0.075738 / 0.004250 (0.071487) | 0.053624 / 0.037052 (0.016572) | 0.332661 / 0.258489 (0.074172) | 0.372729 / 0.293841 (0.078888) | 0.028279 / 0.128546 (-0.100267) | 0.009318 / 0.075646 (-0.066328) | 0.328505 / 0.419271 (-0.090766) | 0.066962 / 0.043533 (0.023429) | 0.316863 / 0.255139 (0.061724) | 0.344296 / 0.283200 (0.061096) | 0.120575 / 0.141683 (-0.021108) | 1.457867 / 1.452155 (0.005712) | 1.597361 / 1.492716 (0.104644) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296399 / 0.018006 (0.278392) | 0.507196 / 0.000490 (0.506706) | 0.003036 / 0.000200 (0.002836) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028535 / 0.037411 (-0.008876) | 0.110566 / 0.014526 (0.096040) | 0.122078 / 0.176557 (-0.054479) | 0.182926 / 0.737135 (-0.554210) | 0.125546 / 0.296338 (-0.170792) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426952 / 0.215209 (0.211742) | 4.255608 / 2.077655 (2.177953) | 2.063865 / 1.504120 (0.559745) | 1.867198 / 1.541195 (0.326004) | 2.058236 / 1.468490 
(0.589746) | 0.525885 / 4.584777 (-4.058892) | 3.723607 / 3.745712 (-0.022105) | 1.919144 / 5.269862 (-3.350718) | 1.235308 / 4.565676 (-3.330368) | 0.066423 / 0.424275 (-0.357852) | 0.012045 / 0.007607 (0.004438) | 0.528432 / 0.226044 (0.302388) | 5.268723 / 2.268929 (2.999794) | 2.504071 / 55.444624 (-52.940553) | 2.137999 / 6.876477 (-4.738477) | 2.229987 / 2.142072 (0.087914) | 0.641739 / 4.805227 (-4.163488) | 0.142635 / 6.500664 (-6.358029) | 0.065649 / 0.075469 (-0.009820) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.182710 / 1.841788 (-0.659078) | 15.339777 / 8.074308 (7.265469) | 14.722308 / 10.191392 (4.530916) | 0.145914 / 0.680424 (-0.534510) | 0.017861 / 0.534201 (-0.516340) | 0.393092 / 0.579283 (-0.186191) | 0.431179 / 0.434364 (-0.003185) | 0.485712 / 0.540337 (-0.054625) | 0.602634 / 1.386936 (-0.784302) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006792 / 0.011353 (-0.004561) | 0.005118 / 0.011008 (-0.005890) | 0.073440 / 0.038508 (0.034932) | 0.033751 / 0.023109 (0.010642) | 0.389243 / 0.275898 (0.113345) | 0.397083 / 0.323480 (0.073603) | 0.005989 / 0.007986 (-0.001997) | 0.004289 / 0.004328 (-0.000040) | 0.073228 / 0.004250 (0.068977) | 0.053490 / 0.037052 (0.016438) | 0.396070 / 0.258489 (0.137581) | 0.415134 / 0.293841 (0.121293) | 0.028649 / 0.128546 (-0.099897) | 0.009159 / 0.075646 (-0.066487) | 0.080813 / 0.419271 (-0.338458) | 0.048200 / 0.043533 (0.004667) | 0.388009 / 0.255139 (0.132870) | 0.382174 / 0.283200 (0.098975) | 0.107807 / 0.141683 (-0.033876) | 1.467276 / 1.452155 (0.015121) | 1.568091 / 1.492716 (0.075375) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.328030 / 0.018006 (0.310024) | 0.498058 / 0.000490 (0.497568) | 0.002513 / 0.000200 (0.002313) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029835 / 0.037411 (-0.007576) | 0.113859 / 0.014526 (0.099333) | 0.130813 / 0.176557 (-0.045743) | 0.183646 / 0.737135 (-0.553490) | 0.136561 / 0.296338 (-0.159777) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438901 / 0.215209 (0.223692) | 4.376426 / 2.077655 (2.298771) | 2.220932 / 1.504120 (0.716812) | 2.043585 / 1.541195 (0.502390) | 2.161383 / 1.468490 (0.692893) | 0.523224 / 4.584777 (-4.061553) | 3.730589 / 3.745712 (-0.015123) | 1.859602 / 5.269862 (-3.410260) | 1.073415 / 4.565676 (-3.492261) | 0.066363 / 0.424275 (-0.357912) | 0.012491 / 0.007607 (0.004884) | 0.542052 / 0.226044 (0.316008) | 5.426246 / 2.268929 (3.157318) | 2.673884 / 55.444624 (-52.770740) | 2.372611 / 6.876477 (-4.503865) | 2.482216 / 2.142072 (0.340143) | 0.705669 / 4.805227 (-4.099558) | 0.141075 / 6.500664 (-6.359589) | 0.065339 / 0.075469 (-0.010130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.316403 / 1.841788 (-0.525385) | 15.832870 / 8.074308 (7.758562) | 13.307045 / 10.191392 (3.115653) | 0.147258 / 0.680424 (-0.533166) | 0.017966 / 0.534201 (-0.516235) | 0.414396 / 0.579283 (-0.164887) | 0.431801 / 0.434364 (-0.002563) | 0.465483 / 0.540337 (-0.074855) | 0.577850 / 1.386936 (-0.809086) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c795c7e332a7c850c3e725f2034d4894b5e314f7 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006368 / 0.011353 (-0.004985) | 0.004274 / 0.011008 (-0.006734) | 0.098799 / 0.038508 (0.060291) | 0.029096 / 0.023109 (0.005986) | 0.308009 / 0.275898 (0.032111) | 0.345701 / 0.323480 (0.022221) | 0.005312 / 0.007986 (-0.002674) | 0.003435 / 0.004328 (-0.000894) | 0.075912 / 0.004250 (0.071662) | 0.041993 / 0.037052 (0.004941) | 0.320075 / 0.258489 (0.061586) | 0.347506 / 0.293841 (0.053665) | 0.025456 / 0.128546 (-0.103091) | 0.008461 / 0.075646 (-0.067185) | 0.322823 / 0.419271 (-0.096448) | 0.044650 / 0.043533 (0.001117) | 0.314118 / 0.255139 (0.058979) | 0.333436 / 0.283200 (0.050237) | 0.093811 / 0.141683 (-0.047871) | 1.464464 / 1.452155 (0.012310) | 1.548098 / 1.492716 (0.055382) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.015905 / 0.018006 (-0.002101) | 0.427847 / 0.000490 (0.427357) | 0.007600 / 0.000200 (0.007400) | 0.000421 / 0.000054 (0.000366) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024530 / 0.037411 (-0.012882) | 0.099907 / 0.014526 (0.085381) | 0.107282 / 0.176557 (-0.069275) | 0.168332 / 0.737135 (-0.568804) | 0.109875 / 0.296338 (-0.186464) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451064 / 0.215209 (0.235855) | 4.491434 / 2.077655 (2.413779) | 2.253251 / 1.504120 (0.749131) | 2.086740 / 1.541195 (0.545545) | 2.133288 / 1.468490 
(0.664798) | 0.558801 / 4.584777 (-4.025976) | 3.463525 / 3.745712 (-0.282187) | 1.747657 / 5.269862 (-3.522205) | 1.005465 / 4.565676 (-3.560211) | 0.068341 / 0.424275 (-0.355934) | 0.012521 / 0.007607 (0.004914) | 0.567002 / 0.226044 (0.340957) | 5.689529 / 2.268929 (3.420601) | 2.700562 / 55.444624 (-52.744062) | 2.384888 / 6.876477 (-4.491589) | 2.503160 / 2.142072 (0.361088) | 0.667107 / 4.805227 (-4.138120) | 0.137253 / 6.500664 (-6.363412) | 0.068300 / 0.075469 (-0.007170) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.202916 / 1.841788 (-0.638872) | 14.163393 / 8.074308 (6.089085) | 14.402463 / 10.191392 (4.211071) | 0.145273 / 0.680424 (-0.535151) | 0.016996 / 0.534201 (-0.517205) | 0.363520 / 0.579283 (-0.215763) | 0.421595 / 0.434364 (-0.012769) | 0.438413 / 0.540337 (-0.101925) | 0.508615 / 1.386936 (-0.878321) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006419 / 0.011353 (-0.004934) | 0.004346 / 0.011008 (-0.006662) | 0.076356 / 0.038508 (0.037848) | 0.029370 / 0.023109 (0.006260) | 0.371046 / 0.275898 (0.095148) | 0.398279 / 0.323480 (0.074799) | 0.005258 / 0.007986 (-0.002728) | 0.003528 / 0.004328 (-0.000800) | 0.076787 / 0.004250 (0.072537) | 0.041575 / 0.037052 (0.004522) | 0.362319 / 0.258489 (0.103830) | 0.402134 / 0.293841 (0.108293) | 0.025633 / 0.128546 (-0.102913) | 0.008826 / 0.075646 (-0.066820) | 0.082380 / 0.419271 (-0.336892) | 0.041655 / 0.043533 (-0.001878) | 0.357583 / 0.255139 (0.102444) | 0.383486 / 0.283200 (0.100287) | 0.093682 / 0.141683 (-0.048001) | 1.488522 / 1.452155 (0.036367) | 1.576090 / 1.492716 (0.083373) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185556 / 0.018006 (0.167550) | 0.431345 / 0.000490 (0.430855) | 0.002290 / 0.000200 (0.002090) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026030 / 0.037411 (-0.011382) | 0.102889 / 0.014526 (0.088364) | 0.109541 / 0.176557 (-0.067015) | 0.161050 / 0.737135 (-0.576085) | 0.113525 / 0.296338 (-0.182814) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445301 / 0.215209 (0.230092) | 4.437320 / 2.077655 (2.359666) | 2.174181 / 1.504120 (0.670061) | 1.977440 / 1.541195 (0.436245) | 2.036323 / 1.468490 (0.567832) | 0.554227 / 4.584777 (-4.030550) | 3.462746 / 3.745712 (-0.282966) | 1.765257 / 5.269862 (-3.504604) | 1.014515 / 4.565676 (-3.551161) | 0.068391 / 0.424275 (-0.355884) | 0.013154 / 0.007607 (0.005546) | 0.546696 / 0.226044 (0.320652) | 5.490628 / 2.268929 (3.221699) | 2.611947 / 55.444624 (-52.832677) | 2.282659 / 6.876477 (-4.593818) | 2.333972 / 2.142072 (0.191899) | 0.663140 / 4.805227 (-4.142087) | 0.137996 / 6.500664 (-6.362668) | 0.069063 / 0.075469 (-0.006407) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.332147 / 1.841788 (-0.509641) | 14.781592 / 8.074308 (6.707284) | 13.399190 / 10.191392 (3.207798) | 0.139370 / 0.680424 (-0.541054) | 0.016742 / 0.534201 (-0.517459) | 0.364138 / 0.579283 (-0.215146) | 0.402479 / 0.434364 (-0.031885) | 0.427591 / 0.540337 (-0.112746) | 0.520864 / 1.386936 (-0.866072) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a8279677b58b93f77995c7da67aea2a04b6a7395 \"CML watermark\")\n"
] | 2023-05-15T09:49:37 | 2023-05-17T18:46:46 | 2023-05-17T18:39:35 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5860",
"html_url": "https://github.com/huggingface/datasets/pull/5860",
"diff_url": "https://github.com/huggingface/datasets/pull/5860.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5860.patch",
"merged_at": "2023-05-17T18:39:35"
} | Don't create a tqdm progress bar when `disable_tqdm` is passed to `map_nested`.
On my side it sped up some iterable datasets by ~30% when `map_nested` is used extensively to recursively tensorize python dicts (a minimal sketch of the guard follows this record). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5860/timeline | null | null | true |
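The record above describes skipping progress-bar creation entirely when `disable_tqdm` is passed to `map_nested`. A minimal sketch of that guard, using a simplified recursive stand-in rather than the library's real `map_nested`:

```python
# Sketch: only construct a tqdm bar when one is actually wanted.
from tqdm.auto import tqdm

def map_nested(function, data_struct, disable_tqdm: bool = True):
    """Apply `function` to every leaf of nested dicts/lists/tuples."""
    if isinstance(data_struct, dict):
        items = data_struct.items()
        if not disable_tqdm:  # the guard: no bar object at all when disabled
            items = tqdm(items, desc="Map")
        return {k: map_nested(function, v) for k, v in items}
    if isinstance(data_struct, (list, tuple)):
        return type(data_struct)(map_nested(function, v) for v in data_struct)
    return function(data_struct)

# Example: "tensorize" a nested dict of numbers without any progress bar.
nested = {"a": [1, 2], "b": {"c": 3}}
assert map_nested(lambda x: x * 10, nested) == {"a": [10, 20], "b": {"c": 30}}
```

The point of the guard is that constructing a bar object per call, even a disabled one, adds overhead on a hot recursive path; avoiding that construction is what plausibly accounts for the ~30% speedup quoted in the PR body.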
https://api.github.com/repos/huggingface/datasets/issues/5853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5853/comments | https://api.github.com/repos/huggingface/datasets/issues/5853/events | https://github.com/huggingface/datasets/pull/5853 | 1,708,092,786 | PR_kwDODunzps5QaZLP | 5,853 | [docs] Redirects, migrated from nginx | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@mishig25 note that it's not exactly the same behavior as in nginx as here it interacts a bit with the `version` and the `language`\r\n\r\nShould be close enough, though.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007212 / 0.011353 (-0.004141) | 0.005125 / 0.011008 (-0.005883) | 0.098460 / 0.038508 (0.059952) | 0.034040 / 0.023109 (0.010931) | 0.320203 / 0.275898 (0.044305) | 0.357787 / 0.323480 (0.034307) | 0.006000 / 0.007986 (-0.001986) | 0.005644 / 0.004328 (0.001316) | 0.072654 / 0.004250 (0.068403) | 0.049393 / 0.037052 (0.012341) | 0.345686 / 0.258489 (0.087196) | 0.362345 / 0.293841 (0.068504) | 0.036597 / 0.128546 (-0.091949) | 0.012303 / 0.075646 (-0.063343) | 0.334374 / 0.419271 (-0.084897) | 0.062010 / 0.043533 (0.018477) | 0.312547 / 0.255139 (0.057408) | 0.336021 / 0.283200 (0.052821) | 0.112304 / 0.141683 (-0.029378) | 1.446706 / 1.452155 (-0.005449) | 1.523256 / 1.492716 (0.030540) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217658 / 0.018006 (0.199652) | 0.449208 / 0.000490 (0.448718) | 0.002878 / 0.000200 (0.002679) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025735 / 0.037411 (-0.011676) | 0.105876 / 0.014526 (0.091350) | 0.114887 / 0.176557 (-0.061669) | 0.170984 / 0.737135 (-0.566152) | 0.121420 / 0.296338 (-0.174918) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419670 / 0.215209 (0.204461) | 4.189453 / 2.077655 (2.111798) | 1.938236 / 1.504120 (0.434116) | 1.769747 / 1.541195 (0.228553) | 1.910919 / 1.468490 
(0.442429) | 0.705046 / 4.584777 (-3.879730) | 3.783774 / 3.745712 (0.038062) | 2.096504 / 5.269862 (-3.173358) | 1.339265 / 4.565676 (-3.226412) | 0.086670 / 0.424275 (-0.337605) | 0.012243 / 0.007607 (0.004636) | 0.524701 / 0.226044 (0.298657) | 5.240689 / 2.268929 (2.971760) | 2.473622 / 55.444624 (-52.971003) | 2.170568 / 6.876477 (-4.705909) | 2.289653 / 2.142072 (0.147581) | 0.848913 / 4.805227 (-3.956314) | 0.168332 / 6.500664 (-6.332332) | 0.064926 / 0.075469 (-0.010543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.193614 / 1.841788 (-0.648173) | 14.920403 / 8.074308 (6.846095) | 14.475059 / 10.191392 (4.283667) | 0.164458 / 0.680424 (-0.515966) | 0.017613 / 0.534201 (-0.516588) | 0.426311 / 0.579283 (-0.152972) | 0.431478 / 0.434364 (-0.002886) | 0.520280 / 0.540337 (-0.020057) | 0.627738 / 1.386936 (-0.759198) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007458 / 0.011353 (-0.003895) | 0.005363 / 0.011008 (-0.005645) | 0.076713 / 0.038508 (0.038205) | 0.034189 / 0.023109 (0.011079) | 0.359938 / 0.275898 (0.084040) | 0.395532 / 0.323480 (0.072052) | 0.005977 / 0.007986 (-0.002008) | 0.004263 / 0.004328 (-0.000065) | 0.075971 / 0.004250 (0.071721) | 0.051924 / 0.037052 (0.014871) | 0.362818 / 0.258489 (0.104329) | 0.409897 / 0.293841 (0.116056) | 0.035494 / 0.128546 (-0.093053) | 0.012399 / 0.075646 (-0.063247) | 0.088335 / 0.419271 (-0.330937) | 0.047968 / 0.043533 (0.004435) | 0.355744 / 0.255139 (0.100606) | 0.376339 / 0.283200 (0.093139) | 0.104542 / 0.141683 (-0.037141) | 1.464826 / 1.452155 (0.012672) | 1.600665 / 1.492716 (0.107948) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220841 / 0.018006 (0.202834) | 0.446444 / 0.000490 (0.445954) | 0.000392 / 0.000200 (0.000192) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029402 / 0.037411 (-0.008009) | 0.116511 / 0.014526 (0.101986) | 0.122959 / 0.176557 (-0.053598) | 0.171674 / 0.737135 (-0.565462) | 0.129871 / 0.296338 (-0.166468) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.450411 / 0.215209 (0.235202) | 4.471859 / 2.077655 (2.394205) | 2.229439 / 1.504120 (0.725319) | 2.053308 / 1.541195 (0.512114) | 2.142476 / 1.468490 (0.673986) | 0.708299 / 4.584777 (-3.876478) | 3.797830 / 3.745712 (0.052118) | 2.142509 / 5.269862 (-3.127352) | 1.333357 / 4.565676 (-3.232320) | 0.086837 / 0.424275 (-0.337439) | 0.012102 / 0.007607 (0.004495) | 0.548428 / 0.226044 (0.322384) | 5.490611 / 2.268929 (3.221682) | 2.713882 / 55.444624 (-52.730742) | 2.399638 / 6.876477 (-4.476839) | 2.481549 / 2.142072 (0.339477) | 0.839812 / 4.805227 (-3.965415) | 0.168890 / 6.500664 (-6.331774) | 0.065564 / 0.075469 (-0.009906) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.275507 / 1.841788 (-0.566281) | 14.896343 / 8.074308 (6.822035) | 13.159701 / 10.191392 (2.968309) | 0.172065 / 0.680424 (-0.508359) | 0.017507 / 0.534201 (-0.516694) | 0.420031 / 0.579283 (-0.159252) | 0.438835 / 0.434364 (0.004471) | 0.490597 / 0.540337 (-0.049741) | 0.583952 / 1.386936 (-0.802984) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#48c9755d0ae9abe4c4d6cd8c1ce76eff849f0e5c \"CML watermark\")\n"
] | 2023-05-12T19:19:27 | 2023-05-15T10:37:19 | 2023-05-15T10:30:14 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5853",
"html_url": "https://github.com/huggingface/datasets/pull/5853",
"diff_url": "https://github.com/huggingface/datasets/pull/5853.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5853.patch",
"merged_at": "2023-05-15T10:30:14"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5853/timeline | null | null | true |
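The record above migrates documentation redirects from nginx into the docs build; as the review comment notes, the behavior now interacts with the `version` and `language` URL segments. A minimal sketch of that interaction is shown below, with an illustrative redirects mapping and URL scheme rather than the real docs configuration (it assumes Python 3.9+ for `str.removeprefix`):

```python
# Sketch: remap only the trailing page name while preserving the version and
# language segments of a docs URL. The REDIRECTS entries are illustrative.
REDIRECTS = {
    "processing": "process",
    "exploring": "access",
}

def resolve(path: str) -> str:
    """Resolve '/docs/datasets/{version}/{lang}/{page}' through REDIRECTS."""
    version, lang, page = path.removeprefix("/docs/datasets/").split("/", 2)
    return f"/docs/datasets/{version}/{lang}/{REDIRECTS.get(page, page)}"

# Unlike a plain nginx rewrite rule, the version and language are kept:
assert resolve("/docs/datasets/v2.12.0/en/processing") == "/docs/datasets/v2.12.0/en/process"
assert resolve("/docs/datasets/main/fr/exploring") == "/docs/datasets/main/fr/access"
```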
https://api.github.com/repos/huggingface/datasets/issues/5848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5848/comments | https://api.github.com/repos/huggingface/datasets/issues/5848/events | https://github.com/huggingface/datasets/pull/5848 | 1,707,506,734 | PR_kwDODunzps5QYa1B | 5,848 | Add `accelerate` as metric's test dependency to fix CI error | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007565 / 0.011353 (-0.003788) | 0.005361 / 0.011008 (-0.005647) | 0.098963 / 0.038508 (0.060455) | 0.034271 / 0.023109 (0.011162) | 0.323421 / 0.275898 (0.047523) | 0.348495 / 0.323480 (0.025015) | 0.006244 / 0.007986 (-0.001741) | 0.004215 / 0.004328 (-0.000113) | 0.073614 / 0.004250 (0.069364) | 0.049334 / 0.037052 (0.012282) | 0.315277 / 0.258489 (0.056788) | 0.354325 / 0.293841 (0.060484) | 0.035001 / 0.128546 (-0.093545) | 0.012149 / 0.075646 (-0.063497) | 0.335614 / 0.419271 (-0.083657) | 0.050532 / 0.043533 (0.006999) | 0.308500 / 0.255139 (0.053361) | 0.324620 / 0.283200 (0.041421) | 0.110241 / 0.141683 (-0.031442) | 1.443923 / 1.452155 (-0.008232) | 1.559289 / 1.492716 (0.066573) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207629 / 0.018006 (0.189622) | 0.433251 / 0.000490 (0.432762) | 0.003021 / 0.000200 (0.002821) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028312 / 0.037411 (-0.009100) | 0.111829 / 0.014526 (0.097303) | 0.127099 / 0.176557 (-0.049458) | 0.184702 / 0.737135 (-0.552433) | 0.125062 / 0.296338 (-0.171277) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399451 / 0.215209 (0.184242) | 3.966528 / 2.077655 (1.888874) | 1.826004 / 1.504120 (0.321884) | 1.669547 / 1.541195 (0.128353) | 1.751584 / 1.468490 
(0.283094) | 0.688308 / 4.584777 (-3.896469) | 3.813275 / 3.745712 (0.067562) | 3.181554 / 5.269862 (-2.088307) | 1.750566 / 4.565676 (-2.815111) | 0.085038 / 0.424275 (-0.339237) | 0.011992 / 0.007607 (0.004385) | 0.502374 / 0.226044 (0.276330) | 4.970614 / 2.268929 (2.701686) | 2.309617 / 55.444624 (-53.135007) | 2.012427 / 6.876477 (-4.864050) | 2.156348 / 2.142072 (0.014276) | 0.834415 / 4.805227 (-3.970812) | 0.167912 / 6.500664 (-6.332752) | 0.065711 / 0.075469 (-0.009758) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223132 / 1.841788 (-0.618656) | 15.126753 / 8.074308 (7.052445) | 14.829184 / 10.191392 (4.637792) | 0.142582 / 0.680424 (-0.537842) | 0.017483 / 0.534201 (-0.516718) | 0.429768 / 0.579283 (-0.149516) | 0.422745 / 0.434364 (-0.011619) | 0.508813 / 0.540337 (-0.031525) | 0.618716 / 1.386936 (-0.768220) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007749 / 0.011353 (-0.003604) | 0.005433 / 0.011008 (-0.005576) | 0.076223 / 0.038508 (0.037715) | 0.036334 / 0.023109 (0.013225) | 0.375339 / 0.275898 (0.099441) | 0.413674 / 0.323480 (0.090194) | 0.006207 / 0.007986 (-0.001778) | 0.004085 / 0.004328 (-0.000244) | 0.076154 / 0.004250 (0.071904) | 0.050324 / 0.037052 (0.013271) | 0.382919 / 0.258489 (0.124429) | 0.442508 / 0.293841 (0.148667) | 0.035951 / 0.128546 (-0.092595) | 0.012067 / 0.075646 (-0.063580) | 0.087649 / 0.419271 (-0.331623) | 0.048786 / 0.043533 (0.005253) | 0.373541 / 0.255139 (0.118402) | 0.400437 / 0.283200 (0.117237) | 0.102622 / 0.141683 (-0.039061) | 1.472443 / 1.452155 (0.020288) | 1.580178 / 1.492716 (0.087462) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222105 / 0.018006 (0.204098) | 0.445465 / 0.000490 (0.444975) | 0.003671 / 0.000200 (0.003471) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030808 / 0.037411 (-0.006603) | 0.116687 / 0.014526 (0.102161) | 0.124972 / 0.176557 (-0.051584) | 0.175621 / 0.737135 (-0.561514) | 0.129029 / 0.296338 (-0.167310) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434627 / 0.215209 (0.219418) | 4.330268 / 2.077655 (2.252613) | 2.140266 / 1.504120 (0.636146) | 1.960705 / 1.541195 (0.419510) | 2.035949 / 1.468490 (0.567459) | 0.696830 / 4.584777 (-3.887947) | 3.790468 / 3.745712 (0.044756) | 3.194112 / 5.269862 (-2.075750) | 1.577728 / 4.565676 (-2.987948) | 0.085445 / 0.424275 (-0.338830) | 0.012207 / 0.007607 (0.004600) | 0.555199 / 0.226044 (0.329154) | 5.551539 / 2.268929 (3.282610) | 2.630917 / 55.444624 (-52.813707) | 2.383362 / 6.876477 (-4.493114) | 2.476301 / 2.142072 (0.334229) | 0.845773 / 4.805227 (-3.959455) | 0.169229 / 6.500664 (-6.331435) | 0.066064 / 0.075469 (-0.009405) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.277543 / 1.841788 (-0.564245) | 15.775637 / 8.074308 (7.701329) | 13.528588 / 10.191392 (3.337196) | 0.167428 / 0.680424 (-0.512996) | 0.017581 / 0.534201 (-0.516620) | 0.454472 / 0.579283 (-0.124811) | 0.427987 / 0.434364 (-0.006377) | 0.551512 / 0.540337 (0.011175) | 0.650811 / 1.386936 (-0.736125) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#96a6f5f526cc90330df597ae0097274742d5b84f \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009800 / 0.011353 (-0.001552) | 0.006443 / 0.011008 (-0.004565) | 0.144137 / 0.038508 (0.105629) | 0.037493 / 0.023109 (0.014383) | 0.482306 / 0.275898 (0.206408) | 0.467625 / 0.323480 (0.144145) | 0.006812 / 0.007986 (-0.001174) | 0.004810 / 0.004328 (0.000481) | 0.109047 / 0.004250 (0.104796) | 0.047169 / 0.037052 (0.010116) | 0.451253 / 0.258489 (0.192764) | 0.511339 / 0.293841 (0.217498) | 0.055583 / 0.128546 (-0.072963) | 0.021810 / 0.075646 (-0.053836) | 0.426522 / 0.419271 (0.007250) | 0.070282 / 0.043533 (0.026749) | 0.469631 / 0.255139 (0.214492) | 0.484951 / 0.283200 (0.201751) | 0.117370 / 0.141683 (-0.024313) | 1.809917 / 1.452155 (0.357763) | 1.882659 / 1.492716 (0.389943) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223843 / 0.018006 (0.205837) | 0.549216 / 0.000490 (0.548726) | 0.007120 / 0.000200 (0.006920) | 0.000128 / 0.000054 (0.000074) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033057 / 0.037411 (-0.004354) | 0.128242 / 0.014526 (0.113716) | 0.140906 / 0.176557 (-0.035650) | 0.213122 / 0.737135 (-0.524013) | 0.148115 / 0.296338 (-0.148224) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.638712 / 0.215209 (0.423503) | 6.383684 / 2.077655 (4.306029) | 2.477020 / 1.504120 (0.972900) | 2.129190 / 1.541195 (0.587996) | 2.230503 / 1.468490 
(0.762013) | 1.367167 / 4.584777 (-3.217610) | 5.570586 / 3.745712 (1.824873) | 5.462857 / 5.269862 (0.192996) | 2.990604 / 4.565676 (-1.575073) | 0.146543 / 0.424275 (-0.277732) | 0.016060 / 0.007607 (0.008453) | 0.812691 / 0.226044 (0.586646) | 7.928041 / 2.268929 (5.659112) | 3.329494 / 55.444624 (-52.115130) | 2.523452 / 6.876477 (-4.353025) | 2.672374 / 2.142072 (0.530302) | 1.598554 / 4.805227 (-3.206673) | 0.284727 / 6.500664 (-6.215937) | 0.080359 / 0.075469 (0.004889) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.501112 / 1.841788 (-0.340675) | 17.553644 / 8.074308 (9.479335) | 22.704062 / 10.191392 (12.512670) | 0.225575 / 0.680424 (-0.454849) | 0.026531 / 0.534201 (-0.507670) | 0.520129 / 0.579283 (-0.059154) | 0.626220 / 0.434364 (0.191856) | 0.631740 / 0.540337 (0.091403) | 0.750611 / 1.386936 (-0.636325) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009866 / 0.011353 (-0.001487) | 0.005733 / 0.011008 (-0.005275) | 0.111529 / 0.038508 (0.073021) | 0.042001 / 0.023109 (0.018891) | 0.458578 / 0.275898 (0.182680) | 0.507796 / 0.323480 (0.184316) | 0.006547 / 0.007986 (-0.001438) | 0.005611 / 0.004328 (0.001282) | 0.115321 / 0.004250 (0.111070) | 0.048741 / 0.037052 (0.011689) | 0.447611 / 0.258489 (0.189122) | 0.531830 / 0.293841 (0.237989) | 0.052176 / 0.128546 (-0.076370) | 0.022431 / 0.075646 (-0.053216) | 0.120709 / 0.419271 (-0.298562) | 0.067301 / 0.043533 (0.023769) | 0.460577 / 0.255139 (0.205438) | 0.497805 / 0.283200 (0.214605) | 0.121830 / 0.141683 (-0.019853) | 1.876436 / 1.452155 (0.424281) | 1.983491 / 1.492716 (0.490775) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230982 / 0.018006 (0.212976) | 0.540643 / 0.000490 (0.540153) | 0.004646 / 0.000200 (0.004446) | 0.000131 / 0.000054 (0.000077) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034230 / 0.037411 (-0.003181) | 0.136454 / 0.014526 (0.121928) | 0.143370 / 0.176557 (-0.033187) | 0.206752 / 0.737135 (-0.530384) | 0.148722 / 0.296338 (-0.147617) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.704667 / 0.215209 (0.489458) | 7.112079 / 2.077655 (5.034424) | 3.083916 / 1.504120 (1.579797) | 2.606388 / 1.541195 (1.065193) | 2.738505 / 1.468490 (1.270015) | 1.314897 / 4.584777 (-3.269880) | 5.764442 / 3.745712 (2.018729) | 3.491890 / 5.269862 (-1.777972) | 2.299983 / 4.565676 (-2.265693) | 0.169655 / 0.424275 (-0.254620) | 0.015251 / 0.007607 (0.007643) | 0.977230 / 0.226044 (0.751186) | 9.697773 / 2.268929 (7.428844) | 3.826928 / 55.444624 (-51.617697) | 3.108238 / 6.876477 (-3.768239) | 3.103242 / 2.142072 (0.961169) | 1.586645 / 4.805227 (-3.218582) | 0.287181 / 6.500664 (-6.213483) | 0.107332 / 0.075469 (0.031863) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.712710 / 1.841788 (-0.129077) | 19.169403 / 8.074308 (11.095095) | 21.777301 / 10.191392 (11.585909) | 0.216918 / 0.680424 (-0.463506) | 0.026551 / 0.534201 (-0.507650) | 0.570383 / 0.579283 (-0.008900) | 0.643885 / 0.434364 (0.209521) | 0.673906 / 0.540337 (0.133568) | 0.824573 / 1.386936 (-0.562363) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4ead18b6921c9576a3078d2fb685c38f1e1a4b8a \"CML watermark\")\n"
] | 2023-05-12T12:01:01 | 2023-05-12T13:48:47 | 2023-05-12T13:39:06 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5848",
"html_url": "https://github.com/huggingface/datasets/pull/5848",
"diff_url": "https://github.com/huggingface/datasets/pull/5848.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5848.patch",
"merged_at": "2023-05-12T13:39:06"
} | The `frugalscore` metric uses Transformers' Trainer, which, as of recent versions, requires `accelerate` (a dependency-check sketch follows this record).
Fixes the following [CI error](https://github.com/huggingface/datasets/actions/runs/4950900048/jobs/8855148703?pr=5845). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5848/timeline | null | null | true |
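For context on the `frugalscore` fix above: recent Transformers releases make `Trainer` depend on `accelerate`, so any metric script that instantiates a `Trainer` inherits that requirement. Below is a minimal, hypothetical pre-flight check along those lines; the PR itself presumably just adds the dependency for CI, and the error text here is illustrative, not taken from Transformers:

```python
# Hypothetical guard before computing a Trainer-based metric such as `frugalscore`.
# importlib.util.find_spec only checks whether the package can be imported.
import importlib.util

if importlib.util.find_spec("accelerate") is None:
    raise ImportError(
        "The frugalscore metric instantiates transformers.Trainer, which requires "
        "accelerate. Install it with: pip install accelerate"
    )
```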
https://api.github.com/repos/huggingface/datasets/issues/5851 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5851/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5851/comments | https://api.github.com/repos/huggingface/datasets/issues/5851/events | https://github.com/huggingface/datasets/issues/5851 | 1,707,907,048 | I_kwDODunzps5lzJfo | 5,851 | Error message not clear in interleaving datasets | {
"login": "surya-narayanan",
"id": 17240858,
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surya-narayanan",
"html_url": "https://github.com/surya-narayanan",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-05-11T20:52:13 | 2023-05-23T10:32:59 | 2023-05-23T10:32:59 | NONE | null | null | null | ### System Info
standard env
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm trying to interleave the 'sciq', 'wiki' and 'pile-enron' datasets. I think the error I made was that I loaded the train split of one dataset but not of the others, but the error message is not too helpful-
```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/home/suryahari/Vornoi/save_model_ops.py in line 3
     41 # %%
---> 43 dataset = interleave_datasets(datasets, stopping_strategy="all_exhausted")

File ~/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py:124, in interleave_datasets(datasets, probabilities, seed, info, split, stopping_strategy)
    122 for dataset in datasets[1:]:
    123     if (map_style and not isinstance(dataset, Dataset)) or (iterable and not isinstance(dataset, IterableDataset)):
--> 124         raise ValueError(
    125             f"Unable to interleave a {type(datasets[0])} with a {type(dataset)}. Expected a list of Dataset objects or a list of IterableDataset objects."
    126         )
    127 if stopping_strategy not in ["first_exhausted", "all_exhausted"]:
    128     raise ValueError(f"{stopping_strategy} is not supported. Please enter a valid stopping_strategy.")

ValueError: Unable to interleave a <class '...'> with a <class '...'>. Expected a list of Dataset objects or a list of IterableDataset objects.
```
### Expected behavior
the error message should hopefully be made clearer (a minimal reproduction and workaround are sketched after this record) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5851/timeline | null | completed | false |
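A minimal sketch of the failure reported above and a workaround, assuming the mix was one map-style `Dataset` and one streaming `IterableDataset`; the dataset identifiers and `streaming` flags are assumptions, since the report does not show the loading code:

```python
from datasets import load_dataset, interleave_datasets

# Assumed loading code: one map-style dataset, one streaming dataset.
sciq = load_dataset("sciq", split="train")               # map-style Dataset
enron = load_dataset("the_pile", "enron_emails",         # hypothetical identifiers
                     split="train", streaming=True)      # streaming IterableDataset

# interleave_datasets([sciq, enron], stopping_strategy="all_exhausted")
# raises the ValueError quoted above, because the inputs are of mixed kinds.

# Workaround: convert the map-style dataset so all inputs are IterableDataset.
mixed = interleave_datasets(
    [sciq.to_iterable_dataset(), enron],
    stopping_strategy="all_exhausted",
)
```

A clearer message could report the position and class name of the first offending input instead of relying on `type(...)` reprs, which GitHub renders as empty HTML tags (hence the "interleave a with a" in the traceback above).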
https://api.github.com/repos/huggingface/datasets/issues/5845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5845/comments | https://api.github.com/repos/huggingface/datasets/issues/5845/events | https://github.com/huggingface/datasets/pull/5845 | 1,706,253,251 | PR_kwDODunzps5QUMjS | 5,845 | Add `date_format` param to the CSV reader | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007592 / 0.011353 (-0.003761) | 0.005223 / 0.011008 (-0.005786) | 0.110218 / 0.038508 (0.071710) | 0.027644 / 0.023109 (0.004534) | 0.335063 / 0.275898 (0.059165) | 0.347102 / 0.323480 (0.023623) | 0.005107 / 0.007986 (-0.002878) | 0.003932 / 0.004328 (-0.000396) | 0.086095 / 0.004250 (0.081845) | 0.034735 / 0.037052 (-0.002317) | 0.329029 / 0.258489 (0.070540) | 0.370282 / 0.293841 (0.076441) | 0.043040 / 0.128546 (-0.085507) | 0.019626 / 0.075646 (-0.056021) | 0.336452 / 0.419271 (-0.082819) | 0.070365 / 0.043533 (0.026832) | 0.326881 / 0.255139 (0.071742) | 0.354984 / 0.283200 (0.071785) | 0.102605 / 0.141683 (-0.039077) | 1.459161 / 1.452155 (0.007007) | 1.453599 / 1.492716 (-0.039117) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201021 / 0.018006 (0.183015) | 0.456415 / 0.000490 (0.455926) | 0.012349 / 0.000200 (0.012149) | 0.000115 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025199 / 0.037411 (-0.012213) | 0.098536 / 0.014526 (0.084010) | 0.107528 / 0.176557 (-0.069028) | 0.160492 / 0.737135 (-0.576643) | 0.108660 / 0.296338 (-0.187679) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.527020 / 0.215209 (0.311811) | 5.357635 / 2.077655 (3.279980) | 2.062930 / 1.504120 (0.558811) | 1.783009 / 1.541195 (0.241815) | 1.840225 / 1.468490 
(0.371735) | 1.074278 / 4.584777 (-3.510499) | 4.710533 / 3.745712 (0.964821) | 2.611202 / 5.269862 (-2.658660) | 1.885487 / 4.565676 (-2.680189) | 0.123201 / 0.424275 (-0.301074) | 0.013880 / 0.007607 (0.006273) | 0.636511 / 0.226044 (0.410467) | 6.516075 / 2.268929 (4.247146) | 2.710138 / 55.444624 (-52.734486) | 2.046606 / 6.876477 (-4.829871) | 2.085907 / 2.142072 (-0.056166) | 1.199489 / 4.805227 (-3.605738) | 0.211668 / 6.500664 (-6.288996) | 0.075436 / 0.075469 (-0.000033) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.219771 / 1.841788 (-0.622016) | 14.276215 / 8.074308 (6.201907) | 16.611529 / 10.191392 (6.420137) | 0.221091 / 0.680424 (-0.459333) | 0.024922 / 0.534201 (-0.509279) | 0.431906 / 0.579283 (-0.147377) | 0.518863 / 0.434364 (0.084499) | 0.515366 / 0.540337 (-0.024971) | 0.640411 / 1.386936 (-0.746525) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007955 / 0.011353 (-0.003398) | 0.004813 / 0.011008 (-0.006196) | 0.076508 / 0.038508 (0.038000) | 0.028137 / 0.023109 (0.005028) | 0.349609 / 0.275898 (0.073711) | 0.403588 / 0.323480 (0.080109) | 0.005456 / 0.007986 (-0.002530) | 0.005677 / 0.004328 (0.001349) | 0.076882 / 0.004250 (0.072632) | 0.039832 / 0.037052 (0.002779) | 0.351930 / 0.258489 (0.093440) | 0.390492 / 0.293841 (0.096651) | 0.045199 / 0.128546 (-0.083347) | 0.023945 / 0.075646 (-0.051701) | 0.091140 / 0.419271 (-0.328132) | 0.057728 / 0.043533 (0.014195) | 0.370663 / 0.255139 (0.115524) | 0.380649 / 0.283200 (0.097449) | 0.097017 / 0.141683 (-0.044666) | 1.362248 / 1.452155 (-0.089907) | 1.445699 / 1.492716 (-0.047018) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204207 / 0.018006 (0.186201) | 0.474471 / 0.000490 (0.473981) | 0.012187 / 0.000200 (0.011987) | 0.000151 / 0.000054 (0.000096) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023123 / 0.037411 (-0.014288) | 0.097547 / 0.014526 (0.083021) | 0.113877 / 0.176557 (-0.062679) | 0.158307 / 0.737135 (-0.578828) | 0.113876 / 0.296338 (-0.182462) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.519920 / 0.215209 (0.304711) | 5.384371 / 2.077655 (3.306716) | 2.263276 / 1.504120 (0.759156) | 1.960604 / 1.541195 (0.419409) | 2.022864 / 1.468490 (0.554374) | 1.015430 / 4.584777 (-3.569347) | 4.774426 / 3.745712 (1.028714) | 4.549598 / 5.269862 (-0.720264) | 2.412638 / 4.565676 (-2.153039) | 0.117983 / 0.424275 (-0.306292) | 0.013340 / 0.007607 (0.005733) | 0.639826 / 0.226044 (0.413782) | 6.491622 / 2.268929 (4.222693) | 2.946892 / 55.444624 (-52.497732) | 2.376393 / 6.876477 (-4.500084) | 2.285592 / 2.142072 (0.143519) | 1.185049 / 4.805227 (-3.620178) | 0.204127 / 6.500664 (-6.296537) | 0.070285 / 0.075469 (-0.005184) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.439736 / 1.841788 (-0.402052) | 14.852087 / 8.074308 (6.777779) | 15.675742 / 10.191392 (5.484350) | 0.206577 / 0.680424 (-0.473846) | 0.031688 / 0.534201 (-0.502513) | 0.471003 / 0.579283 (-0.108280) | 0.505449 / 0.434364 (0.071085) | 0.506114 / 0.540337 (-0.034224) | 0.583752 / 1.386936 (-0.803184) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d6fcff8a031db39cb31079bc1fa62ded6e35218c \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012965 / 0.011353 (0.001612) | 0.006660 / 0.011008 (-0.004348) | 0.126060 / 0.038508 (0.087551) | 0.041154 / 0.023109 (0.018045) | 0.413428 / 0.275898 (0.137530) | 0.429035 / 0.323480 (0.105555) | 0.006680 / 0.007986 (-0.001305) | 0.005063 / 0.004328 (0.000734) | 0.092161 / 0.004250 (0.087911) | 0.056092 / 0.037052 (0.019039) | 0.421460 / 0.258489 (0.162971) | 0.450291 / 0.293841 (0.156450) | 0.050820 / 0.128546 (-0.077726) | 0.021392 / 0.075646 (-0.054255) | 0.426915 / 0.419271 (0.007643) | 0.064908 / 0.043533 (0.021375) | 0.406769 / 0.255139 (0.151630) | 0.434344 / 0.283200 (0.151144) | 0.127967 / 0.141683 (-0.013716) | 1.922414 / 1.452155 (0.470260) | 1.940717 / 1.492716 (0.448000) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288024 / 0.018006 (0.270017) | 0.615859 / 0.000490 (0.615369) | 0.007095 / 0.000200 (0.006895) | 0.000160 / 0.000054 (0.000106) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028182 / 0.037411 (-0.009230) | 0.126277 / 0.014526 (0.111752) | 0.131687 / 0.176557 (-0.044870) | 0.206191 / 0.737135 (-0.530944) | 0.141799 / 0.296338 (-0.154539) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.631580 / 0.215209 (0.416371) | 6.141942 / 2.077655 (4.064287) | 2.476721 / 1.504120 (0.972602) | 2.128850 / 1.541195 (0.587655) | 2.236468 / 1.468490 
(0.767978) | 1.188665 / 4.584777 (-3.396112) | 5.481179 / 3.745712 (1.735467) | 3.120333 / 5.269862 (-2.149529) | 2.365889 / 4.565676 (-2.199787) | 0.145081 / 0.424275 (-0.279194) | 0.015866 / 0.007607 (0.008259) | 0.795650 / 0.226044 (0.569605) | 7.595289 / 2.268929 (5.326361) | 3.174418 / 55.444624 (-52.270207) | 2.905207 / 6.876477 (-3.971270) | 2.428263 / 2.142072 (0.286191) | 1.408900 / 4.805227 (-3.396328) | 0.265485 / 6.500664 (-6.235179) | 0.083882 / 0.075469 (0.008413) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.517025 / 1.841788 (-0.324762) | 18.110288 / 8.074308 (10.035980) | 20.810003 / 10.191392 (10.618611) | 0.210380 / 0.680424 (-0.470044) | 0.030180 / 0.534201 (-0.504021) | 0.523453 / 0.579283 (-0.055830) | 0.603896 / 0.434364 (0.169532) | 0.622554 / 0.540337 (0.082216) | 0.737973 / 1.386936 (-0.648963) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009795 / 0.011353 (-0.001558) | 0.006269 / 0.011008 (-0.004739) | 0.099938 / 0.038508 (0.061430) | 0.035162 / 0.023109 (0.012052) | 0.506353 / 0.275898 (0.230455) | 0.527804 / 0.323480 (0.204324) | 0.007211 / 0.007986 (-0.000775) | 0.005498 / 0.004328 (0.001169) | 0.098325 / 0.004250 (0.094075) | 0.054513 / 0.037052 (0.017461) | 0.525764 / 0.258489 (0.267274) | 0.576699 / 0.293841 (0.282858) | 0.052800 / 0.128546 (-0.075747) | 0.021192 / 0.075646 (-0.054454) | 0.117676 / 0.419271 (-0.301596) | 0.055415 / 0.043533 (0.011882) | 0.516746 / 0.255139 (0.261607) | 0.528417 / 0.283200 (0.245217) | 0.116947 / 0.141683 (-0.024735) | 1.757864 / 1.452155 (0.305709) | 2.043632 / 1.492716 (0.550916) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284018 / 0.018006 (0.266011) | 0.595086 / 0.000490 (0.594596) | 0.001945 / 0.000200 (0.001745) | 0.000127 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032255 / 0.037411 (-0.005157) | 0.128201 / 0.014526 (0.113676) | 0.139189 / 0.176557 (-0.037367) | 0.199750 / 0.737135 (-0.537385) | 0.149406 / 0.296338 (-0.146933) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.652184 / 0.215209 (0.436975) | 6.453319 / 2.077655 (4.375664) | 2.831566 / 1.504120 (1.327446) | 2.453064 / 1.541195 (0.911869) | 2.622056 / 1.468490 (1.153566) | 1.191279 / 4.584777 (-3.393498) | 5.504720 / 3.745712 (1.759007) | 5.916900 / 5.269862 (0.647038) | 2.974400 / 4.565676 (-1.591277) | 0.142851 / 0.424275 (-0.281424) | 0.015241 / 0.007607 (0.007634) | 0.917537 / 0.226044 (0.691493) | 8.277645 / 2.268929 (6.008717) | 3.700495 / 55.444624 (-51.744130) | 3.047127 / 6.876477 (-3.829350) | 3.093216 / 2.142072 (0.951143) | 1.413529 / 4.805227 (-3.391698) | 0.259395 / 6.500664 (-6.241270) | 0.083144 / 0.075469 (0.007675) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.632240 / 1.841788 (-0.209548) | 18.687403 / 8.074308 (10.613095) | 20.134091 / 10.191392 (9.942699) | 0.238792 / 0.680424 (-0.441632) | 0.027645 / 0.534201 (-0.506556) | 0.518200 / 0.579283 (-0.061083) | 0.613535 / 0.434364 (0.179171) | 0.631414 / 0.540337 (0.091076) | 0.724658 / 1.386936 (-0.662278) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ac7caa5e195ad76c7e8ef98914813383f4f668cf \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006228 / 0.011353 (-0.005125) | 0.004517 / 0.011008 (-0.006492) | 0.097998 / 0.038508 (0.059490) | 0.027903 / 0.023109 (0.004793) | 0.309789 / 0.275898 (0.033891) | 0.332784 / 0.323480 (0.009304) | 0.004757 / 0.007986 (-0.003228) | 0.003348 / 0.004328 (-0.000981) | 0.075193 / 0.004250 (0.070942) | 0.037382 / 0.037052 (0.000330) | 0.306929 / 0.258489 (0.048440) | 0.347304 / 0.293841 (0.053463) | 0.030235 / 0.128546 (-0.098312) | 0.011516 / 0.075646 (-0.064131) | 0.322249 / 0.419271 (-0.097023) | 0.044125 / 0.043533 (0.000592) | 0.303874 / 0.255139 (0.048735) | 0.326808 / 0.283200 (0.043608) | 0.088137 / 0.141683 (-0.053546) | 1.521426 / 1.452155 (0.069272) | 1.573823 / 1.492716 (0.081107) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203204 / 0.018006 (0.185197) | 0.402247 / 0.000490 (0.401757) | 0.003146 / 0.000200 (0.002946) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022955 / 0.037411 (-0.014456) | 0.096059 / 0.014526 (0.081533) | 0.105552 / 0.176557 (-0.071004) | 0.167459 / 0.737135 (-0.569676) | 0.106723 / 0.296338 (-0.189615) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.454626 / 0.215209 (0.239417) | 4.556346 / 2.077655 (2.478691) | 2.220349 / 1.504120 (0.716229) | 2.011820 / 1.541195 (0.470625) | 2.048149 / 1.468490 
(0.579659) | 0.697583 / 4.584777 (-3.887194) | 3.428394 / 3.745712 (-0.317318) | 1.863872 / 5.269862 (-3.405989) | 1.159691 / 4.565676 (-3.405985) | 0.082598 / 0.424275 (-0.341677) | 0.012202 / 0.007607 (0.004594) | 0.555617 / 0.226044 (0.329572) | 5.545481 / 2.268929 (3.276553) | 2.650850 / 55.444624 (-52.793775) | 2.305864 / 6.876477 (-4.570613) | 2.392252 / 2.142072 (0.250179) | 0.808512 / 4.805227 (-3.996716) | 0.152086 / 6.500664 (-6.348578) | 0.066440 / 0.075469 (-0.009029) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211789 / 1.841788 (-0.629999) | 13.515546 / 8.074308 (5.441238) | 13.859870 / 10.191392 (3.668478) | 0.150335 / 0.680424 (-0.530088) | 0.016578 / 0.534201 (-0.517623) | 0.379145 / 0.579283 (-0.200138) | 0.393735 / 0.434364 (-0.040628) | 0.460219 / 0.540337 (-0.080118) | 0.555896 / 1.386936 (-0.831040) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006402 / 0.011353 (-0.004950) | 0.004558 / 0.011008 (-0.006450) | 0.077332 / 0.038508 (0.038824) | 0.027955 / 0.023109 (0.004846) | 0.407877 / 0.275898 (0.131979) | 0.432552 / 0.323480 (0.109072) | 0.004850 / 0.007986 (-0.003135) | 0.003329 / 0.004328 (-0.000999) | 0.075767 / 0.004250 (0.071517) | 0.035940 / 0.037052 (-0.001112) | 0.419544 / 0.258489 (0.161055) | 0.454672 / 0.293841 (0.160831) | 0.030461 / 0.128546 (-0.098085) | 0.011536 / 0.075646 (-0.064111) | 0.085774 / 0.419271 (-0.333498) | 0.039408 / 0.043533 (-0.004125) | 0.389909 / 0.255139 (0.134770) | 0.403287 / 0.283200 (0.120088) | 0.088385 / 0.141683 (-0.053298) | 1.596840 / 1.452155 (0.144686) | 1.659296 / 1.492716 (0.166580) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216349 / 0.018006 (0.198342) | 0.394969 / 0.000490 (0.394479) | 0.000408 / 0.000200 (0.000208) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024346 / 0.037411 (-0.013066) | 0.099609 / 0.014526 (0.085084) | 0.106779 / 0.176557 (-0.069778) | 0.156889 / 0.737135 (-0.580247) | 0.110625 / 0.296338 (-0.185714) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443809 / 0.215209 (0.228600) | 4.450524 / 2.077655 (2.372870) | 2.151694 / 1.504120 (0.647574) | 1.952521 / 1.541195 (0.411326) | 1.963320 / 1.468490 (0.494830) | 0.709291 / 4.584777 (-3.875486) | 3.415708 / 3.745712 (-0.330005) | 1.850498 / 5.269862 (-3.419363) | 1.164355 / 4.565676 (-3.401321) | 0.084977 / 0.424275 (-0.339298) | 0.013284 / 0.007607 (0.005677) | 0.555103 / 0.226044 (0.329059) | 5.583587 / 2.268929 (3.314658) | 2.608754 / 55.444624 (-52.835870) | 2.264079 / 6.876477 (-4.612398) | 2.272455 / 2.142072 (0.130382) | 0.820849 / 4.805227 (-3.984379) | 0.155063 / 6.500664 (-6.345601) | 0.069709 / 0.075469 (-0.005760) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.293285 / 1.841788 (-0.548503) | 14.181867 / 8.074308 (6.107559) | 13.021280 / 10.191392 (2.829888) | 0.130101 / 0.680424 (-0.550323) | 0.016461 / 0.534201 (-0.517740) | 0.383651 / 0.579283 (-0.195632) | 0.387353 / 0.434364 (-0.047011) | 0.443351 / 0.540337 (-0.096986) | 0.529448 / 1.386936 (-0.857488) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#05145d50b5bb1b7b42b76516cd6492d4868c46ba \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007513 / 0.011353 (-0.003840) | 0.005328 / 0.011008 (-0.005680) | 0.096937 / 0.038508 (0.058429) | 0.036230 / 0.023109 (0.013121) | 0.325808 / 0.275898 (0.049910) | 0.363601 / 0.323480 (0.040121) | 0.006130 / 0.007986 (-0.001855) | 0.004352 / 0.004328 (0.000023) | 0.073543 / 0.004250 (0.069293) | 0.054114 / 0.037052 (0.017062) | 0.328952 / 0.258489 (0.070463) | 0.366943 / 0.293841 (0.073102) | 0.035768 / 0.128546 (-0.092778) | 0.012505 / 0.075646 (-0.063142) | 0.332260 / 0.419271 (-0.087012) | 0.066673 / 0.043533 (0.023140) | 0.323866 / 0.255139 (0.068727) | 0.341311 / 0.283200 (0.058112) | 0.129898 / 0.141683 (-0.011785) | 1.456890 / 1.452155 (0.004735) | 1.546933 / 1.492716 (0.054217) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.299236 / 0.018006 (0.281229) | 0.496134 / 0.000490 (0.495645) | 0.004233 / 0.000200 (0.004033) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028089 / 0.037411 (-0.009322) | 0.104723 / 0.014526 (0.090197) | 0.121032 / 0.176557 (-0.055525) | 0.179916 / 0.737135 (-0.557220) | 0.126628 / 0.296338 (-0.169711) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403497 / 0.215209 (0.188288) | 4.052481 / 2.077655 (1.974827) | 1.804419 / 1.504120 (0.300299) | 1.619833 / 1.541195 (0.078638) | 1.732438 / 1.468490 
(0.263948) | 0.702474 / 4.584777 (-3.882303) | 3.808973 / 3.745712 (0.063261) | 3.682764 / 5.269862 (-1.587098) | 1.919184 / 4.565676 (-2.646493) | 0.086638 / 0.424275 (-0.337637) | 0.012265 / 0.007607 (0.004658) | 0.501273 / 0.226044 (0.275229) | 5.010918 / 2.268929 (2.741989) | 2.278114 / 55.444624 (-53.166510) | 1.942266 / 6.876477 (-4.934211) | 2.101982 / 2.142072 (-0.040091) | 0.847622 / 4.805227 (-3.957606) | 0.172973 / 6.500664 (-6.327691) | 0.066884 / 0.075469 (-0.008586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.187609 / 1.841788 (-0.654179) | 15.089485 / 8.074308 (7.015177) | 14.787398 / 10.191392 (4.596006) | 0.168254 / 0.680424 (-0.512170) | 0.018266 / 0.534201 (-0.515935) | 0.423204 / 0.579283 (-0.156079) | 0.435238 / 0.434364 (0.000874) | 0.512473 / 0.540337 (-0.027864) | 0.618091 / 1.386936 (-0.768845) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007249 / 0.011353 (-0.004104) | 0.005297 / 0.011008 (-0.005711) | 0.076428 / 0.038508 (0.037920) | 0.033565 / 0.023109 (0.010456) | 0.373756 / 0.275898 (0.097858) | 0.407405 / 0.323480 (0.083925) | 0.006100 / 0.007986 (-0.001886) | 0.006482 / 0.004328 (0.002153) | 0.075884 / 0.004250 (0.071633) | 0.055338 / 0.037052 (0.018286) | 0.378721 / 0.258489 (0.120232) | 0.427065 / 0.293841 (0.133224) | 0.036285 / 0.128546 (-0.092261) | 0.012460 / 0.075646 (-0.063186) | 0.087641 / 0.419271 (-0.331630) | 0.048199 / 0.043533 (0.004666) | 0.386785 / 0.255139 (0.131646) | 0.386702 / 0.283200 (0.103503) | 0.110087 / 0.141683 (-0.031596) | 1.511204 / 1.452155 (0.059050) | 1.585671 / 1.492716 (0.092954) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313558 / 0.018006 (0.295552) | 0.496991 / 0.000490 (0.496501) | 0.001492 / 0.000200 (0.001292) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031814 / 0.037411 (-0.005597) | 0.113486 / 0.014526 (0.098960) | 0.125208 / 0.176557 (-0.051348) | 0.174469 / 0.737135 (-0.562666) | 0.131095 / 0.296338 (-0.165244) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439282 / 0.215209 (0.224073) | 4.362286 / 2.077655 (2.284631) | 2.153271 / 1.504120 (0.649151) | 1.990482 / 1.541195 (0.449288) | 2.103322 / 1.468490 (0.634831) | 0.692522 / 4.584777 (-3.892254) | 3.861931 / 3.745712 (0.116219) | 3.686294 / 5.269862 (-1.583567) | 1.734525 / 4.565676 (-2.831152) | 0.085057 / 0.424275 (-0.339218) | 0.012116 / 0.007607 (0.004509) | 0.547996 / 0.226044 (0.321952) | 5.513835 / 2.268929 (3.244906) | 2.723829 / 55.444624 (-52.720795) | 2.404715 / 6.876477 (-4.471761) | 2.514768 / 2.142072 (0.372696) | 0.834972 / 4.805227 (-3.970255) | 0.168261 / 6.500664 (-6.332403) | 0.066464 / 0.075469 (-0.009005) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259923 / 1.841788 (-0.581865) | 15.646277 / 8.074308 (7.571969) | 13.097598 / 10.191392 (2.906206) | 0.187991 / 0.680424 (-0.492433) | 0.017358 / 0.534201 (-0.516843) | 0.427979 / 0.579283 (-0.151304) | 0.425747 / 0.434364 (-0.008617) | 0.501907 / 0.540337 (-0.038431) | 0.595106 / 1.386936 (-0.791830) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#db56f7f0d2f0b99af4da17d388c205152504c7d9 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009378 / 0.011353 (-0.001975) | 0.006434 / 0.011008 (-0.004574) | 0.120603 / 0.038508 (0.082095) | 0.042929 / 0.023109 (0.019820) | 0.366853 / 0.275898 (0.090955) | 0.436795 / 0.323480 (0.113315) | 0.007730 / 0.007986 (-0.000256) | 0.004842 / 0.004328 (0.000513) | 0.091058 / 0.004250 (0.086808) | 0.058256 / 0.037052 (0.021203) | 0.378692 / 0.258489 (0.120203) | 0.467384 / 0.293841 (0.173543) | 0.042948 / 0.128546 (-0.085598) | 0.015172 / 0.075646 (-0.060475) | 0.409225 / 0.419271 (-0.010046) | 0.083672 / 0.043533 (0.040140) | 0.390088 / 0.255139 (0.134949) | 0.406965 / 0.283200 (0.123765) | 0.142132 / 0.141683 (0.000449) | 1.765737 / 1.452155 (0.313582) | 1.895419 / 1.492716 (0.402703) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244052 / 0.018006 (0.226046) | 0.553383 / 0.000490 (0.552893) | 0.006798 / 0.000200 (0.006598) | 0.000227 / 0.000054 (0.000173) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032032 / 0.037411 (-0.005380) | 0.129990 / 0.014526 (0.115464) | 0.140338 / 0.176557 (-0.036219) | 0.212155 / 0.737135 (-0.524980) | 0.147395 / 0.296338 (-0.148943) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478760 / 0.215209 (0.263551) | 4.751335 / 2.077655 (2.673680) | 2.164755 / 1.504120 (0.660635) | 1.944288 / 1.541195 (0.403094) | 2.077657 / 1.468490 
(0.609167) | 0.818519 / 4.584777 (-3.766258) | 4.689013 / 3.745712 (0.943301) | 2.484079 / 5.269862 (-2.785782) | 1.788632 / 4.565676 (-2.777044) | 0.100484 / 0.424275 (-0.323791) | 0.013838 / 0.007607 (0.006231) | 0.589650 / 0.226044 (0.363605) | 5.859461 / 2.268929 (3.590533) | 2.670025 / 55.444624 (-52.774599) | 2.688709 / 6.876477 (-4.187768) | 2.408060 / 2.142072 (0.265988) | 0.972107 / 4.805227 (-3.833120) | 0.194425 / 6.500664 (-6.306239) | 0.076077 / 0.075469 (0.000608) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.430150 / 1.841788 (-0.411638) | 17.710507 / 8.074308 (9.636199) | 16.210789 / 10.191392 (6.019397) | 0.163940 / 0.680424 (-0.516484) | 0.020295 / 0.534201 (-0.513906) | 0.472596 / 0.579283 (-0.106687) | 0.483107 / 0.434364 (0.048743) | 0.585269 / 0.540337 (0.044931) | 0.705526 / 1.386936 (-0.681410) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008864 / 0.011353 (-0.002489) | 0.006095 / 0.011008 (-0.004913) | 0.088702 / 0.038508 (0.050194) | 0.041596 / 0.023109 (0.018486) | 0.453515 / 0.275898 (0.177617) | 0.476217 / 0.323480 (0.152737) | 0.007574 / 0.007986 (-0.000412) | 0.004727 / 0.004328 (0.000398) | 0.087271 / 0.004250 (0.083021) | 0.059631 / 0.037052 (0.022578) | 0.449379 / 0.258489 (0.190890) | 0.494436 / 0.293841 (0.200595) | 0.043448 / 0.128546 (-0.085098) | 0.014580 / 0.075646 (-0.061067) | 0.103836 / 0.419271 (-0.315435) | 0.057537 / 0.043533 (0.014004) | 0.449359 / 0.255139 (0.194220) | 0.447577 / 0.283200 (0.164377) | 0.123600 / 0.141683 (-0.018083) | 1.748448 / 1.452155 (0.296294) | 1.902116 / 1.492716 (0.409399) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237214 / 0.018006 (0.219207) | 0.497648 / 0.000490 (0.497158) | 0.003519 / 0.000200 (0.003319) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034477 / 0.037411 (-0.002934) | 0.132627 / 0.014526 (0.118101) | 0.139721 / 0.176557 (-0.036836) | 0.195705 / 0.737135 (-0.541430) | 0.150762 / 0.296338 (-0.145577) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.521306 / 0.215209 (0.306097) | 5.184982 / 2.077655 (3.107328) | 2.503979 / 1.504120 (0.999859) | 2.301054 / 1.541195 (0.759860) | 2.352713 / 1.468490 (0.884222) | 0.819804 / 4.584777 (-3.764973) | 4.584011 / 3.745712 (0.838299) | 2.497311 / 5.269862 (-2.772550) | 1.561262 / 4.565676 (-3.004414) | 0.101814 / 0.424275 (-0.322461) | 0.014078 / 0.007607 (0.006471) | 0.666564 / 0.226044 (0.440520) | 6.616379 / 2.268929 (4.347450) | 3.263892 / 55.444624 (-52.180732) | 2.891774 / 6.876477 (-3.984703) | 2.945260 / 2.142072 (0.803188) | 1.014379 / 4.805227 (-3.790848) | 0.201762 / 6.500664 (-6.298902) | 0.078012 / 0.075469 (0.002543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.567808 / 1.841788 (-0.273980) | 19.096552 / 8.074308 (11.022244) | 15.522285 / 10.191392 (5.330893) | 0.226568 / 0.680424 (-0.453856) | 0.021078 / 0.534201 (-0.513123) | 0.501686 / 0.579283 (-0.077597) | 0.517575 / 0.434364 (0.083211) | 0.589685 / 0.540337 (0.049348) | 0.705053 / 1.386936 (-0.681883) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#db56f7f0d2f0b99af4da17d388c205152504c7d9 \"CML watermark\")\n"
] | 2023-05-11T17:29:57 | 2023-05-15T07:39:13 | 2023-05-12T15:14:48 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5845",
"html_url": "https://github.com/huggingface/datasets/pull/5845",
"diff_url": "https://github.com/huggingface/datasets/pull/5845.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5845.patch",
"merged_at": "2023-05-12T15:14:48"
} | Adds the `date_format` param introduced in Pandas 2.0 to the CSV reader and improves its type hints. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5845/timeline | null | null | true |
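As an illustration of the change described in the PR body above, usage might look like this (a hypothetical sketch; the file and column names are illustrative, and it assumes pandas >= 2.0 plus a `datasets` release that includes this PR):

```python
# Hypothetical usage sketch of the `date_format` CSV parameter.
from datasets import load_dataset

ds = load_dataset(
    "csv",
    data_files={"train": "train.csv"},    # illustrative file name
    parse_dates=["created_at"],           # illustrative column; forwarded to pandas.read_csv
    date_format="%Y-%m-%dT%H:%M:%S",      # the parameter this PR adds
)
```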
https://api.github.com/repos/huggingface/datasets/issues/5841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5841/comments | https://api.github.com/repos/huggingface/datasets/issues/5841/events | https://github.com/huggingface/datasets/issues/5841 | 1,705,286,639 | I_kwDODunzps5lpJvv | 5,841 | Absurdly slow on iteration | {
"login": "fecet",
"id": 41792945,
"node_id": "MDQ6VXNlcjQxNzkyOTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/41792945?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fecet",
"html_url": "https://github.com/fecet",
"followers_url": "https://api.github.com/users/fecet/followers",
"following_url": "https://api.github.com/users/fecet/following{/other_user}",
"gists_url": "https://api.github.com/users/fecet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fecet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fecet/subscriptions",
"organizations_url": "https://api.github.com/users/fecet/orgs",
"repos_url": "https://api.github.com/users/fecet/repos",
"events_url": "https://api.github.com/users/fecet/events{/privacy}",
"received_events_url": "https://api.github.com/users/fecet/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! You can try to use the [Image](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Image) type which [decodes images on-the-fly](https://huggingface.co/docs/datasets/v2.12.0/en/about_dataset_features#image-feature) into pytorch tensors :)\r\n\r\n```python\r\nds = Dataset.from_dict({\"tensor\":a}).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 5.04 s, sys: 96.5 ms, total: 5.14 s\r\n# Wall time: 5.14 s\r\n# 10000\r\n```\r\n\r\n```python\r\nfeatures = Features({\"tensor\": Image()})\r\nds = Dataset.from_dict({\"tensor\":a}, features=features).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 1.86 s, sys: 49 ms, total: 1.91 s\r\n# Wall time: 1.9 s\r\n# 10000\r\n```\r\n\r\n-> Speed x2.7\r\n\r\nAnd if you want to keep using arrays of integers, consider using the [Array2D](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Array2D) or [Array3D](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Array3D) types which are even faster (since it doesn't decode images):\r\n\r\n```python\r\nfeatures = Features({\"tensor\": Array2D(shape=(100, 224), dtype=\"float32\")})\r\nds = Dataset.from_dict({\"tensor\":a}, features=features).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 828 ms, sys: 68.4 ms, total: 896 ms\r\n# Wall time: 897 ms\r\n# 10000\r\n```\r\n\r\n-> Speed x5.7\r\n\r\nBatching also speeds up a lot\r\n\r\n```python\r\nfrom torch.utils.data import DataLoader\r\ndl = DataLoader(ds, batch_size=100)\r\n%time sum(1 for _ in dl)\r\n# CPU times: user 564 ms, sys: 83.5 ms, total: 648 ms\r\n# Wall time: 579 ms\r\n# 100\r\n```\r\n\r\n-> Speed x8.9\r\n\r\n```python\r\n%time sum(1 for _ in ds.iter(batch_size=100))\r\n# CPU times: user 119 ms, sys: 96.8 ms, total: 215 ms\r\n# Wall time: 117 ms\r\n# 100\r\n```\r\n\r\n-> Speed x46",
"Anyway, regarding the speed difference between numpy and pytorch, I think the issue is that we first convert numpy sub-arrays to pytorch and then consolidate into one tensor, while we should to the opposite. Indeed converting a numpy array to pytorch has a fix cost that seems to cause a slow down. The current pipeline is\r\n\r\n```\r\narrow -> nested numpy arrays -> lists of torch tensors -> one torch tensor\r\n```\r\n\r\nand we should do\r\n\r\n```\r\narrow -> nested numpy arrays -> one numpy array -> one torch tensor\r\n```",
"I have a similar issue: iterating over a dataset takes 5s without applying any transform, but takes ~30s after applying a transform.\r\nHere is the minimum code to reproduce the problem\r\n\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset, DatasetDict, load_dataset, Array3D, Image, Features\r\nfrom torch.utils.data import DataLoader\r\nfrom tqdm import tqdm\r\nimport torchvision \r\nfrom torchvision.transforms import ToTensor, Normalize\r\n\r\n\r\n#################################\r\n# Without transform\r\n#################################\r\n \r\ntrain_dataset = load_dataset(\r\n 'cifar100',\r\n split='train',\r\n use_auth_token=True,\r\n)\r\n\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"img\", \"fine_label\"])\r\n\r\ntrain_loader= DataLoader(\r\n train_dataset,\r\n batch_size=100,\r\n pin_memory=False,\r\n shuffle=True,\r\n num_workers=8,\r\n)\r\n\r\nfor batch in tqdm(train_loader, desc=\"Loading data, no transform\"):\r\n pass\r\n\r\n\r\n#################################\r\n# With transform\r\n#################################\r\n\r\ntransform_func = torchvision.transforms.Compose([\r\n ToTensor(), \r\n Normalize(mean=[0.485, 0.456, 0.406], std= [0.229, 0.224, 0.225]),] \r\n)\r\n \r\ntrain_dataset = train_dataset.map(\r\n desc=f\"Preprocessing samples\",\r\n function=lambda x: {\"img\": transform_func(x[\"img\"])},\r\n)\r\n\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"img\", \"fine_label\"])\r\n\r\n\r\ntrain_loader= DataLoader(\r\n train_dataset,\r\n batch_size=100,\r\n pin_memory=False,\r\n shuffle=True,\r\n num_workers=8,\r\n)\r\n\r\n\r\nfor batch in tqdm(train_loader, desc=\"Loading data after transform\"):\r\n pass \r\n```\r\n\r\nI have also tried converting the Image column to an Array3D\r\n```python\r\nimg_shape = train_dataset[0][\"img\"].shape\r\n\r\nfeatures = train_dataset.features.copy()\r\nfeatures[\"x\"] = Array3D(shape=img_shape, dtype=\"float32\")\r\n\r\ntrain_dataset = train_dataset.map(\r\n desc=f\"Preprocessing samples\",\r\n function=lambda x: {\"x\": np.array(x[\"img\"], dtype=np.uint8)},\r\n features=features,\r\n)\r\ntrain_dataset.cast_column(\"x\", Array3D(shape=img_shape, dtype=\"float32\"))\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"x\", \"fine_label\"])\r\n```\r\nbut to no avail. Any clue?",
"Thanks! I convert my dataset feature to Array3D and this speed became awesome!"
] | 2023-05-11T08:04:09 | 2023-05-15T15:38:13 | 2023-05-15T15:38:13 | NONE | null | null | null | ### Describe the bug
I am attempting to iterate through an image dataset, but I am encountering a significant slowdown in the iteration speed. In order to investigate this issue, I conducted the following experiment:
```python
import torch
from tqdm import tqdm
from datasets import Dataset

a = torch.randn(100, 224)
a = torch.stack([a] * 10000)
a.shape
# %%
ds = Dataset.from_dict({"tensor": a})
for i in tqdm(ds.with_format("numpy")):
    pass
for i in tqdm(ds.with_format("torch")):
    pass
```
I noticed that the dataset in numpy format performs significantly faster than the one in torch format. My hypothesis is that the dataset undergoes a transformation process of torch->python->numpy(torch) in the background, which might be causing the slowdown. Is there any way to expedite the process by bypassing such transformations?
Furthermore, if I increase the size of a to an image shape, like:
```python
a = torch.randn(3, 224, 224)
```
the iteration speed becomes absurdly slow, around 100 iterations per second, whereas the speed with numpy format is approximately 250 iterations per second. This level of speed would be unacceptable for large image datasets, as it could take several hours just to iterate through a single epoch.
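One possible mitigation (an editor's sketch, not from the original report) is to keep the dataset in `numpy` format and perform a single numpy-to-torch conversion per batch, so the fixed conversion cost is paid once per batch instead of once per example:

```python
# A hedged workaround sketch; `ds` is the Dataset built above and the batch
# size is arbitrary.
import numpy as np
import torch

for batch in ds.with_format("numpy").iter(batch_size=100):
    tensors = torch.from_numpy(np.asarray(batch["tensor"]))  # one conversion per batch
```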
### Steps to reproduce the bug
```python
import torch
from tqdm import tqdm
from datasets import Dataset

a = torch.randn(100, 224)
a = torch.stack([a] * 10000)
a.shape
# %%
ds = Dataset.from_dict({"tensor": a})
for i in tqdm(ds.with_format("numpy")):
    pass
for i in tqdm(ds.with_format("torch")):
    pass
```
### Expected behavior
Iteration in `torch` format should be roughly as fast as in `numpy` format.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.10
- Python version: 3.8.16
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5841/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5840/comments | https://api.github.com/repos/huggingface/datasets/issues/5840/events | https://github.com/huggingface/datasets/issues/5840 | 1,705,212,085 | I_kwDODunzps5lo3i1 | 5,840 | load model error. | {
"login": "LanShanPi",
"id": 58167546,
"node_id": "MDQ6VXNlcjU4MTY3NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/58167546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LanShanPi",
"html_url": "https://github.com/LanShanPi",
"followers_url": "https://api.github.com/users/LanShanPi/followers",
"following_url": "https://api.github.com/users/LanShanPi/following{/other_user}",
"gists_url": "https://api.github.com/users/LanShanPi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LanShanPi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LanShanPi/subscriptions",
"organizations_url": "https://api.github.com/users/LanShanPi/orgs",
"repos_url": "https://api.github.com/users/LanShanPi/repos",
"events_url": "https://api.github.com/users/LanShanPi/events{/privacy}",
"received_events_url": "https://api.github.com/users/LanShanPi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Please report this in the `transformers` repo, as it's not related to `datasets`"
] | 2023-05-11T07:12:38 | 2023-05-12T13:44:07 | 2023-05-12T13:44:06 | NONE | null | null | null | ### Describe the bug
I trained a model with DeepSpeed; when I load the final model, I get the following error:
OSError: Can't load tokenizer for '/XXX/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '/home/fm001/hzl/Project/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor' is the correct path to a directory containing all relevant files for a BloomTokenizerFast tokenizer.
My loading command is: `python chat.py --path /XXX/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor/`
### Steps to reproduce the bug
。。。
### Expected behavior
。。。
### Environment info
。。。 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5840/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5838/comments | https://api.github.com/repos/huggingface/datasets/issues/5838/events | https://github.com/huggingface/datasets/issues/5838 | 1,703,210,848 | I_kwDODunzps5lhO9g | 5,838 | Streaming support for `load_from_disk` | {
"login": "Nilabhra",
"id": 5437792,
"node_id": "MDQ6VXNlcjU0Mzc3OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5437792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nilabhra",
"html_url": "https://github.com/Nilabhra",
"followers_url": "https://api.github.com/users/Nilabhra/followers",
"following_url": "https://api.github.com/users/Nilabhra/following{/other_user}",
"gists_url": "https://api.github.com/users/Nilabhra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nilabhra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nilabhra/subscriptions",
"organizations_url": "https://api.github.com/users/Nilabhra/orgs",
"repos_url": "https://api.github.com/users/Nilabhra/repos",
"events_url": "https://api.github.com/users/Nilabhra/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nilabhra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"As the name says, `load_from_disk` load the data from your disk. If the data is hosted on S3, it is first downloaded locally and then loaded from your disk.\r\n\r\nThere is a discussion on streaming data from S3 here though: #5281 ",
"@lhoestq \r\nThanks for your comment. I have checked out the discussion before and attempted at replicating the mentioned changes in the main branch (#5580). What I found was that if a dataset is saved using `save_to_disk`, it cannot be read by `load_dataset`. The error message asks me to to use `load_from_disk` instead. What would be the correct way of saving the data in this scenario?",
"Using `push_to_hub` you can save the dataset on the HF Hub as parquet files, and reload it / stream it using `load_dataset` :)\r\n\r\nIf you want to save your dataset somewhere else you can use `.to_parquet` to get a parquet file. If your dataset is big it's usually recommended to shard it into multi parquet files (around 1GB each).",
"@lhoestq \r\nThanks for the explanation. Appreciate it. I'll try this out.",
"@lhoestq\r\nI tried the method you mentioned. This the current scenario I'm facing:\r\n\r\n- The parquet file can be read from disk and streaming can be enabled.\r\n- The parquet file can be read from `s3` (local MinIO).\r\n- When `streaming=True` is enabled for `s3`, I get the error mentioned below:\r\n\r\n```\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:502, in S3FileSystem.set_session(self, refresh, kwargs)\r\n 500 conf = AioConfig(**config_kwargs)\r\n 501 if self.session is None:\r\n--> 502 self.session = aiobotocore.session.AioSession(**self.kwargs)\r\n 504 for parameters in (config_kwargs, self.kwargs, init_kwargs, client_kwargs):\r\n 505 for option in (\"region_name\", \"endpoint_url\"):\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'headers'\r\n```\r\n\r\nDoes this mean there is a bug in the main branch?",
"Streaming from S3 is still experimental, there might be a few bugs unfortunately.\r\n\r\nCan you share the full stack trace ?",
"@lhoestq \r\nSure, here you go:\r\n\r\n```python\r\nTypeError Traceback (most recent call last)\r\nCell In[8], line 1\r\n----> 1 dataset = load_dataset(\"parquet\", data_files=[\"s3://<bucket name>/<data folder>/data-parquet\"], storage_options=fs.storage_options, streaming=True)\r\n\r\nFile ~/.../datasets/src/datasets/load.py:1790, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1788 # Return iterable dataset in case of streaming\r\n 1789 if streaming:\r\n-> 1790 return builder_instance.as_streaming_dataset(split=split)\r\n 1792 # Some datasets are already processed on the HF google storage\r\n 1793 # Don't try downloading from Google storage for the packaged datasets as text, json, csv or pandas\r\n 1794 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n\r\nFile ~/.../datasets/src/datasets/builder.py:1264, in DatasetBuilder.as_streaming_dataset(self, split, base_path)\r\n 1257 dl_manager = StreamingDownloadManager(\r\n 1258 base_path=base_path or self.base_path,\r\n 1259 download_config=DownloadConfig(use_auth_token=self.use_auth_token, storage_options=self.storage_options),\r\n 1260 dataset_name=self.name,\r\n 1261 data_dir=self.config.data_dir,\r\n 1262 )\r\n 1263 self._check_manual_download(dl_manager)\r\n-> 1264 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 1265 # By default, return all splits\r\n 1266 if split is None:\r\n\r\nFile ~/.../datasets/src/datasets/packaged_modules/parquet/parquet.py:34, in Parquet._split_generators(self, dl_manager)\r\n 32 if not self.config.data_files:\r\n 33 raise ValueError(f\"At least one data file must be specified, but got data_files={self.config.data_files}\")\r\n---> 34 data_files = dl_manager.download_and_extract(self.config.data_files)\r\n 35 if isinstance(data_files, (str, list, tuple)):\r\n 36 files = data_files\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:1087, in StreamingDownloadManager.download_and_extract(self, url_or_urls)\r\n 1069 def download_and_extract(self, url_or_urls):\r\n 1070 \"\"\"Prepare given `url_or_urls` for streaming (add extraction protocol).\r\n 1071 \r\n 1072 This is the lazy version of `DownloadManager.download_and_extract` for streaming.\r\n (...)\r\n 1085 url(s): (`str` or `list` or `dict`), URL(s) to stream data from matching the given input `url_or_urls`.\r\n 1086 \"\"\"\r\n-> 1087 return self.extract(self.download(url_or_urls))\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:1039, in StreamingDownloadManager.extract(self, url_or_urls)\r\n 1020 def extract(self, url_or_urls):\r\n 1021 \"\"\"Add extraction protocol for given url(s) for streaming.\r\n 1022 \r\n 1023 This is the lazy version of `DownloadManager.extract` for streaming.\r\n (...)\r\n 1037 ```\r\n 1038 \"\"\"\r\n-> 1039 urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)\r\n 1040 return urlpaths\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:443, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)\r\n 441 num_proc = 1\r\n 442 if num_proc <= 1 or len(iterable) < parallel_min_length:\r\n--> 443 mapped = [\r\n 444 _single_map_nested((function, obj, types, None, True, None))\r\n 445 for obj in logging.tqdm(iterable, 
disable=disable_tqdm, desc=desc)\r\n 446 ]\r\n 447 else:\r\n 448 num_proc = num_proc if num_proc <= len(iterable) else len(iterable)\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:444, in <listcomp>(.0)\r\n 441 num_proc = 1\r\n 442 if num_proc <= 1 or len(iterable) < parallel_min_length:\r\n 443 mapped = [\r\n--> 444 _single_map_nested((function, obj, types, None, True, None))\r\n 445 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 446 ]\r\n 447 else:\r\n 448 num_proc = num_proc if num_proc <= len(iterable) else len(iterable)\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:363, in _single_map_nested(args)\r\n 361 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 362 else:\r\n--> 363 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n 364 if isinstance(data_struct, list):\r\n 365 return mapped\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:363, in <listcomp>(.0)\r\n 361 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 362 else:\r\n--> 363 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n 364 if isinstance(data_struct, list):\r\n 365 return mapped\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:346, in _single_map_nested(args)\r\n 344 # Singleton first to spare some computation\r\n 345 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 346 return function(data_struct)\r\n 348 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n 349 if rank is not None and logging.get_verbosity() < logging.WARNING:\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:1044, in StreamingDownloadManager._extract(self, urlpath)\r\n 1042 def _extract(self, urlpath: str) -> str:\r\n 1043 urlpath = str(urlpath)\r\n-> 1044 protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n 1045 # get inner file: zip://train-00000.json.gz::https://foo.bar/data.zip -> zip://train-00000.json.gz\r\n 1046 path = urlpath.split(\"::\")[0]\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:433, in _get_extraction_protocol(urlpath, use_auth_token)\r\n 431 else:\r\n 432 urlpath, kwargs = urlpath, {}\r\n--> 433 with fsspec.open(urlpath, **kwargs) as f:\r\n 434 return _get_extraction_protocol_with_magic_number(f)\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/core.py:102, in OpenFile.__enter__(self)\r\n 99 def __enter__(self):\r\n 100 mode = self.mode.replace(\"t\", \"\").replace(\"b\", \"\") + \"b\"\r\n--> 102 f = self.fs.open(self.path, mode=mode)\r\n 104 self.fobjects = [f]\r\n 106 if self.compression is not None:\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/spec.py:1199, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)\r\n 1197 else:\r\n 1198 ac = kwargs.pop(\"autocommit\", not self._intrans)\r\n-> 1199 f = self._open(\r\n 1200 path,\r\n 1201 mode=mode,\r\n 1202 block_size=block_size,\r\n 1203 autocommit=ac,\r\n 1204 cache_options=cache_options,\r\n 1205 **kwargs,\r\n 1206 )\r\n 1207 if compression is not None:\r\n 1208 from fsspec.compression import compr\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:659, in S3FileSystem._open(self, path, mode, block_size, acl, version_id, fill_cache, cache_type, autocommit, requester_pays, cache_options, **kwargs)\r\n 656 if cache_type is None:\r\n 657 
cache_type = self.default_cache_type\r\n--> 659 return S3File(\r\n 660 self,\r\n 661 path,\r\n 662 mode,\r\n 663 block_size=block_size,\r\n 664 acl=acl,\r\n 665 version_id=version_id,\r\n 666 fill_cache=fill_cache,\r\n 667 s3_additional_kwargs=kw,\r\n 668 cache_type=cache_type,\r\n 669 autocommit=autocommit,\r\n 670 requester_pays=requester_pays,\r\n 671 cache_options=cache_options,\r\n 672 )\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:2043, in S3File.__init__(self, s3, path, mode, block_size, acl, version_id, fill_cache, s3_additional_kwargs, autocommit, cache_type, requester_pays, cache_options)\r\n 2041 self.details = s3.info(path)\r\n 2042 self.version_id = self.details.get(\"VersionId\")\r\n-> 2043 super().__init__(\r\n 2044 s3,\r\n 2045 path,\r\n 2046 mode,\r\n 2047 block_size,\r\n 2048 autocommit=autocommit,\r\n 2049 cache_type=cache_type,\r\n 2050 cache_options=cache_options,\r\n 2051 )\r\n 2052 self.s3 = self.fs # compatibility\r\n 2054 # when not using autocommit we want to have transactional state to manage\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/spec.py:1555, in AbstractBufferedFile.__init__(self, fs, path, mode, block_size, autocommit, cache_type, cache_options, size, **kwargs)\r\n 1553 self.size = size\r\n 1554 else:\r\n-> 1555 self.size = self.details[\"size\"]\r\n 1556 self.cache = caches[cache_type](\r\n 1557 self.blocksize, self._fetch_range, self.size, **cache_options\r\n 1558 )\r\n 1559 else:\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/spec.py:1568, in AbstractBufferedFile.details(self)\r\n 1565 @property\r\n 1566 def details(self):\r\n 1567 if self._details is None:\r\n-> 1568 self._details = self.fs.info(self.path)\r\n 1569 return self._details\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/asyn.py:115, in sync_wrapper.<locals>.wrapper(*args, **kwargs)\r\n 112 @functools.wraps(func)\r\n 113 def wrapper(*args, **kwargs):\r\n 114 self = obj or args[0]\r\n--> 115 return sync(self.loop, func, *args, **kwargs)\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/asyn.py:100, in sync(loop, func, timeout, *args, **kwargs)\r\n 98 raise FSTimeoutError from return_result\r\n 99 elif isinstance(return_result, BaseException):\r\n--> 100 raise return_result\r\n 101 else:\r\n 102 return return_result\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/asyn.py:55, in _runner(event, coro, result, timeout)\r\n 53 coro = asyncio.wait_for(coro, timeout=timeout)\r\n 54 try:\r\n---> 55 result[0] = await coro\r\n 56 except Exception as ex:\r\n 57 result[0] = ex\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:1248, in S3FileSystem._info(self, path, bucket, key, refresh, version_id)\r\n 1246 if key:\r\n 1247 try:\r\n-> 1248 out = await self._call_s3(\r\n 1249 \"head_object\",\r\n 1250 self.kwargs,\r\n 1251 Bucket=bucket,\r\n 1252 Key=key,\r\n 1253 **version_id_kw(version_id),\r\n 1254 **self.req_kw,\r\n 1255 )\r\n 1256 return {\r\n 1257 \"ETag\": out.get(\"ETag\", \"\"),\r\n 1258 \"LastModified\": out[\"LastModified\"],\r\n (...)\r\n 1264 \"ContentType\": out.get(\"ContentType\"),\r\n 1265 }\r\n 1266 except FileNotFoundError:\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:341, in S3FileSystem._call_s3(self, method, *akwarglist, **kwargs)\r\n 340 async def _call_s3(self, method, *akwarglist, **kwargs):\r\n--> 341 await self.set_session()\r\n 342 s3 = await self.get_s3(kwargs.get(\"Bucket\"))\r\n 343 method = getattr(s3, method)\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:502, in S3FileSystem.set_session(self, 
refresh, kwargs)\r\n 500 conf = AioConfig(**config_kwargs)\r\n 501 if self.session is None:\r\n--> 502 self.session = aiobotocore.session.AioSession(**self.kwargs)\r\n 504 for parameters in (config_kwargs, self.kwargs, init_kwargs, client_kwargs):\r\n 505 for option in (\"region_name\", \"endpoint_url\"):\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'headers'\r\n```",
"Is `\"data-parquet\"` a file ? In `data_files` you should pass the paths to the parquet files (not to a directory). Glob patterns are not supported yet for S3 URLs.\r\n\r\nThe bug seems to happen because your provided data file has no extension. Because of that it tries to infer it from the file content, but fails because `_get_extraction_protocol` doesn't support S3 URLs yet.\r\n\r\n",
"@lhoestq \r\nThank you for your answer. Saving the file with `.parquet` extension solved the issue! This is really great! Really appreciate all the help! \r\n\r\nLet me know if I should close the issue or feel free to close it if you want.",
"Cool ! I'm glad it worked out :)\r\n\r\nSure feel free to close the issue, since the original question about streaming with load_from_disk has been answered anyway"
] | 2023-05-10T06:25:22 | 2023-05-12T09:37:45 | 2023-05-12T09:37:45 | NONE | null | null | null | ### Feature request
Support for streaming datasets stored in object stores in `load_from_disk`.
### Motivation
The `load_from_disk` function supports fetching datasets stored in object stores such as `s3`. In many cases, the datasets that are stored in object stores are very large and being able to stream the data from the buckets becomes essential.
### Your contribution
I'd be happy to contribute this feature if I could get some guidance on how to do so.
"url": "https://api.github.com/repos/huggingface/datasets/issues/5838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5838/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5836 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5836/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5836/comments | https://api.github.com/repos/huggingface/datasets/issues/5836/events | https://github.com/huggingface/datasets/pull/5836 | 1,702,773,316 | PR_kwDODunzps5QIgzu | 5,836 | [docs] Custom decoding transforms | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5836). All of your documentation changes will be reflected on that endpoint.",
"The error seems unrelated to the changes, so feel free to merge.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006562 / 0.011353 (-0.004791) | 0.004568 / 0.011008 (-0.006440) | 0.098151 / 0.038508 (0.059643) | 0.028117 / 0.023109 (0.005008) | 0.305442 / 0.275898 (0.029544) | 0.338288 / 0.323480 (0.014808) | 0.005012 / 0.007986 (-0.002973) | 0.003415 / 0.004328 (-0.000913) | 0.075022 / 0.004250 (0.070771) | 0.036869 / 0.037052 (-0.000183) | 0.301427 / 0.258489 (0.042937) | 0.348485 / 0.293841 (0.054644) | 0.030761 / 0.128546 (-0.097785) | 0.011461 / 0.075646 (-0.064185) | 0.321987 / 0.419271 (-0.097285) | 0.042885 / 0.043533 (-0.000648) | 0.300691 / 0.255139 (0.045552) | 0.333208 / 0.283200 (0.050008) | 0.090203 / 0.141683 (-0.051480) | 1.459744 / 1.452155 (0.007590) | 1.522960 / 1.492716 (0.030243) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213219 / 0.018006 (0.195213) | 0.408118 / 0.000490 (0.407629) | 0.003716 / 0.000200 (0.003516) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023060 / 0.037411 (-0.014351) | 0.097423 / 0.014526 (0.082897) | 0.103988 / 0.176557 (-0.072568) | 0.162793 / 0.737135 (-0.574343) | 0.108282 / 0.296338 (-0.188056) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431628 / 0.215209 (0.216419) | 4.300881 / 2.077655 (2.223226) | 2.058853 / 1.504120 (0.554733) | 1.897910 / 1.541195 (0.356715) | 1.991723 / 1.468490 
(0.523233) | 0.699686 / 4.584777 (-3.885091) | 3.395004 / 3.745712 (-0.350708) | 1.841613 / 5.269862 (-3.428248) | 1.152347 / 4.565676 (-3.413330) | 0.082517 / 0.424275 (-0.341758) | 0.012323 / 0.007607 (0.004715) | 0.535812 / 0.226044 (0.309767) | 5.374103 / 2.268929 (3.105174) | 2.429662 / 55.444624 (-53.014962) | 2.097199 / 6.876477 (-4.779277) | 2.172625 / 2.142072 (0.030552) | 0.810156 / 4.805227 (-3.995071) | 0.151629 / 6.500664 (-6.349035) | 0.066528 / 0.075469 (-0.008941) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.220667 / 1.841788 (-0.621121) | 13.696976 / 8.074308 (5.622668) | 14.042916 / 10.191392 (3.851524) | 0.129626 / 0.680424 (-0.550798) | 0.016593 / 0.534201 (-0.517607) | 0.383747 / 0.579283 (-0.195536) | 0.386872 / 0.434364 (-0.047492) | 0.456524 / 0.540337 (-0.083813) | 0.545033 / 1.386936 (-0.841903) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006361 / 0.011353 (-0.004992) | 0.004516 / 0.011008 (-0.006493) | 0.077155 / 0.038508 (0.038647) | 0.027239 / 0.023109 (0.004130) | 0.359892 / 0.275898 (0.083994) | 0.391994 / 0.323480 (0.068514) | 0.004950 / 0.007986 (-0.003036) | 0.003379 / 0.004328 (-0.000949) | 0.077057 / 0.004250 (0.072806) | 0.039562 / 0.037052 (0.002509) | 0.364244 / 0.258489 (0.105755) | 0.416033 / 0.293841 (0.122192) | 0.031049 / 0.128546 (-0.097497) | 0.011479 / 0.075646 (-0.064167) | 0.086479 / 0.419271 (-0.332793) | 0.039381 / 0.043533 (-0.004151) | 0.372143 / 0.255139 (0.117004) | 0.388569 / 0.283200 (0.105369) | 0.090954 / 0.141683 (-0.050728) | 1.540957 / 1.452155 (0.088802) | 1.596841 / 1.492716 (0.104125) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221130 / 0.018006 (0.203123) | 0.403728 / 0.000490 (0.403238) | 0.003172 / 0.000200 (0.002972) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024963 / 0.037411 (-0.012449) | 0.101065 / 0.014526 (0.086539) | 0.110846 / 0.176557 (-0.065710) | 0.158578 / 0.737135 (-0.578557) | 0.112235 / 0.296338 (-0.184104) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457320 / 0.215209 (0.242111) | 4.548094 / 2.077655 (2.470439) | 2.175376 / 1.504120 (0.671256) | 1.964755 / 1.541195 (0.423561) | 2.008128 / 1.468490 (0.539638) | 0.702448 / 4.584777 (-3.882329) | 3.437595 / 3.745712 (-0.308117) | 3.009871 / 5.269862 (-2.259990) | 1.558181 / 4.565676 (-3.007496) | 0.082568 / 0.424275 (-0.341707) | 0.012371 / 0.007607 (0.004764) | 0.550688 / 0.226044 (0.324644) | 5.534210 / 2.268929 (3.265282) | 2.649605 / 55.444624 (-52.795020) | 2.317293 / 6.876477 (-4.559184) | 2.351525 / 2.142072 (0.209453) | 0.808971 / 4.805227 (-3.996256) | 0.152737 / 6.500664 (-6.347927) | 0.068416 / 0.075469 (-0.007053) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.340219 / 1.841788 (-0.501569) | 13.903388 / 8.074308 (5.829080) | 13.063477 / 10.191392 (2.872085) | 0.130216 / 0.680424 (-0.550208) | 0.016522 / 0.534201 (-0.517679) | 0.398946 / 0.579283 (-0.180337) | 0.382450 / 0.434364 (-0.051914) | 0.491007 / 0.540337 (-0.049330) | 0.577747 / 1.386936 (-0.809189) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#15c37ed142e4fbcb8c00ae62d4c71c84ce41959a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007812 / 0.011353 (-0.003541) | 0.005563 / 0.011008 (-0.005446) | 0.099372 / 0.038508 (0.060864) | 0.035629 / 0.023109 (0.012520) | 0.301457 / 0.275898 (0.025559) | 0.339136 / 0.323480 (0.015656) | 0.006152 / 0.007986 (-0.001834) | 0.005843 / 0.004328 (0.001515) | 0.075280 / 0.004250 (0.071030) | 0.052789 / 0.037052 (0.015736) | 0.301805 / 0.258489 (0.043316) | 0.347918 / 0.293841 (0.054078) | 0.036182 / 0.128546 (-0.092364) | 0.012655 / 0.075646 (-0.062991) | 0.334428 / 0.419271 (-0.084844) | 0.062746 / 0.043533 (0.019213) | 0.296932 / 0.255139 (0.041793) | 0.314115 / 0.283200 (0.030916) | 0.121291 / 0.141683 (-0.020392) | 1.453252 / 1.452155 (0.001097) | 1.564714 / 1.492716 (0.071997) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243810 / 0.018006 (0.225804) | 0.547129 / 0.000490 (0.546640) | 0.004666 / 0.000200 (0.004466) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028214 / 0.037411 (-0.009197) | 0.108878 / 0.014526 (0.094352) | 0.122313 / 0.176557 (-0.054243) | 0.182412 / 0.737135 (-0.554723) | 0.127014 / 0.296338 (-0.169324) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423946 / 0.215209 (0.208737) | 4.207112 / 2.077655 (2.129457) | 2.048658 / 1.504120 (0.544538) | 1.843593 / 1.541195 (0.302398) | 1.952426 / 1.468490 
(0.483936) | 0.712098 / 4.584777 (-3.872679) | 3.824971 / 3.745712 (0.079258) | 3.507141 / 5.269862 (-1.762721) | 1.868866 / 4.565676 (-2.696810) | 0.087895 / 0.424275 (-0.336380) | 0.012783 / 0.007607 (0.005176) | 0.524087 / 0.226044 (0.298042) | 5.246498 / 2.268929 (2.977570) | 2.495944 / 55.444624 (-52.948680) | 2.126779 / 6.876477 (-4.749698) | 2.315545 / 2.142072 (0.173472) | 0.859546 / 4.805227 (-3.945681) | 0.173457 / 6.500664 (-6.327208) | 0.067483 / 0.075469 (-0.007986) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.173851 / 1.841788 (-0.667937) | 15.091913 / 8.074308 (7.017605) | 14.640035 / 10.191392 (4.448643) | 0.168498 / 0.680424 (-0.511926) | 0.017513 / 0.534201 (-0.516688) | 0.425770 / 0.579283 (-0.153513) | 0.434248 / 0.434364 (-0.000116) | 0.504204 / 0.540337 (-0.036134) | 0.616885 / 1.386936 (-0.770051) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007775 / 0.011353 (-0.003578) | 0.005153 / 0.011008 (-0.005855) | 0.075461 / 0.038508 (0.036953) | 0.034994 / 0.023109 (0.011885) | 0.372389 / 0.275898 (0.096491) | 0.397911 / 0.323480 (0.074431) | 0.006572 / 0.007986 (-0.001413) | 0.005549 / 0.004328 (0.001220) | 0.075101 / 0.004250 (0.070851) | 0.054014 / 0.037052 (0.016962) | 0.368964 / 0.258489 (0.110475) | 0.425353 / 0.293841 (0.131512) | 0.035546 / 0.128546 (-0.093001) | 0.012707 / 0.075646 (-0.062939) | 0.087418 / 0.419271 (-0.331853) | 0.046425 / 0.043533 (0.002893) | 0.363982 / 0.255139 (0.108843) | 0.376421 / 0.283200 (0.093221) | 0.105369 / 0.141683 (-0.036314) | 1.494408 / 1.452155 (0.042253) | 1.596783 / 1.492716 (0.104067) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.258780 / 0.018006 (0.240773) | 0.533373 / 0.000490 (0.532883) | 0.000432 / 0.000200 (0.000232) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030687 / 0.037411 (-0.006725) | 0.110231 / 0.014526 (0.095705) | 0.123738 / 0.176557 (-0.052819) | 0.171999 / 0.737135 (-0.565137) | 0.127673 / 0.296338 (-0.168665) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448058 / 0.215209 (0.232849) | 4.459381 / 2.077655 (2.381726) | 2.234020 / 1.504120 (0.729900) | 2.038616 / 1.541195 (0.497421) | 2.123795 / 1.468490 (0.655305) | 0.702664 / 4.584777 (-3.882113) | 3.837133 / 3.745712 (0.091420) | 2.138574 / 5.269862 (-3.131287) | 1.375955 / 4.565676 (-3.189722) | 0.086996 / 0.424275 (-0.337280) | 0.012461 / 0.007607 (0.004854) | 0.557978 / 0.226044 (0.331934) | 5.648613 / 2.268929 (3.379685) | 2.777829 / 55.444624 (-52.666796) | 2.392424 / 6.876477 (-4.484052) | 2.482823 / 2.142072 (0.340750) | 0.851891 / 4.805227 (-3.953336) | 0.171335 / 6.500664 (-6.329329) | 0.065041 / 0.075469 (-0.010428) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.319697 / 1.841788 (-0.522091) | 15.748688 / 8.074308 (7.674380) | 13.397042 / 10.191392 (3.205650) | 0.166424 / 0.680424 (-0.514000) | 0.017755 / 0.534201 (-0.516446) | 0.424989 / 0.579283 (-0.154294) | 0.424705 / 0.434364 (-0.009659) | 0.494190 / 0.540337 (-0.046147) | 0.588315 / 1.386936 (-0.798622) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#15c37ed142e4fbcb8c00ae62d4c71c84ce41959a \"CML watermark\")\n"
] | 2023-05-09T21:21:41 | 2023-05-15T07:36:12 | 2023-05-10T20:23:03 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5836",
"html_url": "https://github.com/huggingface/datasets/pull/5836",
"diff_url": "https://github.com/huggingface/datasets/pull/5836.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5836.patch",
"merged_at": "2023-05-10T20:23:03"
} | Adds custom decoding transform solution to the docs to fix #5782. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5836/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5836/timeline | null | null | true |
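As an illustration of the feature this PR documents, one way to implement a custom decoding transform might look like the following (a hedged sketch; the dataset name is illustrative and this is not necessarily the exact recipe the docs use):

```python
# Disable built-in image decoding with Image(decode=False), then decode
# lazily yourself via set_transform.
import io

from PIL import Image as PILImage
from datasets import Image, load_dataset

ds = load_dataset("beans", split="train")          # illustrative dataset
ds = ds.cast_column("image", Image(decode=False))  # items become {"bytes", "path"} dicts

def custom_decode(batch):
    # Decode each undecoded item, preferring raw bytes over a file path.
    batch["image"] = [
        PILImage.open(io.BytesIO(item["bytes"])) if item["bytes"] is not None
        else PILImage.open(item["path"])
        for item in batch["image"]
    ]
    return batch

ds.set_transform(custom_decode)  # applied on-the-fly at access time
```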
https://api.github.com/repos/huggingface/datasets/issues/5835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5835/comments | https://api.github.com/repos/huggingface/datasets/issues/5835/events | https://github.com/huggingface/datasets/pull/5835 | 1,702,522,620 | PR_kwDODunzps5QHquR | 5,835 | Always set nullable fields in the writer | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006640 / 0.011353 (-0.004713) | 0.004606 / 0.011008 (-0.006402) | 0.098870 / 0.038508 (0.060362) | 0.028201 / 0.023109 (0.005092) | 0.304396 / 0.275898 (0.028498) | 0.339804 / 0.323480 (0.016324) | 0.005011 / 0.007986 (-0.002974) | 0.003530 / 0.004328 (-0.000799) | 0.075223 / 0.004250 (0.070973) | 0.037922 / 0.037052 (0.000870) | 0.310273 / 0.258489 (0.051784) | 0.348324 / 0.293841 (0.054483) | 0.030181 / 0.128546 (-0.098365) | 0.011584 / 0.075646 (-0.064062) | 0.322637 / 0.419271 (-0.096635) | 0.043119 / 0.043533 (-0.000414) | 0.314514 / 0.255139 (0.059375) | 0.334384 / 0.283200 (0.051185) | 0.092551 / 0.141683 (-0.049132) | 1.496694 / 1.452155 (0.044539) | 1.555426 / 1.492716 (0.062710) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205078 / 0.018006 (0.187072) | 0.399200 / 0.000490 (0.398710) | 0.004881 / 0.000200 (0.004681) | 0.000200 / 0.000054 (0.000146) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025042 / 0.037411 (-0.012369) | 0.101501 / 0.014526 (0.086975) | 0.107430 / 0.176557 (-0.069127) | 0.170107 / 0.737135 (-0.567028) | 0.111253 / 0.296338 (-0.185086) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.460358 / 0.215209 (0.245149) | 4.592037 / 2.077655 (2.514383) | 2.222612 / 1.504120 (0.718493) | 2.022804 / 1.541195 (0.481610) | 2.040824 / 1.468490 
(0.572334) | 0.700485 / 4.584777 (-3.884292) | 3.427847 / 3.745712 (-0.317866) | 2.836916 / 5.269862 (-2.432946) | 1.505055 / 4.565676 (-3.060621) | 0.083206 / 0.424275 (-0.341069) | 0.046492 / 0.007607 (0.038885) | 0.555562 / 0.226044 (0.329518) | 5.563574 / 2.268929 (3.294645) | 2.635273 / 55.444624 (-52.809351) | 2.299377 / 6.876477 (-4.577100) | 2.394512 / 2.142072 (0.252440) | 0.809541 / 4.805227 (-3.995686) | 0.151814 / 6.500664 (-6.348850) | 0.067241 / 0.075469 (-0.008228) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188396 / 1.841788 (-0.653392) | 13.714596 / 8.074308 (5.640288) | 14.076906 / 10.191392 (3.885514) | 0.143447 / 0.680424 (-0.536977) | 0.016514 / 0.534201 (-0.517687) | 0.383075 / 0.579283 (-0.196209) | 0.386997 / 0.434364 (-0.047367) | 0.441941 / 0.540337 (-0.098396) | 0.522145 / 1.386936 (-0.864791) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006266 / 0.011353 (-0.005086) | 0.004562 / 0.011008 (-0.006446) | 0.077472 / 0.038508 (0.038964) | 0.027596 / 0.023109 (0.004486) | 0.400498 / 0.275898 (0.124600) | 0.406728 / 0.323480 (0.083248) | 0.004745 / 0.007986 (-0.003241) | 0.003375 / 0.004328 (-0.000954) | 0.076645 / 0.004250 (0.072394) | 0.037756 / 0.037052 (0.000703) | 0.415183 / 0.258489 (0.156694) | 0.413758 / 0.293841 (0.119917) | 0.030624 / 0.128546 (-0.097922) | 0.011525 / 0.075646 (-0.064121) | 0.086033 / 0.419271 (-0.333238) | 0.039307 / 0.043533 (-0.004226) | 0.418192 / 0.255139 (0.163053) | 0.403152 / 0.283200 (0.119952) | 0.094141 / 0.141683 (-0.047542) | 1.459012 / 1.452155 (0.006857) | 1.546493 / 1.492716 (0.053777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239494 / 0.018006 (0.221488) | 0.420918 / 0.000490 (0.420428) | 0.000411 / 0.000200 (0.000211) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024525 / 0.037411 (-0.012886) | 0.099793 / 0.014526 (0.085267) | 0.105888 / 0.176557 (-0.070669) | 0.155912 / 0.737135 (-0.581223) | 0.109937 / 0.296338 (-0.186401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.470108 / 0.215209 (0.254899) | 4.696390 / 2.077655 (2.618735) | 2.467841 / 1.504120 (0.963721) | 2.275012 / 1.541195 (0.733818) | 2.430736 / 1.468490 (0.962245) | 0.700442 / 4.584777 (-3.884335) | 3.458451 / 3.745712 (-0.287261) | 1.921120 / 5.269862 (-3.348742) | 1.183292 / 4.565676 (-3.382384) | 0.083985 / 0.424275 (-0.340290) | 0.012510 / 0.007607 (0.004903) | 0.589066 / 0.226044 (0.363022) | 5.896070 / 2.268929 (3.627141) | 2.935379 / 55.444624 (-52.509245) | 2.599524 / 6.876477 (-4.276953) | 2.663426 / 2.142072 (0.521354) | 0.812096 / 4.805227 (-3.993131) | 0.152559 / 6.500664 (-6.348105) | 0.066906 / 0.075469 (-0.008563) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.333341 / 1.841788 (-0.508446) | 14.441667 / 8.074308 (6.367359) | 14.754069 / 10.191392 (4.562677) | 0.155707 / 0.680424 (-0.524716) | 0.016983 / 0.534201 (-0.517218) | 0.389386 / 0.579283 (-0.189897) | 0.394106 / 0.434364 (-0.040258) | 0.447355 / 0.540337 (-0.092982) | 0.533142 / 1.386936 (-0.853794) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#99ee4467ce77f8f718159a535e237dd8790b5bed \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007801 / 0.011353 (-0.003552) | 0.004884 / 0.011008 (-0.006124) | 0.114754 / 0.038508 (0.076245) | 0.040427 / 0.023109 (0.017318) | 0.402064 / 0.275898 (0.126166) | 0.428830 / 0.323480 (0.105350) | 0.006429 / 0.007986 (-0.001556) | 0.004394 / 0.004328 (0.000066) | 0.087681 / 0.004250 (0.083431) | 0.053684 / 0.037052 (0.016632) | 0.399967 / 0.258489 (0.141478) | 0.445298 / 0.293841 (0.151457) | 0.033194 / 0.128546 (-0.095352) | 0.010288 / 0.075646 (-0.065359) | 0.390719 / 0.419271 (-0.028552) | 0.059311 / 0.043533 (0.015778) | 0.393651 / 0.255139 (0.138512) | 0.418395 / 0.283200 (0.135196) | 0.121494 / 0.141683 (-0.020189) | 1.735470 / 1.452155 (0.283315) | 1.820485 / 1.492716 (0.327769) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012887 / 0.018006 (-0.005119) | 0.491652 / 0.000490 (0.491162) | 0.005481 / 0.000200 (0.005281) | 0.000127 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030931 / 0.037411 (-0.006480) | 0.125212 / 0.014526 (0.110686) | 0.136004 / 0.176557 (-0.040552) | 0.201686 / 0.737135 (-0.535449) | 0.140181 / 0.296338 (-0.156157) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.475003 / 0.215209 (0.259794) | 4.743918 / 2.077655 (2.666263) | 2.149422 / 1.504120 (0.645302) | 1.925016 / 1.541195 (0.383821) | 2.061441 / 1.468490 
(0.592951) | 0.619845 / 4.584777 (-3.964932) | 4.534691 / 3.745712 (0.788979) | 2.248198 / 5.269862 (-3.021664) | 1.409868 / 4.565676 (-3.155808) | 0.080265 / 0.424275 (-0.344010) | 0.014455 / 0.007607 (0.006848) | 0.597810 / 0.226044 (0.371765) | 5.845492 / 2.268929 (3.576564) | 2.729139 / 55.444624 (-52.715486) | 2.313879 / 6.876477 (-4.562598) | 2.418763 / 2.142072 (0.276690) | 0.748687 / 4.805227 (-4.056540) | 0.165278 / 6.500664 (-6.335387) | 0.076848 / 0.075469 (0.001379) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.416349 / 1.841788 (-0.425439) | 17.440903 / 8.074308 (9.366595) | 17.025733 / 10.191392 (6.834341) | 0.167428 / 0.680424 (-0.512995) | 0.020484 / 0.534201 (-0.513717) | 0.470273 / 0.579283 (-0.109010) | 0.494380 / 0.434364 (0.060016) | 0.566131 / 0.540337 (0.025794) | 0.690444 / 1.386936 (-0.696492) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007695 / 0.011353 (-0.003657) | 0.005551 / 0.011008 (-0.005457) | 0.087812 / 0.038508 (0.049304) | 0.039107 / 0.023109 (0.015998) | 0.436461 / 0.275898 (0.160563) | 0.465116 / 0.323480 (0.141636) | 0.006590 / 0.007986 (-0.001396) | 0.004672 / 0.004328 (0.000343) | 0.087109 / 0.004250 (0.082858) | 0.054227 / 0.037052 (0.017175) | 0.442660 / 0.258489 (0.184171) | 0.484296 / 0.293841 (0.190455) | 0.033308 / 0.128546 (-0.095238) | 0.010780 / 0.075646 (-0.064866) | 0.095255 / 0.419271 (-0.324016) | 0.054399 / 0.043533 (0.010866) | 0.431734 / 0.255139 (0.176595) | 0.453583 / 0.283200 (0.170383) | 0.116067 / 0.141683 (-0.025616) | 1.780701 / 1.452155 (0.328546) | 1.851077 / 1.492716 (0.358360) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228000 / 0.018006 (0.209994) | 0.485733 / 0.000490 (0.485243) | 0.003955 / 0.000200 (0.003755) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033974 / 0.037411 (-0.003437) | 0.134504 / 0.014526 (0.119978) | 0.144421 / 0.176557 (-0.032135) | 0.202171 / 0.737135 (-0.534964) | 0.152015 / 0.296338 (-0.144323) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.520462 / 0.215209 (0.305253) | 5.233339 / 2.077655 (3.155684) | 2.575013 / 1.504120 (1.070893) | 2.384119 / 1.541195 (0.842924) | 2.403856 / 1.468490 (0.935366) | 0.618656 / 4.584777 (-3.966121) | 4.663582 / 3.745712 (0.917870) | 3.738594 / 5.269862 (-1.531268) | 1.794903 / 4.565676 (-2.770773) | 0.077903 / 0.424275 (-0.346372) | 0.014681 / 0.007607 (0.007074) | 0.648615 / 0.226044 (0.422570) | 6.503721 / 2.268929 (4.234792) | 3.326239 / 55.444624 (-52.118386) | 2.989791 / 6.876477 (-3.886685) | 2.995479 / 2.142072 (0.853407) | 0.765483 / 4.805227 (-4.039744) | 0.169783 / 6.500664 (-6.330882) | 0.077533 / 0.075469 (0.002064) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.518736 / 1.841788 (-0.323051) | 17.989119 / 8.074308 (9.914811) | 15.484365 / 10.191392 (5.292973) | 0.168507 / 0.680424 (-0.511917) | 0.020289 / 0.534201 (-0.513912) | 0.467491 / 0.579283 (-0.111793) | 0.501714 / 0.434364 (0.067350) | 0.553418 / 0.540337 (0.013081) | 0.662199 / 1.386936 (-0.724737) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5ebda17e4362bd5f6123543a14fa526a3b54481a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007044 / 0.011353 (-0.004309) | 0.004750 / 0.011008 (-0.006258) | 0.096694 / 0.038508 (0.058186) | 0.035682 / 0.023109 (0.012573) | 0.300613 / 0.275898 (0.024715) | 0.334831 / 0.323480 (0.011351) | 0.006428 / 0.007986 (-0.001558) | 0.004456 / 0.004328 (0.000128) | 0.075060 / 0.004250 (0.070810) | 0.053166 / 0.037052 (0.016114) | 0.299601 / 0.258489 (0.041112) | 0.359521 / 0.293841 (0.065680) | 0.028072 / 0.128546 (-0.100474) | 0.009216 / 0.075646 (-0.066430) | 0.328895 / 0.419271 (-0.090377) | 0.050881 / 0.043533 (0.007349) | 0.298265 / 0.255139 (0.043126) | 0.318095 / 0.283200 (0.034896) | 0.116046 / 0.141683 (-0.025637) | 1.491312 / 1.452155 (0.039157) | 1.556053 / 1.492716 (0.063337) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.014248 / 0.018006 (-0.003758) | 0.551455 / 0.000490 (0.550965) | 0.006096 / 0.000200 (0.005897) | 0.000145 / 0.000054 (0.000091) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030598 / 0.037411 (-0.006813) | 0.109549 / 0.014526 (0.095023) | 0.123207 / 0.176557 (-0.053350) | 0.181940 / 0.737135 (-0.555195) | 0.128965 / 0.296338 (-0.167374) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404552 / 0.215209 (0.189343) | 4.030674 / 2.077655 (1.953020) | 1.841819 / 1.504120 (0.337699) | 1.650055 / 1.541195 (0.108860) | 1.763208 / 1.468490 
(0.294718) | 0.532715 / 4.584777 (-4.052062) | 3.774810 / 3.745712 (0.029098) | 3.221927 / 5.269862 (-2.047934) | 1.607974 / 4.565676 (-2.957702) | 0.067160 / 0.424275 (-0.357116) | 0.012479 / 0.007607 (0.004872) | 0.498801 / 0.226044 (0.272757) | 4.980567 / 2.268929 (2.711638) | 2.356017 / 55.444624 (-53.088608) | 2.018975 / 6.876477 (-4.857502) | 2.218343 / 2.142072 (0.076270) | 0.645714 / 4.805227 (-4.159514) | 0.145470 / 6.500664 (-6.355195) | 0.065666 / 0.075469 (-0.009803) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.205756 / 1.841788 (-0.636031) | 15.682779 / 8.074308 (7.608470) | 14.748987 / 10.191392 (4.557595) | 0.167105 / 0.680424 (-0.513319) | 0.017554 / 0.534201 (-0.516647) | 0.393924 / 0.579283 (-0.185359) | 0.432659 / 0.434364 (-0.001705) | 0.502033 / 0.540337 (-0.038304) | 0.602244 / 1.386936 (-0.784692) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007077 / 0.011353 (-0.004276) | 0.004911 / 0.011008 (-0.006097) | 0.075120 / 0.038508 (0.036612) | 0.035460 / 0.023109 (0.012351) | 0.362569 / 0.275898 (0.086671) | 0.398995 / 0.323480 (0.075515) | 0.006587 / 0.007986 (-0.001398) | 0.004571 / 0.004328 (0.000242) | 0.074647 / 0.004250 (0.070397) | 0.057331 / 0.037052 (0.020279) | 0.365123 / 0.258489 (0.106634) | 0.408617 / 0.293841 (0.114776) | 0.028911 / 0.128546 (-0.099635) | 0.009533 / 0.075646 (-0.066113) | 0.081566 / 0.419271 (-0.337705) | 0.048841 / 0.043533 (0.005308) | 0.367245 / 0.255139 (0.112106) | 0.375975 / 0.283200 (0.092776) | 0.123211 / 0.141683 (-0.018472) | 1.471588 / 1.452155 (0.019433) | 1.569342 / 1.492716 (0.076625) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.328443 / 0.018006 (0.310436) | 0.541402 / 0.000490 (0.540912) | 0.000440 / 0.000200 (0.000240) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030772 / 0.037411 (-0.006639) | 0.115833 / 0.014526 (0.101307) | 0.127837 / 0.176557 (-0.048719) | 0.180897 / 0.737135 (-0.556238) | 0.132458 / 0.296338 (-0.163881) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445979 / 0.215209 (0.230770) | 4.453101 / 2.077655 (2.375447) | 2.276625 / 1.504120 (0.772505) | 2.102167 / 1.541195 (0.560972) | 2.181583 / 1.468490 (0.713093) | 0.525069 / 4.584777 (-4.059708) | 3.803446 / 3.745712 (0.057734) | 1.954173 / 5.269862 (-3.315688) | 1.088734 / 4.565676 (-3.476942) | 0.066020 / 0.424275 (-0.358255) | 0.012158 / 0.007607 (0.004551) | 0.546828 / 0.226044 (0.320783) | 5.454060 / 2.268929 (3.185132) | 2.756154 / 55.444624 (-52.688470) | 2.476501 / 6.876477 (-4.399976) | 2.525875 / 2.142072 (0.383803) | 0.647515 / 4.805227 (-4.157712) | 0.144511 / 6.500664 (-6.356153) | 0.067060 / 0.075469 (-0.008409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306456 / 1.841788 (-0.535332) | 15.822623 / 8.074308 (7.748315) | 14.929114 / 10.191392 (4.737721) | 0.168650 / 0.680424 (-0.511773) | 0.018043 / 0.534201 (-0.516158) | 0.396712 / 0.579283 (-0.182572) | 0.425800 / 0.434364 (-0.008564) | 0.466452 / 0.540337 (-0.073885) | 0.564370 / 1.386936 (-0.822566) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5ebda17e4362bd5f6123543a14fa526a3b54481a \"CML watermark\")\n"
] | 2023-05-09T18:16:59 | 2023-05-23T16:10:29 | 2023-05-19T13:04:30 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5835",
"html_url": "https://github.com/huggingface/datasets/pull/5835",
"diff_url": "https://github.com/huggingface/datasets/pull/5835.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5835.patch",
"merged_at": "2023-05-19T13:04:30"
} | This fixes loading of e.g. parquet data with non-nullable fields.
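For illustration, a hedged sketch (not code from this PR) of the underlying behavior: nullability is dropped when an Arrow schema round-trips through `Features`:
```python
import pyarrow as pa
from datasets import Features

# A non-nullable Arrow field, as it could appear in a Parquet file's schema.
schema = pa.schema([pa.field("col", pa.int64(), nullable=False)])

features = Features.from_arrow_schema(schema)
# The non-nullable constraint is not preserved on the round trip:
print(features.arrow_schema.field("col").nullable)  # True
```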
Indeed, `datasets.Features` doesn't support non-nullable fields, which can lead to data that cannot be concatenated due to an Arrow schema mismatch. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5835/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5834/comments | https://api.github.com/repos/huggingface/datasets/issues/5834/events | https://github.com/huggingface/datasets/issues/5834 | 1,702,448,892 | I_kwDODunzps5leU78 | 5,834 | Is uint8 supported? | {
"login": "Ryou0634",
"id": 17979572,
"node_id": "MDQ6VXNlcjE3OTc5NTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/17979572?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ryou0634",
"html_url": "https://github.com/Ryou0634",
"followers_url": "https://api.github.com/users/Ryou0634/followers",
"following_url": "https://api.github.com/users/Ryou0634/following{/other_user}",
"gists_url": "https://api.github.com/users/Ryou0634/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ryou0634/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ryou0634/subscriptions",
"organizations_url": "https://api.github.com/users/Ryou0634/orgs",
"repos_url": "https://api.github.com/users/Ryou0634/repos",
"events_url": "https://api.github.com/users/Ryou0634/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ryou0634/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! The numpy formatting detaults to int64 and float32 - but you can use uint8 using\r\n```python\r\nds = ds.with_format(\"numpy\", dtype=np.uint8)\r\n```",
"Related to https://github.com/huggingface/datasets/issues/5517.",
"Thank you!\r\nBy setting `ds.with_format(\"numpy\", dtype=np.uint8)`, the dataset returns the data in `uint8`.\r\n\r\nHowever, `with_format` and `set_format` seem to cast the data on-the-fly.\r\nI want to reduce the dataset size by using `uint8` instead of `int64` and I observe no difference between using `int64` and `uint8` for the vector.\r\nIs there any way to actually store the data in `uint8` and save the disk space and the downloading time when loaded from the hub?\r\n",
"If the feature type is `Value(\"uint8\")` then it's written an uint8 on disk using the uint8 Arrow dtype.\r\n\r\ne.g.\r\n```python\r\nds = Dataset.from_dict({\"a\": range(10)}, features=Features({\"a\": Value(\"uint8\")}))\r\nds.data.nbytes\r\n# 10\r\n```",
"Oh, I understand now.\r\nThe data was stored in `uint8` from the beginning (when the dataset returns `int64`).\r\n\r\nThank you for your time!\r\nMy question is fully resolved."
] | 2023-05-09T17:31:13 | 2023-05-13T05:04:21 | 2023-05-13T05:04:21 | NONE | null | null | null | ### Describe the bug
I expect the dataset to store the data in the `uint8` data type, but it's returning `int64` instead.
While I've found that `datasets` doesn't yet support float16 (https://github.com/huggingface/datasets/issues/4981), I'm wondering if this is the case for other data types as well.
Is there a way to store vector data as `uint8` and then upload it to the hub?
### Steps to reproduce the bug
```python
from datasets import Features, Dataset, Sequence, Value
import numpy as np
dataset = Dataset.from_dict(
{"vector": [np.array([0, 1, 2], dtype=np.uint8)]}, features=Features({"vector": Sequence(Value("uint8"))})
).with_format("numpy")
print(dataset[0]["vector"].dtype)
```
### Expected behavior
Expected: `uint8`
Actual: `int64`
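For completeness, a short sketch of the workaround from the comments above: the numpy formatter defaults to `int64`/`float32`, but the dtype can be requested at formatting time (the on-disk storage is already `uint8` when the feature is `Value("uint8")`):
```python
import numpy as np
from datasets import Dataset, Features, Value

ds = Dataset.from_dict({"a": range(10)}, features=Features({"a": Value("uint8")}))
print(ds.data.nbytes)  # 10: one byte per value, so the data is stored as uint8 on disk

ds = ds.with_format("numpy", dtype=np.uint8)
print(ds[0]["a"].dtype)  # uint8 (instead of the default int64)
```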
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-12.1-x86_64-i386-64bit
- Python version: 3.8.12
- Huggingface_hub version: 0.12.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5834/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5832 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5832/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5832/comments | https://api.github.com/repos/huggingface/datasets/issues/5832/events | https://github.com/huggingface/datasets/issues/5832 | 1,702,135,336 | I_kwDODunzps5ldIYo | 5,832 | 404 Client Error: Not Found for url: https://huggingface.co/api/models/bert-large-cased | {
"login": "varungupta31",
"id": 51288316,
"node_id": "MDQ6VXNlcjUxMjg4MzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/51288316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/varungupta31",
"html_url": "https://github.com/varungupta31",
"followers_url": "https://api.github.com/users/varungupta31/followers",
"following_url": "https://api.github.com/users/varungupta31/following{/other_user}",
"gists_url": "https://api.github.com/users/varungupta31/gists{/gist_id}",
"starred_url": "https://api.github.com/users/varungupta31/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varungupta31/subscriptions",
"organizations_url": "https://api.github.com/users/varungupta31/orgs",
"repos_url": "https://api.github.com/users/varungupta31/repos",
"events_url": "https://api.github.com/users/varungupta31/events{/privacy}",
"received_events_url": "https://api.github.com/users/varungupta31/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"moved to https://github.com/huggingface/transformers/issues/23233"
] | 2023-05-09T14:14:59 | 2023-05-09T14:25:59 | 2023-05-09T14:25:59 | NONE | null | null | null | ### Describe the bug
Running the [Bert-Large-Cased](https://huggingface.co/bert-large-cased) model causes an `HTTPError`, with the following traceback:
```
HTTPError Traceback (most recent call last)
<ipython-input-6-5c580443a1ad> in <module>
----> 1 tokenizer = BertTokenizer.from_pretrained('bert-large-cased')
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1646 # At this point pretrained_model_name_or_path is either a directory or a model identifier name
1647 fast_tokenizer_file = get_fast_tokenizer_file(
-> 1648 pretrained_model_name_or_path, revision=revision, use_auth_token=use_auth_token
1649 )
1650 additional_files_names = {
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in get_fast_tokenizer_file(path_or_repo, revision, use_auth_token)
3406 """
3407 # Inspect all files from the repo/folder.
-> 3408 all_files = get_list_of_files(path_or_repo, revision=revision, use_auth_token=use_auth_token)
3409 tokenizer_files_map = {}
3410 for file_name in all_files:
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/file_utils.py in get_list_of_files(path_or_repo, revision, use_auth_token)
1685 token = None
1686 model_info = HfApi(endpoint=HUGGINGFACE_CO_RESOLVE_ENDPOINT).model_info(
-> 1687 path_or_repo, revision=revision, token=token
1688 )
1689 return [f.rfilename for f in model_info.siblings]
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/huggingface_hub/hf_api.py in model_info(self, repo_id, revision, token)
246 )
247 r = requests.get(path, headers=headers)
--> 248 r.raise_for_status()
249 d = r.json()
250 return ModelInfo(**d)
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/requests/models.py in raise_for_status(self)
951
952 if http_error_msg:
--> 953 raise HTTPError(http_error_msg, response=self)
954
955 def close(self):
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/bert-large-cased
```
I have also tried running in offline mode, as [discussed here](https://huggingface.co/docs/transformers/installation#offline-mode)
```
export HF_DATASETS_OFFLINE=1
export TRANSFORMERS_OFFLINE=1
```
### Steps to reproduce the bug
1. `from transformers import BertTokenizer, BertModel`
2. `tokenizer = BertTokenizer.from_pretrained('bert-large-cased')`
### Expected behavior
The tokenizer should load without raising the HTTP error.
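Not confirmed in this thread (which was moved to the `transformers` issue tracker, per the comment above), but the traceback goes through `huggingface_hub` 0.0.12 and `transformers` 4.9.1 (see the environment table below), so a first sanity check is which versions are actually being imported:
```python
# Hedged sketch: this only verifies the installed versions; the values in the
# comments are the ones pinned in the environment reported below.
import transformers
import huggingface_hub

print(transformers.__version__)     # 4.9.1 in the reported environment
print(huggingface_hub.__version__)  # 0.0.12 in the reported environment
```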
### Environment info
| # Name | Version | Build | Channel | |
|--------------------|------------|-----------------------------|---------|---|
| _libgcc_mutex | 0.1 | main | | |
| _openmp_mutex | 4.5 | 1_gnu | | |
| _pytorch_select | 0.1 | cpu_0 | | |
| appdirs | 1.4.4 | pypi_0 | pypi | |
| backcall | 0.2.0 | pypi_0 | pypi | |
| blas | 1.0 | mkl | | |
| bzip2 | 1.0.8 | h7b6447c_0 | | |
| ca-certificates | 2021.7.5 | h06a4308_1 | | |
| certifi | 2021.5.30 | py37h06a4308_0 | | |
| cffi | 1.14.6 | py37h400218f_0 | | |
| charset-normalizer | 2.0.3 | pypi_0 | pypi | |
| click | 8.0.1 | pypi_0 | pypi | |
| colorama | 0.4.4 | pypi_0 | pypi | |
| cudatoolkit | 11.1.74 | h6bb024c_0 | nvidia | |
| cycler | 0.11.0 | pypi_0 | pypi | |
| decorator | 5.0.9 | pypi_0 | pypi | |
| docker-pycreds | 0.4.0 | pypi_0 | pypi | |
| docopt | 0.6.2 | pypi_0 | pypi | |
| dominate | 2.6.0 | pypi_0 | pypi | |
| ffmpeg | 4.3 | hf484d3e_0 | pytorch | |
| filelock | 3.0.12 | pypi_0 | pypi | |
| fonttools | 4.38.0 | pypi_0 | pypi | |
| freetype | 2.10.4 | h5ab3b9f_0 | | |
| gitdb | 4.0.7 | pypi_0 | pypi | |
| gitpython | 3.1.18 | pypi_0 | pypi | |
| gmp | 6.2.1 | h2531618_2 | | |
| gnutls | 3.6.15 | he1e5248_0 | | |
| huggingface-hub | 0.0.12 | pypi_0 | pypi | |
| humanize | 3.10.0 | pypi_0 | pypi | |
| idna | 3.2 | pypi_0 | pypi | |
| importlib-metadata | 4.6.1 | pypi_0 | pypi | |
| intel-openmp | 2019.4 | 243 | | |
| ipdb | 0.13.9 | pypi_0 | pypi | |
| ipython | 7.25.0 | pypi_0 | pypi | |
| ipython-genutils | 0.2.0 | pypi_0 | pypi | |
| jedi | 0.18.0 | pypi_0 | pypi | |
| joblib | 1.0.1 | pypi_0 | pypi | |
| jpeg | 9b | h024ee3a_2 | | |
| jsonpickle | 1.5.2 | pypi_0 | pypi | |
| kiwisolver | 1.4.4 | pypi_0 | pypi | |
| lame | 3.100 | h7b6447c_0 | | |
| lcms2 | 2.12 | h3be6417_0 | | |
| ld_impl_linux-64 | 2.35.1 | h7274673_9 | | |
| libffi | 3.3 | he6710b0_2 | | |
| libgcc-ng | 9.3.0 | h5101ec6_17 | | |
| libgomp | 9.3.0 | h5101ec6_17 | | |
| libiconv | 1.15 | h63c8f33_5 | | |
| libidn2 | 2.3.2 | h7f8727e_0 | | |
| libmklml | 2019.0.5 | 0 | | |
| libpng | 1.6.37 | hbc83047_0 | | |
| libstdcxx-ng | 9.3.0 | hd4cf53a_17 | | |
| libtasn1 | 4.16.0 | h27cfd23_0 | | |
| libtiff | 4.2.0 | h85742a9_0 | | |
| libunistring | 0.9.10 | h27cfd23_0 | | |
| libuv | 1.40.0 | h7b6447c_0 | | |
| libwebp-base | 1.2.0 | h27cfd23_0 | | |
| lz4-c | 1.9.3 | h2531618_0 | | |
| matplotlib | 3.5.3 | pypi_0 | pypi | |
| matplotlib-inline | 0.1.2 | pypi_0 | pypi | |
| mergedeep | 1.3.4 | pypi_0 | pypi | |
| mkl | 2020.2 | 256 | | |
| mkl-service | 2.3.0 | py37he8ac12f_0 | | |
| mkl_fft | 1.3.0 | py37h54f3939_0 | | |
| mkl_random | 1.1.1 | py37h0573a6f_0 | | |
| msgpack | 1.0.2 | pypi_0 | pypi | |
| munch | 2.5.0 | pypi_0 | pypi | |
| ncurses | 6.2 | he6710b0_1 | | |
| nettle | 3.7.3 | hbbd107a_1 | | |
| ninja | 1.10.2 | hff7bd54_1 | | |
| nltk | 3.8.1 | pypi_0 | pypi | |
| numpy | 1.19.2 | py37h54aff64_0 | | |
| numpy-base | 1.19.2 | py37hfa32c7d_0 | | |
| olefile | 0.46 | py37_0 | | |
| openh264 | 2.1.0 | hd408876_0 | | |
| openjpeg | 2.3.0 | h05c96fa_1 | | |
| openssl | 1.1.1k | h27cfd23_0 | | |
| packaging | 21.0 | pypi_0 | pypi | |
| pandas | 1.3.1 | pypi_0 | pypi | |
| parso | 0.8.2 | pypi_0 | pypi | |
| pathtools | 0.1.2 | pypi_0 | pypi | |
| pexpect | 4.8.0 | pypi_0 | pypi | |
| pickleshare | 0.7.5 | pypi_0 | pypi | |
| pillow | 8.3.1 | py37h2c7a002_0 | | |
| pip | 21.1.3 | py37h06a4308_0 | | |
| prompt-toolkit | 3.0.19 | pypi_0 | pypi | |
| protobuf | 4.21.12 | pypi_0 | pypi | |
| psutil | 5.8.0 | pypi_0 | pypi | |
| ptyprocess | 0.7.0 | pypi_0 | pypi | |
| py-cpuinfo | 8.0.0 | pypi_0 | pypi | |
| pycparser | 2.20 | py_2 | | |
| pygments | 2.9.0 | pypi_0 | pypi | |
| pyparsing | 2.4.7 | pypi_0 | pypi | |
| python | 3.7.10 | h12debd9_4 | | |
| python-dateutil | 2.8.2 | pypi_0 | pypi | |
| pytorch | 1.9.0 | py3.7_cuda11.1_cudnn8.0.5_0 | pytorch | |
| pytz | 2021.1 | pypi_0 | pypi | |
| pyyaml | 5.4.1 | pypi_0 | pypi | |
| readline | 8.1 | h27cfd23_0 | | |
| regex | 2022.10.31 | pypi_0 | pypi | |
| requests | 2.26.0 | pypi_0 | pypi | |
| sacred | 0.8.2 | pypi_0 | pypi | |
| sacremoses | 0.0.45 | pypi_0 | pypi | |
| scikit-learn | 0.24.2 | pypi_0 | pypi | |
| scipy | 1.7.0 | pypi_0 | pypi | |
| sentry-sdk | 1.15.0 | pypi_0 | pypi | |
| setproctitle | 1.3.2 | pypi_0 | pypi | |
| setuptools | 52.0.0 | py37h06a4308_0 | | |
| six | 1.16.0 | pyhd3eb1b0_0 | | |
| smmap | 4.0.0 | pypi_0 | pypi | |
| sqlite | 3.36.0 | hc218d9a_0 | | |
| threadpoolctl | 2.2.0 | pypi_0 | pypi | |
| tk | 8.6.10 | hbc83047_0 | | |
| tokenizers | 0.10.3 | pypi_0 | pypi | |
| toml | 0.10.2 | pypi_0 | pypi | |
| torchaudio | 0.9.0 | py37 | pytorch | |
| torchvision | 0.10.0 | py37_cu111 | pytorch | |
| tqdm | 4.61.2 | pypi_0 | pypi | |
| traitlets | 5.0.5 | pypi_0 | pypi | |
| transformers | 4.9.1 | pypi_0 | pypi | |
| typing-extensions | 3.10.0.0 | hd3eb1b0_0 | | |
| typing_extensions | 3.10.0.0 | pyh06a4308_0 | | |
| urllib3 | 1.26.14 | pypi_0 | pypi | |
| wandb | 0.13.10 | pypi_0 | pypi | |
| wcwidth | 0.2.5 | pypi_0 | pypi | |
| wheel | 0.36.2 | pyhd3eb1b0_0 | | |
| wrapt | 1.12.1 | pypi_0 | pypi | |
| xz | 5.2.5 | h7b6447c_0 | | |
| zipp | 3.5.0 | pypi_0 | pypi | |
| zlib | 1.2.11 | h7b6447c_3 | | |
| zstd | 1.4.9 | haebb681_0 | | | | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5832/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5830/comments | https://api.github.com/repos/huggingface/datasets/issues/5830/events | https://github.com/huggingface/datasets/pull/5830 | 1,701,451,399 | PR_kwDODunzps5QEFEi | 5,830 | Debug windows #2 | {
"login": "HyukjinKwon",
"id": 6477701,
"node_id": "MDQ6VXNlcjY0Nzc3MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6477701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HyukjinKwon",
"html_url": "https://github.com/HyukjinKwon",
"followers_url": "https://api.github.com/users/HyukjinKwon/followers",
"following_url": "https://api.github.com/users/HyukjinKwon/following{/other_user}",
"gists_url": "https://api.github.com/users/HyukjinKwon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HyukjinKwon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HyukjinKwon/subscriptions",
"organizations_url": "https://api.github.com/users/HyukjinKwon/orgs",
"repos_url": "https://api.github.com/users/HyukjinKwon/repos",
"events_url": "https://api.github.com/users/HyukjinKwon/events{/privacy}",
"received_events_url": "https://api.github.com/users/HyukjinKwon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2023-05-09T06:40:34 | 2023-05-09T06:40:47 | 2023-05-09T06:40:47 | NONE | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5830",
"html_url": "https://github.com/huggingface/datasets/pull/5830",
"diff_url": "https://github.com/huggingface/datasets/pull/5830.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5830.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5830/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5829 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5829/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5829/comments | https://api.github.com/repos/huggingface/datasets/issues/5829/events | https://github.com/huggingface/datasets/issues/5829 | 1,699,958,189 | I_kwDODunzps5lU02t | 5,829 | (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64')) | {
"login": "elcolie",
"id": 18206728,
"node_id": "MDQ6VXNlcjE4MjA2NzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/18206728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elcolie",
"html_url": "https://github.com/elcolie",
"followers_url": "https://api.github.com/users/elcolie/followers",
"following_url": "https://api.github.com/users/elcolie/following{/other_user}",
"gists_url": "https://api.github.com/users/elcolie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elcolie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elcolie/subscriptions",
"organizations_url": "https://api.github.com/users/elcolie/orgs",
"repos_url": "https://api.github.com/users/elcolie/repos",
"events_url": "https://api.github.com/users/elcolie/events{/privacy}",
"received_events_url": "https://api.github.com/users/elcolie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Can you paste the error stack trace?",
"That is weird. I can't reproduce it again after reboot.\r\n```python\r\nIn [2]: import platform\r\n\r\nIn [3]: platform.platform()\r\nOut[3]: 'macOS-13.2-arm64-arm-64bit'\r\n\r\nIn [4]: from datasets import load_dataset\r\n ...:\r\n ...: jazzy = load_dataset(\"nomic-ai/gpt4all-j-prompt-generations\", revision='v1.2-jazzy')\r\nFound cached dataset parquet (/Users/sarit/.cache/huggingface/datasets/nomic-ai___parquet/nomic-ai--gpt4all-j-prompt-generations-a3b62015e2e52043/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec)\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 63.25it/s]\r\n```"
] | 2023-05-08T10:07:14 | 2023-05-09T00:46:42 | 2023-05-09T00:46:42 | NONE | null | null | null | ### Describe the bug
An M2 MacBook Pro (Apple Silicon) can't run the following snippet:
```python
from datasets import load_dataset
jazzy = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision='v1.2-jazzy')
```
### Steps to reproduce the bug
1. Use M2 MBP
2. Python 3.10.10 from pyenv
3. Run
```python
from datasets import load_dataset
jazzy = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision='v1.2-jazzy')
```
### Expected behavior
The snippet should load the dataset without error.
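As in the follow-up comment above, a quick check is whether the interpreter itself is an arm64 build; a mismatched x86_64 Python or wheel is one plausible cause of this mach-o error (an assumption; the thread does not confirm the root cause):
```python
import platform

# On Apple Silicon this should report an arm64 build, e.g.
# 'macOS-13.2-arm64-arm-64bit' as in the comment above.
print(platform.platform())
```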
### Environment info
OSX: 13.2
CPU: M2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5829/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5828/comments | https://api.github.com/repos/huggingface/datasets/issues/5828/events | https://github.com/huggingface/datasets/issues/5828 | 1,699,235,739 | I_kwDODunzps5lSEeb | 5,828 | Stream data concatenation issue | {
"login": "krishnapriya-18",
"id": 48817796,
"node_id": "MDQ6VXNlcjQ4ODE3Nzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/48817796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krishnapriya-18",
"html_url": "https://github.com/krishnapriya-18",
"followers_url": "https://api.github.com/users/krishnapriya-18/followers",
"following_url": "https://api.github.com/users/krishnapriya-18/following{/other_user}",
"gists_url": "https://api.github.com/users/krishnapriya-18/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krishnapriya-18/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krishnapriya-18/subscriptions",
"organizations_url": "https://api.github.com/users/krishnapriya-18/orgs",
"repos_url": "https://api.github.com/users/krishnapriya-18/repos",
"events_url": "https://api.github.com/users/krishnapriya-18/events{/privacy}",
"received_events_url": "https://api.github.com/users/krishnapriya-18/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! \r\n\r\nYou can call `map` as follows to avoid the error:\r\n```python\r\naugmented_dataset_cln = dataset_cln['train'].map(augment_dataset, features=dataset_cln['train'].features)\r\n```",
"Thanks it is solved"
] | 2023-05-07T21:02:54 | 2023-05-10T05:06:58 | 2023-05-10T05:05:47 | NONE | null | null | null | ### Describe the bug
I am not able to concatenate the augmented streaming data with the original dataset. I am using the latest version of `datasets`. The script below fails with:
ValueError: The features can't be aligned because the key audio of features {'audio_id': Value(dtype='string', id=None), 'audio': {'array': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'path': Value(dtype='null', id=None), 'sampling_rate': Value(dtype='int64', id=None)}, 'transcript': Value(dtype='string', id=None)} has unexpected type - {'array': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'path': Value(dtype='null', id=None), 'sampling_rate': Value(dtype='int64', id=None)} (expected either Audio(sampling_rate=16000, mono=True, decode=True, id=None) or Value("null").
### Steps to reproduce the bug
from datasets import load_dataset, interleave_datasets, Audio
from audiomentations import AddGaussianNoise, Compose, Gain, OneOf, PitchShift, PolarityInversion, TimeStretch

dataset = load_dataset("tobiolatunji/afrispeech-200", "all", streaming=True).shuffle(seed=42)
dataset_cln = dataset.remove_columns(['speaker_id', 'path', 'age_group', 'gender', 'accent', 'domain', 'country', 'duration'])
dataset_cln = dataset_cln.cast_column("audio", Audio(sampling_rate=16000))

augmentation = Compose([
    AddGaussianNoise(min_amplitude=0.005, max_amplitude=0.015, p=0.2)
])

def augment_dataset(batch):
    audio = batch["audio"]
    audio["array"] = augmentation(audio["array"], sample_rate=audio["sampling_rate"])
    return batch

augmented_dataset_cln = dataset_cln['train'].map(augment_dataset)
dataset_cln['train'] = interleave_datasets([dataset_cln['train'], augmented_dataset_cln])
dataset_cln['train'] = dataset_cln['train'].shuffle(seed=42)
### Expected behavior
I should be able to interleave the two datasets, since the sampling rate is the same after `cast_column`.
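A sketch of the fix given in the first comment above: passing the original `features` to `map` keeps the `Audio` feature type on the augmented stream, so the two datasets align (variable names reuse the snippet above):
```python
augmented_dataset_cln = dataset_cln["train"].map(
    augment_dataset, features=dataset_cln["train"].features
)
dataset_cln["train"] = interleave_datasets([dataset_cln["train"], augmented_dataset_cln])
```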
### Environment info
import datasets
import transformers
import torch
import evaluate
import accelerate

print(datasets.__version__)      # 2.12.0
print(transformers.__version__)  # 4.28.1
print(torch.__version__)         # 2.0.0
print(evaluate.__version__)      # 0.4.0
print(accelerate.__version__)    # 0.18.0
"url": "https://api.github.com/repos/huggingface/datasets/issues/5828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5828/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5824 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5824/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5824/comments | https://api.github.com/repos/huggingface/datasets/issues/5824/events | https://github.com/huggingface/datasets/pull/5824 | 1,697,152,148 | PR_kwDODunzps5P1rIZ | 5,824 | Fix incomplete docstring for `BuilderConfig` | {
"login": "Laurent2916",
"id": 21087104,
"node_id": "MDQ6VXNlcjIxMDg3MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Laurent2916",
"html_url": "https://github.com/Laurent2916",
"followers_url": "https://api.github.com/users/Laurent2916/followers",
"following_url": "https://api.github.com/users/Laurent2916/following{/other_user}",
"gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions",
"organizations_url": "https://api.github.com/users/Laurent2916/orgs",
"repos_url": "https://api.github.com/users/Laurent2916/repos",
"events_url": "https://api.github.com/users/Laurent2916/events{/privacy}",
"received_events_url": "https://api.github.com/users/Laurent2916/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007658 / 0.011353 (-0.003695) | 0.005497 / 0.011008 (-0.005511) | 0.097142 / 0.038508 (0.058633) | 0.034602 / 0.023109 (0.011493) | 0.304191 / 0.275898 (0.028293) | 0.329103 / 0.323480 (0.005624) | 0.005936 / 0.007986 (-0.002049) | 0.004324 / 0.004328 (-0.000004) | 0.073387 / 0.004250 (0.069137) | 0.049657 / 0.037052 (0.012604) | 0.301352 / 0.258489 (0.042863) | 0.343095 / 0.293841 (0.049254) | 0.036767 / 0.128546 (-0.091779) | 0.012438 / 0.075646 (-0.063208) | 0.333804 / 0.419271 (-0.085468) | 0.064557 / 0.043533 (0.021024) | 0.302397 / 0.255139 (0.047258) | 0.319739 / 0.283200 (0.036540) | 0.119264 / 0.141683 (-0.022418) | 1.465309 / 1.452155 (0.013155) | 1.578194 / 1.492716 (0.085478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256552 / 0.018006 (0.238545) | 0.555344 / 0.000490 (0.554854) | 0.004845 / 0.000200 (0.004645) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027215 / 0.037411 (-0.010197) | 0.107071 / 0.014526 (0.092545) | 0.116343 / 0.176557 (-0.060213) | 0.172646 / 0.737135 (-0.564490) | 0.123366 / 0.296338 (-0.172973) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411421 / 0.215209 (0.196212) | 4.126028 / 2.077655 (2.048373) | 1.975826 / 1.504120 (0.471706) | 1.784404 / 1.541195 (0.243210) | 1.848697 / 1.468490 
(0.380207) | 0.686400 / 4.584777 (-3.898377) | 3.677649 / 3.745712 (-0.068063) | 2.077787 / 5.269862 (-3.192075) | 1.310912 / 4.565676 (-3.254764) | 0.083980 / 0.424275 (-0.340295) | 0.012183 / 0.007607 (0.004575) | 0.506969 / 0.226044 (0.280924) | 5.094730 / 2.268929 (2.825802) | 2.419790 / 55.444624 (-53.024834) | 2.106592 / 6.876477 (-4.769884) | 2.244309 / 2.142072 (0.102237) | 0.814312 / 4.805227 (-3.990915) | 0.167872 / 6.500664 (-6.332792) | 0.065339 / 0.075469 (-0.010130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.193314 / 1.841788 (-0.648474) | 14.980621 / 8.074308 (6.906313) | 14.352452 / 10.191392 (4.161060) | 0.164531 / 0.680424 (-0.515893) | 0.017432 / 0.534201 (-0.516769) | 0.422193 / 0.579283 (-0.157090) | 0.410047 / 0.434364 (-0.024317) | 0.497011 / 0.540337 (-0.043326) | 0.581395 / 1.386936 (-0.805541) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007214 / 0.011353 (-0.004139) | 0.005449 / 0.011008 (-0.005559) | 0.074320 / 0.038508 (0.035812) | 0.034261 / 0.023109 (0.011152) | 0.378265 / 0.275898 (0.102367) | 0.414419 / 0.323480 (0.090939) | 0.005804 / 0.007986 (-0.002182) | 0.004205 / 0.004328 (-0.000124) | 0.073266 / 0.004250 (0.069015) | 0.050444 / 0.037052 (0.013392) | 0.372999 / 0.258489 (0.114510) | 0.436032 / 0.293841 (0.142191) | 0.035432 / 0.128546 (-0.093114) | 0.012581 / 0.075646 (-0.063065) | 0.085777 / 0.419271 (-0.333495) | 0.046902 / 0.043533 (0.003369) | 0.378732 / 0.255139 (0.123593) | 0.401746 / 0.283200 (0.118547) | 0.113398 / 0.141683 (-0.028285) | 1.463851 / 1.452155 (0.011696) | 1.566387 / 1.492716 (0.073670) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261246 / 0.018006 (0.243240) | 0.546730 / 0.000490 (0.546241) | 0.005245 / 0.000200 (0.005045) | 0.000103 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029441 / 0.037411 (-0.007970) | 0.111834 / 0.014526 (0.097308) | 0.122411 / 0.176557 (-0.054145) | 0.171288 / 0.737135 (-0.565847) | 0.130338 / 0.296338 (-0.166001) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433405 / 0.215209 (0.218196) | 4.315790 / 2.077655 (2.238135) | 2.121934 / 1.504120 (0.617814) | 1.924123 / 1.541195 (0.382928) | 2.029077 / 1.468490 (0.560587) | 0.710245 / 4.584777 (-3.874532) | 3.844393 / 3.745712 (0.098681) | 3.576580 / 5.269862 (-1.693281) | 1.930985 / 4.565676 (-2.634691) | 0.092186 / 0.424275 (-0.332090) | 0.012307 / 0.007607 (0.004700) | 0.533722 / 0.226044 (0.307677) | 5.324447 / 2.268929 (3.055519) | 2.615451 / 55.444624 (-52.829174) | 2.282310 / 6.876477 (-4.594167) | 2.319847 / 2.142072 (0.177774) | 0.849364 / 4.805227 (-3.955864) | 0.172722 / 6.500664 (-6.327942) | 0.064721 / 0.075469 (-0.010748) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.289942 / 1.841788 (-0.551846) | 15.875062 / 8.074308 (7.800754) | 14.784682 / 10.191392 (4.593290) | 0.144432 / 0.680424 (-0.535991) | 0.017703 / 0.534201 (-0.516498) | 0.424357 / 0.579283 (-0.154926) | 0.419078 / 0.434364 (-0.015286) | 0.489331 / 0.540337 (-0.051006) | 0.585284 / 1.386936 (-0.801652) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e3f4f124a1b118a5bfff5bae76b25a68aedbebbc \"CML watermark\")\n"
] | 2023-05-05T07:34:28 | 2023-05-05T12:39:14 | 2023-05-05T12:31:54 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5824",
"html_url": "https://github.com/huggingface/datasets/pull/5824",
"diff_url": "https://github.com/huggingface/datasets/pull/5824.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5824.patch",
"merged_at": "2023-05-05T12:31:54"
} | Fixes #5820
Also fixed a couple of typos I spotted | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5824/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5823 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5823/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5823/comments | https://api.github.com/repos/huggingface/datasets/issues/5823/events | https://github.com/huggingface/datasets/issues/5823 | 1,697,024,789 | I_kwDODunzps5lJosV | 5,823 | [2.12.0] DatasetDict.save_to_disk not saving to S3 | {
"login": "thejamesmarq",
"id": 5233185,
"node_id": "MDQ6VXNlcjUyMzMxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5233185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thejamesmarq",
"html_url": "https://github.com/thejamesmarq",
"followers_url": "https://api.github.com/users/thejamesmarq/followers",
"following_url": "https://api.github.com/users/thejamesmarq/following{/other_user}",
"gists_url": "https://api.github.com/users/thejamesmarq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thejamesmarq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thejamesmarq/subscriptions",
"organizations_url": "https://api.github.com/users/thejamesmarq/orgs",
"repos_url": "https://api.github.com/users/thejamesmarq/repos",
"events_url": "https://api.github.com/users/thejamesmarq/events{/privacy}",
"received_events_url": "https://api.github.com/users/thejamesmarq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Can you try adding the `s3://` prefix ?\r\n```python\r\nf\"s3://{s3_bucket}/{s3_dir}/{dataset_name}\"\r\n```",
"Ugh, yeah that was it. Thank you!"
] | 2023-05-05T05:22:59 | 2023-05-05T15:01:18 | 2023-05-05T15:01:17 | NONE | null | null | null | ### Describe the bug
When trying to save a `DatasetDict` to a private S3 bucket using `save_to_disk`, the artifacts are instead saved locally, and not in the S3 bucket.
I have tried using the deprecated `fs` as well as the `storage_options` arguments and I get the same results.
### Steps to reproduce the bug
1. Create a `DatasetDict` named `dataset_dict`
2. Create an `S3FileSystem` object
`s3 = datasets.filesystems.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key)`
3. Save using `dataset_dict.save_to_disk(f"{s3_bucket}/{s3_dir}/{dataset_name}", storage_options=s3.storage_options)` or `dataset_dict.save_to_disk(f"{s3_bucket}/{s3_dir}/{dataset_name}", fs=s3)`
4. Check the corresponding S3 bucket and verify nothing has been uploaded
5. Check the local path f"{s3_bucket}/{s3_dir}/{dataset_name}" and verify that the files have been saved there instead
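For reference, the resolution from the comment thread is to add an explicit `s3://` scheme to the path. A minimal sketch of the working call (credentials and bucket/key names are placeholders; `dataset_dict` is the `DatasetDict` from step 1):
```py
import datasets

# placeholders — substitute real credentials and names
aws_access_key_id = "..."
aws_secret_access_key = "..."
s3_bucket, s3_dir, dataset_name = "my-bucket", "my-dir", "my-dataset"

s3 = datasets.filesystems.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key)
# without the "s3://" prefix the path is treated as a local directory,
# which is why the files ended up on the local filesystem
dataset_dict.save_to_disk(
    f"s3://{s3_bucket}/{s3_dir}/{dataset_name}",
    storage_options=s3.storage_options,
)
```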
### Expected behavior
Artifacts are uploaded to the f"{s3_bucket}/{s3_dir}/{dataset_name}" S3 location.
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-13.3.1-x86_64-i386-64bit
- Python version: 3.11.2
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5823/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5822 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5822/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5822/comments | https://api.github.com/repos/huggingface/datasets/issues/5822/events | https://github.com/huggingface/datasets/issues/5822 | 1,696,627,308 | I_kwDODunzps5lIHps | 5,822 | Audio Dataset with_format torch problem | {
"login": "paulbauriegel",
"id": 20282916,
"node_id": "MDQ6VXNlcjIwMjgyOTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/20282916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paulbauriegel",
"html_url": "https://github.com/paulbauriegel",
"followers_url": "https://api.github.com/users/paulbauriegel/followers",
"following_url": "https://api.github.com/users/paulbauriegel/following{/other_user}",
"gists_url": "https://api.github.com/users/paulbauriegel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paulbauriegel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paulbauriegel/subscriptions",
"organizations_url": "https://api.github.com/users/paulbauriegel/orgs",
"repos_url": "https://api.github.com/users/paulbauriegel/repos",
"events_url": "https://api.github.com/users/paulbauriegel/events{/privacy}",
"received_events_url": "https://api.github.com/users/paulbauriegel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Can you try with a more recent version of `datasets` ?",
"Ok, yes it worked with the most recent version. Thanks"
] | 2023-05-04T20:07:51 | 2023-05-11T20:45:53 | 2023-05-11T20:45:53 | NONE | null | null | null | ### Describe the bug
Using the Common Voice v10 Delta (German) dataset from https://commonvoice.mozilla.org/de/datasets:
```
from datasets import Dataset, Audio  # imports implied by the snippet
# df: pandas DataFrame with a "path" column of clip filenames

audio_dataset = \
(Dataset
.from_dict({"audio": ('/tmp/cv-corpus-10.0-delta-2022-07-04/de/clips/' + df.path).to_list()})
.cast_column("audio", Audio(sampling_rate=16_000))
.with_format('numpy'))
audio_dataset[0]["audio"]
```
works, but
```
audio_dataset = \
(Dataset
.from_dict({"audio": ('/tmp/cv-corpus-10.0-delta-2022-07-04/de/clips/' + df.path).to_list()})
.cast_column("audio", Audio(sampling_rate=16_000))
.with_format('torch'))
audio_dataset[0]["audio"]
```
does not; instead I get:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[54], line 1
----> 1 audio_dataset[0]["audio"]
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/arrow_dataset.py:2154, in Dataset.__getitem__(self, key)
2152 def __getitem__(self, key): # noqa: F811
2153 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2154 return self._getitem(
2155 key,
2156 )
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/arrow_dataset.py:2139, in Dataset._getitem(self, key, decoded, **kwargs)
2137 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)
2138 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2139 formatted_output = format_table(
2140 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2141 )
2142 return formatted_output
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:58, in TorchFormatter.format_row(self, pa_table)
56 def format_row(self, pa_table: pa.Table) -> dict:
57 row = self.numpy_arrow_extractor().extract_row(pa_table)
---> 58 return self.recursive_tensorize(row)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:54, in TorchFormatter.recursive_tensorize(self, data_struct)
53 def recursive_tensorize(self, data_struct: dict):
---> 54 return map_nested(self._recursive_tensorize, data_struct, map_list=False)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:356, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)
354 num_proc = 1
355 if num_proc <= 1 or len(iterable) <= num_proc:
--> 356 mapped = [
357 _single_map_nested((function, obj, types, None, True, None))
358 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
359 ]
360 else:
361 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:357, in <listcomp>(.0)
354 num_proc = 1
355 if num_proc <= 1 or len(iterable) <= num_proc:
356 mapped = [
--> 357 _single_map_nested((function, obj, types, None, True, None))
358 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
359 ]
360 else:
361 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:309, in _single_map_nested(args)
306 pbar = logging.tqdm(pbar_iterable, disable=disable_tqdm, position=rank, unit="obj", desc=pbar_desc)
308 if isinstance(data_struct, dict):
--> 309 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
310 else:
311 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:309, in <dictcomp>(.0)
306 pbar = logging.tqdm(pbar_iterable, disable=disable_tqdm, position=rank, unit="obj", desc=pbar_desc)
308 if isinstance(data_struct, dict):
--> 309 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
310 else:
311 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:293, in _single_map_nested(args)
291 # Singleton first to spare some computation
292 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 293 return function(data_struct)
295 # Reduce logging to keep things readable in multiprocessing with tqdm
296 if rank is not None and logging.get_verbosity() < logging.WARNING:
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:51, in TorchFormatter._recursive_tensorize(self, data_struct)
49 if data_struct.dtype == np.object: # pytorch tensors cannot be instantied from an array of objects
50 return [self.recursive_tensorize(substruct) for substruct in data_struct]
---> 51 return self._tensorize(data_struct)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:38, in TorchFormatter._tensorize(self, value)
35 import torch
37 default_dtype = {}
---> 38 if np.issubdtype(value.dtype, np.integer):
39 default_dtype = {"dtype": torch.int64}
40 elif np.issubdtype(value.dtype, np.floating):
AttributeError: 'NoneType' object has no attribute 'dtype'
```
### Steps to reproduce the bug
1. Download an audio dataset; in this case I used the Common Voice v10 Delta (German) dataset from https://commonvoice.mozilla.org/de/datasets
2. Try the Code from above
### Expected behavior
It should work with `with_format('torch')` just as it does with `'numpy'`.
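Per the comment thread, upgrading `datasets` resolves this. As a stopgap on the older version, one hedged workaround is to keep the numpy format and convert the decoded waveform to a tensor manually — a sketch:
```py
import torch

sample = audio_dataset.with_format("numpy")[0]["audio"]
waveform = torch.from_numpy(sample["array"])  # only the samples need to be a tensor
sampling_rate = sample["sampling_rate"]
```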
### Environment info
pytorch: 2.0.0
datasets: 2.3.2
numpy: 1.21.6
Python: 3.8
Linux | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5822/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5820 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5820/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5820/comments | https://api.github.com/repos/huggingface/datasets/issues/5820/events | https://github.com/huggingface/datasets/issues/5820 | 1,695,892,811 | I_kwDODunzps5lFUVL | 5,820 | Incomplete docstring for `BuilderConfig` | {
"login": "Laurent2916",
"id": 21087104,
"node_id": "MDQ6VXNlcjIxMDg3MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Laurent2916",
"html_url": "https://github.com/Laurent2916",
"followers_url": "https://api.github.com/users/Laurent2916/followers",
"following_url": "https://api.github.com/users/Laurent2916/following{/other_user}",
"gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions",
"organizations_url": "https://api.github.com/users/Laurent2916/orgs",
"repos_url": "https://api.github.com/users/Laurent2916/repos",
"events_url": "https://api.github.com/users/Laurent2916/events{/privacy}",
"received_events_url": "https://api.github.com/users/Laurent2916/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Thanks for reporting! You are more than welcome to improve `BuilderConfig`'s docstring.\r\n\r\nThis class serves an identical purpose as `tensorflow_datasets`'s `BuilderConfig`, and its docstring is [here](https://github.com/tensorflow/datasets/blob/a95e38b5bb018312c3d3720619c2a8ef83ebf57f/tensorflow_datasets/core/dataset_builder.py#L81), so feel free to re-use parts of it."
] | 2023-05-04T12:14:34 | 2023-05-05T12:31:56 | 2023-05-05T12:31:56 | CONTRIBUTOR | null | null | null | Hi guys !
I stumbled upon this docstring while working on a project.
Some of the attributes have missing descriptions.
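For context, a sketch of what the completed attribute descriptions could look like — the wording here is illustrative, inferred from how the fields are used, not the final docstring (the relevant lines are linked below):
```py
@dataclass
class BuilderConfig:
    """Base class for `DatasetBuilder` data configuration.

    Attributes:
        name (str): Name of the configuration.
        version (Version or str, optional): Version of the configuration.
        data_dir (str, optional): Path to the directory containing the source data.
        data_files (str or Sequence or Mapping, optional): Path(s) to the source data file(s).
        description (str, optional): Human-readable description of the configuration.
    """
```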
https://github.com/huggingface/datasets/blob/bc5fef5b6d91f009e4101684adcb374df2c170f6/src/datasets/builder.py#L104-L117 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5820/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5819/comments | https://api.github.com/repos/huggingface/datasets/issues/5819/events | https://github.com/huggingface/datasets/issues/5819 | 1,695,536,738 | I_kwDODunzps5lD9Zi | 5,819 | Cannot pickle error in Dataset.from_generator() | {
"login": "xinghaow99",
"id": 50691954,
"node_id": "MDQ6VXNlcjUwNjkxOTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/50691954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinghaow99",
"html_url": "https://github.com/xinghaow99",
"followers_url": "https://api.github.com/users/xinghaow99/followers",
"following_url": "https://api.github.com/users/xinghaow99/following{/other_user}",
"gists_url": "https://api.github.com/users/xinghaow99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xinghaow99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinghaow99/subscriptions",
"organizations_url": "https://api.github.com/users/xinghaow99/orgs",
"repos_url": "https://api.github.com/users/xinghaow99/repos",
"events_url": "https://api.github.com/users/xinghaow99/events{/privacy}",
"received_events_url": "https://api.github.com/users/xinghaow99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! It should work if you put `model = torch.compile(model)` inside the `generate_data` function. If a referenced object is outside, it needs to be pickable, and that's not the case for the compiled models (or functions). ",
"> Hi! It should work if you put `model = torch.compile(model)` inside the `generate_data` function. If a referenced object is outside, it needs to be pickable, and that's not the case for the compiled models (or functions).\r\n\r\nHi! Thank you for your reply! Everything works perfectly with your suggestion!\r\n\r\nClosing the issue.\r\n"
] | 2023-05-04T08:39:09 | 2023-05-05T19:20:59 | 2023-05-05T19:20:58 | NONE | null | null | null | ### Describe the bug
I'm trying to use Dataset.from_generator() to generate a large dataset.
### Steps to reproduce the bug
Code to reproduce:
```
from transformers import T5Tokenizer, T5ForConditionalGeneration, GenerationConfig
import torch
from tqdm import tqdm
from datasets import load_dataset
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small", device_map="auto")
model = torch.compile(model)
def generate_data(data_loader):
model.eval()
for batch in tqdm(data_loader):
input_ids = tokenizer(batch['instruction'], return_tensors='pt', padding=True, truncation=True).input_ids.to("cuda:0")
with torch.no_grad():
outputs = model.generate(input_ids, generation_config=generation_config)
decoder_hidden_states = outputs.decoder_hidden_states
for i, h in zip(batch['instruction'], decoder_hidden_states):
yield {"instruction": i, "decoder_hidden_states": h}
generation_config = GenerationConfig(
temperature=1,
max_new_tokens=1024,
do_sample=False,
num_return_sequences=1,
return_dict_in_generate=True,
output_scores=True,
output_hidden_states=True,
)
from datasets import Dataset  # load_dataset is already imported above
from torch.utils.data import DataLoader
dataset = load_dataset("HuggingFaceH4/databricks_dolly_15k")
train_loader = DataLoader(dataset['train'], batch_size=2, shuffle=True)
dataset = Dataset.from_generator(generator=generate_data, gen_kwargs={"data_loader": train_loader})
dataset.save_to_disk("data/flant5_small_generation")
```
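As suggested in the comments, moving `torch.compile` inside `generate_data` avoids pickling the compiled model when `datasets` hashes the generator's arguments. A sketch of the reworked function (otherwise identical to the one above):
```py
def generate_data(data_loader):
    # compile inside the generator so the module-level `model` stays a plain,
    # picklable nn.Module
    compiled = torch.compile(model)
    compiled.eval()
    for batch in tqdm(data_loader):
        input_ids = tokenizer(
            batch["instruction"], return_tensors="pt", padding=True, truncation=True
        ).input_ids.to("cuda:0")
        with torch.no_grad():
            outputs = compiled.generate(input_ids, generation_config=generation_config)
        for i, h in zip(batch["instruction"], outputs.decoder_hidden_states):
            yield {"instruction": i, "decoder_hidden_states": h}
```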
### Expected behavior
The dataset should be generated and saved.
But the following error occurred:
```
Traceback (most recent call last):
File "/remote-home/xhwang/alpaca-lora/data_collection_t5.py", line 46, in <module>
dataset = Dataset.from_generator(generator=generate_data, gen_kwargs={"data_loader": train_loader})
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1035, in from_generator
return GeneratorDatasetInputStream(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/io/generator.py", line 28, in __init__
self.builder = Generator(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 336, in __init__
self.config, self.config_id = self._create_builder_config(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 505, in _create_builder_config
config_id = builder_config.create_config_id(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 179, in create_config_id
suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/fingerprint.py", line 236, in hash
return cls.hash_default(value)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/fingerprint.py", line 229, in hash_default
return cls.hash_bytes(dumps(value))
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 726, in dumps
dump(obj, file)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 701, in dump
Pickler(file, recurse=True).dump(obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 394, in dump
StockPickler.dump(self, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 487, in dump
self.save(obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1084, in _save_with_postproc
pickler._batch_setitems(iter(source.items()))
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1070, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 887, in save_tuple
save(element)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1070, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 887, in save_tuple
save(element)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 1003, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1084, in _save_with_postproc
pickler._batch_setitems(iter(source.items()))
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 578, in save
rv = reduce(self.proto)
TypeError: cannot pickle 'ConfigModuleInstance' object
```
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-4.15.0-156-generic-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.13.2
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5819/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5817/comments | https://api.github.com/repos/huggingface/datasets/issues/5817/events | https://github.com/huggingface/datasets/issues/5817 | 1,694,891,866 | I_kwDODunzps5lBf9a | 5,817 | Setting `num_proc` errors when `.map` returns additional items. | {
"login": "davidgilbertson",
"id": 4443482,
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidgilbertson",
"html_url": "https://github.com/davidgilbertson",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Unfortunately I couldn't reproduce on my side locally and with datasets 2.11 and python 3.10.11 on colab.\r\nWhat version of `multiprocess` are you using ?",
"I've got `multiprocess` version `0.70.14`.\r\n\r\nI've done some more testing and the error only occurs in PyCharm's Python Console. It seems to be [this PyCharm bug](https://youtrack.jetbrains.com/issue/PY-51922/Multiprocessing-bug.-Can-only-run-in-debugger.), I'll close this.",
"For other users facing this, my workaround is to conditionally set `num_proc` so I can work interactively in the PyCharm Python Console while developing, then when I'm ready to run on the whole dataset, run it as a script and use multiprocessing.\r\n\r\n```py\r\nmapped_ds = ds.map(\r\n my_map_function,\r\n batched=True,\r\n remove_columns=ds.column_names,\r\n num_proc=1 if \"PYCHARM_HOSTED\" in os.environ else 8,\r\n)\r\n```"
] | 2023-05-03T21:46:53 | 2023-05-04T21:14:21 | 2023-05-04T20:22:25 | NONE | null | null | null | ### Describe the bug
I'm using a map function that returns more rows than are passed in.
If I try to use `num_proc` I get:
```
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 563, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 528, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3097, in map
for rank, done, content in iflatmap_unordered(
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1372, in iflatmap_unordered
yield queue.get(timeout=0.05)
File "<string>", line 2, in get
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/managers.py", line 818, in _callmethod
kind, result = conn.recv()
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/connection.py", line 258, in recv
buf = self._recv_bytes()
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/connection.py", line 422, in _recv_bytes
buf = self._recv(4)
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/connection.py", line 391, in _recv
raise EOFError
EOFError
```
### Steps to reproduce the bug
This is copied from the [Datasets docs](https://huggingface.co/docs/datasets/v2.12.0/en/process#batch-processing), with `num_proc` added, and will error.
```py
import datasets
dataset = ... # any old dataset
def chunk_examples(examples):
chunks = []
for sentence in examples["text"]:
chunks += [sentence[i : i + 50] for i in range(0, len(sentence), 50)]
return {"chunks": chunks}
chunked_dataset = dataset.map(
chunk_examples,
batched=True,
remove_columns=dataset.column_names,
num_proc=2, # Remove and it works
)
```
### Expected behavior
Should work fine. On a related note, multiprocessing also fails if there is a metaclass anywhere in scope (and there are plenty in the standard library). This is the fault of `dill` and is a long-standing issue.
Have you considered using Loky for multiprocessing? I've found that the built-in `datasets` multiprocessing breaks more than it works, so I have written my own function using `loky`, for reference:
```py
import datasets
import loky
def fast_loop(dataset: datasets.Dataset, func, num_proc=None):
    if num_proc is None:
        import os

        # default to the number of CPUs available to this process
        num_proc = len(os.sched_getaffinity(0))
    # split the dataset into contiguous shards, one per worker
    shards = [
        dataset.shard(num_shards=num_proc, index=i, contiguous=True)
        for i in range(num_proc)
    ]
    # loky's reusable executor serializes with cloudpickle, which copes with
    # closures and metaclasses that the dill-based pool chokes on
    executor = loky.get_reusable_executor(max_workers=num_proc)
    results = executor.map(func, shards)
    # reassemble the processed shards into one dataset
    return datasets.combine.concatenate_datasets(list(results))
```
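A hedged usage sketch, pairing `fast_loop` with the `chunk_examples` function from above (`func` receives a whole shard, so it wraps the per-batch `map` call):
```py
def process_shard(shard):
    return shard.map(chunk_examples, batched=True, remove_columns=shard.column_names)

chunked_dataset = fast_loop(dataset, process_shard, num_proc=2)
```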
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- Huggingface_hub version: 0.12.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5817/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5816/comments | https://api.github.com/repos/huggingface/datasets/issues/5816/events | https://github.com/huggingface/datasets/pull/5816 | 1,694,590,856 | PR_kwDODunzps5Ps4t9 | 5,816 | Preserve `stopping_strategy` of shuffled interleaved dataset (random cycling case) | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007862 / 0.011353 (-0.003491) | 0.005747 / 0.011008 (-0.005261) | 0.106818 / 0.038508 (0.068310) | 0.036630 / 0.023109 (0.013521) | 0.344218 / 0.275898 (0.068320) | 0.398803 / 0.323480 (0.075324) | 0.006187 / 0.007986 (-0.001799) | 0.005686 / 0.004328 (0.001358) | 0.078568 / 0.004250 (0.074318) | 0.051786 / 0.037052 (0.014734) | 0.361736 / 0.258489 (0.103247) | 0.396323 / 0.293841 (0.102482) | 0.037943 / 0.128546 (-0.090603) | 0.013957 / 0.075646 (-0.061689) | 0.366782 / 0.419271 (-0.052490) | 0.054700 / 0.043533 (0.011167) | 0.349692 / 0.255139 (0.094553) | 0.366481 / 0.283200 (0.083281) | 0.117394 / 0.141683 (-0.024289) | 1.593156 / 1.452155 (0.141001) | 1.708864 / 1.492716 (0.216148) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229529 / 0.018006 (0.211523) | 0.490531 / 0.000490 (0.490042) | 0.002934 / 0.000200 (0.002734) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028074 / 0.037411 (-0.009337) | 0.122321 / 0.014526 (0.107795) | 0.129120 / 0.176557 (-0.047436) | 0.188413 / 0.737135 (-0.548722) | 0.138983 / 0.296338 (-0.157355) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.479350 / 0.215209 (0.264141) | 4.926201 / 2.077655 (2.848546) | 2.265557 / 1.504120 (0.761437) | 2.014580 / 1.541195 (0.473386) | 2.120517 / 1.468490 
(0.652027) | 0.795334 / 4.584777 (-3.789443) | 4.509754 / 3.745712 (0.764042) | 4.328313 / 5.269862 (-0.941548) | 2.153304 / 4.565676 (-2.412373) | 0.102942 / 0.424275 (-0.321333) | 0.053504 / 0.007607 (0.045896) | 0.609392 / 0.226044 (0.383347) | 6.114048 / 2.268929 (3.845119) | 2.773306 / 55.444624 (-52.671318) | 2.443434 / 6.876477 (-4.433042) | 2.612005 / 2.142072 (0.469932) | 0.950435 / 4.805227 (-3.854792) | 0.194081 / 6.500664 (-6.306583) | 0.074513 / 0.075469 (-0.000956) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.402897 / 1.841788 (-0.438891) | 18.263033 / 8.074308 (10.188724) | 16.579809 / 10.191392 (6.388417) | 0.212319 / 0.680424 (-0.468104) | 0.020468 / 0.534201 (-0.513733) | 0.494850 / 0.579283 (-0.084433) | 0.483790 / 0.434364 (0.049426) | 0.572073 / 0.540337 (0.031735) | 0.684353 / 1.386936 (-0.702583) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009732 / 0.011353 (-0.001621) | 0.005901 / 0.011008 (-0.005107) | 0.084568 / 0.038508 (0.046060) | 0.038743 / 0.023109 (0.015634) | 0.431323 / 0.275898 (0.155425) | 0.472124 / 0.323480 (0.148644) | 0.006255 / 0.007986 (-0.001731) | 0.005892 / 0.004328 (0.001563) | 0.081913 / 0.004250 (0.077662) | 0.055560 / 0.037052 (0.018507) | 0.442857 / 0.258489 (0.184368) | 0.481887 / 0.293841 (0.188046) | 0.040730 / 0.128546 (-0.087816) | 0.014339 / 0.075646 (-0.061307) | 0.099258 / 0.419271 (-0.320013) | 0.054692 / 0.043533 (0.011159) | 0.436323 / 0.255139 (0.181184) | 0.461046 / 0.283200 (0.177846) | 0.125972 / 0.141683 (-0.015710) | 1.673173 / 1.452155 (0.221018) | 1.781364 / 1.492716 (0.288648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271450 / 0.018006 (0.253444) | 0.514484 / 0.000490 (0.513994) | 0.000455 / 0.000200 (0.000255) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036104 / 0.037411 (-0.001308) | 0.143306 / 0.014526 (0.128780) | 0.151105 / 0.176557 (-0.025451) | 0.210737 / 0.737135 (-0.526399) | 0.151404 / 0.296338 (-0.144934) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.573613 / 0.215209 (0.358404) | 5.828222 / 2.077655 (3.750567) | 2.993028 / 1.504120 (1.488908) | 2.617900 / 1.541195 (1.076706) | 2.754673 / 1.468490 (1.286183) | 1.010624 / 4.584777 (-3.574152) | 4.971261 / 3.745712 (1.225549) | 4.382017 / 5.269862 (-0.887845) | 1.971894 / 4.565676 (-2.593782) | 0.104404 / 0.424275 (-0.319871) | 0.014595 / 0.007607 (0.006988) | 0.657684 / 0.226044 (0.431639) | 6.566151 / 2.268929 (4.297222) | 3.221378 / 55.444624 (-52.223246) | 2.809402 / 6.876477 (-4.067075) | 2.882426 / 2.142072 (0.740354) | 1.006134 / 4.805227 (-3.799093) | 0.204469 / 6.500664 (-6.296196) | 0.078147 / 0.075469 (0.002678) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.574768 / 1.841788 (-0.267020) | 18.193335 / 8.074308 (10.119027) | 17.275353 / 10.191392 (7.083961) | 0.166890 / 0.680424 (-0.513534) | 0.020612 / 0.534201 (-0.513589) | 0.496179 / 0.579283 (-0.083104) | 0.507824 / 0.434364 (0.073460) | 0.620984 / 0.540337 (0.080647) | 0.749727 / 1.386936 (-0.637209) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#06988d3e01820b93ebcdc76158339fd6f67329dc \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006534 / 0.011353 (-0.004819) | 0.004456 / 0.011008 (-0.006553) | 0.097978 / 0.038508 (0.059470) | 0.027614 / 0.023109 (0.004505) | 0.309833 / 0.275898 (0.033935) | 0.337006 / 0.323480 (0.013526) | 0.004986 / 0.007986 (-0.002999) | 0.004521 / 0.004328 (0.000193) | 0.075053 / 0.004250 (0.070803) | 0.037095 / 0.037052 (0.000043) | 0.305430 / 0.258489 (0.046941) | 0.345298 / 0.293841 (0.051457) | 0.029784 / 0.128546 (-0.098762) | 0.011449 / 0.075646 (-0.064197) | 0.323346 / 0.419271 (-0.095925) | 0.042188 / 0.043533 (-0.001345) | 0.318653 / 0.255139 (0.063514) | 0.333799 / 0.283200 (0.050599) | 0.088194 / 0.141683 (-0.053488) | 1.511012 / 1.452155 (0.058857) | 1.578205 / 1.492716 (0.085489) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229695 / 0.018006 (0.211689) | 0.413276 / 0.000490 (0.412786) | 0.009142 / 0.000200 (0.008942) | 0.000537 / 0.000054 (0.000482) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024327 / 0.037411 (-0.013084) | 0.097953 / 0.014526 (0.083427) | 0.105551 / 0.176557 (-0.071005) | 0.169397 / 0.737135 (-0.567738) | 0.109784 / 0.296338 (-0.186554) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417713 / 0.215209 (0.202504) | 4.190703 / 2.077655 (2.113048) | 1.873504 / 1.504120 (0.369384) | 1.664540 / 1.541195 (0.123346) | 1.704539 / 1.468490 
(0.236049) | 0.699840 / 4.584777 (-3.884937) | 3.480605 / 3.745712 (-0.265107) | 1.844229 / 5.269862 (-3.425633) | 1.155793 / 4.565676 (-3.409883) | 0.083013 / 0.424275 (-0.341262) | 0.012414 / 0.007607 (0.004807) | 0.518357 / 0.226044 (0.292313) | 5.186136 / 2.268929 (2.917207) | 2.329263 / 55.444624 (-53.115361) | 1.991395 / 6.876477 (-4.885081) | 2.074563 / 2.142072 (-0.067509) | 0.801388 / 4.805227 (-4.003839) | 0.152236 / 6.500664 (-6.348428) | 0.067414 / 0.075469 (-0.008055) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197290 / 1.841788 (-0.644497) | 13.666537 / 8.074308 (5.592229) | 13.017190 / 10.191392 (2.825798) | 0.142109 / 0.680424 (-0.538314) | 0.016321 / 0.534201 (-0.517880) | 0.378434 / 0.579283 (-0.200849) | 0.381101 / 0.434364 (-0.053263) | 0.444113 / 0.540337 (-0.096225) | 0.521448 / 1.386936 (-0.865488) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006273 / 0.011353 (-0.005080) | 0.004408 / 0.011008 (-0.006600) | 0.077100 / 0.038508 (0.038592) | 0.027361 / 0.023109 (0.004251) | 0.358170 / 0.275898 (0.082272) | 0.390125 / 0.323480 (0.066646) | 0.004736 / 0.007986 (-0.003250) | 0.004663 / 0.004328 (0.000334) | 0.077626 / 0.004250 (0.073376) | 0.037103 / 0.037052 (0.000051) | 0.360044 / 0.258489 (0.101555) | 0.411539 / 0.293841 (0.117698) | 0.030173 / 0.128546 (-0.098373) | 0.011618 / 0.075646 (-0.064028) | 0.086036 / 0.419271 (-0.333235) | 0.039077 / 0.043533 (-0.004456) | 0.382223 / 0.255139 (0.127084) | 0.384817 / 0.283200 (0.101618) | 0.094591 / 0.141683 (-0.047092) | 1.494961 / 1.452155 (0.042807) | 1.583769 / 1.492716 (0.091053) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227467 / 0.018006 (0.209460) | 0.396648 / 0.000490 (0.396159) | 0.000382 / 0.000200 (0.000182) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025346 / 0.037411 (-0.012065) | 0.102086 / 0.014526 (0.087560) | 0.108570 / 0.176557 (-0.067986) | 0.158777 / 0.737135 (-0.578359) | 0.112885 / 0.296338 (-0.183453) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.460731 / 0.215209 (0.245522) | 4.556450 / 2.077655 (2.478795) | 2.258185 / 1.504120 (0.754065) | 2.122584 / 1.541195 (0.581389) | 2.224638 / 1.468490 (0.756148) | 0.691909 / 4.584777 (-3.892868) | 3.482634 / 3.745712 (-0.263078) | 2.772837 / 5.269862 (-2.497024) | 1.533897 / 4.565676 (-3.031780) | 0.083025 / 0.424275 (-0.341250) | 0.012629 / 0.007607 (0.005022) | 0.548397 / 0.226044 (0.322352) | 5.492005 / 2.268929 (3.223077) | 2.669841 / 55.444624 (-52.774784) | 2.366947 / 6.876477 (-4.509529) | 2.496795 / 2.142072 (0.354722) | 0.804868 / 4.805227 (-4.000359) | 0.151686 / 6.500664 (-6.348978) | 0.068333 / 0.075469 (-0.007136) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.320414 / 1.841788 (-0.521374) | 14.367567 / 8.074308 (6.293258) | 14.047702 / 10.191392 (3.856310) | 0.129087 / 0.680424 (-0.551337) | 0.016658 / 0.534201 (-0.517543) | 0.381949 / 0.579283 (-0.197335) | 0.390105 / 0.434364 (-0.044258) | 0.445947 / 0.540337 (-0.094390) | 0.531074 / 1.386936 (-0.855862) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c67c9f3797ecc231b34d87ddef489c1238ec4046 \"CML watermark\")\n"
] | 2023-05-03T18:34:18 | 2023-05-04T14:31:55 | 2023-05-04T14:24:49 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5816",
"html_url": "https://github.com/huggingface/datasets/pull/5816",
"diff_url": "https://github.com/huggingface/datasets/pull/5816.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5816.patch",
"merged_at": "2023-05-04T14:24:49"
} | Preserve the `stopping_strategy` in `RandomlyCyclingMultiSourcesExamplesIterable.shard_data_sources` to fix shuffling a dataset interleaved from multiple sources with probabilities.
Fix #5812
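Below is a rough sketch of the idea behind the fix. The class and method names follow the description above, but the constructor signature and attribute names are illustrative assumptions, not the actual `datasets` internals:
```python
# Illustrative sketch only: the signature and attributes are assumptions,
# not the real `datasets` source code.
class RandomlyCyclingMultiSourcesExamplesIterable:
    def __init__(self, ex_iterables, generator, probabilities=None,
                 stopping_strategy="first_exhausted"):
        self.ex_iterables = ex_iterables
        self.generator = generator
        self.probabilities = probabilities
        self.stopping_strategy = stopping_strategy

    def shard_data_sources(self, worker_id, num_workers):
        # Before the fix, the rebuilt iterable omitted `stopping_strategy`,
        # so it silently reset to the default "first_exhausted".
        return RandomlyCyclingMultiSourcesExamplesIterable(
            [it.shard_data_sources(worker_id, num_workers) for it in self.ex_iterables],
            self.generator,
            probabilities=self.probabilities,
            stopping_strategy=self.stopping_strategy,
        )
```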
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5816/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5813/comments | https://api.github.com/repos/huggingface/datasets/issues/5813/events | https://github.com/huggingface/datasets/pull/5813 | 1,691,908,535 | PR_kwDODunzps5Pj0_E | 5,813 | [DO-NOT-MERGE] Debug Windows issue at #3 | {
"login": "HyukjinKwon",
"id": 6477701,
"node_id": "MDQ6VXNlcjY0Nzc3MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6477701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HyukjinKwon",
"html_url": "https://github.com/HyukjinKwon",
"followers_url": "https://api.github.com/users/HyukjinKwon/followers",
"following_url": "https://api.github.com/users/HyukjinKwon/following{/other_user}",
"gists_url": "https://api.github.com/users/HyukjinKwon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HyukjinKwon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HyukjinKwon/subscriptions",
"organizations_url": "https://api.github.com/users/HyukjinKwon/orgs",
"repos_url": "https://api.github.com/users/HyukjinKwon/repos",
"events_url": "https://api.github.com/users/HyukjinKwon/events{/privacy}",
"received_events_url": "https://api.github.com/users/HyukjinKwon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2023-05-02T07:19:34 | 2023-05-02T07:21:30 | 2023-05-02T07:21:30 | NONE | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5813",
"html_url": "https://github.com/huggingface/datasets/pull/5813",
"diff_url": "https://github.com/huggingface/datasets/pull/5813.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5813.patch",
"merged_at": null
} | TBD | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5813/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5812 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5812/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5812/comments | https://api.github.com/repos/huggingface/datasets/issues/5812/events | https://github.com/huggingface/datasets/issues/5812 | 1,691,798,169 | I_kwDODunzps5k1sqZ | 5,812 | Cannot shuffle interleaved IterableDataset with "all_exhausted" stopping strategy | {
"login": "off99555",
"id": 15215732,
"node_id": "MDQ6VXNlcjE1MjE1NzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/15215732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/off99555",
"html_url": "https://github.com/off99555",
"followers_url": "https://api.github.com/users/off99555/followers",
"following_url": "https://api.github.com/users/off99555/following{/other_user}",
"gists_url": "https://api.github.com/users/off99555/gists{/gist_id}",
"starred_url": "https://api.github.com/users/off99555/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/off99555/subscriptions",
"organizations_url": "https://api.github.com/users/off99555/orgs",
"repos_url": "https://api.github.com/users/off99555/repos",
"events_url": "https://api.github.com/users/off99555/events{/privacy}",
"received_events_url": "https://api.github.com/users/off99555/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-05-02T05:26:17 | 2023-05-04T14:24:51 | 2023-05-04T14:24:51 | NONE | null | null | null | ### Describe the bug
Shuffling an interleaved `IterableDataset` with the "all_exhausted" stopping strategy yields non-exhaustive sampling.
### Steps to reproduce the bug
```py
from datasets import IterableDataset, interleave_datasets
def gen(bias, length):
    # yield `length` examples whose values start at `bias`
    for i in range(length):
        yield dict(a=bias + i)
seed = 42
probabilities = [0.2, 0.6, 0.2]
d1 = IterableDataset.from_generator(lambda: gen(0, 3))
d2 = IterableDataset.from_generator(lambda: gen(10, 4))
d3 = IterableDataset.from_generator(lambda: gen(20, 3))
ds = interleave_datasets([d1, d2, d3], probabilities=probabilities, seed=seed, stopping_strategy='all_exhausted')
ds = ds.shuffle(buffer_size=1000)
for x in ds:
print(x)
```
This code produces
```
{'a': 0}
{'a': 22}
{'a': 20}
{'a': 21}
{'a': 10}
{'a': 1}
```
### Expected behavior
It should produce a longer list of examples to exhaust all the datasets.
If you comment out the shuffle line, it will exhaust all the datasets properly.
Here is the output if you comment out shuffling:
```
{'a': 10}
{'a': 11}
{'a': 20}
{'a': 12}
{'a': 0}
{'a': 21}
{'a': 13}
{'a': 10}
{'a': 1}
{'a': 11}
{'a': 12}
{'a': 22}
{'a': 13}
{'a': 20}
{'a': 10}
{'a': 11}
{'a': 12}
{'a': 2}
```
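As a minimal sketch (reusing `ds` from the reproduction above), the exhaustion can also be checked programmatically by collecting the yielded values and asserting that every example from every source appears at least once:
```py
# Minimal sketch: with "all_exhausted", every source example should be
# seen at least once over a full pass of the interleaved dataset.
seen = {x["a"] for x in ds}
expected = set(range(3)) | set(range(10, 14)) | set(range(20, 23))
assert expected <= seen, f"missing examples: {sorted(expected - seen)}"
```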
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
This was run on Google Colab. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5812/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5810 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5810/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5810/comments | https://api.github.com/repos/huggingface/datasets/issues/5810/events | https://github.com/huggingface/datasets/pull/5810 | 1,689,917,822 | PR_kwDODunzps5PdJHI | 5,810 | Add `fn_kwargs` to `map` and `filter` of `IterableDataset` and `IterableDatasetDict` | {
"login": "yuukicammy",
"id": 3927621,
"node_id": "MDQ6VXNlcjM5Mjc2MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3927621?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuukicammy",
"html_url": "https://github.com/yuukicammy",
"followers_url": "https://api.github.com/users/yuukicammy/followers",
"following_url": "https://api.github.com/users/yuukicammy/following{/other_user}",
"gists_url": "https://api.github.com/users/yuukicammy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuukicammy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuukicammy/subscriptions",
"organizations_url": "https://api.github.com/users/yuukicammy/orgs",
"repos_url": "https://api.github.com/users/yuukicammy/repos",
"events_url": "https://api.github.com/users/yuukicammy/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuukicammy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Sorry, the local test passed because it was inadvertently testing the main branch. I am currently fixing where the test failed.",
"- I have fixed the bug and addressed the above two points.\r\n- I have tested locally and confirmed that the test passes.\r\n\r\nPlease check the contents. @lhoestq \r\n\r\n5715a7e64bdd2951e6705aee58d592392e1538d6",
"Cool ! You can run `make style` to fix code formatting to fix the ci",
"I had forgotten about it. I did it. @lhoestq \r\n00248926a37c6f1387614aa388c36fdc105a59f5",
"Thanks for putting this together @yuukicammy ! Looking forward to using this new addition ASAP. \r\n@lhoestq - sorry to bother you with this, but if this looks good to you, any chance we could get this merged in? \r\n\r\nThanks again to you both! ",
"Yup there's just one test to remove and we can merge",
"Sorry for my understanding wrong! Correspondence has been addressed. @lhoestq \r\n ca511b7b29fdde51ffd69b58bda79220472e9e94\r\n\r\nThanks for your comment! @brianhill11 ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006788 / 0.011353 (-0.004564) | 0.004372 / 0.011008 (-0.006636) | 0.097746 / 0.038508 (0.059238) | 0.034858 / 0.023109 (0.011749) | 0.298122 / 0.275898 (0.022224) | 0.335272 / 0.323480 (0.011792) | 0.005810 / 0.007986 (-0.002175) | 0.004944 / 0.004328 (0.000616) | 0.072352 / 0.004250 (0.068101) | 0.041730 / 0.037052 (0.004678) | 0.316482 / 0.258489 (0.057992) | 0.338710 / 0.293841 (0.044869) | 0.027975 / 0.128546 (-0.100571) | 0.008746 / 0.075646 (-0.066901) | 0.329336 / 0.419271 (-0.089935) | 0.051327 / 0.043533 (0.007794) | 0.300695 / 0.255139 (0.045556) | 0.322813 / 0.283200 (0.039613) | 0.101133 / 0.141683 (-0.040550) | 1.422767 / 1.452155 (-0.029388) | 1.538364 / 1.492716 (0.045648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.016698 / 0.018006 (-0.001308) | 0.447042 / 0.000490 (0.446552) | 0.007609 / 0.000200 (0.007409) | 0.000277 / 0.000054 (0.000223) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026732 / 0.037411 (-0.010679) | 0.108295 / 0.014526 (0.093769) | 0.116905 / 0.176557 (-0.059652) | 0.173166 / 0.737135 (-0.563969) | 0.122560 / 0.296338 (-0.173779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.394893 / 0.215209 (0.179683) | 3.950314 / 2.077655 (1.872659) | 1.780576 / 1.504120 (0.276456) | 1.579855 / 1.541195 (0.038660) | 1.711197 / 1.468490 
(0.242707) | 0.521469 / 4.584777 (-4.063308) | 3.838850 / 3.745712 (0.093138) | 3.101095 / 5.269862 (-2.168767) | 1.531574 / 4.565676 (-3.034102) | 0.065291 / 0.424275 (-0.358984) | 0.011979 / 0.007607 (0.004372) | 0.496543 / 0.226044 (0.270498) | 4.965446 / 2.268929 (2.696517) | 2.250788 / 55.444624 (-53.193837) | 1.923231 / 6.876477 (-4.953245) | 2.075372 / 2.142072 (-0.066700) | 0.638708 / 4.805227 (-4.166519) | 0.142048 / 6.500664 (-6.358616) | 0.064225 / 0.075469 (-0.011244) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211799 / 1.841788 (-0.629989) | 14.791822 / 8.074308 (6.717514) | 14.274993 / 10.191392 (4.083601) | 0.163942 / 0.680424 (-0.516482) | 0.017541 / 0.534201 (-0.516660) | 0.396440 / 0.579283 (-0.182843) | 0.427502 / 0.434364 (-0.006861) | 0.494273 / 0.540337 (-0.046064) | 0.586877 / 1.386936 (-0.800059) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006846 / 0.011353 (-0.004506) | 0.004854 / 0.011008 (-0.006154) | 0.075654 / 0.038508 (0.037146) | 0.034295 / 0.023109 (0.011186) | 0.378095 / 0.275898 (0.102197) | 0.407833 / 0.323480 (0.084353) | 0.006155 / 0.007986 (-0.001830) | 0.004259 / 0.004328 (-0.000070) | 0.076195 / 0.004250 (0.071944) | 0.051901 / 0.037052 (0.014849) | 0.375027 / 0.258489 (0.116538) | 0.428189 / 0.293841 (0.134348) | 0.028814 / 0.128546 (-0.099733) | 0.009209 / 0.075646 (-0.066438) | 0.083681 / 0.419271 (-0.335591) | 0.049158 / 0.043533 (0.005625) | 0.366669 / 0.255139 (0.111530) | 0.388767 / 0.283200 (0.105568) | 0.107837 / 0.141683 (-0.033845) | 1.476354 / 1.452155 (0.024199) | 1.580160 / 1.492716 (0.087443) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218900 / 0.018006 (0.200894) | 0.445475 / 0.000490 (0.444985) | 0.000423 / 0.000200 (0.000223) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029740 / 0.037411 (-0.007671) | 0.115192 / 0.014526 (0.100666) | 0.122439 / 0.176557 (-0.054118) | 0.170639 / 0.737135 (-0.566496) | 0.128085 / 0.296338 (-0.168254) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437745 / 0.215209 (0.222536) | 4.385695 / 2.077655 (2.308040) | 2.189893 / 1.504120 (0.685773) | 2.023160 / 1.541195 (0.481965) | 2.112798 / 1.468490 (0.644308) | 0.522497 / 4.584777 (-4.062280) | 3.881356 / 3.745712 (0.135644) | 3.206090 / 5.269862 (-2.063772) | 1.308241 / 4.565676 (-3.257435) | 0.065635 / 0.424275 (-0.358640) | 0.012288 / 0.007607 (0.004681) | 0.537265 / 0.226044 (0.311220) | 5.361641 / 2.268929 (3.092712) | 2.638941 / 55.444624 (-52.805684) | 2.344717 / 6.876477 (-4.531759) | 2.437619 / 2.142072 (0.295546) | 0.645079 / 4.805227 (-4.160149) | 0.143852 / 6.500664 (-6.356812) | 0.065796 / 0.075469 (-0.009673) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276588 / 1.841788 (-0.565200) | 15.239396 / 8.074308 (7.165088) | 13.150591 / 10.191392 (2.959199) | 0.163635 / 0.680424 (-0.516789) | 0.017533 / 0.534201 (-0.516668) | 0.397659 / 0.579283 (-0.181624) | 0.425589 / 0.434364 (-0.008774) | 0.466570 / 0.540337 (-0.073768) | 0.563953 / 1.386936 (-0.822983) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#807d5c5ed4f8db7761b92bed498b2193acce8fb7 \"CML watermark\")\n"
] | 2023-04-30T13:23:01 | 2023-05-22T08:12:39 | 2023-05-22T08:05:31 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5810",
"html_url": "https://github.com/huggingface/datasets/pull/5810",
"diff_url": "https://github.com/huggingface/datasets/pull/5810.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5810.patch",
"merged_at": "2023-05-22T08:05:31"
} | # Overview
I've added an argument `fn_kwargs` to the `map` and `filter` methods of the `IterableDataset` and `IterableDatasetDict` classes.
# Details
Currently, the `map` and `filter` methods of some classes related to `IterableDataset` do not allow specifying the arguments passed to the function. This pull request adds `fn_kwargs` to pass arguments to the mapping function, which allows users to preprocess data more flexibly.
Added `fn_kwargs` to the following classes and methods (a description of the argument is also added).
1. class `FilteredExamplesIterable`
2. method `filter` of class `IterableDataset`
3. method `map` of class `IterableDatasetDict`
4. method `filter` of class `IterableDatasetDict`
# Example of changes
Here's an example of how to use the new functionality:
```python
from datasets import IterableDatasetDict
def preprocess_function(example, a=None, b=None):
# do something
return example
dataset = IterableDatasetDict(...)
dataset = dataset.map(preprocess_function, fn_kwargs={"a": 1, "b": 2})
```
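Filtering works the same way. For instance (a minimal, self-contained sketch; the `text` column and the data are made up for illustration):
```python
from datasets import IterableDataset

def keep_long_texts(example, min_length=10):
    # "text" is an assumed column name, used only for illustration
    return len(example["text"]) >= min_length

texts = ["short", "a sufficiently long example text"]
dataset = IterableDataset.from_generator(lambda: (dict(text=t) for t in texts))
dataset = dataset.filter(keep_long_texts, fn_kwargs={"min_length": 20})
print(list(dataset))  # keeps only the long text
```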
# Related Issues
This pull request is related to the following issue:
https://github.com/huggingface/datasets/issues/3444.
# Testing
I have added unit tests to test the new functionality.
In test_iterable_dataset.py
- Added `test_filtered_examples_iterable_with_fn_kwargs` for [1](#details).
- Added `test_iterable_dataset_filter` for [2](#details).
- Added `test_iterable_dataset_map_with_fn_kwargs`. The `fn_kwargs` argument of `IterableDataset.map` is not a newly added feature, but a test was added because it was previously untested.
In test_dataset_dict.py
- Added `_create_dummy_iterable_dataset` for [3](#details) and [4](#details).
- Added `_create_dummy_iterable_dataset_dict` for [3](#details) and [4](#details).
- Added `test_iterable_map` for [3](#details).
- Added `test_iterable_filter` for [4](#details).
Note that there are no tests for `IterableDatasetDict` on the current main branch. I thought about writing tests for `IterableDatasetDict` in a new file, but decided to add them to the test file for `DatasetDict` (test_dataset_dict.py).
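For illustration, a test of the new argument might look like the following minimal sketch (the names and data here are made up, not the exact tests added in this PR):
```python
from datasets import IterableDataset

def test_iterable_dataset_filter_with_fn_kwargs():
    ds = IterableDataset.from_generator(lambda: (dict(a=i) for i in range(10)))
    # `threshold` is forwarded to the filter function through `fn_kwargs`
    ds = ds.filter(lambda example, threshold: example["a"] >= threshold,
                   fn_kwargs={"threshold": 5})
    assert [example["a"] for example in ds] == [5, 6, 7, 8, 9]
```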
# Checklist
- [x] Format the code.
- [x] Added tests.
- [x] Passed tests locally. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5810/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5804/comments | https://api.github.com/repos/huggingface/datasets/issues/5804/events | https://github.com/huggingface/datasets/pull/5804 | 1,688,285,666 | PR_kwDODunzps5PX0Dk | 5,804 | Set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5804). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006448 / 0.011353 (-0.004905) | 0.004440 / 0.011008 (-0.006568) | 0.097837 / 0.038508 (0.059328) | 0.027754 / 0.023109 (0.004645) | 0.306462 / 0.275898 (0.030564) | 0.332454 / 0.323480 (0.008975) | 0.004984 / 0.007986 (-0.003001) | 0.004703 / 0.004328 (0.000375) | 0.075213 / 0.004250 (0.070962) | 0.036524 / 0.037052 (-0.000529) | 0.310149 / 0.258489 (0.051659) | 0.346392 / 0.293841 (0.052552) | 0.031012 / 0.128546 (-0.097534) | 0.011598 / 0.075646 (-0.064049) | 0.323066 / 0.419271 (-0.096206) | 0.042945 / 0.043533 (-0.000588) | 0.302286 / 0.255139 (0.047147) | 0.327813 / 0.283200 (0.044614) | 0.092540 / 0.141683 (-0.049143) | 1.532893 / 1.452155 (0.080739) | 1.556676 / 1.492716 (0.063960) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195126 / 0.018006 (0.177120) | 0.399623 / 0.000490 (0.399133) | 0.003176 / 0.000200 (0.002976) | 0.000068 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023612 / 0.037411 (-0.013799) | 0.097794 / 0.014526 (0.083268) | 0.104665 / 0.176557 (-0.071891) | 0.167145 / 0.737135 (-0.569990) | 0.108769 / 0.296338 (-0.187570) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437818 / 0.215209 (0.222608) | 4.354896 / 2.077655 (2.277242) | 2.092832 / 1.504120 (0.588712) | 1.957630 / 1.541195 (0.416435) | 2.033135 / 1.468490 
(0.564645) | 0.702316 / 4.584777 (-3.882461) | 3.448035 / 3.745712 (-0.297678) | 1.906762 / 5.269862 (-3.363100) | 1.253274 / 4.565676 (-3.312402) | 0.082486 / 0.424275 (-0.341789) | 0.012442 / 0.007607 (0.004835) | 0.532096 / 0.226044 (0.306052) | 5.366580 / 2.268929 (3.097652) | 2.441904 / 55.444624 (-53.002720) | 2.112116 / 6.876477 (-4.764361) | 2.185471 / 2.142072 (0.043398) | 0.797905 / 4.805227 (-4.007322) | 0.149811 / 6.500664 (-6.350853) | 0.066507 / 0.075469 (-0.008962) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206300 / 1.841788 (-0.635487) | 13.620851 / 8.074308 (5.546543) | 14.190666 / 10.191392 (3.999274) | 0.142343 / 0.680424 (-0.538081) | 0.016867 / 0.534201 (-0.517334) | 0.381557 / 0.579283 (-0.197726) | 0.373935 / 0.434364 (-0.060429) | 0.437856 / 0.540337 (-0.102481) | 0.525235 / 1.386936 (-0.861701) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006598 / 0.011353 (-0.004755) | 0.004487 / 0.011008 (-0.006522) | 0.077582 / 0.038508 (0.039073) | 0.028008 / 0.023109 (0.004899) | 0.341602 / 0.275898 (0.065704) | 0.377105 / 0.323480 (0.053625) | 0.004999 / 0.007986 (-0.002986) | 0.004791 / 0.004328 (0.000462) | 0.076418 / 0.004250 (0.072167) | 0.038347 / 0.037052 (0.001295) | 0.343196 / 0.258489 (0.084707) | 0.382459 / 0.293841 (0.088618) | 0.030597 / 0.128546 (-0.097950) | 0.011579 / 0.075646 (-0.064067) | 0.085876 / 0.419271 (-0.333396) | 0.043241 / 0.043533 (-0.000292) | 0.343754 / 0.255139 (0.088615) | 0.380689 / 0.283200 (0.097489) | 0.096015 / 0.141683 (-0.045668) | 1.464419 / 1.452155 (0.012264) | 1.574010 / 1.492716 (0.081294) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.156433 / 0.018006 (0.138427) | 0.403179 / 0.000490 (0.402690) | 0.002415 / 0.000200 (0.002215) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024946 / 0.037411 (-0.012465) | 0.100568 / 0.014526 (0.086042) | 0.106440 / 0.176557 (-0.070117) | 0.158457 / 0.737135 (-0.578678) | 0.110774 / 0.296338 (-0.185564) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434734 / 0.215209 (0.219525) | 4.343874 / 2.077655 (2.266220) | 2.059759 / 1.504120 (0.555639) | 1.855124 / 1.541195 (0.313930) | 1.908567 / 1.468490 (0.440077) | 0.695283 / 4.584777 (-3.889494) | 3.347724 / 3.745712 (-0.397988) | 2.979498 / 5.269862 (-2.290364) | 1.532040 / 4.565676 (-3.033636) | 0.083021 / 0.424275 (-0.341254) | 0.012522 / 0.007607 (0.004915) | 0.540934 / 0.226044 (0.314890) | 5.385690 / 2.268929 (3.116762) | 2.507409 / 55.444624 (-52.937216) | 2.160537 / 6.876477 (-4.715939) | 2.269195 / 2.142072 (0.127123) | 0.804718 / 4.805227 (-4.000509) | 0.152432 / 6.500664 (-6.348232) | 0.068783 / 0.075469 (-0.006686) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294698 / 1.841788 (-0.547090) | 14.152792 / 8.074308 (6.078484) | 14.233132 / 10.191392 (4.041740) | 0.143655 / 0.680424 (-0.536768) | 0.016844 / 0.534201 (-0.517357) | 0.380246 / 0.579283 (-0.199037) | 0.381730 / 0.434364 (-0.052633) | 0.456838 / 0.540337 (-0.083499) | 0.543677 / 1.386936 (-0.843259) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b28d5610887f2e107765f5f1557679184db08214 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008586 / 0.011353 (-0.002767) | 0.005886 / 0.011008 (-0.005122) | 0.114522 / 0.038508 (0.076014) | 0.040966 / 0.023109 (0.017857) | 0.366655 / 0.275898 (0.090757) | 0.408765 / 0.323480 (0.085285) | 0.006822 / 0.007986 (-0.001164) | 0.004508 / 0.004328 (0.000180) | 0.084715 / 0.004250 (0.080465) | 0.054007 / 0.037052 (0.016954) | 0.380500 / 0.258489 (0.122011) | 0.410377 / 0.293841 (0.116536) | 0.041040 / 0.128546 (-0.087507) | 0.013940 / 0.075646 (-0.061707) | 0.398456 / 0.419271 (-0.020816) | 0.059315 / 0.043533 (0.015782) | 0.353640 / 0.255139 (0.098501) | 0.388682 / 0.283200 (0.105482) | 0.121744 / 0.141683 (-0.019939) | 1.729306 / 1.452155 (0.277151) | 1.824768 / 1.492716 (0.332052) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228806 / 0.018006 (0.210800) | 0.492790 / 0.000490 (0.492300) | 0.010815 / 0.000200 (0.010615) | 0.000372 / 0.000054 (0.000318) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031750 / 0.037411 (-0.005662) | 0.127160 / 0.014526 (0.112635) | 0.136717 / 0.176557 (-0.039839) | 0.205590 / 0.737135 (-0.531545) | 0.142596 / 0.296338 (-0.153742) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.486419 / 0.215209 (0.271210) | 4.858572 / 2.077655 (2.780918) | 2.173867 / 1.504120 (0.669747) | 1.934619 / 1.541195 (0.393424) | 2.104185 / 1.468490 
(0.635695) | 0.837913 / 4.584777 (-3.746864) | 4.552192 / 3.745712 (0.806480) | 2.565040 / 5.269862 (-2.704822) | 1.808499 / 4.565676 (-2.757178) | 0.103283 / 0.424275 (-0.320993) | 0.015040 / 0.007607 (0.007433) | 0.602325 / 0.226044 (0.376281) | 6.038655 / 2.268929 (3.769727) | 2.759789 / 55.444624 (-52.684835) | 2.330990 / 6.876477 (-4.545487) | 2.404111 / 2.142072 (0.262038) | 1.011637 / 4.805227 (-3.793590) | 0.202142 / 6.500664 (-6.298522) | 0.079496 / 0.075469 (0.004026) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.429543 / 1.841788 (-0.412245) | 18.052409 / 8.074308 (9.978101) | 16.989154 / 10.191392 (6.797762) | 0.208981 / 0.680424 (-0.471443) | 0.020490 / 0.534201 (-0.513711) | 0.502746 / 0.579283 (-0.076537) | 0.491769 / 0.434364 (0.057405) | 0.581970 / 0.540337 (0.041632) | 0.695816 / 1.386936 (-0.691120) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008449 / 0.011353 (-0.002904) | 0.006633 / 0.011008 (-0.004375) | 0.088638 / 0.038508 (0.050130) | 0.040013 / 0.023109 (0.016904) | 0.413108 / 0.275898 (0.137210) | 0.446310 / 0.323480 (0.122830) | 0.006515 / 0.007986 (-0.001471) | 0.006223 / 0.004328 (0.001894) | 0.089823 / 0.004250 (0.085573) | 0.052029 / 0.037052 (0.014977) | 0.407263 / 0.258489 (0.148774) | 0.449416 / 0.293841 (0.155576) | 0.041810 / 0.128546 (-0.086736) | 0.014604 / 0.075646 (-0.061042) | 0.103728 / 0.419271 (-0.315543) | 0.058212 / 0.043533 (0.014679) | 0.408936 / 0.255139 (0.153797) | 0.436727 / 0.283200 (0.153528) | 0.124344 / 0.141683 (-0.017339) | 1.752112 / 1.452155 (0.299957) | 1.859104 / 1.492716 (0.366387) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231172 / 0.018006 (0.213166) | 0.502974 / 0.000490 (0.502485) | 0.005586 / 0.000200 (0.005386) | 0.000137 / 0.000054 (0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034097 / 0.037411 (-0.003314) | 0.133780 / 0.014526 (0.119254) | 0.142321 / 0.176557 (-0.034236) | 0.199807 / 0.737135 (-0.537329) | 0.150073 / 0.296338 (-0.146266) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.515658 / 0.215209 (0.300449) | 5.129783 / 2.077655 (3.052129) | 2.534767 / 1.504120 (1.030648) | 2.352468 / 1.541195 (0.811274) | 2.430708 / 1.468490 (0.962218) | 0.850087 / 4.584777 (-3.734690) | 4.529622 / 3.745712 (0.783910) | 2.451986 / 5.269862 (-2.817876) | 1.569568 / 4.565676 (-2.996109) | 0.102907 / 0.424275 (-0.321368) | 0.014420 / 0.007607 (0.006813) | 0.635124 / 0.226044 (0.409080) | 6.260496 / 2.268929 (3.991568) | 3.094984 / 55.444624 (-52.349640) | 2.780629 / 6.876477 (-4.095847) | 2.947620 / 2.142072 (0.805548) | 1.002397 / 4.805227 (-3.802830) | 0.200502 / 6.500664 (-6.300162) | 0.076577 / 0.075469 (0.001107) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.505958 / 1.841788 (-0.335829) | 18.364986 / 8.074308 (10.290678) | 16.707214 / 10.191392 (6.515822) | 0.210976 / 0.680424 (-0.469447) | 0.022077 / 0.534201 (-0.512124) | 0.516174 / 0.579283 (-0.063109) | 0.502469 / 0.434364 (0.068105) | 0.626790 / 0.540337 (0.086453) | 0.747230 / 1.386936 (-0.639706) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bc5fef5b6d91f009e4101684adcb374df2c170f6 \"CML watermark\")\n"
] | 2023-04-28T10:10:01 | 2023-04-28T10:18:51 | 2023-04-28T10:10:29 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5804",
"html_url": "https://github.com/huggingface/datasets/pull/5804",
"diff_url": "https://github.com/huggingface/datasets/pull/5804.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5804.patch",
"merged_at": "2023-04-28T10:10:29"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5804/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5803 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5803/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5803/comments | https://api.github.com/repos/huggingface/datasets/issues/5803/events | https://github.com/huggingface/datasets/pull/5803 | 1,688,256,290 | PR_kwDODunzps5PXtte | 5,803 | Release: 2.12.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5803). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008303 / 0.011353 (-0.003050) | 0.005681 / 0.011008 (-0.005327) | 0.111830 / 0.038508 (0.073322) | 0.039222 / 0.023109 (0.016112) | 0.336773 / 0.275898 (0.060875) | 0.376673 / 0.323480 (0.053193) | 0.006756 / 0.007986 (-0.001230) | 0.006078 / 0.004328 (0.001749) | 0.083552 / 0.004250 (0.079301) | 0.054430 / 0.037052 (0.017377) | 0.337310 / 0.258489 (0.078821) | 0.386138 / 0.293841 (0.092297) | 0.040068 / 0.128546 (-0.088478) | 0.013895 / 0.075646 (-0.061751) | 0.384174 / 0.419271 (-0.035097) | 0.058244 / 0.043533 (0.014711) | 0.342410 / 0.255139 (0.087271) | 0.362417 / 0.283200 (0.079217) | 0.123470 / 0.141683 (-0.018213) | 1.662938 / 1.452155 (0.210784) | 1.786488 / 1.492716 (0.293771) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232629 / 0.018006 (0.214622) | 0.478252 / 0.000490 (0.477762) | 0.008519 / 0.000200 (0.008319) | 0.000111 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031222 / 0.037411 (-0.006190) | 0.125875 / 0.014526 (0.111350) | 0.138995 / 0.176557 (-0.037562) | 0.213073 / 0.737135 (-0.524062) | 0.141848 / 0.296338 (-0.154490) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463648 / 0.215209 (0.248439) | 4.582969 / 2.077655 (2.505314) | 2.104622 / 1.504120 (0.600502) | 1.887697 / 1.541195 (0.346502) | 1.946096 / 1.468490 
(0.477606) | 0.809008 / 4.584777 (-3.775769) | 4.527871 / 3.745712 (0.782159) | 4.862721 / 5.269862 (-0.407141) | 2.423257 / 4.565676 (-2.142419) | 0.101080 / 0.424275 (-0.323196) | 0.014767 / 0.007607 (0.007160) | 0.574471 / 0.226044 (0.348427) | 5.746445 / 2.268929 (3.477516) | 2.682584 / 55.444624 (-52.762040) | 2.320113 / 6.876477 (-4.556364) | 2.474530 / 2.142072 (0.332458) | 0.992979 / 4.805227 (-3.812249) | 0.200812 / 6.500664 (-6.299852) | 0.076291 / 0.075469 (0.000822) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.395533 / 1.841788 (-0.446254) | 17.418803 / 8.074308 (9.344495) | 16.584875 / 10.191392 (6.393483) | 0.167739 / 0.680424 (-0.512685) | 0.020923 / 0.534201 (-0.513278) | 0.500788 / 0.579283 (-0.078496) | 0.510270 / 0.434364 (0.075906) | 0.589608 / 0.540337 (0.049270) | 0.694233 / 1.386936 (-0.692703) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008440 / 0.011353 (-0.002913) | 0.005871 / 0.011008 (-0.005137) | 0.085805 / 0.038508 (0.047297) | 0.039324 / 0.023109 (0.016215) | 0.400587 / 0.275898 (0.124689) | 0.431729 / 0.323480 (0.108249) | 0.006557 / 0.007986 (-0.001429) | 0.005778 / 0.004328 (0.001450) | 0.084394 / 0.004250 (0.080144) | 0.055274 / 0.037052 (0.018222) | 0.410568 / 0.258489 (0.152079) | 0.439952 / 0.293841 (0.146111) | 0.040335 / 0.128546 (-0.088211) | 0.013968 / 0.075646 (-0.061679) | 0.098765 / 0.419271 (-0.320507) | 0.055897 / 0.043533 (0.012364) | 0.387584 / 0.255139 (0.132445) | 0.412568 / 0.283200 (0.129368) | 0.120393 / 0.141683 (-0.021290) | 1.730996 / 1.452155 (0.278841) | 1.821538 / 1.492716 (0.328822) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245688 / 0.018006 (0.227682) | 0.484888 / 0.000490 (0.484398) | 0.000485 / 0.000200 (0.000285) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032340 / 0.037411 (-0.005072) | 0.130819 / 0.014526 (0.116293) | 0.138491 / 0.176557 (-0.038065) | 0.196902 / 0.737135 (-0.540233) | 0.145404 / 0.296338 (-0.150935) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.487643 / 0.215209 (0.272434) | 4.818956 / 2.077655 (2.741301) | 2.332316 / 1.504120 (0.828196) | 2.102018 / 1.541195 (0.560823) | 2.156743 / 1.468490 (0.688253) | 0.803365 / 4.584777 (-3.781412) | 4.308561 / 3.745712 (0.562849) | 2.373331 / 5.269862 (-2.896530) | 1.539474 / 4.565676 (-3.026202) | 0.099081 / 0.424275 (-0.325194) | 0.014627 / 0.007607 (0.007020) | 0.609883 / 0.226044 (0.383838) | 6.092402 / 2.268929 (3.823474) | 2.858137 / 55.444624 (-52.586488) | 2.463256 / 6.876477 (-4.413220) | 2.637048 / 2.142072 (0.494976) | 0.959552 / 4.805227 (-3.845676) | 0.194170 / 6.500664 (-6.306495) | 0.075231 / 0.075469 (-0.000238) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.516502 / 1.841788 (-0.325285) | 18.077893 / 8.074308 (10.003585) | 16.507961 / 10.191392 (6.316569) | 0.171643 / 0.680424 (-0.508780) | 0.020378 / 0.534201 (-0.513823) | 0.491508 / 0.579283 (-0.087775) | 0.492136 / 0.434364 (0.057772) | 0.602258 / 0.540337 (0.061920) | 0.719882 / 1.386936 (-0.667054) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#330ac3e95fd3f2d61bac31b5b9c24399a5b54723 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006572 / 0.011353 (-0.004781) | 0.004647 / 0.011008 (-0.006362) | 0.098277 / 0.038508 (0.059769) | 0.027937 / 0.023109 (0.004828) | 0.339833 / 0.275898 (0.063935) | 0.398305 / 0.323480 (0.074825) | 0.005093 / 0.007986 (-0.002893) | 0.003374 / 0.004328 (-0.000954) | 0.075287 / 0.004250 (0.071037) | 0.037355 / 0.037052 (0.000303) | 0.339779 / 0.258489 (0.081290) | 0.403756 / 0.293841 (0.109915) | 0.030705 / 0.128546 (-0.097841) | 0.011596 / 0.075646 (-0.064050) | 0.323809 / 0.419271 (-0.095463) | 0.043357 / 0.043533 (-0.000176) | 0.342817 / 0.255139 (0.087678) | 0.386330 / 0.283200 (0.103130) | 0.088229 / 0.141683 (-0.053454) | 1.466017 / 1.452155 (0.013862) | 1.566551 / 1.492716 (0.073835) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196276 / 0.018006 (0.178269) | 0.420321 / 0.000490 (0.419831) | 0.002234 / 0.000200 (0.002034) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023999 / 0.037411 (-0.013412) | 0.095117 / 0.014526 (0.080592) | 0.102544 / 0.176557 (-0.074013) | 0.164796 / 0.737135 (-0.572340) | 0.107030 / 0.296338 (-0.189309) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429299 / 0.215209 (0.214089) | 4.272503 / 2.077655 (2.194849) | 2.101890 / 1.504120 (0.597771) | 1.978907 / 1.541195 (0.437713) | 2.008993 / 1.468490 
(0.540503) | 0.695171 / 4.584777 (-3.889606) | 3.427050 / 3.745712 (-0.318662) | 1.892945 / 5.269862 (-3.376917) | 1.247156 / 4.565676 (-3.318521) | 0.082576 / 0.424275 (-0.341699) | 0.012526 / 0.007607 (0.004918) | 0.526338 / 0.226044 (0.300293) | 5.313855 / 2.268929 (3.044927) | 2.421134 / 55.444624 (-53.023490) | 2.072026 / 6.876477 (-4.804451) | 2.159846 / 2.142072 (0.017773) | 0.800753 / 4.805227 (-4.004474) | 0.150507 / 6.500664 (-6.350157) | 0.066378 / 0.075469 (-0.009091) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218709 / 1.841788 (-0.623079) | 13.649239 / 8.074308 (5.574931) | 13.952762 / 10.191392 (3.761370) | 0.141967 / 0.680424 (-0.538457) | 0.016443 / 0.534201 (-0.517758) | 0.380408 / 0.579283 (-0.198875) | 0.377693 / 0.434364 (-0.056671) | 0.439819 / 0.540337 (-0.100518) | 0.529667 / 1.386936 (-0.857269) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006722 / 0.011353 (-0.004630) | 0.004495 / 0.011008 (-0.006513) | 0.075459 / 0.038508 (0.036951) | 0.028135 / 0.023109 (0.005026) | 0.349904 / 0.275898 (0.074006) | 0.390620 / 0.323480 (0.067140) | 0.005175 / 0.007986 (-0.002810) | 0.004720 / 0.004328 (0.000392) | 0.074243 / 0.004250 (0.069993) | 0.039084 / 0.037052 (0.002032) | 0.352486 / 0.258489 (0.093997) | 0.397549 / 0.293841 (0.103708) | 0.030596 / 0.128546 (-0.097950) | 0.011627 / 0.075646 (-0.064020) | 0.083394 / 0.419271 (-0.335878) | 0.042155 / 0.043533 (-0.001378) | 0.345668 / 0.255139 (0.090529) | 0.383474 / 0.283200 (0.100275) | 0.096530 / 0.141683 (-0.045153) | 1.493360 / 1.452155 (0.041206) | 1.572259 / 1.492716 (0.079543) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.162605 / 0.018006 (0.144599) | 0.409513 / 0.000490 (0.409023) | 0.002029 / 0.000200 (0.001829) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025824 / 0.037411 (-0.011588) | 0.102439 / 0.014526 (0.087913) | 0.109515 / 0.176557 (-0.067041) | 0.160650 / 0.737135 (-0.576486) | 0.112971 / 0.296338 (-0.183367) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433293 / 0.215209 (0.218084) | 4.340286 / 2.077655 (2.262631) | 2.055857 / 1.504120 (0.551737) | 1.854451 / 1.541195 (0.313256) | 1.912752 / 1.468490 (0.444261) | 0.700076 / 4.584777 (-3.884701) | 3.361542 / 3.745712 (-0.384170) | 2.760204 / 5.269862 (-2.509658) | 1.477395 / 4.565676 (-3.088282) | 0.082868 / 0.424275 (-0.341407) | 0.012479 / 0.007607 (0.004872) | 0.532749 / 0.226044 (0.306704) | 5.323701 / 2.268929 (3.054772) | 2.509524 / 55.444624 (-52.935100) | 2.168668 / 6.876477 (-4.707809) | 2.259112 / 2.142072 (0.117040) | 0.806686 / 4.805227 (-3.998542) | 0.154620 / 6.500664 (-6.346044) | 0.068348 / 0.075469 (-0.007121) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.316512 / 1.841788 (-0.525276) | 14.158143 / 8.074308 (6.083835) | 14.110643 / 10.191392 (3.919251) | 0.143760 / 0.680424 (-0.536664) | 0.016851 / 0.534201 (-0.517350) | 0.376594 / 0.579283 (-0.202689) | 0.386957 / 0.434364 (-0.047407) | 0.466185 / 0.540337 (-0.074152) | 0.550269 / 1.386936 (-0.836667) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8e1af7b30c94ce77abd9de732f19198e197d900c \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009457 / 0.011353 (-0.001896) | 0.006453 / 0.011008 (-0.004555) | 0.136392 / 0.038508 (0.097884) | 0.038378 / 0.023109 (0.015269) | 0.413171 / 0.275898 (0.137273) | 0.451605 / 0.323480 (0.128126) | 0.007123 / 0.007986 (-0.000863) | 0.006316 / 0.004328 (0.001987) | 0.103009 / 0.004250 (0.098758) | 0.049182 / 0.037052 (0.012130) | 0.398635 / 0.258489 (0.140146) | 0.463146 / 0.293841 (0.169305) | 0.056247 / 0.128546 (-0.072299) | 0.019589 / 0.075646 (-0.056058) | 0.475882 / 0.419271 (0.056610) | 0.094918 / 0.043533 (0.051385) | 0.416502 / 0.255139 (0.161363) | 0.447129 / 0.283200 (0.163929) | 0.133314 / 0.141683 (-0.008369) | 2.132888 / 1.452155 (0.680733) | 2.073383 / 1.492716 (0.580667) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273037 / 0.018006 (0.255030) | 0.625675 / 0.000490 (0.625185) | 0.003449 / 0.000200 (0.003249) | 0.000185 / 0.000054 (0.000130) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031889 / 0.037411 (-0.005523) | 0.131673 / 0.014526 (0.117148) | 0.141575 / 0.176557 (-0.034982) | 0.214978 / 0.737135 (-0.522158) | 0.145586 / 0.296338 (-0.150752) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.711135 / 0.215209 (0.495926) | 7.162492 / 2.077655 (5.084837) | 2.906028 / 1.504120 (1.401908) | 2.488855 / 1.541195 (0.947660) | 2.574628 / 1.468490 
(1.106138) | 1.587824 / 4.584777 (-2.996953) | 6.332962 / 3.745712 (2.587250) | 5.419578 / 5.269862 (0.149717) | 2.935413 / 4.565676 (-1.630263) | 0.169159 / 0.424275 (-0.255116) | 0.015358 / 0.007607 (0.007751) | 0.862036 / 0.226044 (0.635992) | 8.559256 / 2.268929 (6.290328) | 3.530756 / 55.444624 (-51.913868) | 2.626288 / 6.876477 (-4.250188) | 2.770063 / 2.142072 (0.627990) | 1.500116 / 4.805227 (-3.305112) | 0.265109 / 6.500664 (-6.235555) | 0.084944 / 0.075469 (0.009475) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631060 / 1.841788 (-0.210728) | 19.022827 / 8.074308 (10.948519) | 22.973632 / 10.191392 (12.782240) | 0.296265 / 0.680424 (-0.384158) | 0.032317 / 0.534201 (-0.501884) | 0.624171 / 0.579283 (0.044888) | 0.690643 / 0.434364 (0.256279) | 0.691206 / 0.540337 (0.150869) | 0.758855 / 1.386936 (-0.628081) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009441 / 0.011353 (-0.001912) | 0.006270 / 0.011008 (-0.004739) | 0.110284 / 0.038508 (0.071776) | 0.035952 / 0.023109 (0.012842) | 0.521894 / 0.275898 (0.245996) | 0.582624 / 0.323480 (0.259144) | 0.011400 / 0.007986 (0.003414) | 0.004677 / 0.004328 (0.000348) | 0.115721 / 0.004250 (0.111470) | 0.048521 / 0.037052 (0.011469) | 0.497142 / 0.258489 (0.238653) | 0.573733 / 0.293841 (0.279892) | 0.055788 / 0.128546 (-0.072759) | 0.020949 / 0.075646 (-0.054697) | 0.132968 / 0.419271 (-0.286303) | 0.063045 / 0.043533 (0.019512) | 0.537769 / 0.255139 (0.282630) | 0.527560 / 0.283200 (0.244361) | 0.123756 / 0.141683 (-0.017927) | 1.994111 / 1.452155 (0.541956) | 2.104623 / 1.492716 (0.611907) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.279057 / 0.018006 (0.261051) | 0.537342 / 0.000490 (0.536852) | 0.007782 / 0.000200 (0.007582) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032018 / 0.037411 (-0.005394) | 0.133456 / 0.014526 (0.118930) | 0.142039 / 0.176557 (-0.034517) | 0.213769 / 0.737135 (-0.523366) | 0.143811 / 0.296338 (-0.152527) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.680142 / 0.215209 (0.464933) | 6.450439 / 2.077655 (4.372784) | 2.820724 / 1.504120 (1.316604) | 2.520407 / 1.541195 (0.979212) | 2.568972 / 1.468490 (1.100482) | 1.250584 / 4.584777 (-3.334193) | 6.108222 / 3.745712 (2.362509) | 3.065965 / 5.269862 (-2.203897) | 2.108675 / 4.565676 (-2.457002) | 0.167870 / 0.424275 (-0.256405) | 0.015127 / 0.007607 (0.007520) | 0.849645 / 0.226044 (0.623600) | 8.508727 / 2.268929 (6.239799) | 3.707897 / 55.444624 (-51.736727) | 3.009279 / 6.876477 (-3.867198) | 3.067179 / 2.142072 (0.925106) | 1.516370 / 4.805227 (-3.288858) | 0.264845 / 6.500664 (-6.235819) | 0.095137 / 0.075469 (0.019668) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.826306 / 1.841788 (-0.015481) | 20.119641 / 8.074308 (12.045333) | 21.532158 / 10.191392 (11.340766) | 0.278631 / 0.680424 (-0.401793) | 0.029494 / 0.534201 (-0.504707) | 0.621887 / 0.579283 (0.042604) | 0.686864 / 0.434364 (0.252500) | 0.695412 / 0.540337 (0.155074) | 0.864829 / 1.386936 (-0.522108) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8e1af7b30c94ce77abd9de732f19198e197d900c \"CML watermark\")\n"
] | 2023-04-28T09:52:11 | 2023-04-28T10:18:56 | 2023-04-28T09:54:43 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5803",
"html_url": "https://github.com/huggingface/datasets/pull/5803",
"diff_url": "https://github.com/huggingface/datasets/pull/5803.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5803.patch",
"merged_at": "2023-04-28T09:54:43"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5803/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5802/comments | https://api.github.com/repos/huggingface/datasets/issues/5802/events | https://github.com/huggingface/datasets/pull/5802 | 1,686,509,799 | PR_kwDODunzps5PR199 | 5,802 | Validate non-empty data_files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007818 / 0.011353 (-0.003535) | 0.005456 / 0.011008 (-0.005552) | 0.114685 / 0.038508 (0.076177) | 0.038398 / 0.023109 (0.015289) | 0.351289 / 0.275898 (0.075391) | 0.389170 / 0.323480 (0.065690) | 0.006213 / 0.007986 (-0.001773) | 0.005796 / 0.004328 (0.001467) | 0.085315 / 0.004250 (0.081065) | 0.049251 / 0.037052 (0.012198) | 0.368119 / 0.258489 (0.109630) | 0.394725 / 0.293841 (0.100884) | 0.040390 / 0.128546 (-0.088157) | 0.014076 / 0.075646 (-0.061570) | 0.393771 / 0.419271 (-0.025500) | 0.058929 / 0.043533 (0.015397) | 0.349526 / 0.255139 (0.094387) | 0.378409 / 0.283200 (0.095210) | 0.114354 / 0.141683 (-0.027329) | 1.749244 / 1.452155 (0.297089) | 1.847946 / 1.492716 (0.355229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241648 / 0.018006 (0.223641) | 0.468419 / 0.000490 (0.467929) | 0.004311 / 0.000200 (0.004111) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029978 / 0.037411 (-0.007433) | 0.121832 / 0.014526 (0.107306) | 0.133516 / 0.176557 (-0.043041) | 0.199174 / 0.737135 (-0.537961) | 0.138181 / 0.296338 (-0.158158) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478346 / 0.215209 (0.263137) | 4.723967 / 2.077655 (2.646312) | 2.107724 / 1.504120 (0.603604) | 1.874810 / 1.541195 (0.333615) | 1.911568 / 1.468490 
(0.443078) | 0.800966 / 4.584777 (-3.783811) | 4.399032 / 3.745712 (0.653320) | 2.346160 / 5.269862 (-2.923702) | 1.506673 / 4.565676 (-3.059004) | 0.099119 / 0.424275 (-0.325156) | 0.014055 / 0.007607 (0.006448) | 0.582419 / 0.226044 (0.356375) | 5.789147 / 2.268929 (3.520218) | 2.632443 / 55.444624 (-52.812182) | 2.217630 / 6.876477 (-4.658846) | 2.337709 / 2.142072 (0.195637) | 0.995345 / 4.805227 (-3.809882) | 0.200040 / 6.500664 (-6.300624) | 0.076855 / 0.075469 (0.001386) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.386104 / 1.841788 (-0.455683) | 17.109772 / 8.074308 (9.035464) | 16.147612 / 10.191392 (5.956220) | 0.162846 / 0.680424 (-0.517577) | 0.020692 / 0.534201 (-0.513509) | 0.495752 / 0.579283 (-0.083531) | 0.475715 / 0.434364 (0.041351) | 0.619826 / 0.540337 (0.079488) | 0.720745 / 1.386936 (-0.666191) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008255 / 0.011353 (-0.003098) | 0.006118 / 0.011008 (-0.004890) | 0.088004 / 0.038508 (0.049496) | 0.039225 / 0.023109 (0.016116) | 0.399290 / 0.275898 (0.123392) | 0.432272 / 0.323480 (0.108792) | 0.007382 / 0.007986 (-0.000603) | 0.004576 / 0.004328 (0.000248) | 0.086511 / 0.004250 (0.082260) | 0.050472 / 0.037052 (0.013420) | 0.404160 / 0.258489 (0.145671) | 0.445356 / 0.293841 (0.151515) | 0.041549 / 0.128546 (-0.086997) | 0.014148 / 0.075646 (-0.061498) | 0.101697 / 0.419271 (-0.317574) | 0.057474 / 0.043533 (0.013941) | 0.395093 / 0.255139 (0.139954) | 0.418613 / 0.283200 (0.135414) | 0.123217 / 0.141683 (-0.018466) | 1.726146 / 1.452155 (0.273991) | 1.852746 / 1.492716 (0.360029) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256876 / 0.018006 (0.238870) | 0.476336 / 0.000490 (0.475846) | 0.000465 / 0.000200 (0.000265) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034304 / 0.037411 (-0.003107) | 0.132617 / 0.014526 (0.118091) | 0.141712 / 0.176557 (-0.034845) | 0.198101 / 0.737135 (-0.539034) | 0.150877 / 0.296338 (-0.145461) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.504717 / 0.215209 (0.289508) | 5.035060 / 2.077655 (2.957405) | 2.494812 / 1.504120 (0.990692) | 2.306601 / 1.541195 (0.765406) | 2.481860 / 1.468490 (1.013370) | 0.826041 / 4.584777 (-3.758736) | 4.414748 / 3.745712 (0.669036) | 2.417899 / 5.269862 (-2.851963) | 1.574548 / 4.565676 (-2.991128) | 0.101712 / 0.424275 (-0.322563) | 0.014388 / 0.007607 (0.006781) | 0.616674 / 0.226044 (0.390630) | 6.180382 / 2.268929 (3.911453) | 2.969110 / 55.444624 (-52.475514) | 2.574383 / 6.876477 (-4.302094) | 2.711008 / 2.142072 (0.568935) | 0.997679 / 4.805227 (-3.807548) | 0.201241 / 6.500664 (-6.299423) | 0.076132 / 0.075469 (0.000663) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.542704 / 1.841788 (-0.299084) | 17.610700 / 8.074308 (9.536392) | 16.152973 / 10.191392 (5.961581) | 0.166040 / 0.680424 (-0.514384) | 0.020286 / 0.534201 (-0.513915) | 0.506724 / 0.579283 (-0.072559) | 0.484348 / 0.434364 (0.049984) | 0.606524 / 0.540337 (0.066187) | 0.734997 / 1.386936 (-0.651939) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a200ec9126a0879f3d38d4e9e3787633a23af42e \"CML watermark\")\n"
] | 2023-04-27T09:51:36 | 2023-04-27T14:59:47 | 2023-04-27T14:51:40 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5802",
"html_url": "https://github.com/huggingface/datasets/pull/5802",
"diff_url": "https://github.com/huggingface/datasets/pull/5802.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5802.patch",
"merged_at": "2023-04-27T14:51:40"
} | This PR adds validation of `data_files`, so that they are either non-empty (a str, list, or dict) or `None` (the default).
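As an illustration, a minimal sketch of the kind of check this describes — the helper name and error message are assumptions, not the merged code:
```python
# Hypothetical helper illustrating the validation; not the actual merged code.
def _check_non_empty_data_files(data_files):
    """Raise if data_files is an empty str, list, or dict instead of None."""
    if data_files is not None and not data_files:
        raise ValueError(
            f"Unsupported empty data_files: {data_files!r}. "
            "Pass a non-empty str, list, or dict, or None (the default)."
        )
```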
See: https://github.com/huggingface/datasets/pull/5787#discussion_r1178862327 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5802/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5800 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5800/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5800/comments | https://api.github.com/repos/huggingface/datasets/issues/5800/events | https://github.com/huggingface/datasets/pull/5800 | 1,686,348,096 | PR_kwDODunzps5PRTRh | 5,800 | Change downloaded file permission based on umask | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2023-04-27T08:13:30 | 2023-04-27T09:33:05 | 2023-04-27T09:30:16 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5800",
"html_url": "https://github.com/huggingface/datasets/pull/5800",
"diff_url": "https://github.com/huggingface/datasets/pull/5800.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5800.patch",
"merged_at": "2023-04-27T09:30:16"
} | This PR changes the permissions of files downloaded to the cache, so that the umask is taken into account.
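A minimal sketch of the approach, using the standard set-and-restore idiom to read the process umask (Python exposes no read-only getter); this is an illustration, not the exact merged code:
```python
import os

def chmod_with_umask(path: str) -> None:
    # os.umask() has no getter: set a dummy value to read the current
    # umask, then restore it immediately.
    umask = os.umask(0o666)
    os.umask(umask)
    # Apply the same permissions a plain open() honoring the umask would yield.
    os.chmod(path, 0o666 & ~umask)
```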
Related to:
- #2157
Fix #5799.
CC: @stas00 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5800/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5799 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5799/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5799/comments | https://api.github.com/repos/huggingface/datasets/issues/5799/events | https://github.com/huggingface/datasets/issues/5799 | 1,686,334,572 | I_kwDODunzps5kg2xs | 5,799 | Files downloaded to cache do not respect umask | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-04-27T08:06:05 | 2023-04-27T09:30:17 | 2023-04-27T09:30:17 | MEMBER | null | null | null | As reported by @stas00, files downloaded to the cache do not respect umask:
```bash
$ ls -l /path/to/cache/datasets/downloads/
-rw------- 1 username username 150M Apr 25 16:41 5e646c1d600f065adaeb134e536f6f2f296a6d804bd1f0e1fdcd20ee28c185c6
```
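With a typical umask of `0o022` one would expect `0o644` (`-rw-r--r--`) here rather than `0o600`. A hedged snippet to check this on any cached file (the placeholder path is the one from the listing above):
```python
import os
import stat

# Read the process umask (set-and-restore, since there is no getter).
umask = os.umask(0)
os.umask(umask)

path = "/path/to/cache/datasets/downloads/5e646c1d600f065adaeb134e536f6f2f296a6d804bd1f0e1fdcd20ee28c185c6"
expected = 0o666 & ~umask                     # mode the umask allows for new files
actual = stat.S_IMODE(os.stat(path).st_mode)  # mode the cached file actually has
print(f"expected {oct(expected)}, got {oct(actual)}")
```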
Related to:
- #2065 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5799/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5796 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5796/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5796/comments | https://api.github.com/repos/huggingface/datasets/issues/5796/events | https://github.com/huggingface/datasets/pull/5796 | 1,685,451,919 | PR_kwDODunzps5PORm- | 5,796 | Spark docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010480 / 0.011353 (-0.000872) | 0.006743 / 0.011008 (-0.004265) | 0.126503 / 0.038508 (0.087995) | 0.036918 / 0.023109 (0.013808) | 0.387372 / 0.275898 (0.111474) | 0.456930 / 0.323480 (0.133450) | 0.008038 / 0.007986 (0.000052) | 0.005082 / 0.004328 (0.000753) | 0.093312 / 0.004250 (0.089062) | 0.065440 / 0.037052 (0.028387) | 0.378172 / 0.258489 (0.119683) | 0.430049 / 0.293841 (0.136208) | 0.054372 / 0.128546 (-0.074174) | 0.021875 / 0.075646 (-0.053772) | 0.441722 / 0.419271 (0.022450) | 0.063716 / 0.043533 (0.020183) | 0.375718 / 0.255139 (0.120579) | 0.413688 / 0.283200 (0.130488) | 0.122583 / 0.141683 (-0.019100) | 1.835992 / 1.452155 (0.383838) | 1.915862 / 1.492716 (0.423145) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275305 / 0.018006 (0.257299) | 0.617170 / 0.000490 (0.616680) | 0.006467 / 0.000200 (0.006267) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031057 / 0.037411 (-0.006354) | 0.135178 / 0.014526 (0.120653) | 0.139265 / 0.176557 (-0.037292) | 0.221597 / 0.737135 (-0.515538) | 0.147632 / 0.296338 (-0.148706) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.640621 / 0.215209 (0.425411) | 6.354359 / 2.077655 (4.276704) | 2.748945 / 1.504120 (1.244825) | 2.396637 / 1.541195 (0.855442) | 2.395193 / 1.468490 
(0.926703) | 1.209604 / 4.584777 (-3.375173) | 5.626901 / 3.745712 (1.881189) | 3.300941 / 5.269862 (-1.968920) | 2.123598 / 4.565676 (-2.442078) | 0.144270 / 0.424275 (-0.280005) | 0.015114 / 0.007607 (0.007507) | 0.812352 / 0.226044 (0.586307) | 8.024250 / 2.268929 (5.755322) | 3.557589 / 55.444624 (-51.887036) | 2.840632 / 6.876477 (-4.035845) | 3.152319 / 2.142072 (1.010246) | 1.447232 / 4.805227 (-3.357995) | 0.251740 / 6.500664 (-6.248924) | 0.083725 / 0.075469 (0.008256) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.568032 / 1.841788 (-0.273755) | 18.463860 / 8.074308 (10.389552) | 21.217395 / 10.191392 (11.026003) | 0.228457 / 0.680424 (-0.451967) | 0.031398 / 0.534201 (-0.502803) | 0.547627 / 0.579283 (-0.031656) | 0.642921 / 0.434364 (0.208557) | 0.687857 / 0.540337 (0.147520) | 0.800940 / 1.386936 (-0.585996) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009933 / 0.011353 (-0.001420) | 0.006065 / 0.011008 (-0.004943) | 0.102556 / 0.038508 (0.064048) | 0.034646 / 0.023109 (0.011537) | 0.437951 / 0.275898 (0.162053) | 0.482439 / 0.323480 (0.158959) | 0.007715 / 0.007986 (-0.000271) | 0.007426 / 0.004328 (0.003098) | 0.096427 / 0.004250 (0.092177) | 0.052983 / 0.037052 (0.015930) | 0.464533 / 0.258489 (0.206044) | 0.484848 / 0.293841 (0.191007) | 0.050415 / 0.128546 (-0.078131) | 0.021001 / 0.075646 (-0.054645) | 0.121214 / 0.419271 (-0.298058) | 0.061658 / 0.043533 (0.018125) | 0.431898 / 0.255139 (0.176759) | 0.482106 / 0.283200 (0.198907) | 0.128524 / 0.141683 (-0.013159) | 1.775714 / 1.452155 (0.323559) | 1.904738 / 1.492716 (0.412021) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287641 / 0.018006 (0.269635) | 0.600667 / 0.000490 (0.600178) | 0.005097 / 0.000200 (0.004897) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032836 / 0.037411 (-0.004575) | 0.133114 / 0.014526 (0.118588) | 0.150874 / 0.176557 (-0.025683) | 0.217069 / 0.737135 (-0.520066) | 0.160387 / 0.296338 (-0.135951) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.668444 / 0.215209 (0.453235) | 6.240015 / 2.077655 (4.162360) | 2.808661 / 1.504120 (1.304542) | 2.336550 / 1.541195 (0.795356) | 2.538973 / 1.468490 (1.070483) | 1.189292 / 4.584777 (-3.395485) | 5.781028 / 3.745712 (2.035315) | 3.149895 / 5.269862 (-2.119967) | 2.130646 / 4.565676 (-2.435030) | 0.144944 / 0.424275 (-0.279331) | 0.014650 / 0.007607 (0.007043) | 0.792313 / 0.226044 (0.566269) | 7.933108 / 2.268929 (5.664180) | 3.527527 / 55.444624 (-51.917098) | 2.864271 / 6.876477 (-4.012205) | 3.098330 / 2.142072 (0.956258) | 1.421208 / 4.805227 (-3.384019) | 0.255638 / 6.500664 (-6.245026) | 0.086971 / 0.075469 (0.011502) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.585317 / 1.841788 (-0.256471) | 18.643133 / 8.074308 (10.568825) | 21.921256 / 10.191392 (11.729864) | 0.215493 / 0.680424 (-0.464931) | 0.028348 / 0.534201 (-0.505853) | 0.556925 / 0.579283 (-0.022358) | 0.631480 / 0.434364 (0.197116) | 0.654026 / 0.540337 (0.113689) | 0.799727 / 1.386936 (-0.587209) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#62520514b524b5904c7e4f0beddab1971212a96a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006516 / 0.011353 (-0.004837) | 0.004500 / 0.011008 (-0.006509) | 0.097639 / 0.038508 (0.059131) | 0.028336 / 0.023109 (0.005227) | 0.377263 / 0.275898 (0.101365) | 0.409209 / 0.323480 (0.085729) | 0.004832 / 0.007986 (-0.003154) | 0.004629 / 0.004328 (0.000301) | 0.075046 / 0.004250 (0.070795) | 0.034080 / 0.037052 (-0.002972) | 0.377565 / 0.258489 (0.119076) | 0.419204 / 0.293841 (0.125363) | 0.030343 / 0.128546 (-0.098203) | 0.011465 / 0.075646 (-0.064182) | 0.322777 / 0.419271 (-0.096494) | 0.043774 / 0.043533 (0.000241) | 0.375808 / 0.255139 (0.120669) | 0.402665 / 0.283200 (0.119465) | 0.086811 / 0.141683 (-0.054872) | 1.518686 / 1.452155 (0.066531) | 1.540381 / 1.492716 (0.047664) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197730 / 0.018006 (0.179724) | 0.409285 / 0.000490 (0.408795) | 0.004739 / 0.000200 (0.004539) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022974 / 0.037411 (-0.014437) | 0.096843 / 0.014526 (0.082317) | 0.103241 / 0.176557 (-0.073316) | 0.163691 / 0.737135 (-0.573444) | 0.107905 / 0.296338 (-0.188433) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.449408 / 0.215209 (0.234199) | 4.501375 / 2.077655 (2.423720) | 2.181491 / 1.504120 (0.677371) | 1.986153 / 1.541195 (0.444958) | 2.024735 / 1.468490 
(0.556245) | 0.695368 / 4.584777 (-3.889409) | 3.416912 / 3.745712 (-0.328800) | 1.893343 / 5.269862 (-3.376519) | 1.275535 / 4.565676 (-3.290142) | 0.082772 / 0.424275 (-0.341503) | 0.012365 / 0.007607 (0.004758) | 0.553859 / 0.226044 (0.327814) | 5.540014 / 2.268929 (3.271085) | 2.634298 / 55.444624 (-52.810326) | 2.286686 / 6.876477 (-4.589790) | 2.384402 / 2.142072 (0.242330) | 0.806413 / 4.805227 (-3.998814) | 0.151757 / 6.500664 (-6.348907) | 0.067155 / 0.075469 (-0.008314) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198776 / 1.841788 (-0.643012) | 13.517434 / 8.074308 (5.443126) | 13.926300 / 10.191392 (3.734908) | 0.141887 / 0.680424 (-0.538537) | 0.016571 / 0.534201 (-0.517630) | 0.383179 / 0.579283 (-0.196104) | 0.395189 / 0.434364 (-0.039175) | 0.479635 / 0.540337 (-0.060702) | 0.570576 / 1.386936 (-0.816360) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006691 / 0.011353 (-0.004662) | 0.004634 / 0.011008 (-0.006375) | 0.077087 / 0.038508 (0.038579) | 0.028281 / 0.023109 (0.005172) | 0.340108 / 0.275898 (0.064210) | 0.370611 / 0.323480 (0.047131) | 0.004997 / 0.007986 (-0.002988) | 0.003336 / 0.004328 (-0.000992) | 0.074814 / 0.004250 (0.070563) | 0.039001 / 0.037052 (0.001948) | 0.344225 / 0.258489 (0.085736) | 0.380621 / 0.293841 (0.086780) | 0.030858 / 0.128546 (-0.097689) | 0.011623 / 0.075646 (-0.064023) | 0.085016 / 0.419271 (-0.334256) | 0.042378 / 0.043533 (-0.001155) | 0.341428 / 0.255139 (0.086289) | 0.364823 / 0.283200 (0.081624) | 0.096695 / 0.141683 (-0.044988) | 1.527683 / 1.452155 (0.075528) | 1.585361 / 1.492716 (0.092645) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184280 / 0.018006 (0.166274) | 0.397845 / 0.000490 (0.397355) | 0.004415 / 0.000200 (0.004215) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024296 / 0.037411 (-0.013115) | 0.101053 / 0.014526 (0.086527) | 0.108968 / 0.176557 (-0.067589) | 0.155732 / 0.737135 (-0.581403) | 0.112604 / 0.296338 (-0.183735) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440819 / 0.215209 (0.225609) | 4.394017 / 2.077655 (2.316363) | 2.092456 / 1.504120 (0.588336) | 1.880186 / 1.541195 (0.338991) | 1.918035 / 1.468490 (0.449545) | 0.698059 / 4.584777 (-3.886718) | 3.422598 / 3.745712 (-0.323114) | 1.860465 / 5.269862 (-3.409396) | 1.157788 / 4.565676 (-3.407889) | 0.083566 / 0.424275 (-0.340709) | 0.012440 / 0.007607 (0.004832) | 0.549526 / 0.226044 (0.323481) | 5.500623 / 2.268929 (3.231694) | 2.546980 / 55.444624 (-52.897644) | 2.199527 / 6.876477 (-4.676949) | 2.297276 / 2.142072 (0.155203) | 0.801580 / 4.805227 (-4.003648) | 0.151842 / 6.500664 (-6.348822) | 0.067165 / 0.075469 (-0.008305) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.329097 / 1.841788 (-0.512691) | 13.830354 / 8.074308 (5.756046) | 14.155250 / 10.191392 (3.963858) | 0.144517 / 0.680424 (-0.535907) | 0.016738 / 0.534201 (-0.517463) | 0.379337 / 0.579283 (-0.199946) | 0.391382 / 0.434364 (-0.042982) | 0.459153 / 0.540337 (-0.081184) | 0.547287 / 1.386936 (-0.839649) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2efb0289c887ec60d54e0715cd85c111cb45f9ee \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007176 / 0.011353 (-0.004177) | 0.005125 / 0.011008 (-0.005883) | 0.096060 / 0.038508 (0.057552) | 0.033262 / 0.023109 (0.010152) | 0.311461 / 0.275898 (0.035563) | 0.340673 / 0.323480 (0.017193) | 0.005700 / 0.007986 (-0.002286) | 0.005223 / 0.004328 (0.000894) | 0.072812 / 0.004250 (0.068561) | 0.042078 / 0.037052 (0.005025) | 0.320042 / 0.258489 (0.061553) | 0.346539 / 0.293841 (0.052698) | 0.035284 / 0.128546 (-0.093262) | 0.012021 / 0.075646 (-0.063625) | 0.331555 / 0.419271 (-0.087717) | 0.051058 / 0.043533 (0.007525) | 0.303001 / 0.255139 (0.047862) | 0.328431 / 0.283200 (0.045231) | 0.100954 / 0.141683 (-0.040729) | 1.407445 / 1.452155 (-0.044710) | 1.512826 / 1.492716 (0.020110) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216442 / 0.018006 (0.198436) | 0.446298 / 0.000490 (0.445809) | 0.004701 / 0.000200 (0.004501) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028088 / 0.037411 (-0.009324) | 0.108669 / 0.014526 (0.094144) | 0.119597 / 0.176557 (-0.056960) | 0.178249 / 0.737135 (-0.558886) | 0.123914 / 0.296338 (-0.172424) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413437 / 0.215209 (0.198228) | 4.136602 / 2.077655 (2.058947) | 1.875872 / 1.504120 (0.371752) | 1.680783 / 1.541195 (0.139588) | 1.757059 / 1.468490 
(0.288569) | 0.711080 / 4.584777 (-3.873697) | 3.791701 / 3.745712 (0.045989) | 2.111612 / 5.269862 (-3.158250) | 1.351204 / 4.565676 (-3.214473) | 0.086477 / 0.424275 (-0.337798) | 0.012359 / 0.007607 (0.004752) | 0.504984 / 0.226044 (0.278940) | 5.040456 / 2.268929 (2.771527) | 2.266946 / 55.444624 (-53.177679) | 1.957827 / 6.876477 (-4.918650) | 2.120490 / 2.142072 (-0.021583) | 0.856148 / 4.805227 (-3.949079) | 0.172414 / 6.500664 (-6.328250) | 0.066833 / 0.075469 (-0.008636) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198163 / 1.841788 (-0.643625) | 14.944930 / 8.074308 (6.870622) | 14.317196 / 10.191392 (4.125804) | 0.166104 / 0.680424 (-0.514320) | 0.017443 / 0.534201 (-0.516758) | 0.423025 / 0.579283 (-0.156258) | 0.437476 / 0.434364 (0.003112) | 0.500156 / 0.540337 (-0.040181) | 0.606226 / 1.386936 (-0.780710) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007417 / 0.011353 (-0.003936) | 0.005143 / 0.011008 (-0.005865) | 0.076401 / 0.038508 (0.037893) | 0.034818 / 0.023109 (0.011709) | 0.339633 / 0.275898 (0.063735) | 0.373839 / 0.323480 (0.050359) | 0.006004 / 0.007986 (-0.001982) | 0.005403 / 0.004328 (0.001075) | 0.074150 / 0.004250 (0.069899) | 0.050489 / 0.037052 (0.013436) | 0.343357 / 0.258489 (0.084868) | 0.377009 / 0.293841 (0.083168) | 0.035921 / 0.128546 (-0.092625) | 0.012197 / 0.075646 (-0.063449) | 0.087992 / 0.419271 (-0.331279) | 0.049452 / 0.043533 (0.005919) | 0.340495 / 0.255139 (0.085356) | 0.360277 / 0.283200 (0.077077) | 0.111114 / 0.141683 (-0.030569) | 1.463888 / 1.452155 (0.011734) | 1.548320 / 1.492716 (0.055604) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228437 / 0.018006 (0.210431) | 0.445120 / 0.000490 (0.444631) | 0.000392 / 0.000200 (0.000192) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029965 / 0.037411 (-0.007446) | 0.113484 / 0.014526 (0.098958) | 0.125249 / 0.176557 (-0.051308) | 0.177201 / 0.737135 (-0.559934) | 0.128750 / 0.296338 (-0.167589) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420089 / 0.215209 (0.204880) | 4.195772 / 2.077655 (2.118117) | 2.021539 / 1.504120 (0.517419) | 1.825118 / 1.541195 (0.283924) | 1.904090 / 1.468490 (0.435600) | 0.716276 / 4.584777 (-3.868501) | 3.742257 / 3.745712 (-0.003455) | 3.368880 / 5.269862 (-1.900981) | 1.728285 / 4.565676 (-2.837392) | 0.087656 / 0.424275 (-0.336619) | 0.012263 / 0.007607 (0.004656) | 0.524321 / 0.226044 (0.298277) | 5.217610 / 2.268929 (2.948682) | 2.474670 / 55.444624 (-52.969955) | 2.135452 / 6.876477 (-4.741025) | 2.292578 / 2.142072 (0.150505) | 0.852109 / 4.805227 (-3.953119) | 0.172031 / 6.500664 (-6.328633) | 0.065230 / 0.075469 (-0.010240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260494 / 1.841788 (-0.581293) | 15.019167 / 8.074308 (6.944859) | 14.647586 / 10.191392 (4.456193) | 0.170578 / 0.680424 (-0.509846) | 0.017619 / 0.534201 (-0.516582) | 0.423116 / 0.579283 (-0.156167) | 0.426680 / 0.434364 (-0.007684) | 0.519563 / 0.540337 (-0.020775) | 0.619335 / 1.386936 (-0.767601) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e210dc20c19b5e6af05df9ca6e82984dfb42465f \"CML watermark\")\n"
] | 2023-04-26T17:39:43 | 2023-04-27T16:41:50 | 2023-04-27T16:34:45 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5796",
"html_url": "https://github.com/huggingface/datasets/pull/5796",
"diff_url": "https://github.com/huggingface/datasets/pull/5796.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5796.patch",
"merged_at": "2023-04-27T16:34:45"
} | Added a "Use with Spark" doc page to document `Dataset.from_spark` following https://github.com/huggingface/datasets/pull/5701
cc @maddiedawson | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5796/timeline | null | null | true |
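The PR body above adds a doc page for `Dataset.from_spark`. As a minimal, hedged sketch of what that page covers — the Spark app name, column names, and row values below are invented for illustration, and it assumes `pyspark` plus a `datasets` release that includes `from_spark` are installed:

```python
# Minimal sketch: convert a Spark DataFrame into a Hugging Face Dataset.
# All names and data below are illustrative placeholders.
from datasets import Dataset
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-to-datasets").getOrCreate()

# Hypothetical two-column Spark DataFrame.
spark_df = spark.createDataFrame(
    [("the cat sat", 0), ("the dog ran", 1)],
    schema=["text", "label"],
)

# Materialize the Spark DataFrame as a Hugging Face Dataset.
ds = Dataset.from_spark(spark_df)
print(ds)     # e.g. Dataset({features: ['text', 'label'], num_rows: 2})
print(ds[0])  # {'text': 'the cat sat', 'label': 0}
```

As the doc page the PR describes suggests, the point of `from_spark` is to reuse an existing Spark DataFrame directly rather than exporting it to an intermediate file format first.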
https://api.github.com/repos/huggingface/datasets/issues/5795 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5795/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5795/comments | https://api.github.com/repos/huggingface/datasets/issues/5795/events | https://github.com/huggingface/datasets/pull/5795 | 1,685,414,505 | PR_kwDODunzps5POJo8 | 5,795 | Fix spark imports | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010844 / 0.011353 (-0.000509) | 0.007329 / 0.011008 (-0.003680) | 0.133764 / 0.038508 (0.095256) | 0.040213 / 0.023109 (0.017103) | 0.413466 / 0.275898 (0.137568) | 0.452860 / 0.323480 (0.129380) | 0.008109 / 0.007986 (0.000123) | 0.005773 / 0.004328 (0.001444) | 0.109969 / 0.004250 (0.105718) | 0.053001 / 0.037052 (0.015949) | 0.416377 / 0.258489 (0.157888) | 0.477486 / 0.293841 (0.183645) | 0.056556 / 0.128546 (-0.071990) | 0.024322 / 0.075646 (-0.051324) | 0.437750 / 0.419271 (0.018479) | 0.087732 / 0.043533 (0.044199) | 0.421540 / 0.255139 (0.166401) | 0.429143 / 0.283200 (0.145944) | 0.144864 / 0.141683 (0.003181) | 1.882785 / 1.452155 (0.430631) | 1.980721 / 1.492716 (0.488005) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.285497 / 0.018006 (0.267491) | 0.601820 / 0.000490 (0.601331) | 0.005003 / 0.000200 (0.004804) | 0.000122 / 0.000054 (0.000067) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030673 / 0.037411 (-0.006739) | 0.126883 / 0.014526 (0.112357) | 0.137677 / 0.176557 (-0.038880) | 0.211504 / 0.737135 (-0.525632) | 0.144752 / 0.296338 (-0.151587) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.665845 / 0.215209 (0.450636) | 6.369040 / 2.077655 (4.291385) | 2.708979 / 1.504120 (1.204859) | 2.370842 / 1.541195 (0.829647) | 2.445987 / 1.468490 
(0.977497) | 1.260806 / 4.584777 (-3.323971) | 5.979216 / 3.745712 (2.233504) | 3.334350 / 5.269862 (-1.935512) | 2.187298 / 4.565676 (-2.378379) | 0.155494 / 0.424275 (-0.268781) | 0.017351 / 0.007607 (0.009744) | 0.853626 / 0.226044 (0.627581) | 8.375001 / 2.268929 (6.106072) | 3.528312 / 55.444624 (-51.916313) | 2.890509 / 6.876477 (-3.985968) | 3.051016 / 2.142072 (0.908944) | 1.529811 / 4.805227 (-3.275416) | 0.273883 / 6.500664 (-6.226781) | 0.086617 / 0.075469 (0.011148) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.648231 / 1.841788 (-0.193557) | 19.487109 / 8.074308 (11.412801) | 23.474621 / 10.191392 (13.283229) | 0.221392 / 0.680424 (-0.459032) | 0.028878 / 0.534201 (-0.505323) | 0.582302 / 0.579283 (0.003019) | 0.615059 / 0.434364 (0.180695) | 0.656082 / 0.540337 (0.115745) | 0.740544 / 1.386936 (-0.646392) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010687 / 0.011353 (-0.000665) | 0.007114 / 0.011008 (-0.003894) | 0.135426 / 0.038508 (0.096918) | 0.041027 / 0.023109 (0.017918) | 0.466441 / 0.275898 (0.190543) | 0.503545 / 0.323480 (0.180065) | 0.009418 / 0.007986 (0.001432) | 0.004976 / 0.004328 (0.000647) | 0.101342 / 0.004250 (0.097092) | 0.058289 / 0.037052 (0.021237) | 0.473715 / 0.258489 (0.215226) | 0.539556 / 0.293841 (0.245715) | 0.063138 / 0.128546 (-0.065408) | 0.020429 / 0.075646 (-0.055217) | 0.124179 / 0.419271 (-0.295093) | 0.066400 / 0.043533 (0.022867) | 0.450793 / 0.255139 (0.195654) | 0.494163 / 0.283200 (0.210964) | 0.131179 / 0.141683 (-0.010504) | 1.876396 / 1.452155 (0.424241) | 1.974148 / 1.492716 (0.481432) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313362 / 0.018006 (0.295356) | 0.602618 / 0.000490 (0.602129) | 0.008279 / 0.000200 (0.008079) | 0.000155 / 0.000054 (0.000101) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037250 / 0.037411 (-0.000161) | 0.144151 / 0.014526 (0.129625) | 0.155733 / 0.176557 (-0.020824) | 0.214334 / 0.737135 (-0.522801) | 0.167124 / 0.296338 (-0.129214) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.686471 / 0.215209 (0.471262) | 6.749174 / 2.077655 (4.671520) | 3.024941 / 1.504120 (1.520821) | 2.553363 / 1.541195 (1.012168) | 2.679107 / 1.468490 (1.210617) | 1.317212 / 4.584777 (-3.267565) | 5.917575 / 3.745712 (2.171862) | 3.412715 / 5.269862 (-1.857146) | 2.203478 / 4.565676 (-2.362198) | 0.150387 / 0.424275 (-0.273888) | 0.015977 / 0.007607 (0.008370) | 0.862999 / 0.226044 (0.636954) | 8.706459 / 2.268929 (6.437530) | 3.762648 / 55.444624 (-51.681977) | 2.992544 / 6.876477 (-3.883933) | 3.135796 / 2.142072 (0.993724) | 1.504140 / 4.805227 (-3.301088) | 0.268265 / 6.500664 (-6.232399) | 0.083297 / 0.075469 (0.007828) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.690193 / 1.841788 (-0.151594) | 19.912854 / 8.074308 (11.838546) | 23.568217 / 10.191392 (13.376825) | 0.285125 / 0.680424 (-0.395299) | 0.030593 / 0.534201 (-0.503608) | 0.565305 / 0.579283 (-0.013978) | 0.659283 / 0.434364 (0.224919) | 0.678864 / 0.540337 (0.138527) | 0.793634 / 1.386936 (-0.593302) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9d0edbe3f3258b7e580d1b58c0eea6637b5e22b2 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011615 / 0.011353 (0.000262) | 0.006716 / 0.011008 (-0.004292) | 0.146868 / 0.038508 (0.108360) | 0.037621 / 0.023109 (0.014512) | 0.425563 / 0.275898 (0.149664) | 0.483217 / 0.323480 (0.159737) | 0.007830 / 0.007986 (-0.000156) | 0.005940 / 0.004328 (0.001612) | 0.100771 / 0.004250 (0.096521) | 0.063907 / 0.037052 (0.026854) | 0.422993 / 0.258489 (0.164503) | 0.496514 / 0.293841 (0.202673) | 0.056004 / 0.128546 (-0.072542) | 0.021441 / 0.075646 (-0.054206) | 0.453589 / 0.419271 (0.034317) | 0.067555 / 0.043533 (0.024022) | 0.442490 / 0.255139 (0.187351) | 0.503941 / 0.283200 (0.220742) | 0.134023 / 0.141683 (-0.007660) | 1.886329 / 1.452155 (0.434175) | 2.030867 / 1.492716 (0.538150) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288063 / 0.018006 (0.270057) | 0.627177 / 0.000490 (0.626687) | 0.006335 / 0.000200 (0.006135) | 0.000171 / 0.000054 (0.000116) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032424 / 0.037411 (-0.004987) | 0.132749 / 0.014526 (0.118223) | 0.144727 / 0.176557 (-0.031829) | 0.232577 / 0.737135 (-0.504558) | 0.157315 / 0.296338 (-0.139024) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.623058 / 0.215209 (0.407849) | 6.272447 / 2.077655 (4.194792) | 2.506778 / 1.504120 (1.002658) | 2.203094 / 1.541195 (0.661899) | 2.346972 / 1.468490 
(0.878482) | 1.358498 / 4.584777 (-3.226279) | 5.879670 / 3.745712 (2.133958) | 5.818406 / 5.269862 (0.548545) | 3.231936 / 4.565676 (-1.333741) | 0.154013 / 0.424275 (-0.270263) | 0.021541 / 0.007607 (0.013934) | 0.823746 / 0.226044 (0.597702) | 8.140304 / 2.268929 (5.871375) | 3.366911 / 55.444624 (-52.077714) | 2.696856 / 6.876477 (-4.179621) | 2.845743 / 2.142072 (0.703671) | 1.522363 / 4.805227 (-3.282864) | 0.278938 / 6.500664 (-6.221726) | 0.085044 / 0.075469 (0.009575) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.681348 / 1.841788 (-0.160440) | 19.686703 / 8.074308 (11.612395) | 22.995655 / 10.191392 (12.804263) | 0.218876 / 0.680424 (-0.461548) | 0.029334 / 0.534201 (-0.504867) | 0.560846 / 0.579283 (-0.018438) | 0.645210 / 0.434364 (0.210846) | 0.697842 / 0.540337 (0.157505) | 0.832875 / 1.386936 (-0.554061) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009509 / 0.011353 (-0.001844) | 0.006471 / 0.011008 (-0.004537) | 0.101477 / 0.038508 (0.062969) | 0.035281 / 0.023109 (0.012171) | 0.470032 / 0.275898 (0.194134) | 0.501475 / 0.323480 (0.177995) | 0.007641 / 0.007986 (-0.000344) | 0.006784 / 0.004328 (0.002455) | 0.096111 / 0.004250 (0.091861) | 0.055199 / 0.037052 (0.018146) | 0.470095 / 0.258489 (0.211606) | 0.530955 / 0.293841 (0.237114) | 0.056161 / 0.128546 (-0.072385) | 0.022055 / 0.075646 (-0.053591) | 0.121585 / 0.419271 (-0.297686) | 0.063736 / 0.043533 (0.020203) | 0.470771 / 0.255139 (0.215632) | 0.490546 / 0.283200 (0.207346) | 0.128825 / 0.141683 (-0.012858) | 1.898639 / 1.452155 (0.446484) | 2.052305 / 1.492716 (0.559589) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.322526 / 0.018006 (0.304520) | 0.628096 / 0.000490 (0.627607) | 0.006837 / 0.000200 (0.006637) | 0.000199 / 0.000054 (0.000145) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033830 / 0.037411 (-0.003581) | 0.136217 / 0.014526 (0.121691) | 0.147006 / 0.176557 (-0.029551) | 0.203950 / 0.737135 (-0.533185) | 0.150327 / 0.296338 (-0.146011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.654287 / 0.215209 (0.439078) | 6.430306 / 2.077655 (4.352651) | 2.881750 / 1.504120 (1.377630) | 2.489505 / 1.541195 (0.948310) | 2.543037 / 1.468490 (1.074547) | 1.226682 / 4.584777 (-3.358094) | 5.902076 / 3.745712 (2.156364) | 3.335344 / 5.269862 (-1.934518) | 2.156738 / 4.565676 (-2.408939) | 0.151804 / 0.424275 (-0.272472) | 0.015238 / 0.007607 (0.007631) | 0.816364 / 0.226044 (0.590319) | 8.126367 / 2.268929 (5.857438) | 3.653222 / 55.444624 (-51.791402) | 2.886667 / 6.876477 (-3.989809) | 3.120852 / 2.142072 (0.978779) | 1.421423 / 4.805227 (-3.383804) | 0.264590 / 6.500664 (-6.236074) | 0.085716 / 0.075469 (0.010247) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.745258 / 1.841788 (-0.096530) | 19.379253 / 8.074308 (11.304945) | 23.827046 / 10.191392 (13.635654) | 0.267702 / 0.680424 (-0.412722) | 0.030253 / 0.534201 (-0.503948) | 0.542037 / 0.579283 (-0.037246) | 0.655946 / 0.434364 (0.221582) | 0.683525 / 0.540337 (0.143188) | 0.831333 / 1.386936 (-0.555603) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5b011a258329375aa4dc7b414bd4e7b6363c5357 \"CML watermark\")\n"
] | 2023-04-26T17:09:32 | 2023-04-26T17:49:03 | 2023-04-26T17:39:12 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5795",
"html_url": "https://github.com/huggingface/datasets/pull/5795",
"diff_url": "https://github.com/huggingface/datasets/pull/5795.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5795.patch",
"merged_at": "2023-04-26T17:39:12"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5795/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5790/comments | https://api.github.com/repos/huggingface/datasets/issues/5790/events | https://github.com/huggingface/datasets/pull/5790 | 1,683,229,126 | PR_kwDODunzps5PG0mJ | 5,790 | Allow to run CI on push to ci-branch | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007852 / 0.011353 (-0.003500) | 0.005804 / 0.011008 (-0.005204) | 0.098268 / 0.038508 (0.059760) | 0.036440 / 0.023109 (0.013331) | 0.299952 / 0.275898 (0.024054) | 0.335590 / 0.323480 (0.012111) | 0.006332 / 0.007986 (-0.001653) | 0.004218 / 0.004328 (-0.000110) | 0.074733 / 0.004250 (0.070483) | 0.055252 / 0.037052 (0.018200) | 0.300854 / 0.258489 (0.042365) | 0.353442 / 0.293841 (0.059601) | 0.036447 / 0.128546 (-0.092099) | 0.012638 / 0.075646 (-0.063009) | 0.336680 / 0.419271 (-0.082591) | 0.052436 / 0.043533 (0.008903) | 0.292606 / 0.255139 (0.037467) | 0.319676 / 0.283200 (0.036476) | 0.111137 / 0.141683 (-0.030546) | 1.449569 / 1.452155 (-0.002586) | 1.558110 / 1.492716 (0.065394) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.306043 / 0.018006 (0.288037) | 0.563174 / 0.000490 (0.562684) | 0.032227 / 0.000200 (0.032027) | 0.000491 / 0.000054 (0.000436) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029874 / 0.037411 (-0.007537) | 0.109330 / 0.014526 (0.094805) | 0.122579 / 0.176557 (-0.053978) | 0.181398 / 0.737135 (-0.555737) | 0.127124 / 0.296338 (-0.169215) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417950 / 0.215209 (0.202741) | 4.163883 / 2.077655 (2.086228) | 1.985209 / 1.504120 (0.481089) | 1.793660 / 1.541195 (0.252465) | 1.895193 / 1.468490 
(0.426703) | 0.694331 / 4.584777 (-3.890446) | 3.820170 / 3.745712 (0.074458) | 2.180556 / 5.269862 (-3.089305) | 1.490671 / 4.565676 (-3.075006) | 0.086132 / 0.424275 (-0.338143) | 0.012289 / 0.007607 (0.004682) | 0.511182 / 0.226044 (0.285137) | 5.117855 / 2.268929 (2.848927) | 2.403914 / 55.444624 (-53.040710) | 2.071107 / 6.876477 (-4.805369) | 2.184108 / 2.142072 (0.042036) | 0.835028 / 4.805227 (-3.970199) | 0.167707 / 6.500664 (-6.332957) | 0.066724 / 0.075469 (-0.008746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.203921 / 1.841788 (-0.637867) | 15.214676 / 8.074308 (7.140368) | 14.971337 / 10.191392 (4.779945) | 0.170225 / 0.680424 (-0.510199) | 0.017924 / 0.534201 (-0.516277) | 0.428532 / 0.579283 (-0.150751) | 0.449157 / 0.434364 (0.014793) | 0.507723 / 0.540337 (-0.032614) | 0.615331 / 1.386936 (-0.771605) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008172 / 0.011353 (-0.003181) | 0.005405 / 0.011008 (-0.005603) | 0.074684 / 0.038508 (0.036176) | 0.039133 / 0.023109 (0.016024) | 0.342598 / 0.275898 (0.066700) | 0.377752 / 0.323480 (0.054272) | 0.006655 / 0.007986 (-0.001331) | 0.005788 / 0.004328 (0.001459) | 0.074014 / 0.004250 (0.069763) | 0.056225 / 0.037052 (0.019173) | 0.342330 / 0.258489 (0.083841) | 0.381052 / 0.293841 (0.087211) | 0.036574 / 0.128546 (-0.091973) | 0.012472 / 0.075646 (-0.063174) | 0.087574 / 0.419271 (-0.331698) | 0.050178 / 0.043533 (0.006646) | 0.351116 / 0.255139 (0.095977) | 0.363772 / 0.283200 (0.080572) | 0.118313 / 0.141683 (-0.023370) | 1.436691 / 1.452155 (-0.015463) | 1.551397 / 1.492716 (0.058680) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265201 / 0.018006 (0.247195) | 0.561855 / 0.000490 (0.561366) | 0.000463 / 0.000200 (0.000263) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030540 / 0.037411 (-0.006871) | 0.118815 / 0.014526 (0.104289) | 0.127689 / 0.176557 (-0.048868) | 0.176211 / 0.737135 (-0.560924) | 0.133130 / 0.296338 (-0.163208) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416318 / 0.215209 (0.201109) | 4.146806 / 2.077655 (2.069151) | 1.983437 / 1.504120 (0.479317) | 1.799733 / 1.541195 (0.258539) | 1.889026 / 1.468490 (0.420536) | 0.723330 / 4.584777 (-3.861447) | 3.817795 / 3.745712 (0.072083) | 2.158449 / 5.269862 (-3.111413) | 1.377348 / 4.565676 (-3.188328) | 0.088504 / 0.424275 (-0.335771) | 0.012560 / 0.007607 (0.004953) | 0.530382 / 0.226044 (0.304337) | 5.308529 / 2.268929 (3.039600) | 2.469655 / 55.444624 (-52.974970) | 2.136209 / 6.876477 (-4.740267) | 2.322997 / 2.142072 (0.180924) | 0.861396 / 4.805227 (-3.943831) | 0.172747 / 6.500664 (-6.327917) | 0.067617 / 0.075469 (-0.007852) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263225 / 1.841788 (-0.578563) | 15.878025 / 8.074308 (7.803717) | 14.815627 / 10.191392 (4.624235) | 0.148722 / 0.680424 (-0.531702) | 0.018071 / 0.534201 (-0.516130) | 0.428389 / 0.579283 (-0.150894) | 0.428635 / 0.434364 (-0.005729) | 0.496953 / 0.540337 (-0.043385) | 0.592783 / 1.386936 (-0.794153) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d2e5568dc7a47f9a99678d2889bd2e3c33afdd00 \"CML watermark\")\n"
] | 2023-04-25T13:57:26 | 2023-04-26T13:43:08 | 2023-04-26T13:35:47 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5790",
"html_url": "https://github.com/huggingface/datasets/pull/5790",
"diff_url": "https://github.com/huggingface/datasets/pull/5790.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5790.patch",
"merged_at": "2023-04-26T13:35:47"
} | This PR allows running the CI on push to a branch named "ci-*", without needing to open a PR.
- This will allow running CI tests without opening a PR, e.g., for future `huggingface-hub` releases or future dependency releases (like `fsspec`, `pandas`, ...); see the workflow sketch after this row.
Note that we already allow building the documentation on push to a branch named "doc-builder*".
See:
- #5788
CC: @Wauplin | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5790/timeline | null | null | true |
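For context on the PR above, the trigger it describes would look roughly like the following in a GitHub Actions workflow. This is a hedged sketch, not the repository's actual workflow file; the workflow name, job, and install/test commands are placeholders:

```yaml
# Hedged sketch of a CI trigger that also fires on pushes to "ci-*" branches.
# Everything below (names, commands) is a placeholder, not the real workflow.
name: CI
on:
  push:
    branches:
      - main
      - ci-*        # any branch starting with "ci-" runs CI without a PR
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: pip install -e ".[tests]"
      - run: pytest
```

Branch filters in GitHub Actions accept glob patterns, so pushing any branch whose name starts with `ci-` triggers the run, which is what lets maintainers test upcoming dependency releases such as `huggingface-hub` without opening a PR.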
https://api.github.com/repos/huggingface/datasets/issues/5788 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5788/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5788/comments | https://api.github.com/repos/huggingface/datasets/issues/5788/events | https://github.com/huggingface/datasets/pull/5788 | 1,681,136,256 | PR_kwDODunzps5O_v4B | 5,788 | Prepare tests for hfh 0.14 | {
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007343 / 0.011353 (-0.004010) | 0.005145 / 0.011008 (-0.005863) | 0.099820 / 0.038508 (0.061312) | 0.033487 / 0.023109 (0.010378) | 0.313069 / 0.275898 (0.037171) | 0.335420 / 0.323480 (0.011940) | 0.005959 / 0.007986 (-0.002027) | 0.005373 / 0.004328 (0.001044) | 0.076568 / 0.004250 (0.072317) | 0.048702 / 0.037052 (0.011650) | 0.322957 / 0.258489 (0.064468) | 0.363044 / 0.293841 (0.069203) | 0.035070 / 0.128546 (-0.093476) | 0.012029 / 0.075646 (-0.063618) | 0.334664 / 0.419271 (-0.084607) | 0.050549 / 0.043533 (0.007017) | 0.310113 / 0.255139 (0.054974) | 0.324405 / 0.283200 (0.041205) | 0.097596 / 0.141683 (-0.044087) | 1.440741 / 1.452155 (-0.011414) | 1.531194 / 1.492716 (0.038478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220799 / 0.018006 (0.202793) | 0.438158 / 0.000490 (0.437668) | 0.007737 / 0.000200 (0.007537) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026888 / 0.037411 (-0.010523) | 0.106281 / 0.014526 (0.091755) | 0.117419 / 0.176557 (-0.059138) | 0.179144 / 0.737135 (-0.557992) | 0.122477 / 0.296338 (-0.173861) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412667 / 0.215209 (0.197458) | 4.108784 / 2.077655 (2.031129) | 1.834300 / 1.504120 (0.330180) | 1.627256 / 1.541195 (0.086061) | 1.691036 / 1.468490 
(0.222546) | 0.713405 / 4.584777 (-3.871372) | 3.839262 / 3.745712 (0.093550) | 2.108453 / 5.269862 (-3.161408) | 1.340740 / 4.565676 (-3.224936) | 0.087776 / 0.424275 (-0.336499) | 0.012730 / 0.007607 (0.005123) | 0.505323 / 0.226044 (0.279279) | 5.085176 / 2.268929 (2.816247) | 2.307165 / 55.444624 (-53.137459) | 1.936771 / 6.876477 (-4.939706) | 2.097391 / 2.142072 (-0.044681) | 0.856215 / 4.805227 (-3.949012) | 0.171826 / 6.500664 (-6.328838) | 0.066603 / 0.075469 (-0.008866) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.202126 / 1.841788 (-0.639661) | 15.173598 / 8.074308 (7.099290) | 15.012645 / 10.191392 (4.821253) | 0.162187 / 0.680424 (-0.518237) | 0.017462 / 0.534201 (-0.516739) | 0.423895 / 0.579283 (-0.155388) | 0.432010 / 0.434364 (-0.002354) | 0.503234 / 0.540337 (-0.037104) | 0.598948 / 1.386936 (-0.787988) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007099 / 0.011353 (-0.004254) | 0.005167 / 0.011008 (-0.005841) | 0.075551 / 0.038508 (0.037043) | 0.033050 / 0.023109 (0.009940) | 0.339629 / 0.275898 (0.063731) | 0.380486 / 0.323480 (0.057006) | 0.005776 / 0.007986 (-0.002209) | 0.004029 / 0.004328 (-0.000299) | 0.075074 / 0.004250 (0.070823) | 0.046709 / 0.037052 (0.009656) | 0.340203 / 0.258489 (0.081714) | 0.380849 / 0.293841 (0.087008) | 0.035027 / 0.128546 (-0.093519) | 0.012226 / 0.075646 (-0.063420) | 0.087525 / 0.419271 (-0.331747) | 0.049361 / 0.043533 (0.005828) | 0.341854 / 0.255139 (0.086715) | 0.359590 / 0.283200 (0.076390) | 0.100102 / 0.141683 (-0.041581) | 1.482759 / 1.452155 (0.030605) | 1.569905 / 1.492716 (0.077189) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213615 / 0.018006 (0.195609) | 0.441117 / 0.000490 (0.440628) | 0.004932 / 0.000200 (0.004732) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031313 / 0.037411 (-0.006098) | 0.110191 / 0.014526 (0.095665) | 0.125320 / 0.176557 (-0.051237) | 0.177658 / 0.737135 (-0.559477) | 0.127928 / 0.296338 (-0.168410) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426952 / 0.215209 (0.211743) | 4.247731 / 2.077655 (2.170076) | 2.107318 / 1.504120 (0.603198) | 1.843845 / 1.541195 (0.302650) | 1.894822 / 1.468490 (0.426332) | 0.696232 / 4.584777 (-3.888545) | 3.826516 / 3.745712 (0.080804) | 2.126688 / 5.269862 (-3.143174) | 1.327062 / 4.565676 (-3.238615) | 0.085693 / 0.424275 (-0.338582) | 0.012226 / 0.007607 (0.004619) | 0.521904 / 0.226044 (0.295859) | 5.219798 / 2.268929 (2.950869) | 2.524908 / 55.444624 (-52.919716) | 2.212078 / 6.876477 (-4.664399) | 2.373944 / 2.142072 (0.231871) | 0.833846 / 4.805227 (-3.971381) | 0.169639 / 6.500664 (-6.331025) | 0.064538 / 0.075469 (-0.010931) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254930 / 1.841788 (-0.586858) | 15.585277 / 8.074308 (7.510969) | 14.762857 / 10.191392 (4.571465) | 0.146959 / 0.680424 (-0.533465) | 0.017451 / 0.534201 (-0.516750) | 0.424469 / 0.579283 (-0.154814) | 0.422359 / 0.434364 (-0.012004) | 0.489930 / 0.540337 (-0.050408) | 0.595856 / 1.386936 (-0.791080) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#213c72f52ae52b662f967d3218f66c70a3043048 \"CML watermark\")\n",
"@albertvillanova thanks for the review. As you prefer for the github CI config. I just took it from @lhoestq's branch when testing hfh==0.14.0. I think it's still relevant for next releases. In any case, I let you handle merging the PR :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008371 / 0.011353 (-0.002982) | 0.005210 / 0.011008 (-0.005798) | 0.105639 / 0.038508 (0.067131) | 0.045903 / 0.023109 (0.022794) | 0.391231 / 0.275898 (0.115333) | 0.438824 / 0.323480 (0.115345) | 0.006270 / 0.007986 (-0.001715) | 0.005950 / 0.004328 (0.001621) | 0.079685 / 0.004250 (0.075434) | 0.052121 / 0.037052 (0.015069) | 0.387787 / 0.258489 (0.129298) | 0.434322 / 0.293841 (0.140481) | 0.032598 / 0.128546 (-0.095948) | 0.012126 / 0.075646 (-0.063520) | 0.359658 / 0.419271 (-0.059613) | 0.046686 / 0.043533 (0.003154) | 0.391973 / 0.255139 (0.136834) | 0.421149 / 0.283200 (0.137949) | 0.105920 / 0.141683 (-0.035763) | 1.483008 / 1.452155 (0.030854) | 1.617010 / 1.492716 (0.124294) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199111 / 0.018006 (0.181105) | 0.407995 / 0.000490 (0.407505) | 0.006706 / 0.000200 (0.006506) | 0.000229 / 0.000054 (0.000175) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030247 / 0.037411 (-0.007164) | 0.115977 / 0.014526 (0.101451) | 0.118112 / 0.176557 (-0.058444) | 0.182710 / 0.737135 (-0.554426) | 0.122483 / 0.296338 (-0.173855) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430455 / 0.215209 (0.215246) | 4.314298 / 2.077655 (2.236643) | 1.898124 / 1.504120 (0.394005) | 1.734909 / 1.541195 (0.193715) | 1.802400 / 1.468490 
(0.333910) | 0.717237 / 4.584777 (-3.867539) | 4.004705 / 3.745712 (0.258993) | 2.138901 / 5.269862 (-3.130960) | 1.254037 / 4.565676 (-3.311640) | 0.085594 / 0.424275 (-0.338681) | 0.013774 / 0.007607 (0.006166) | 0.535218 / 0.226044 (0.309174) | 5.373730 / 2.268929 (3.104801) | 2.371194 / 55.444624 (-53.073430) | 2.111206 / 6.876477 (-4.765270) | 2.225137 / 2.142072 (0.083064) | 0.838325 / 4.805227 (-3.966902) | 0.159176 / 6.500664 (-6.341488) | 0.072285 / 0.075469 (-0.003184) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.352232 / 1.841788 (-0.489555) | 16.926722 / 8.074308 (8.852414) | 16.709531 / 10.191392 (6.518139) | 0.159249 / 0.680424 (-0.521175) | 0.017667 / 0.534201 (-0.516534) | 0.426894 / 0.579283 (-0.152390) | 0.539903 / 0.434364 (0.105539) | 0.537471 / 0.540337 (-0.002866) | 0.619592 / 1.386936 (-0.767344) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008354 / 0.011353 (-0.002999) | 0.005366 / 0.011008 (-0.005642) | 0.080961 / 0.038508 (0.042453) | 0.046574 / 0.023109 (0.023465) | 0.345949 / 0.275898 (0.070051) | 0.394041 / 0.323480 (0.070562) | 0.006209 / 0.007986 (-0.001777) | 0.005980 / 0.004328 (0.001651) | 0.076235 / 0.004250 (0.071984) | 0.051833 / 0.037052 (0.014780) | 0.348786 / 0.258489 (0.090297) | 0.397421 / 0.293841 (0.103580) | 0.033026 / 0.128546 (-0.095520) | 0.012217 / 0.075646 (-0.063429) | 0.087439 / 0.419271 (-0.331832) | 0.045488 / 0.043533 (0.001955) | 0.352160 / 0.255139 (0.097021) | 0.379079 / 0.283200 (0.095879) | 0.116111 / 0.141683 (-0.025572) | 1.470177 / 1.452155 (0.018022) | 1.587499 / 1.492716 (0.094783) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296149 / 0.018006 (0.278143) | 0.592362 / 0.000490 (0.591872) | 0.000492 / 0.000200 (0.000292) | 0.000064 / 0.000054 (0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036599 / 0.037411 (-0.000813) | 0.113768 / 0.014526 (0.099242) | 0.116198 / 0.176557 (-0.060358) | 0.180329 / 0.737135 (-0.556806) | 0.123942 / 0.296338 (-0.172396) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452445 / 0.215209 (0.237236) | 4.504330 / 2.077655 (2.426675) | 2.275645 / 1.504120 (0.771525) | 2.107765 / 1.541195 (0.566571) | 2.086363 / 1.468490 (0.617873) | 0.723721 / 4.584777 (-3.861056) | 3.825330 / 3.745712 (0.079618) | 2.162743 / 5.269862 (-3.107119) | 1.255953 / 4.565676 (-3.309724) | 0.085860 / 0.424275 (-0.338415) | 0.013790 / 0.007607 (0.006183) | 0.560257 / 0.226044 (0.334213) | 5.618180 / 2.268929 (3.349251) | 2.625423 / 55.444624 (-52.819202) | 2.374381 / 6.876477 (-4.502095) | 2.496560 / 2.142072 (0.354488) | 0.841120 / 4.805227 (-3.964107) | 0.161541 / 6.500664 (-6.339123) | 0.075270 / 0.075469 (-0.000199) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.432916 / 1.841788 (-0.408872) | 14.858534 / 8.074308 (6.784226) | 14.973521 / 10.191392 (4.782129) | 0.148312 / 0.680424 (-0.532112) | 0.016811 / 0.534201 (-0.517390) | 0.382623 / 0.579283 (-0.196660) | 0.389767 / 0.434364 (-0.044596) | 0.449657 / 0.540337 (-0.090680) | 0.533723 / 1.386936 (-0.853214) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f8344350f15265a585188ac986ae49a8ed8289fe \"CML watermark\")\n",
"I agree it is good to have a way to run the CI on push, without needing to open a PR.\r\n\r\nBut I think the branch name should be more generic (and this is not specific to this PR). See:\r\n- #5790 ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007208 / 0.011353 (-0.004145) | 0.005600 / 0.011008 (-0.005408) | 0.096129 / 0.038508 (0.057621) | 0.027834 / 0.023109 (0.004725) | 0.295106 / 0.275898 (0.019208) | 0.323983 / 0.323480 (0.000503) | 0.005164 / 0.007986 (-0.002822) | 0.003962 / 0.004328 (-0.000366) | 0.078339 / 0.004250 (0.074089) | 0.036974 / 0.037052 (-0.000078) | 0.310315 / 0.258489 (0.051826) | 0.338036 / 0.293841 (0.044195) | 0.042124 / 0.128546 (-0.086422) | 0.015886 / 0.075646 (-0.059760) | 0.337961 / 0.419271 (-0.081310) | 0.051507 / 0.043533 (0.007974) | 0.297505 / 0.255139 (0.042366) | 0.310728 / 0.283200 (0.027528) | 0.086312 / 0.141683 (-0.055371) | 1.356923 / 1.452155 (-0.095232) | 1.429366 / 1.492716 (-0.063350) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205495 / 0.018006 (0.187489) | 0.460639 / 0.000490 (0.460149) | 0.003996 / 0.000200 (0.003796) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021970 / 0.037411 (-0.015442) | 0.090283 / 0.014526 (0.075757) | 0.098579 / 0.176557 (-0.077978) | 0.160437 / 0.737135 (-0.576699) | 0.102738 / 0.296338 (-0.193600) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.494474 / 0.215209 (0.279265) | 4.967453 / 2.077655 (2.889799) | 2.045852 / 1.504120 (0.541732) | 1.858022 / 1.541195 (0.316827) | 1.771874 / 1.468490 
(0.303384) | 1.186368 / 4.584777 (-3.398408) | 4.974762 / 3.745712 (1.229050) | 2.616225 / 5.269862 (-2.653636) | 1.702971 / 4.565676 (-2.862705) | 0.124929 / 0.424275 (-0.299346) | 0.011774 / 0.007607 (0.004167) | 0.569643 / 0.226044 (0.343598) | 5.793114 / 2.268929 (3.524186) | 2.441561 / 55.444624 (-53.003064) | 1.862233 / 6.876477 (-5.014243) | 1.931142 / 2.142072 (-0.210931) | 1.148915 / 4.805227 (-3.656313) | 0.203914 / 6.500664 (-6.296750) | 0.062468 / 0.075469 (-0.013001) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188708 / 1.841788 (-0.653080) | 13.710830 / 8.074308 (5.636522) | 15.695153 / 10.191392 (5.503761) | 0.171467 / 0.680424 (-0.508957) | 0.024509 / 0.534201 (-0.509692) | 0.450270 / 0.579283 (-0.129014) | 0.500712 / 0.434364 (0.066348) | 0.488632 / 0.540337 (-0.051706) | 0.574893 / 1.386936 (-0.812043) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007254 / 0.011353 (-0.004099) | 0.006199 / 0.011008 (-0.004809) | 0.072079 / 0.038508 (0.033571) | 0.026909 / 0.023109 (0.003800) | 0.355538 / 0.275898 (0.079640) | 0.358625 / 0.323480 (0.035145) | 0.005564 / 0.007986 (-0.002421) | 0.005278 / 0.004328 (0.000950) | 0.076469 / 0.004250 (0.072219) | 0.038269 / 0.037052 (0.001216) | 0.355214 / 0.258489 (0.096725) | 0.383219 / 0.293841 (0.089378) | 0.046516 / 0.128546 (-0.082030) | 0.015393 / 0.075646 (-0.060254) | 0.088506 / 0.419271 (-0.330765) | 0.050326 / 0.043533 (0.006793) | 0.327265 / 0.255139 (0.072126) | 0.370176 / 0.283200 (0.086976) | 0.102438 / 0.141683 (-0.039245) | 1.378969 / 1.452155 (-0.073186) | 1.441998 / 1.492716 (-0.050719) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209044 / 0.018006 (0.191038) | 0.455733 / 0.000490 (0.455243) | 0.005856 / 0.000200 (0.005656) | 0.000116 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025336 / 0.037411 (-0.012075) | 0.097449 / 0.014526 (0.082923) | 0.106301 / 0.176557 (-0.070255) | 0.153053 / 0.737135 (-0.584082) | 0.107938 / 0.296338 (-0.188401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.491070 / 0.215209 (0.275861) | 5.049637 / 2.077655 (2.971982) | 2.064709 / 1.504120 (0.560589) | 1.782266 / 1.541195 (0.241072) | 1.798570 / 1.468490 (0.330080) | 0.988886 / 4.584777 (-3.595891) | 4.690324 / 3.745712 (0.944612) | 4.317355 / 5.269862 (-0.952507) | 2.347596 / 4.565676 (-2.218081) | 0.117249 / 0.424275 (-0.307026) | 0.011614 / 0.007607 (0.004007) | 0.630033 / 0.226044 (0.403988) | 6.140108 / 2.268929 (3.871180) | 2.638080 / 55.444624 (-52.806545) | 2.133017 / 6.876477 (-4.743459) | 2.123392 / 2.142072 (-0.018680) | 1.178056 / 4.805227 (-3.627171) | 0.209465 / 6.500664 (-6.291199) | 0.063234 / 0.075469 (-0.012235) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.238089 / 1.841788 (-0.603699) | 14.066866 / 8.074308 (5.992558) | 16.225480 / 10.191392 (6.034088) | 0.206466 / 0.680424 (-0.473958) | 0.027279 / 0.534201 (-0.506922) | 0.443006 / 0.579283 (-0.136277) | 0.509512 / 0.434364 (0.075148) | 0.479075 / 0.540337 (-0.061263) | 0.573546 / 1.386936 (-0.813390) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c6015a070c66a5bbd84603d415ccc57cb668b44b \"CML watermark\")\n"
] | 2023-04-24T12:13:03 | 2023-04-25T14:32:56 | 2023-04-25T14:25:30 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5788",
"html_url": "https://github.com/huggingface/datasets/pull/5788",
"diff_url": "https://github.com/huggingface/datasets/pull/5788.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5788.patch",
"merged_at": "2023-04-25T14:25:30"
} | Related to the coming release of `huggingface_hub==0.14.0`. It will break some internal tests. The PR fixes these tests. Let's double-check the CI, but I expect the fixed tests to run fine with both `hfh<=0.13.4` and `hfh==0.14`. Worst-case scenario, existing PRs will have to be rebased once this fix is merged.
See related [discussion](https://huggingface.slack.com/archives/C02V5EA0A95/p1682337463368609?thread_ts=1681994202.635609&cid=C02V5EA0A95) (private slack).
cc @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5788/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5787/comments | https://api.github.com/repos/huggingface/datasets/issues/5787/events | https://github.com/huggingface/datasets/pull/5787 | 1,680,965,959 | PR_kwDODunzps5O_KNU | 5,787 | Fix inferring module for unsupported data files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I think you can revert the last commit - it should fail if data_files={} IMO",
"The validation of non-empty data_files is addressed in this PR:\r\n- #5802",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008622 / 0.011353 (-0.002730) | 0.005970 / 0.011008 (-0.005038) | 0.117797 / 0.038508 (0.079289) | 0.040955 / 0.023109 (0.017846) | 0.419538 / 0.275898 (0.143640) | 0.455816 / 0.323480 (0.132336) | 0.006481 / 0.007986 (-0.001505) | 0.004507 / 0.004328 (0.000178) | 0.089073 / 0.004250 (0.084822) | 0.052389 / 0.037052 (0.015337) | 0.420053 / 0.258489 (0.161564) | 0.466886 / 0.293841 (0.173045) | 0.042660 / 0.128546 (-0.085886) | 0.014673 / 0.075646 (-0.060973) | 0.411229 / 0.419271 (-0.008042) | 0.076993 / 0.043533 (0.033460) | 0.431693 / 0.255139 (0.176554) | 0.446283 / 0.283200 (0.163084) | 0.131408 / 0.141683 (-0.010275) | 1.820339 / 1.452155 (0.368184) | 1.952946 / 1.492716 (0.460230) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246543 / 0.018006 (0.228537) | 0.489806 / 0.000490 (0.489317) | 0.013999 / 0.000200 (0.013800) | 0.000323 / 0.000054 (0.000269) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032541 / 0.037411 (-0.004870) | 0.130569 / 0.014526 (0.116043) | 0.139630 / 0.176557 (-0.036926) | 0.217018 / 0.737135 (-0.520118) | 0.147914 / 0.296338 (-0.148425) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.494767 / 0.215209 (0.279558) | 4.949313 / 2.077655 (2.871658) | 2.277023 / 1.504120 (0.772903) | 2.036677 / 1.541195 (0.495482) | 2.064461 / 1.468490 
(0.595970) | 0.842484 / 4.584777 (-3.742293) | 4.720646 / 3.745712 (0.974934) | 4.025673 / 5.269862 (-1.244189) | 2.198606 / 4.565676 (-2.367070) | 0.103042 / 0.424275 (-0.321233) | 0.014794 / 0.007607 (0.007187) | 0.617867 / 0.226044 (0.391822) | 6.197146 / 2.268929 (3.928218) | 2.804927 / 55.444624 (-52.639697) | 2.426420 / 6.876477 (-4.450057) | 2.515182 / 2.142072 (0.373109) | 1.008098 / 4.805227 (-3.797129) | 0.204982 / 6.500664 (-6.295682) | 0.078643 / 0.075469 (0.003174) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.490790 / 1.841788 (-0.350997) | 17.268042 / 8.074308 (9.193734) | 17.129647 / 10.191392 (6.938255) | 0.170351 / 0.680424 (-0.510073) | 0.021317 / 0.534201 (-0.512884) | 0.517068 / 0.579283 (-0.062215) | 0.500200 / 0.434364 (0.065836) | 0.641974 / 0.540337 (0.101637) | 0.763984 / 1.386936 (-0.622952) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008358 / 0.011353 (-0.002995) | 0.005710 / 0.011008 (-0.005298) | 0.091077 / 0.038508 (0.052569) | 0.040413 / 0.023109 (0.017303) | 0.416634 / 0.275898 (0.140736) | 0.451122 / 0.323480 (0.127642) | 0.006417 / 0.007986 (-0.001569) | 0.004360 / 0.004328 (0.000032) | 0.089543 / 0.004250 (0.085292) | 0.051137 / 0.037052 (0.014085) | 0.420228 / 0.258489 (0.161739) | 0.458649 / 0.293841 (0.164808) | 0.041828 / 0.128546 (-0.086718) | 0.014268 / 0.075646 (-0.061379) | 0.105301 / 0.419271 (-0.313970) | 0.058931 / 0.043533 (0.015398) | 0.413445 / 0.255139 (0.158306) | 0.443882 / 0.283200 (0.160682) | 0.124946 / 0.141683 (-0.016737) | 1.842259 / 1.452155 (0.390104) | 1.948162 / 1.492716 (0.455445) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235799 / 0.018006 (0.217792) | 0.487667 / 0.000490 (0.487177) | 0.001112 / 0.000200 (0.000912) | 0.000094 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034233 / 0.037411 (-0.003178) | 0.136593 / 0.014526 (0.122068) | 0.145598 / 0.176557 (-0.030959) | 0.206545 / 0.737135 (-0.530590) | 0.150781 / 0.296338 (-0.145558) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.522345 / 0.215209 (0.307136) | 5.192092 / 2.077655 (3.114438) | 2.543182 / 1.504120 (1.039062) | 2.285212 / 1.541195 (0.744018) | 2.312803 / 1.468490 (0.844313) | 0.859334 / 4.584777 (-3.725443) | 4.620235 / 3.745712 (0.874523) | 3.964060 / 5.269862 (-1.305802) | 2.046347 / 4.565676 (-2.519330) | 0.105284 / 0.424275 (-0.318991) | 0.015051 / 0.007607 (0.007444) | 0.646530 / 0.226044 (0.420485) | 6.386396 / 2.268929 (4.117467) | 3.131833 / 55.444624 (-52.312791) | 2.761898 / 6.876477 (-4.114579) | 2.833216 / 2.142072 (0.691143) | 1.026024 / 4.805227 (-3.779204) | 0.206776 / 6.500664 (-6.293888) | 0.078845 / 0.075469 (0.003376) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.580851 / 1.841788 (-0.260937) | 17.826213 / 8.074308 (9.751905) | 16.929460 / 10.191392 (6.738068) | 0.232483 / 0.680424 (-0.447941) | 0.021123 / 0.534201 (-0.513078) | 0.522196 / 0.579283 (-0.057087) | 0.503495 / 0.434364 (0.069131) | 0.622777 / 0.540337 (0.082440) | 0.753272 / 1.386936 (-0.633664) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3f9dfbd93707665132abc862b14bb9b50597b739 \"CML watermark\")\n"
] | 2023-04-24T10:44:50 | 2023-04-27T13:06:01 | 2023-04-27T12:57:28 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5787",
"html_url": "https://github.com/huggingface/datasets/pull/5787",
"diff_url": "https://github.com/huggingface/datasets/pull/5787.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5787.patch",
"merged_at": "2023-04-27T12:57:28"
} | This PR raises a FileNotFoundError instead:
```
FileNotFoundError: No (supported) data files or dataset script found in <dataset_name>
```
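For illustration, a minimal sketch of the intended behavior (hypothetical names, not the actual `datasets` internals):
```python
# Hypothetical sketch: map known extensions to builder modules and raise an
# informative FileNotFoundError instead of returning None for unsupported files.
_EXTENSION_TO_MODULE = {"csv": "csv", "json": "json", "parquet": "parquet"}  # illustrative subset


def infer_module_for_data_files(data_files, dataset_name="<dataset_name>"):
    for data_file in data_files:
        extension = data_file.rsplit(".", 1)[-1].lower()
        if extension in _EXTENSION_TO_MODULE:
            return _EXTENSION_TO_MODULE[extension]
    raise FileNotFoundError(f"No (supported) data files or dataset script found in {dataset_name}")
```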
Fix #5785. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5787/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5786 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5786/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5786/comments | https://api.github.com/repos/huggingface/datasets/issues/5786/events | https://github.com/huggingface/datasets/issues/5786 | 1,680,957,070 | I_kwDODunzps5kMV6O | 5,786 | Multiprocessing in a `filter` or `map` function with a Pytorch model | {
"login": "HugoLaurencon",
"id": 44556846,
"node_id": "MDQ6VXNlcjQ0NTU2ODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/44556846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HugoLaurencon",
"html_url": "https://github.com/HugoLaurencon",
"followers_url": "https://api.github.com/users/HugoLaurencon/followers",
"following_url": "https://api.github.com/users/HugoLaurencon/following{/other_user}",
"gists_url": "https://api.github.com/users/HugoLaurencon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HugoLaurencon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HugoLaurencon/subscriptions",
"organizations_url": "https://api.github.com/users/HugoLaurencon/orgs",
"repos_url": "https://api.github.com/users/HugoLaurencon/repos",
"events_url": "https://api.github.com/users/HugoLaurencon/events{/privacy}",
"received_events_url": "https://api.github.com/users/HugoLaurencon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! PyTorch may hang when calling `load_state_dict()` in a subprocess. To fix that, set the multiprocessing start method to \"spawn\". Since `datasets` uses `multiprocess`, you should do:\r\n\r\n```python\r\n# Required to avoid issues with pytorch (otherwise hangs during load_state_dict in multiprocessing)\r\nimport multiprocess.context as ctx\r\nctx._force_start_method('spawn')\r\n```\r\n\r\nAlso make sure to run your main code in `if __name__ == \"__main__\":` to avoid issues with python multiprocesing",
"Thanks!"
] | 2023-04-24T10:38:07 | 2023-04-24T10:43:58 | 2023-04-24T10:43:58 | MEMBER | null | null | null | ### Describe the bug
I am trying to use a PyTorch model, loaded on CPU, with multiple processes via a `.map` or a `.filter` method.
Usually, when dealing with models that are not picklable, creating a class whose `__call__` method is the map function and defining `__reduce__` solves the problem.
Here, however, the command hangs without throwing an error.
### Steps to reproduce the bug
```python
from datasets import Dataset
import torch
from torch import nn
from torchvision import models


class FilterFunction:
    # __slots__ = ("path_model", "model")  # Doesn't change anything uncommented

    def __init__(self, path_model):
        self.path_model = path_model
        model = models.resnet50()
        model.fc = nn.Sequential(
            nn.Linear(2048, 512),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(512, 10),
            nn.LogSoftmax(dim=1)
        )
        model.load_state_dict(torch.load(path_model, map_location=torch.device("cpu")))
        model.eval()
        self.model = model

    def __call__(self, batch):
        return [True] * len(batch["id"])

    # Comment this to have an error
    def __reduce__(self):
        return (self.__class__, (self.path_model,))


dataset = Dataset.from_dict({"id": [0, 1, 2, 4]})

# Download (100 MB) at https://github.com/emiliantolo/pytorch_nsfw_model/raw/master/ResNet50_nsfw_model.pth
path_model = "/fsx/hugo/nsfw_image/ResNet50_nsfw_model.pth"

filter_function = FilterFunction(path_model=path_model)

# Works
filtered_dataset = dataset.filter(filter_function, num_proc=1, batched=True, batch_size=2)
# Doesn't work
filtered_dataset = dataset.filter(filter_function, num_proc=2, batched=True, batch_size=2)
```
### Expected behavior
The command `filtered_dataset = dataset.filter(filter_function, num_proc=2, batched=True, batch_size=2)` should work and not hang.
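For reference, the workaround from the maintainer comment above, applied to this repro (a sketch reusing `dataset` and `filter_function` from the snippet in the previous section; `multiprocess` is the multiprocessing fork used by `datasets`):
```python
# Workaround sketch: force the "spawn" start method before spawning workers,
# so load_state_dict does not hang in the subprocesses.
import multiprocess.context as ctx
ctx._force_start_method("spawn")

if __name__ == "__main__":
    filtered_dataset = dataset.filter(filter_function, num_proc=2, batched=True, batch_size=2)
```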
### Environment info
Datasets: 2.11.0
Pyarrow: 11.0.0
Ubuntu | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5786/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5785 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5785/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5785/comments | https://api.github.com/repos/huggingface/datasets/issues/5785/events | https://github.com/huggingface/datasets/issues/5785 | 1,680,956,964 | I_kwDODunzps5kMV4k | 5,785 | Unsupported data files raise TypeError: 'NoneType' object is not iterable | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-04-24T10:38:03 | 2023-04-27T12:57:30 | 2023-04-27T12:57:30 | MEMBER | null | null | null | Currently, we raise a TypeError for unsupported data files:
```
TypeError: 'NoneType' object is not iterable
```
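Presumably (an assumption about the failure path), the module inferred for unsupported files is `None` and later code iterates over it, e.g.:
```python
# Illustration only: iterating over None reproduces the opaque error above.
inferred_module = None  # what inference yields for unsupported extensions
for builder in inferred_module:  # TypeError: 'NoneType' object is not iterable
    pass
```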
See:
- https://github.com/huggingface/datasets-server/issues/1073
We should give a more informative error message. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5785/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5784 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5784/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5784/comments | https://api.github.com/repos/huggingface/datasets/issues/5784/events | https://github.com/huggingface/datasets/pull/5784 | 1,680,950,726 | PR_kwDODunzps5O_G9S | 5,784 | Raise subprocesses traceback when interrupting | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008959 / 0.011353 (-0.002394) | 0.005804 / 0.011008 (-0.005204) | 0.112663 / 0.038508 (0.074155) | 0.043406 / 0.023109 (0.020297) | 0.348582 / 0.275898 (0.072684) | 0.382332 / 0.323480 (0.058852) | 0.007469 / 0.007986 (-0.000517) | 0.006211 / 0.004328 (0.001883) | 0.086576 / 0.004250 (0.082326) | 0.059223 / 0.037052 (0.022170) | 0.361051 / 0.258489 (0.102562) | 0.411359 / 0.293841 (0.117518) | 0.043640 / 0.128546 (-0.084906) | 0.014239 / 0.075646 (-0.061408) | 0.389729 / 0.419271 (-0.029542) | 0.072319 / 0.043533 (0.028786) | 0.351025 / 0.255139 (0.095886) | 0.371893 / 0.283200 (0.088693) | 0.125994 / 0.141683 (-0.015688) | 1.675249 / 1.452155 (0.223094) | 1.808740 / 1.492716 (0.316024) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255172 / 0.018006 (0.237166) | 0.536003 / 0.000490 (0.535514) | 0.000365 / 0.000200 (0.000165) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031989 / 0.037411 (-0.005423) | 0.126854 / 0.014526 (0.112328) | 0.142458 / 0.176557 (-0.034098) | 0.207821 / 0.737135 (-0.529314) | 0.145610 / 0.296338 (-0.150728) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.468924 / 0.215209 (0.253715) | 4.696677 / 2.077655 (2.619023) | 2.183133 / 1.504120 (0.679013) | 1.994219 / 1.541195 (0.453024) | 2.101375 / 1.468490 
(0.632885) | 0.827168 / 4.584777 (-3.757609) | 4.710167 / 3.745712 (0.964455) | 2.377062 / 5.269862 (-2.892800) | 1.712245 / 4.565676 (-2.853431) | 0.100620 / 0.424275 (-0.323655) | 0.014302 / 0.007607 (0.006695) | 0.590813 / 0.226044 (0.364769) | 5.871991 / 2.268929 (3.603063) | 2.722229 / 55.444624 (-52.722395) | 2.323585 / 6.876477 (-4.552892) | 2.503289 / 2.142072 (0.361217) | 0.983644 / 4.805227 (-3.821583) | 0.193942 / 6.500664 (-6.306722) | 0.076493 / 0.075469 (0.001024) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.463107 / 1.841788 (-0.378681) | 17.876918 / 8.074308 (9.802610) | 16.755740 / 10.191392 (6.564348) | 0.167556 / 0.680424 (-0.512868) | 0.020514 / 0.534201 (-0.513687) | 0.508385 / 0.579283 (-0.070898) | 0.505873 / 0.434364 (0.071509) | 0.603630 / 0.540337 (0.063293) | 0.708856 / 1.386936 (-0.678080) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008504 / 0.011353 (-0.002849) | 0.005894 / 0.011008 (-0.005114) | 0.085523 / 0.038508 (0.047015) | 0.038780 / 0.023109 (0.015671) | 0.402869 / 0.275898 (0.126971) | 0.423819 / 0.323480 (0.100339) | 0.006427 / 0.007986 (-0.001559) | 0.004598 / 0.004328 (0.000269) | 0.079807 / 0.004250 (0.075556) | 0.050852 / 0.037052 (0.013799) | 0.403232 / 0.258489 (0.144743) | 0.452489 / 0.293841 (0.158648) | 0.041501 / 0.128546 (-0.087045) | 0.014996 / 0.075646 (-0.060650) | 0.101548 / 0.419271 (-0.317724) | 0.056993 / 0.043533 (0.013461) | 0.403153 / 0.255139 (0.148014) | 0.424587 / 0.283200 (0.141388) | 0.114507 / 0.141683 (-0.027176) | 1.707098 / 1.452155 (0.254943) | 1.799008 / 1.492716 (0.306291) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288003 / 0.018006 (0.269996) | 0.496526 / 0.000490 (0.496036) | 0.010923 / 0.000200 (0.010723) | 0.000159 / 0.000054 (0.000105) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033948 / 0.037411 (-0.003463) | 0.142343 / 0.014526 (0.127817) | 0.143862 / 0.176557 (-0.032695) | 0.202655 / 0.737135 (-0.534480) | 0.151177 / 0.296338 (-0.145162) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.508003 / 0.215209 (0.292794) | 5.320394 / 2.077655 (3.242740) | 2.409854 / 1.504120 (0.905734) | 2.190656 / 1.541195 (0.649462) | 2.272171 / 1.468490 (0.803681) | 0.809492 / 4.584777 (-3.775285) | 4.554412 / 3.745712 (0.808699) | 4.413643 / 5.269862 (-0.856218) | 2.374034 / 4.565676 (-2.191642) | 0.099458 / 0.424275 (-0.324817) | 0.014553 / 0.007607 (0.006946) | 0.613916 / 0.226044 (0.387871) | 6.121430 / 2.268929 (3.852502) | 2.945661 / 55.444624 (-52.498964) | 2.595247 / 6.876477 (-4.281230) | 2.734047 / 2.142072 (0.591975) | 0.952217 / 4.805227 (-3.853010) | 0.196933 / 6.500664 (-6.303731) | 0.073391 / 0.075469 (-0.002078) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.475666 / 1.841788 (-0.366122) | 18.564281 / 8.074308 (10.489973) | 16.865259 / 10.191392 (6.673867) | 0.166494 / 0.680424 (-0.513930) | 0.020655 / 0.534201 (-0.513546) | 0.495120 / 0.579283 (-0.084163) | 0.502602 / 0.434364 (0.068238) | 0.622448 / 0.540337 (0.082110) | 0.721036 / 1.386936 (-0.665900) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#40c204c777793d64e8bb8ce357e9c07b3b303e41 \"CML watermark\")\n",
"Whoops mario you're off this week sorry. I'm taking the liberty to merge this one",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009079 / 0.011353 (-0.002274) | 0.005960 / 0.011008 (-0.005049) | 0.116530 / 0.038508 (0.078022) | 0.046649 / 0.023109 (0.023540) | 0.391906 / 0.275898 (0.116008) | 0.438892 / 0.323480 (0.115412) | 0.007134 / 0.007986 (-0.000851) | 0.004997 / 0.004328 (0.000668) | 0.085947 / 0.004250 (0.081697) | 0.059814 / 0.037052 (0.022762) | 0.396423 / 0.258489 (0.137934) | 0.455941 / 0.293841 (0.162100) | 0.042535 / 0.128546 (-0.086011) | 0.014667 / 0.075646 (-0.060980) | 0.402023 / 0.419271 (-0.017249) | 0.060381 / 0.043533 (0.016848) | 0.393829 / 0.255139 (0.138690) | 0.426557 / 0.283200 (0.143358) | 0.131519 / 0.141683 (-0.010163) | 1.758098 / 1.452155 (0.305943) | 1.848194 / 1.492716 (0.355478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236405 / 0.018006 (0.218399) | 0.611442 / 0.000490 (0.610952) | 0.005143 / 0.000200 (0.004943) | 0.000146 / 0.000054 (0.000092) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034317 / 0.037411 (-0.003094) | 0.182485 / 0.014526 (0.167959) | 0.183149 / 0.176557 (0.006592) | 0.293592 / 0.737135 (-0.443543) | 0.197137 / 0.296338 (-0.099202) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.475690 / 0.215209 (0.260481) | 4.757344 / 2.077655 (2.679690) | 2.184079 / 1.504120 (0.679959) | 1.956599 / 1.541195 (0.415404) | 2.043041 / 1.468490 
(0.574551) | 0.817602 / 4.584777 (-3.767175) | 6.432267 / 3.745712 (2.686555) | 5.999402 / 5.269862 (0.729541) | 3.095970 / 4.565676 (-1.469706) | 0.181589 / 0.424275 (-0.242686) | 0.023286 / 0.007607 (0.015679) | 1.090318 / 0.226044 (0.864274) | 7.919330 / 2.268929 (5.650401) | 2.702821 / 55.444624 (-52.741804) | 2.375442 / 6.876477 (-4.501034) | 2.543075 / 2.142072 (0.401003) | 1.011763 / 4.805227 (-3.793464) | 0.203676 / 6.500664 (-6.296988) | 0.080075 / 0.075469 (0.004606) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.875420 / 1.841788 (0.033632) | 23.059278 / 8.074308 (14.984970) | 19.250807 / 10.191392 (9.059415) | 0.323678 / 0.680424 (-0.356746) | 0.028682 / 0.534201 (-0.505519) | 0.698231 / 0.579283 (0.118948) | 0.668129 / 0.434364 (0.233765) | 0.831218 / 0.540337 (0.290880) | 0.941191 / 1.386936 (-0.445745) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.013122 / 0.011353 (0.001769) | 0.006123 / 0.011008 (-0.004886) | 0.090493 / 0.038508 (0.051985) | 0.070660 / 0.023109 (0.047551) | 0.413486 / 0.275898 (0.137588) | 0.450364 / 0.323480 (0.126884) | 0.010288 / 0.007986 (0.002302) | 0.006590 / 0.004328 (0.002261) | 0.087174 / 0.004250 (0.082923) | 0.077304 / 0.037052 (0.040252) | 0.428480 / 0.258489 (0.169991) | 0.459872 / 0.293841 (0.166032) | 0.060477 / 0.128546 (-0.068069) | 0.014859 / 0.075646 (-0.060788) | 0.103915 / 0.419271 (-0.315356) | 0.087466 / 0.043533 (0.043933) | 0.418644 / 0.255139 (0.163505) | 0.433409 / 0.283200 (0.150209) | 0.166716 / 0.141683 (0.025033) | 1.712068 / 1.452155 (0.259914) | 1.827869 / 1.492716 (0.335153) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.372491 / 0.018006 (0.354484) | 0.493426 / 0.000490 (0.492937) | 0.005497 / 0.000200 (0.005297) | 0.000129 / 0.000054 (0.000074) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036531 / 0.037411 (-0.000880) | 0.142152 / 0.014526 (0.127626) | 0.148183 / 0.176557 (-0.028373) | 0.212918 / 0.737135 (-0.524217) | 0.154092 / 0.296338 (-0.142246) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.551733 / 0.215209 (0.336524) | 5.421498 / 2.077655 (3.343843) | 2.418848 / 1.504120 (0.914728) | 2.213185 / 1.541195 (0.671991) | 2.294881 / 1.468490 (0.826391) | 0.827031 / 4.584777 (-3.757746) | 6.365622 / 3.745712 (2.619910) | 4.927996 / 5.269862 (-0.341866) | 2.756133 / 4.565676 (-1.809544) | 0.101474 / 0.424275 (-0.322801) | 0.014523 / 0.007607 (0.006916) | 0.619082 / 0.226044 (0.393037) | 6.200132 / 2.268929 (3.931204) | 3.015590 / 55.444624 (-52.429034) | 2.711181 / 6.876477 (-4.165296) | 2.857157 / 2.142072 (0.715084) | 0.993329 / 4.805227 (-3.811898) | 0.203364 / 6.500664 (-6.297301) | 0.079167 / 0.075469 (0.003698) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.709881 / 1.841788 (-0.131907) | 24.867536 / 8.074308 (16.793228) | 21.755361 / 10.191392 (11.563969) | 0.295837 / 0.680424 (-0.384586) | 0.031934 / 0.534201 (-0.502267) | 0.709994 / 0.579283 (0.130711) | 0.779656 / 0.434364 (0.345293) | 0.780669 / 0.540337 (0.240331) | 0.712808 / 1.386936 (-0.674128) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cf4a1951bdca7175adac9c8b85550e89dcceb6fa \"CML watermark\")\n"
] | 2023-04-24T10:34:03 | 2023-04-26T16:04:42 | 2023-04-26T15:54:44 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5784",
"html_url": "https://github.com/huggingface/datasets/pull/5784",
"diff_url": "https://github.com/huggingface/datasets/pull/5784.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5784.patch",
"merged_at": "2023-04-26T15:54:44"
} | When a subprocess hangs in `filter` or `map`, one should be able to get the subprocess' traceback when interrupting the main process. Right now it shows nothing.
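A rough illustration of the idea (hypothetical code, not the actual diff):
```python
# Illustrative only: on KeyboardInterrupt, still .get() each async result with
# a short timeout so a crashed worker's traceback is re-raised in the main
# process instead of being swallowed.
import multiprocess


def run_pool_with_tracebacks(func, shards, num_proc=2, timeout=0.05):
    with multiprocess.Pool(num_proc) as pool:
        async_results = [pool.apply_async(func, (shard,)) for shard in shards]
        try:
            return [result.get() for result in async_results]
        except KeyboardInterrupt:
            for result in async_results:
                try:
                    result.get(timeout=timeout)  # re-raises a worker's traceback
                except multiprocess.TimeoutError:
                    pass  # worker is hanging or already gone
            raise
```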
To achieve this, I `.get()` the subprocesses' async results even if the main process is stopped with e.g. `KeyboardInterrupt`. I added a timeout in case a subprocess is hanging or has crashed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5784/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5782 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5782/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5782/comments | https://api.github.com/repos/huggingface/datasets/issues/5782/events | https://github.com/huggingface/datasets/issues/5782 | 1,679,622,367 | I_kwDODunzps5kHQDf | 5,782 | Support for various audio-loading backends instead of always relying on SoundFile | {
"login": "BoringDonut",
"id": 129098876,
"node_id": "U_kgDOB7HkfA",
"avatar_url": "https://avatars.githubusercontent.com/u/129098876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BoringDonut",
"html_url": "https://github.com/BoringDonut",
"followers_url": "https://api.github.com/users/BoringDonut/followers",
"following_url": "https://api.github.com/users/BoringDonut/following{/other_user}",
"gists_url": "https://api.github.com/users/BoringDonut/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BoringDonut/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BoringDonut/subscriptions",
"organizations_url": "https://api.github.com/users/BoringDonut/orgs",
"repos_url": "https://api.github.com/users/BoringDonut/repos",
"events_url": "https://api.github.com/users/BoringDonut/events{/privacy}",
"received_events_url": "https://api.github.com/users/BoringDonut/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! \r\n\r\nYou can use `set_transform`/`with_transform` to define a custom decoding for audio formats not supported by `soundfile`:\r\n```python\r\naudio_dataset_amr = Dataset.from_dict({\"audio\": [\"audio_samples/audio.amr\"]})\r\n\r\ndef decode_audio(batch):\r\n batch[\"audio\"] = [read_ffmpeg(audio_path) for audio_path in batch[\"audio\"]]\r\n return batch\r\n\r\naudio_dataset_amr.set_transform(decode_amr) \r\n```\r\n\r\nSupporting multiple backends is more work to maintain, but we could consider this if we get more requests such as this one.",
"Could it be put somewhere as an example tip or something?",
"Considering the number of times a custom decoding transform has been suggested as a solution, an example in the [docs](https://huggingface.co/docs/datasets/process#format-transform) would be nice.\r\n\r\ncc @stevhliu "
] | 2023-04-22T17:09:25 | 2023-05-10T20:23:04 | 2023-05-10T20:23:04 | NONE | null | null | null | ### Feature request
Introduce an option to select from a variety of audio-loading backends rather than solely relying on the SoundFile library. For instance, if the ffmpeg library is installed, it can serve as a fallback loading option.
### Motivation
- The SoundFile library, used in [features/audio.py](https://github.com/huggingface/datasets/blob/649d5a3315f9e7666713b6affe318ee00c7163a0/src/datasets/features/audio.py#L185), supports only a [limited number of audio formats](https://pysoundfile.readthedocs.io/en/latest/index.html?highlight=supported#soundfile.available_formats).
- However, current methods for creating audio datasets permit the inclusion of audio files in formats not supported by SoundFile.
- As a result, developers may potentially create a dataset they cannot read back.
In my most recent project, I dealt with phone call recordings in `.amr` or `.gsm` formats and was genuinely surprised when I couldn't read the dataset I had just packaged a minute prior. Nonetheless, I can still accurately read these files using the librosa library, which employs the audioread library that internally leverages ffmpeg to read such files.
Example:
```python
audio_dataset_amr = Dataset.from_dict({"audio": ["audio_samples/audio.amr"]}).cast_column("audio", Audio())
audio_dataset_amr.save_to_disk("audio_dataset_amr")
audio_dataset_amr = Dataset.load_from_disk("audio_dataset_amr")
print(audio_dataset_amr[0])
```
Results in:
```
Traceback (most recent call last):
...
raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7f316323e4d0>: Format not recognised.
```
While I acknowledge that support for these rare file types may not be a priority, I believe it's quite unfortunate that it's possible to create an unreadable dataset in this manner.
### Your contribution
I've created a [simple demo repository](https://github.com/BoringDonut/hf-datasets-ffmpeg-audio) that highlights the mentioned issue. It demonstrates how to create an .amr dataset that results in an error when attempting to read it just a few lines later.
Additionally, I've made a [fork with a rudimentary solution](https://github.com/BoringDonut/datasets/blob/fea73a8fbbc8876467c7e6422c9360546c6372d8/src/datasets/features/audio.py#L189) that utilizes ffmpeg to load files not supported by SoundFile.
Here you can see that GitHub Actions fails to read the `.amr` dataset with the current version of `datasets`, but succeeds with the patched version:
- https://github.com/BoringDonut/hf-datasets-ffmpeg-audio/actions/runs/4773780420/jobs/8487063785
- https://github.com/BoringDonut/hf-datasets-ffmpeg-audio/actions/runs/4773780420/jobs/8487063829
As evident from the GitHub action above, this solution resolves the previously mentioned problem.
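For illustration, a minimal version of such an ffmpeg fallback might look like this (hedged sketch: it shells out to the `ffmpeg` CLI, which is assumed to be on `PATH`, and is not the exact code from the fork):
```python
import subprocess
import numpy as np

def read_ffmpeg(path: str, sampling_rate: int = 16000) -> dict:
    # Decode any ffmpeg-supported format (e.g. .amr, .gsm) to mono float32 PCM on stdout.
    cmd = [
        "ffmpeg", "-v", "quiet", "-i", path,
        "-f", "f32le", "-ac", "1", "-ar", str(sampling_rate), "pipe:1",
    ]
    pcm = subprocess.run(cmd, capture_output=True, check=True).stdout
    return {
        "path": path,
        "array": np.frombuffer(pcm, dtype=np.float32),
        "sampling_rate": sampling_rate,
    }
```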
I'd be happy to create a proper pull request, provide runtime benchmarks and tests if you could offer some guidance on the following:
- Where should I incorporate the ffmpeg (or other backends) code? For example, should I create a new file or simply add a function within the Audio class?
- Is it feasible to pass the audio-loading function as an argument within the current architecture? This would be useful if I know in advance that I'll be reading files not supported by SoundFile.
A few more notes:
- In theory, it's possible to load audio using librosa/audioread since librosa is already expected to be installed. However, librosa [will soon discontinue audioread support](https://github.com/librosa/librosa/blob/aacb4c134002903ae56bbd4b4a330519a5abacc0/librosa/core/audio.py#L227). Moreover, using audioread on its own seems inconvenient because it requires a file [path as input](https://github.com/beetbox/audioread/blob/ff9535df934c48038af7be9617fdebb12078cc07/audioread/__init__.py#L108) and cannot work with bytes already loaded into memory or an open file descriptor (as mentioned in [librosa docs](https://librosa.org/doc/main/generated/librosa.load.html#librosa.load), only SoundFile backend supports an open file descriptor as an input). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5782/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5782/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5781 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5781/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5781/comments | https://api.github.com/repos/huggingface/datasets/issues/5781/events | https://github.com/huggingface/datasets/issues/5781 | 1,679,580,460 | I_kwDODunzps5kHF0s | 5,781 | Error using `load_datasets` | {
"login": "gjyoungjr",
"id": 61463108,
"node_id": "MDQ6VXNlcjYxNDYzMTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/61463108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gjyoungjr",
"html_url": "https://github.com/gjyoungjr",
"followers_url": "https://api.github.com/users/gjyoungjr/followers",
"following_url": "https://api.github.com/users/gjyoungjr/following{/other_user}",
"gists_url": "https://api.github.com/users/gjyoungjr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gjyoungjr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gjyoungjr/subscriptions",
"organizations_url": "https://api.github.com/users/gjyoungjr/orgs",
"repos_url": "https://api.github.com/users/gjyoungjr/repos",
"events_url": "https://api.github.com/users/gjyoungjr/events{/privacy}",
"received_events_url": "https://api.github.com/users/gjyoungjr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"It looks like an issue with your installation of scipy, can you try reinstalling it ?",
"Sorry for the late reply, but that worked @lhoestq . Thanks for the assist."
] | 2023-04-22T15:10:44 | 2023-05-02T23:41:25 | 2023-05-02T23:41:25 | NONE | null | null | null | ### Describe the bug
I tried to load a dataset using the `datasets` library in a conda Jupyter notebook and got the error below.
```
ImportError: dlopen(/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/_iterative.cpython-38-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib
Referenced from: <65B094A2-59D7-31AC-A966-4DB9E11D2A15> /Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/_iterative.cpython-38-darwin.so
Reason: tried: '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/../../../../../../liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/../../../../../../liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/bin/../lib/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file, not in dyld cache)
```
### Steps to reproduce the bug
Run the `load_dataset` function.
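(The report doesn't include the exact call; presumably something like the following, where the dataset name is a placeholder — per the comments, the root cause was a broken scipy installation rather than any specific dataset:)
```python
from datasets import load_dataset

dataset = load_dataset("imdb")  # placeholder dataset name; the scipy import error occurs regardless
```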
### Expected behavior
I expected the dataset to be loaded into my notebook.
### Environment info
name: review_sense
channels:
- apple
- conda-forge
dependencies:
- python=3.8
- pip>=19.0
- jupyter
- tensorflow-deps
#- scikit-learn
#- scipy
- pandas
- pandas-datareader
- matplotlib
- pillow
- tqdm
- requests
- h5py
- pyyaml
- flask
- boto3
- ipykernel
- seaborn
- pip:
- tensorflow-macos==2.9
- tensorflow-metal==0.5.0
- bayesian-optimization
- gym
- kaggle
- huggingface_hub
- datasets
- numpy
- huggingface
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5781/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5780 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5780/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5780/comments | https://api.github.com/repos/huggingface/datasets/issues/5780/events | https://github.com/huggingface/datasets/issues/5780 | 1,679,367,149 | I_kwDODunzps5kGRvt | 5,780 | TypeError: 'NoneType' object does not support item assignment | {
"login": "v-yunbin",
"id": 38179632,
"node_id": "MDQ6VXNlcjM4MTc5NjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/38179632?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/v-yunbin",
"html_url": "https://github.com/v-yunbin",
"followers_url": "https://api.github.com/users/v-yunbin/followers",
"following_url": "https://api.github.com/users/v-yunbin/following{/other_user}",
"gists_url": "https://api.github.com/users/v-yunbin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/v-yunbin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/v-yunbin/subscriptions",
"organizations_url": "https://api.github.com/users/v-yunbin/orgs",
"repos_url": "https://api.github.com/users/v-yunbin/repos",
"events_url": "https://api.github.com/users/v-yunbin/events{/privacy}",
"received_events_url": "https://api.github.com/users/v-yunbin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2023-04-22T06:22:43 | 2023-04-23T08:49:18 | 2023-04-23T08:49:18 | NONE | null | null | null | command:
```python
from datasets import Audio, DatasetDict, load_dataset

# `args`, `train_split` and `test_split` are defined elsewhere in the surrounding script.
def load_datasets(fmt, data_dir, data_files, split):
    return load_dataset(fmt, data_dir=data_dir, data_files=data_files, split=split, streaming=True)

raw_datasets = DatasetDict()
raw_datasets["train"] = load_datasets("csv", args.datadir, "train.csv", split=train_split)
raw_datasets["test"] = load_datasets("csv", args.datadir, "dev.csv", split=test_split)
raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000))
```
error:
```
main()
File "peft_adalora_whisper_large_training.py", line 502, in main
raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000))
File "/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/datasets/dataset_dict.py", line 2015, in cast_column
info.features[column] = feature
TypeError: 'NoneType' object does not support item assignment
```
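(A plausible workaround — my assumption, not something confirmed in this thread — is that streaming splits belong in an `IterableDatasetDict` rather than a `DatasetDict`, since the streaming classes don't depend on the pre-computed `info.features` that is `None` here:)
```python
from datasets import Audio, IterableDatasetDict

# Same load_datasets helper as above, but collected in an IterableDatasetDict.
raw_datasets = IterableDatasetDict()
raw_datasets["train"] = load_datasets("csv", args.datadir, "train.csv", split=train_split)
raw_datasets["test"] = load_datasets("csv", args.datadir, "dev.csv", split=test_split)
raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000))
```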
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5780/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5779 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5779/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5779/comments | https://api.github.com/repos/huggingface/datasets/issues/5779/events | https://github.com/huggingface/datasets/pull/5779 | 1,678,669,865 | PR_kwDODunzps5O3sHp | 5,779 | Call fs.makedirs in save_to_disk | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007490 / 0.011353 (-0.003862) | 0.004957 / 0.011008 (-0.006051) | 0.096952 / 0.038508 (0.058444) | 0.034125 / 0.023109 (0.011016) | 0.301926 / 0.275898 (0.026028) | 0.330538 / 0.323480 (0.007058) | 0.005999 / 0.007986 (-0.001987) | 0.003948 / 0.004328 (-0.000380) | 0.073024 / 0.004250 (0.068773) | 0.050020 / 0.037052 (0.012967) | 0.299987 / 0.258489 (0.041498) | 0.336077 / 0.293841 (0.042237) | 0.035781 / 0.128546 (-0.092765) | 0.012159 / 0.075646 (-0.063487) | 0.333311 / 0.419271 (-0.085960) | 0.059925 / 0.043533 (0.016392) | 0.297772 / 0.255139 (0.042633) | 0.313447 / 0.283200 (0.030247) | 0.100991 / 0.141683 (-0.040692) | 1.472182 / 1.452155 (0.020027) | 1.553010 / 1.492716 (0.060294) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214222 / 0.018006 (0.196216) | 0.441579 / 0.000490 (0.441090) | 0.001030 / 0.000200 (0.000830) | 0.000194 / 0.000054 (0.000140) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026149 / 0.037411 (-0.011262) | 0.107324 / 0.014526 (0.092798) | 0.113390 / 0.176557 (-0.063167) | 0.170282 / 0.737135 (-0.566854) | 0.120601 / 0.296338 (-0.175737) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411795 / 0.215209 (0.196585) | 4.091412 / 2.077655 (2.013757) | 1.819597 / 1.504120 (0.315477) | 1.623413 / 1.541195 (0.082218) | 1.658959 / 1.468490 
(0.190469) | 0.697671 / 4.584777 (-3.887106) | 3.868855 / 3.745712 (0.123143) | 3.220448 / 5.269862 (-2.049414) | 1.796472 / 4.565676 (-2.769204) | 0.085817 / 0.424275 (-0.338458) | 0.012422 / 0.007607 (0.004815) | 0.520302 / 0.226044 (0.294258) | 5.062477 / 2.268929 (2.793548) | 2.275065 / 55.444624 (-53.169560) | 1.936717 / 6.876477 (-4.939759) | 2.069924 / 2.142072 (-0.072148) | 0.838964 / 4.805227 (-3.966264) | 0.170632 / 6.500664 (-6.330032) | 0.066011 / 0.075469 (-0.009458) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.190673 / 1.841788 (-0.651114) | 14.679478 / 8.074308 (6.605169) | 14.099743 / 10.191392 (3.908351) | 0.142556 / 0.680424 (-0.537868) | 0.017601 / 0.534201 (-0.516600) | 0.421301 / 0.579283 (-0.157982) | 0.418035 / 0.434364 (-0.016329) | 0.503799 / 0.540337 (-0.036539) | 0.588809 / 1.386936 (-0.798127) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007556 / 0.011353 (-0.003797) | 0.005283 / 0.011008 (-0.005725) | 0.075616 / 0.038508 (0.037107) | 0.034127 / 0.023109 (0.011018) | 0.345145 / 0.275898 (0.069247) | 0.377490 / 0.323480 (0.054010) | 0.006532 / 0.007986 (-0.001454) | 0.004145 / 0.004328 (-0.000183) | 0.074724 / 0.004250 (0.070473) | 0.048658 / 0.037052 (0.011605) | 0.339989 / 0.258489 (0.081500) | 0.398240 / 0.293841 (0.104399) | 0.037433 / 0.128546 (-0.091114) | 0.012410 / 0.075646 (-0.063237) | 0.088110 / 0.419271 (-0.331162) | 0.050635 / 0.043533 (0.007103) | 0.351878 / 0.255139 (0.096739) | 0.365707 / 0.283200 (0.082508) | 0.104342 / 0.141683 (-0.037341) | 1.438009 / 1.452155 (-0.014145) | 1.533616 / 1.492716 (0.040900) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225570 / 0.018006 (0.207563) | 0.442482 / 0.000490 (0.441992) | 0.000402 / 0.000200 (0.000202) | 0.000063 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030348 / 0.037411 (-0.007063) | 0.111402 / 0.014526 (0.096877) | 0.123365 / 0.176557 (-0.053192) | 0.175604 / 0.737135 (-0.561531) | 0.128458 / 0.296338 (-0.167881) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426054 / 0.215209 (0.210845) | 4.255050 / 2.077655 (2.177395) | 2.039568 / 1.504120 (0.535448) | 1.856842 / 1.541195 (0.315647) | 1.923792 / 1.468490 (0.455301) | 0.701023 / 4.584777 (-3.883754) | 3.746632 / 3.745712 (0.000920) | 2.055563 / 5.269862 (-3.214298) | 1.308068 / 4.565676 (-3.257608) | 0.085524 / 0.424275 (-0.338751) | 0.012103 / 0.007607 (0.004496) | 0.522929 / 0.226044 (0.296885) | 5.258133 / 2.268929 (2.989205) | 2.458440 / 55.444624 (-52.986185) | 2.141681 / 6.876477 (-4.734796) | 2.258667 / 2.142072 (0.116595) | 0.842533 / 4.805227 (-3.962694) | 0.168089 / 6.500664 (-6.332575) | 0.063707 / 0.075469 (-0.011762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.312252 / 1.841788 (-0.529536) | 14.939185 / 8.074308 (6.864877) | 14.479845 / 10.191392 (4.288453) | 0.162557 / 0.680424 (-0.517867) | 0.017660 / 0.534201 (-0.516541) | 0.423261 / 0.579283 (-0.156023) | 0.417693 / 0.434364 (-0.016671) | 0.495440 / 0.540337 (-0.044897) | 0.589932 / 1.386936 (-0.797004) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4e3c86574155961097b367d5cddda5bd13c42b09 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008796 / 0.011353 (-0.002557) | 0.005828 / 0.011008 (-0.005180) | 0.118629 / 0.038508 (0.080121) | 0.042435 / 0.023109 (0.019326) | 0.383780 / 0.275898 (0.107882) | 0.420344 / 0.323480 (0.096864) | 0.006855 / 0.007986 (-0.001130) | 0.006290 / 0.004328 (0.001962) | 0.087160 / 0.004250 (0.082910) | 0.057568 / 0.037052 (0.020516) | 0.378761 / 0.258489 (0.120272) | 0.426496 / 0.293841 (0.132655) | 0.041772 / 0.128546 (-0.086774) | 0.014226 / 0.075646 (-0.061420) | 0.400097 / 0.419271 (-0.019174) | 0.060402 / 0.043533 (0.016870) | 0.381955 / 0.255139 (0.126816) | 0.399110 / 0.283200 (0.115911) | 0.124608 / 0.141683 (-0.017075) | 1.737856 / 1.452155 (0.285702) | 1.829034 / 1.492716 (0.336318) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219941 / 0.018006 (0.201934) | 0.497156 / 0.000490 (0.496666) | 0.005094 / 0.000200 (0.004894) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032144 / 0.037411 (-0.005268) | 0.131782 / 0.014526 (0.117256) | 0.141543 / 0.176557 (-0.035014) | 0.211419 / 0.737135 (-0.525716) | 0.147338 / 0.296338 (-0.149001) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478345 / 0.215209 (0.263136) | 4.749506 / 2.077655 (2.671851) | 2.195794 / 1.504120 (0.691674) | 1.978126 / 1.541195 (0.436932) | 2.059941 / 1.468490 
(0.591451) | 0.821959 / 4.584777 (-3.762818) | 5.737479 / 3.745712 (1.991767) | 2.507125 / 5.269862 (-2.762737) | 2.051772 / 4.565676 (-2.513905) | 0.100619 / 0.424275 (-0.323656) | 0.014437 / 0.007607 (0.006830) | 0.599484 / 0.226044 (0.373440) | 5.977579 / 2.268929 (3.708651) | 2.708143 / 55.444624 (-52.736482) | 2.320279 / 6.876477 (-4.556198) | 2.510172 / 2.142072 (0.368100) | 1.006279 / 4.805227 (-3.798948) | 0.199812 / 6.500664 (-6.300853) | 0.077967 / 0.075469 (0.002498) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.510171 / 1.841788 (-0.331616) | 21.099446 / 8.074308 (13.025138) | 17.634225 / 10.191392 (7.442833) | 0.223506 / 0.680424 (-0.456918) | 0.023845 / 0.534201 (-0.510356) | 0.613489 / 0.579283 (0.034206) | 0.685735 / 0.434364 (0.251371) | 0.652485 / 0.540337 (0.112148) | 0.734756 / 1.386936 (-0.652180) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008444 / 0.011353 (-0.002909) | 0.005789 / 0.011008 (-0.005220) | 0.088297 / 0.038508 (0.049789) | 0.040847 / 0.023109 (0.017737) | 0.411748 / 0.275898 (0.135850) | 0.452320 / 0.323480 (0.128841) | 0.006689 / 0.007986 (-0.001296) | 0.006029 / 0.004328 (0.001701) | 0.086080 / 0.004250 (0.081830) | 0.053310 / 0.037052 (0.016257) | 0.402568 / 0.258489 (0.144079) | 0.459047 / 0.293841 (0.165206) | 0.041203 / 0.128546 (-0.087343) | 0.014216 / 0.075646 (-0.061431) | 0.102729 / 0.419271 (-0.316543) | 0.057170 / 0.043533 (0.013637) | 0.407137 / 0.255139 (0.151998) | 0.429703 / 0.283200 (0.146503) | 0.123528 / 0.141683 (-0.018155) | 1.690026 / 1.452155 (0.237872) | 1.797793 / 1.492716 (0.305077) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.264581 / 0.018006 (0.246575) | 0.498981 / 0.000490 (0.498492) | 0.000462 / 0.000200 (0.000262) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034613 / 0.037411 (-0.002798) | 0.136596 / 0.014526 (0.122070) | 0.142183 / 0.176557 (-0.034374) | 0.201816 / 0.737135 (-0.535320) | 0.148843 / 0.296338 (-0.147496) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.506708 / 0.215209 (0.291499) | 5.042829 / 2.077655 (2.965175) | 2.448414 / 1.504120 (0.944295) | 2.213251 / 1.541195 (0.672056) | 2.255805 / 1.468490 (0.787315) | 0.829929 / 4.584777 (-3.754848) | 5.145717 / 3.745712 (1.400004) | 2.493947 / 5.269862 (-2.775915) | 1.676171 / 4.565676 (-2.889506) | 0.102097 / 0.424275 (-0.322178) | 0.014545 / 0.007607 (0.006938) | 0.635473 / 0.226044 (0.409429) | 6.306767 / 2.268929 (4.037839) | 3.050284 / 55.444624 (-52.394341) | 2.653175 / 6.876477 (-4.223302) | 2.850569 / 2.142072 (0.708496) | 1.355280 / 4.805227 (-3.449947) | 0.248112 / 6.500664 (-6.252552) | 0.091993 / 0.075469 (0.016524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.837509 / 1.841788 (-0.004279) | 21.268838 / 8.074308 (13.194530) | 17.338053 / 10.191392 (7.146660) | 0.232263 / 0.680424 (-0.448161) | 0.029093 / 0.534201 (-0.505108) | 0.651056 / 0.579283 (0.071773) | 0.617623 / 0.434364 (0.183259) | 0.773921 / 0.540337 (0.233584) | 0.705118 / 1.386936 (-0.681818) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#35846fd54fa16aa72ff344d15c98b5e08c5effe0 \"CML watermark\")\n"
] | 2023-04-21T15:04:28 | 2023-04-26T12:20:01 | 2023-04-26T12:11:15 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5779",
"html_url": "https://github.com/huggingface/datasets/pull/5779",
"diff_url": "https://github.com/huggingface/datasets/pull/5779.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5779.patch",
"merged_at": "2023-04-26T12:11:15"
} | We need to call `fs.makedirs` when saving a dataset using `save_to_disk`, because some fs implementations have actual directories (S3 and others don't)
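For illustration, a hedged sketch of the kind of call this adds (the helper and its name are assumptions; only the `fs.makedirs` call reflects the PR description):
```python
import fsspec

def ensure_output_dir(dataset_dict_path, storage_options=None):
    # Resolve the filesystem (local, s3, sftp, ...) from the URL-style path.
    fs, _, (path,) = fsspec.get_fs_token_paths(dataset_dict_path, storage_options=storage_options)
    # Harmless on object stores like S3, but creates the directory on
    # filesystems that have real directories (local disk, SFTP, ...).
    fs.makedirs(path, exist_ok=True)
```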
Close https://github.com/huggingface/datasets/issues/5775 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5779/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5777 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5777/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5777/comments | https://api.github.com/repos/huggingface/datasets/issues/5777/events | https://github.com/huggingface/datasets/issues/5777 | 1,677,655,969 | I_kwDODunzps5j_v-h | 5,777 | datasets.load_dataset("code_search_net", "python") : NotADirectoryError: [Errno 20] Not a directory | {
"login": "jason-brian-anderson",
"id": 34688597,
"node_id": "MDQ6VXNlcjM0Njg4NTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/34688597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jason-brian-anderson",
"html_url": "https://github.com/jason-brian-anderson",
"followers_url": "https://api.github.com/users/jason-brian-anderson/followers",
"following_url": "https://api.github.com/users/jason-brian-anderson/following{/other_user}",
"gists_url": "https://api.github.com/users/jason-brian-anderson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jason-brian-anderson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jason-brian-anderson/subscriptions",
"organizations_url": "https://api.github.com/users/jason-brian-anderson/orgs",
"repos_url": "https://api.github.com/users/jason-brian-anderson/repos",
"events_url": "https://api.github.com/users/jason-brian-anderson/events{/privacy}",
"received_events_url": "https://api.github.com/users/jason-brian-anderson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Note:\r\nI listed the datasets and grepped around to find what appears to be an alternative source for this:\r\n\r\nraw_datasets = load_dataset(\"espejelomar/code_search_net_python_10000_examples\", \"python\")",
"Thanks for reporting, @jason-brian-anderson.\r\n\r\nYes, this is a known issue: the [CodeSearchNet](https://github.com/github/CodeSearchNet) repo has been archived (Apr 11, 2023) and their source data files are no longer accessible in their S3: e.g. https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/python.zip gives 403 Forbidden error. See:\r\n- https://huggingface.co/datasets/code_search_net/discussions/3\r\n\r\nWe have contacted one of the authors of the dataset to find a solution. I'll keep you informed.\r\n\r\nCC: @hamelsmu",
"cc: @julianeagu",
"This issue is fixed because we are hosting the CodeSearchNet data files in the Hugging Face Hub. See: https://huggingface.co/datasets/code_search_net/discussions/7"
] | 2023-04-21T02:08:07 | 2023-05-11T11:51:56 | 2023-05-11T11:51:56 | NONE | null | null | null | ### Describe the bug
While working through the [tokenizer tutorial](https://huggingface.co/course/chapter6/2?fw=pt), I hit an error while initially downloading the Python dataset used in the examples.
The [Colab notebook with the error is here](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter6/section2.ipynb#scrollTo=hGb69Yo3eV8S).
```
from datasets import load_dataset
import os
os.environ["HF_DATASETS_CACHE"] = "/workspace"
# This can take a few minutes to load, so grab a coffee or tea while you wait!
raw_datasets = load_dataset("code_search_net", "python")
```
yields:
```
File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:524, in xlistdir(path, use_auth_token)
522 main_hop, *rest_hops = _as_str(path).split("::")
523 if is_local_path(main_hop):
--> 524 return os.listdir(path)
525 else:
526 # globbing inside a zip in a private repo requires authentication
527 if not rest_hops and (main_hop.startswith("http://") or main_hop.startswith("https://")):
NotADirectoryError: [Errno 20] Not a directory: '/workspace/downloads/25ceeb4c25ab737d688bd56ea92bfbb1f199fe572470456cf2d675479f342ac7/python/final/jsonl/train'
```
I was able to reproduce this error both in the Colab and in my own `pytorch/pytorch` container pulled from the official PyTorch image on Docker Hub, so I think it may be a server-side issue.
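(A stopgap that came up in the comments is a community mirror of the Python subset — note it only covers the ~10,000 examples its name suggests, not the full corpus:)
```python
from datasets import load_dataset

# Community mirror mentioned in the discussion; not the official code_search_net files.
raw_datasets = load_dataset("espejelomar/code_search_net_python_10000_examples", "python")
```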
### Steps to reproduce the bug
Steps to reproduce the issue:
1. run `raw_datasets = load_dataset("code_search_net", "python")`
### Expected behavior
I expect the code not to raise an exception during the dataset pull.
### Environment info
I tried the default `HF_DATASETS_CACHE` both on Colab and in my local container. I then pointed `HF_DATASETS_CACHE` to large-capacity local storage, and the problem was consistent across all three scenarios. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5777/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5775/comments | https://api.github.com/repos/huggingface/datasets/issues/5775/events | https://github.com/huggingface/datasets/issues/5775 | 1,677,089,901 | I_kwDODunzps5j9lxt | 5,775 | ArrowDataset.save_to_disk lost some logic of remote | {
"login": "Zoupers",
"id": 29817738,
"node_id": "MDQ6VXNlcjI5ODE3NzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/29817738?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zoupers",
"html_url": "https://github.com/Zoupers",
"followers_url": "https://api.github.com/users/Zoupers/followers",
"following_url": "https://api.github.com/users/Zoupers/following{/other_user}",
"gists_url": "https://api.github.com/users/Zoupers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zoupers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zoupers/subscriptions",
"organizations_url": "https://api.github.com/users/Zoupers/orgs",
"repos_url": "https://api.github.com/users/Zoupers/repos",
"events_url": "https://api.github.com/users/Zoupers/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zoupers/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"We just fixed this on `main` and will do a new release soon :)"
] | 2023-04-20T16:58:01 | 2023-04-26T12:11:36 | 2023-04-26T12:11:17 | NONE | null | null | null | ### Describe the bug
https://github.com/huggingface/datasets/blob/e7ce0ac60c7efc10886471932854903a7c19f172/src/datasets/arrow_dataset.py#L1371
Here is the buggy spot: when I save a `DatasetDict` whose items look like `[('train', Dataset({features: ..., num_rows: ...}))]`, there is no guarantee that a directory named `train` exists under `dataset_dict_path`.
### Steps to reproduce the bug
1. Mock a `DatasetDict` whose items look like the example above.
2. Call `save_to_disk` with `storage_options`; a local SFTP server works. The code may look like this:
```python
from datasets import load_dataset
dataset = load_dataset(...)
dataset.save_to_disk('sftp:///tmp', storage_options={'host': 'localhost', 'username': 'admin'})
```
I suppose you can reproduce the bug with these steps.
### Expected behavior
`save_to_disk` should create the folder if it does not exist, just like it does locally.
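(Until then, a hedged user-side workaround — assuming fsspec's SFTP filesystem, which is backed by paramiko — is to pre-create the split directories before saving:)
```python
import fsspec

# Pre-create the directories that save_to_disk will write into.
fs = fsspec.filesystem("sftp", host="localhost", username="admin")
for split in ("train", "test"):
    fs.makedirs(f"/tmp/{split}", exist_ok=True)

dataset.save_to_disk("sftp:///tmp", storage_options={"host": "localhost", "username": "admin"})
```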
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-6.2.10-arch1-1-x86_64-with-glibc2.35
- Python version: 3.10.9
- Huggingface_hub version: 0.13.2
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5775/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5774 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5774/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5774/comments | https://api.github.com/repos/huggingface/datasets/issues/5774/events | https://github.com/huggingface/datasets/pull/5774 | 1,676,716,662 | PR_kwDODunzps5OxIMe | 5,774 | Fix style | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010336 / 0.011353 (-0.001017) | 0.007085 / 0.011008 (-0.003924) | 0.135577 / 0.038508 (0.097069) | 0.038301 / 0.023109 (0.015192) | 0.427919 / 0.275898 (0.152021) | 0.461451 / 0.323480 (0.137971) | 0.008929 / 0.007986 (0.000944) | 0.005260 / 0.004328 (0.000931) | 0.103481 / 0.004250 (0.099231) | 0.054885 / 0.037052 (0.017833) | 0.434956 / 0.258489 (0.176467) | 0.466915 / 0.293841 (0.173074) | 0.052403 / 0.128546 (-0.076144) | 0.021128 / 0.075646 (-0.054518) | 0.466847 / 0.419271 (0.047576) | 0.085096 / 0.043533 (0.041563) | 0.439935 / 0.255139 (0.184796) | 0.453613 / 0.283200 (0.170413) | 0.123913 / 0.141683 (-0.017769) | 1.930114 / 1.452155 (0.477959) | 2.052083 / 1.492716 (0.559366) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280612 / 0.018006 (0.262606) | 0.583937 / 0.000490 (0.583447) | 0.004542 / 0.000200 (0.004342) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035901 / 0.037411 (-0.001510) | 0.160357 / 0.014526 (0.145831) | 0.141661 / 0.176557 (-0.034896) | 0.234915 / 0.737135 (-0.502220) | 0.164110 / 0.296338 (-0.132228) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.659901 / 0.215209 (0.444692) | 6.529102 / 2.077655 (4.451447) | 2.635324 / 1.504120 (1.131204) | 2.275777 / 1.541195 (0.734583) | 2.343205 / 1.468490 
(0.874715) | 1.241310 / 4.584777 (-3.343467) | 5.683784 / 3.745712 (1.938072) | 3.377162 / 5.269862 (-1.892700) | 2.176404 / 4.565676 (-2.389273) | 0.144303 / 0.424275 (-0.279972) | 0.016352 / 0.007607 (0.008745) | 0.817383 / 0.226044 (0.591339) | 8.148356 / 2.268929 (5.879428) | 3.489277 / 55.444624 (-51.955347) | 2.848086 / 6.876477 (-4.028391) | 2.973304 / 2.142072 (0.831232) | 1.517821 / 4.805227 (-3.287407) | 0.278794 / 6.500664 (-6.221870) | 0.096385 / 0.075469 (0.020916) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631693 / 1.841788 (-0.210095) | 19.564716 / 8.074308 (11.490408) | 23.583081 / 10.191392 (13.391689) | 0.252363 / 0.680424 (-0.428061) | 0.027644 / 0.534201 (-0.506557) | 0.579634 / 0.579283 (0.000351) | 0.645702 / 0.434364 (0.211338) | 0.667302 / 0.540337 (0.126965) | 0.766425 / 1.386936 (-0.620511) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011186 / 0.011353 (-0.000167) | 0.007327 / 0.011008 (-0.003681) | 0.105441 / 0.038508 (0.066933) | 0.040293 / 0.023109 (0.017184) | 0.480557 / 0.275898 (0.204659) | 0.522049 / 0.323480 (0.198569) | 0.007779 / 0.007986 (-0.000207) | 0.007338 / 0.004328 (0.003009) | 0.104744 / 0.004250 (0.100494) | 0.059463 / 0.037052 (0.022411) | 0.494055 / 0.258489 (0.235566) | 0.534340 / 0.293841 (0.240499) | 0.062800 / 0.128546 (-0.065746) | 0.020687 / 0.075646 (-0.054959) | 0.135833 / 0.419271 (-0.283439) | 0.087472 / 0.043533 (0.043939) | 0.465019 / 0.255139 (0.209880) | 0.526713 / 0.283200 (0.243513) | 0.131424 / 0.141683 (-0.010259) | 1.884759 / 1.452155 (0.432605) | 2.015817 / 1.492716 (0.523101) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237032 / 0.018006 (0.219026) | 0.605209 / 0.000490 (0.604719) | 0.006653 / 0.000200 (0.006453) | 0.000264 / 0.000054 (0.000210) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034982 / 0.037411 (-0.002430) | 0.141409 / 0.014526 (0.126883) | 0.151635 / 0.176557 (-0.024922) | 0.217298 / 0.737135 (-0.519837) | 0.171945 / 0.296338 (-0.124393) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.678596 / 0.215209 (0.463387) | 6.802432 / 2.077655 (4.724777) | 3.021617 / 1.504120 (1.517497) | 2.722508 / 1.541195 (1.181313) | 2.728194 / 1.468490 (1.259704) | 1.245863 / 4.584777 (-3.338914) | 5.762676 / 3.745712 (2.016963) | 5.497855 / 5.269862 (0.227994) | 2.855764 / 4.565676 (-1.709912) | 0.157359 / 0.424275 (-0.266916) | 0.015562 / 0.007607 (0.007955) | 0.865559 / 0.226044 (0.639515) | 8.553052 / 2.268929 (6.284123) | 3.905544 / 55.444624 (-51.539081) | 3.272528 / 6.876477 (-3.603949) | 3.399481 / 2.142072 (1.257408) | 1.540155 / 4.805227 (-3.265072) | 0.275871 / 6.500664 (-6.224793) | 0.092346 / 0.075469 (0.016877) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.753646 / 1.841788 (-0.088142) | 20.074050 / 8.074308 (11.999742) | 23.920391 / 10.191392 (13.728999) | 0.257161 / 0.680424 (-0.423263) | 0.027805 / 0.534201 (-0.506396) | 0.565605 / 0.579283 (-0.013678) | 0.643277 / 0.434364 (0.208914) | 0.633504 / 0.540337 (0.093167) | 0.754317 / 1.386936 (-0.632619) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2d34c7968ea1a3fe7d4fa7cdf23673e0354f69ac \"CML watermark\")\n"
] | 2023-04-20T13:21:32 | 2023-04-20T13:34:26 | 2023-04-20T13:24:28 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5774",
"html_url": "https://github.com/huggingface/datasets/pull/5774",
"diff_url": "https://github.com/huggingface/datasets/pull/5774.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5774.patch",
"merged_at": "2023-04-20T13:24:28"
} | Fix C419 issues | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5774/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5772/comments | https://api.github.com/repos/huggingface/datasets/issues/5772/events | https://github.com/huggingface/datasets/pull/5772 | 1,675,033,510 | PR_kwDODunzps5OreXV | 5,772 | Fix JSON builder when missing keys in first row | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009262 / 0.011353 (-0.002091) | 0.006157 / 0.011008 (-0.004851) | 0.125960 / 0.038508 (0.087451) | 0.036213 / 0.023109 (0.013104) | 0.399331 / 0.275898 (0.123433) | 0.453597 / 0.323480 (0.130117) | 0.006990 / 0.007986 (-0.000995) | 0.007320 / 0.004328 (0.002991) | 0.100321 / 0.004250 (0.096070) | 0.048870 / 0.037052 (0.011818) | 0.396284 / 0.258489 (0.137795) | 0.475619 / 0.293841 (0.181778) | 0.052329 / 0.128546 (-0.076217) | 0.019564 / 0.075646 (-0.056083) | 0.430942 / 0.419271 (0.011670) | 0.063224 / 0.043533 (0.019692) | 0.391717 / 0.255139 (0.136578) | 0.448342 / 0.283200 (0.165142) | 0.114055 / 0.141683 (-0.027628) | 1.793204 / 1.452155 (0.341049) | 1.895151 / 1.492716 (0.402435) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.283699 / 0.018006 (0.265693) | 0.597194 / 0.000490 (0.596704) | 0.007143 / 0.000200 (0.006944) | 0.000602 / 0.000054 (0.000548) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034761 / 0.037411 (-0.002651) | 0.124555 / 0.014526 (0.110030) | 0.149126 / 0.176557 (-0.027430) | 0.220335 / 0.737135 (-0.516801) | 0.153109 / 0.296338 (-0.143229) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.620210 / 0.215209 (0.405001) | 6.229937 / 2.077655 (4.152282) | 2.615203 / 1.504120 (1.111083) | 2.239337 / 1.541195 (0.698143) | 2.262138 / 1.468490 
(0.793648) | 1.196498 / 4.584777 (-3.388279) | 5.609932 / 3.745712 (1.864220) | 3.031347 / 5.269862 (-2.238515) | 2.025530 / 4.565676 (-2.540146) | 0.139828 / 0.424275 (-0.284447) | 0.015476 / 0.007607 (0.007869) | 0.768964 / 0.226044 (0.542920) | 7.728677 / 2.268929 (5.459748) | 3.336407 / 55.444624 (-52.108217) | 2.700055 / 6.876477 (-4.176422) | 2.765223 / 2.142072 (0.623151) | 1.409073 / 4.805227 (-3.396155) | 0.246849 / 6.500664 (-6.253815) | 0.081231 / 0.075469 (0.005762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.593836 / 1.841788 (-0.247952) | 18.020525 / 8.074308 (9.946216) | 21.766822 / 10.191392 (11.575430) | 0.258615 / 0.680424 (-0.421809) | 0.026895 / 0.534201 (-0.507306) | 0.529823 / 0.579283 (-0.049460) | 0.623470 / 0.434364 (0.189106) | 0.628171 / 0.540337 (0.087833) | 0.745249 / 1.386936 (-0.641687) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008624 / 0.011353 (-0.002729) | 0.006317 / 0.011008 (-0.004691) | 0.097315 / 0.038508 (0.058807) | 0.035217 / 0.023109 (0.012108) | 0.440197 / 0.275898 (0.164299) | 0.473863 / 0.323480 (0.150383) | 0.006722 / 0.007986 (-0.001264) | 0.006444 / 0.004328 (0.002116) | 0.102056 / 0.004250 (0.097806) | 0.047142 / 0.037052 (0.010089) | 0.452476 / 0.258489 (0.193986) | 0.487619 / 0.293841 (0.193778) | 0.052456 / 0.128546 (-0.076090) | 0.018735 / 0.075646 (-0.056911) | 0.114656 / 0.419271 (-0.304616) | 0.062577 / 0.043533 (0.019044) | 0.444471 / 0.255139 (0.189332) | 0.494264 / 0.283200 (0.211065) | 0.117112 / 0.141683 (-0.024571) | 1.848965 / 1.452155 (0.396810) | 1.984008 / 1.492716 (0.491292) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.290494 / 0.018006 (0.272488) | 0.588415 / 0.000490 (0.587925) | 0.000459 / 0.000200 (0.000259) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032873 / 0.037411 (-0.004538) | 0.131139 / 0.014526 (0.116614) | 0.140268 / 0.176557 (-0.036289) | 0.204561 / 0.737135 (-0.532574) | 0.147443 / 0.296338 (-0.148895) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.636899 / 0.215209 (0.421690) | 6.236139 / 2.077655 (4.158484) | 2.801468 / 1.504120 (1.297348) | 2.398808 / 1.541195 (0.857613) | 2.493150 / 1.468490 (1.024659) | 1.228845 / 4.584777 (-3.355932) | 5.675874 / 3.745712 (1.930162) | 3.084939 / 5.269862 (-2.184922) | 2.061310 / 4.565676 (-2.504367) | 0.142285 / 0.424275 (-0.281990) | 0.014972 / 0.007607 (0.007365) | 0.786599 / 0.226044 (0.560555) | 7.876036 / 2.268929 (5.607107) | 3.476136 / 55.444624 (-51.968489) | 2.847922 / 6.876477 (-4.028555) | 3.040326 / 2.142072 (0.898253) | 1.448538 / 4.805227 (-3.356690) | 0.257230 / 6.500664 (-6.243434) | 0.085137 / 0.075469 (0.009668) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.668173 / 1.841788 (-0.173615) | 18.668520 / 8.074308 (10.594212) | 20.535542 / 10.191392 (10.344150) | 0.244580 / 0.680424 (-0.435844) | 0.026364 / 0.534201 (-0.507837) | 0.531753 / 0.579283 (-0.047530) | 0.616578 / 0.434364 (0.182214) | 0.618906 / 0.540337 (0.078569) | 0.738785 / 1.386936 (-0.648151) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f7265cafa3103d77d6d52aa897088faefcd96659 \"CML watermark\")\n"
] | 2023-04-19T14:32:57 | 2023-04-21T06:45:13 | 2023-04-21T06:35:27 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5772",
"html_url": "https://github.com/huggingface/datasets/pull/5772",
"diff_url": "https://github.com/huggingface/datasets/pull/5772.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5772.patch",
"merged_at": "2023-04-21T06:35:27"
} | Until now, the JSON builder only considered the keys present in the first element of the list:
- Either explicitly: by passing index 0 in `dataset[0].keys()`
- Or implicitly: `pa.Table.from_pylist(dataset)`, where "schema (default None): If not passed, will be inferred from the first row of the mapping values"
This PR fixes the bug by considering the union of the keys present in all the rows.
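A minimal sketch of the failure mode, based on the pyarrow doc quoted above (exact inference behavior may vary across pyarrow versions; the rows are illustrative):
```
import pyarrow as pa

rows = [
    {"a": 1},            # first row is missing key "b"
    {"a": 2, "b": "x"},  # a later row has it
]

# Under first-row schema inference, column "b" is silently dropped.
table = pa.Table.from_pylist(rows)
print(table.column_names)  # expected under first-row inference: ['a']

# The idea behind the fix: take the union of keys across all rows.
all_keys = {key for row in rows for key in row}
print(sorted(all_keys))  # ['a', 'b']
```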
Fix #5726. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5772/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5771 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5771/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5771/comments | https://api.github.com/repos/huggingface/datasets/issues/5771/events | https://github.com/huggingface/datasets/issues/5771 | 1,674,828,380 | I_kwDODunzps5j09pc | 5,771 | Support cloud storage for loading datasets | {
"login": "eli-osherovich",
"id": 2437102,
"node_id": "MDQ6VXNlcjI0MzcxMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eli-osherovich",
"html_url": "https://github.com/eli-osherovich",
"followers_url": "https://api.github.com/users/eli-osherovich/followers",
"following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}",
"gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions",
"organizations_url": "https://api.github.com/users/eli-osherovich/orgs",
"repos_url": "https://api.github.com/users/eli-osherovich/repos",
"events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}",
"received_events_url": "https://api.github.com/users/eli-osherovich/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"A duplicate of https://github.com/huggingface/datasets/issues/5281"
] | 2023-04-19T12:43:53 | 2023-05-07T17:47:41 | 2023-05-07T17:47:41 | CONTRIBUTOR | null | null | null | ### Feature request
It seems that the current implementation supports cloud storage only for `load_from_disk`. It would be nice if a similar functionality existed in `load_dataset`.
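A sketch of the desired usage (the bucket name and credentials are illustrative, and the `load_dataset` call shows the requested behavior rather than something that works today):
```
from datasets import load_dataset, load_from_disk

storage_options = {"key": "...", "secret": "..."}  # illustrative s3fs credentials

# Works today: reading a previously saved dataset from cloud storage.
ds = load_from_disk("s3://my-bucket/my-dataset", storage_options=storage_options)

# Requested: the same for load_dataset, e.g. raw data files in the cloud.
ds = load_dataset("csv", data_files="s3://my-bucket/data/*.csv",
                  storage_options=storage_options)
```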
### Motivation
Motivation is pretty clear -- let users work with datasets located in the cloud.
### Your contribution
I can help implement this. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5771/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5770/comments | https://api.github.com/repos/huggingface/datasets/issues/5770/events | https://github.com/huggingface/datasets/pull/5770 | 1,673,581,555 | PR_kwDODunzps5OmntV | 5,770 | Add IterableDataset.from_spark | {
"login": "maddiedawson",
"id": 106995444,
"node_id": "U_kgDOBmCe9A",
"avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maddiedawson",
"html_url": "https://github.com/maddiedawson",
"followers_url": "https://api.github.com/users/maddiedawson/followers",
"following_url": "https://api.github.com/users/maddiedawson/following{/other_user}",
"gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions",
"organizations_url": "https://api.github.com/users/maddiedawson/orgs",
"repos_url": "https://api.github.com/users/maddiedawson/repos",
"events_url": "https://api.github.com/users/maddiedawson/events{/privacy}",
"received_events_url": "https://api.github.com/users/maddiedawson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi again @lhoestq this is ready for review! Not sure I have permission to add people to the reviewers list...",
"Cool ! I think you can define `IterableDataset.from_spark` instead of adding `streaming=` in `Dataset.from_spark`, it can be more intuitive IMO :)",
"Thanks for reviewing! I moved the streaming behavior to IterableDataset.from_spark",
"Thanks Quentin! I'll flesh out the docs in a follow-up PR",
"Friendly ping @lhoestq ",
"Thanks @lhoestq ! I fixed the partition order thing and added more unit tests.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006165 / 0.011353 (-0.005188) | 0.004497 / 0.011008 (-0.006511) | 0.099142 / 0.038508 (0.060634) | 0.027479 / 0.023109 (0.004369) | 0.352491 / 0.275898 (0.076593) | 0.402993 / 0.323480 (0.079513) | 0.004885 / 0.007986 (-0.003100) | 0.003315 / 0.004328 (-0.001013) | 0.075787 / 0.004250 (0.071537) | 0.035320 / 0.037052 (-0.001732) | 0.368401 / 0.258489 (0.109912) | 0.409090 / 0.293841 (0.115249) | 0.030125 / 0.128546 (-0.098421) | 0.011670 / 0.075646 (-0.063976) | 0.324381 / 0.419271 (-0.094890) | 0.050815 / 0.043533 (0.007283) | 0.352598 / 0.255139 (0.097460) | 0.389189 / 0.283200 (0.105989) | 0.092873 / 0.141683 (-0.048810) | 1.485140 / 1.452155 (0.032986) | 1.545586 / 1.492716 (0.052869) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199522 / 0.018006 (0.181516) | 0.404576 / 0.000490 (0.404087) | 0.003322 / 0.000200 (0.003122) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022945 / 0.037411 (-0.014466) | 0.095512 / 0.014526 (0.080987) | 0.103077 / 0.176557 (-0.073480) | 0.163918 / 0.737135 (-0.573217) | 0.105560 / 0.296338 (-0.190779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417360 / 0.215209 (0.202151) | 4.161693 / 2.077655 (2.084039) | 1.851941 / 1.504120 (0.347821) | 1.649872 / 1.541195 (0.108677) | 1.682099 / 1.468490 
(0.213609) | 0.693187 / 4.584777 (-3.891590) | 3.462528 / 3.745712 (-0.283184) | 1.839893 / 5.269862 (-3.429968) | 1.155945 / 4.565676 (-3.409731) | 0.082611 / 0.424275 (-0.341664) | 0.012076 / 0.007607 (0.004469) | 0.514325 / 0.226044 (0.288280) | 5.155052 / 2.268929 (2.886123) | 2.307280 / 55.444624 (-53.137345) | 1.966483 / 6.876477 (-4.909994) | 2.018892 / 2.142072 (-0.123181) | 0.803068 / 4.805227 (-4.002159) | 0.152213 / 6.500664 (-6.348451) | 0.066320 / 0.075469 (-0.009149) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218578 / 1.841788 (-0.623209) | 13.563869 / 8.074308 (5.489561) | 13.954596 / 10.191392 (3.763204) | 0.151527 / 0.680424 (-0.528897) | 0.016655 / 0.534201 (-0.517546) | 0.380637 / 0.579283 (-0.198646) | 0.395854 / 0.434364 (-0.038509) | 0.459111 / 0.540337 (-0.081226) | 0.560219 / 1.386936 (-0.826717) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006427 / 0.011353 (-0.004926) | 0.004728 / 0.011008 (-0.006280) | 0.080525 / 0.038508 (0.042017) | 0.027294 / 0.023109 (0.004185) | 0.414688 / 0.275898 (0.138790) | 0.449882 / 0.323480 (0.126402) | 0.004771 / 0.007986 (-0.003214) | 0.003402 / 0.004328 (-0.000926) | 0.078748 / 0.004250 (0.074497) | 0.037046 / 0.037052 (-0.000007) | 0.417398 / 0.258489 (0.158909) | 0.462921 / 0.293841 (0.169080) | 0.030364 / 0.128546 (-0.098182) | 0.011810 / 0.075646 (-0.063837) | 0.089787 / 0.419271 (-0.329485) | 0.039806 / 0.043533 (-0.003727) | 0.403401 / 0.255139 (0.148262) | 0.439477 / 0.283200 (0.156278) | 0.088431 / 0.141683 (-0.053252) | 1.534373 / 1.452155 (0.082219) | 1.592316 / 1.492716 (0.099600) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217701 / 0.018006 (0.199695) | 0.384770 / 0.000490 (0.384280) | 0.000437 / 0.000200 (0.000237) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024952 / 0.037411 (-0.012459) | 0.098728 / 0.014526 (0.084202) | 0.106324 / 0.176557 (-0.070233) | 0.155484 / 0.737135 (-0.581651) | 0.109503 / 0.296338 (-0.186836) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.450639 / 0.215209 (0.235430) | 4.523110 / 2.077655 (2.445455) | 2.224810 / 1.504120 (0.720690) | 2.119516 / 1.541195 (0.578321) | 2.225192 / 1.468490 (0.756702) | 0.695397 / 4.584777 (-3.889380) | 3.433559 / 3.745712 (-0.312153) | 2.633127 / 5.269862 (-2.636735) | 1.448471 / 4.565676 (-3.117206) | 0.082262 / 0.424275 (-0.342013) | 0.012246 / 0.007607 (0.004639) | 0.561243 / 0.226044 (0.335199) | 5.652711 / 2.268929 (3.383782) | 2.689771 / 55.444624 (-52.754853) | 2.359512 / 6.876477 (-4.516965) | 2.471098 / 2.142072 (0.329026) | 0.802955 / 4.805227 (-4.002272) | 0.151142 / 6.500664 (-6.349522) | 0.067494 / 0.075469 (-0.007975) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306879 / 1.841788 (-0.534909) | 14.030775 / 8.074308 (5.956467) | 12.917790 / 10.191392 (2.726398) | 0.141269 / 0.680424 (-0.539155) | 0.016264 / 0.534201 (-0.517937) | 0.411957 / 0.579283 (-0.167326) | 0.393235 / 0.434364 (-0.041129) | 0.505144 / 0.540337 (-0.035193) | 0.590660 / 1.386936 (-0.796276) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7790ebd7072eafff755fb575b392f3efa74069e4 \"CML watermark\")\n"
] | 2023-04-18T17:47:53 | 2023-05-17T14:07:32 | 2023-05-17T14:00:38 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5770",
"html_url": "https://github.com/huggingface/datasets/pull/5770",
"diff_url": "https://github.com/huggingface/datasets/pull/5770.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5770.patch",
"merged_at": "2023-05-17T14:00:38"
} | Follow-up from https://github.com/huggingface/datasets/pull/5701
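A minimal usage sketch of the API added here (assuming a local Spark session; the schema and values are illustrative):
```
from pyspark.sql import SparkSession
from datasets import IterableDataset

spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.createDataFrame([("hello", 0), ("world", 1)], ["text", "label"])

# Stream examples from the Spark DataFrame without materializing it first.
ds = IterableDataset.from_spark(df)
for example in ds:
    print(example)
```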
Related issue: https://github.com/huggingface/datasets/issues/5678 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5770/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5769 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5769/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5769/comments | https://api.github.com/repos/huggingface/datasets/issues/5769/events | https://github.com/huggingface/datasets/issues/5769 | 1,673,441,182 | I_kwDODunzps5jvq-e | 5,769 | Tiktoken tokenizers are not pickable | {
"login": "markovalexander",
"id": 22663468,
"node_id": "MDQ6VXNlcjIyNjYzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/22663468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/markovalexander",
"html_url": "https://github.com/markovalexander",
"followers_url": "https://api.github.com/users/markovalexander/followers",
"following_url": "https://api.github.com/users/markovalexander/following{/other_user}",
"gists_url": "https://api.github.com/users/markovalexander/gists{/gist_id}",
"starred_url": "https://api.github.com/users/markovalexander/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/markovalexander/subscriptions",
"organizations_url": "https://api.github.com/users/markovalexander/orgs",
"repos_url": "https://api.github.com/users/markovalexander/repos",
"events_url": "https://api.github.com/users/markovalexander/events{/privacy}",
"received_events_url": "https://api.github.com/users/markovalexander/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting, @markovalexander.\r\n\r\nUnfortunately, I'm not able to reproduce the issue: the `tiktoken` tokenizer can be used within `Dataset.map`, both in my local machine and in a Colab notebook: https://colab.research.google.com/drive/1DhJroZgk0sNFJ2Mrz-jYgrmh9jblXaCG?usp=sharing\r\n\r\nAre you sure you are using `datasets` version 2.11.0?"
] | 2023-04-18T16:07:40 | 2023-05-04T18:55:57 | 2023-05-04T18:55:57 | NONE | null | null | null | ### Describe the bug
Since the tiktoken tokenizer is not picklable, it is not possible to use it inside `dataset.map()` with multiprocessing enabled. However, you [made](https://github.com/huggingface/datasets/issues/5536) tiktoken's tokenizers picklable in `datasets==2.10.0` for caching. For some reason, this logic does not work in dataset processing and raises `TypeError: cannot pickle 'builtins.CoreBPE' object`.
### Steps to reproduce the bug
```
from datasets import load_dataset
import tiktoken

dataset = load_dataset("stas/openwebtext-10k")
enc = tiktoken.get_encoding("gpt2")

def process(example):
    # Encode the text and append the end-of-text token.
    ids = enc.encode(example['text'])
    ids.append(enc.eot_token)
    return {'ids': ids, 'len': len(ids)}

tokenized = dataset.map(
    process,
    remove_columns=['text'],
    desc="tokenizing the OWT splits",
    num_proc=2,  # multiprocessing is what triggers the pickling error
)
```
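One possible workaround, continuing from the snippet above and assuming the failure comes from the mapped function capturing the `Encoding` object (and its non-picklable `CoreBPE`) in its closure, is to build the encoder inside the worker instead. This is only a sketch, not the fix the issue asks for; `tiktoken.get_encoding` caches encodings internally, so the repeated call should be cheap:
```
def process(example):
    # Created inside the worker process, so the non-picklable
    # CoreBPE object never has to cross a process boundary.
    enc = tiktoken.get_encoding("gpt2")
    ids = enc.encode(example['text'])
    ids.append(enc.eot_token)
    return {'ids': ids, 'len': len(ids)}

tokenized = dataset.map(process, remove_columns=['text'], num_proc=2)
```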
### Expected behavior
The script starts processing the dataset instead of raising the pickling error.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.0-1021-oracle-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.13.4
- PyArrow version: 9.0.0
- Pandas version: 2.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5769/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5768 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5768/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5768/comments | https://api.github.com/repos/huggingface/datasets/issues/5768/events | https://github.com/huggingface/datasets/issues/5768 | 1,672,494,561 | I_kwDODunzps5jsD3h | 5,768 | load_dataset("squad") doesn't work in 2.7.1 and 2.10.1 | {
"login": "yaseen157",
"id": 57412770,
"node_id": "MDQ6VXNlcjU3NDEyNzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/57412770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaseen157",
"html_url": "https://github.com/yaseen157",
"followers_url": "https://api.github.com/users/yaseen157/followers",
"following_url": "https://api.github.com/users/yaseen157/following{/other_user}",
"gists_url": "https://api.github.com/users/yaseen157/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaseen157/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaseen157/subscriptions",
"organizations_url": "https://api.github.com/users/yaseen157/orgs",
"repos_url": "https://api.github.com/users/yaseen157/repos",
"events_url": "https://api.github.com/users/yaseen157/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaseen157/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @yaseen157.\r\n\r\nCould you please give the complete error stack trace?",
"I am not able to reproduce your issue: the dataset loads perfectly on my local machine and on a Colab notebook: https://colab.research.google.com/drive/1Fbdoa1JdNz8DOdX6gmIsOK1nCT8Abj4O?usp=sharing\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"squad\")\r\nDownloading builder script: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5.27k/5.27k [00:00<00:00, 3.22MB/s]\r\nDownloading metadata: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.36k/2.36k [00:00<00:00, 1.60MB/s]\r\nDownloading readme: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.67k/7.67k [00:00<00:00, 4.58MB/s]\r\nDownloading and preparing dataset squad/plain_text to ...t/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453...\r\nDownloading data: 30.3MB [00:00, 91.8MB/s] | 0/2 [00:00<?, ?it/s]\r\nDownloading data: 4.85MB [00:00, 75.3MB/s] \r\nDownloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.31it/s]\r\nExtracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2157.01it/s]\r\nDataset squad downloaded and prepared to .../.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 463.95it/s]\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 87599\r\n })\r\n validation: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 10570\r\n })\r\n})\r\n```",
"I am at a complete loss for what's happening here. A quick summary, I have 3 machines to try this with:\r\n1) My windows 10 laptop\r\n2) Linux machine1, super computer login node\r\n3) Linux machine2, super computer compute node\r\n\r\nLet's define the following as a test script for the machines:\r\n\r\n```\r\nimport traceback\r\nimport datasets\r\nprint(f\"{datasets.__version__=}\")\r\ntry:\r\n ds = datasets.load_dataset(\"squad\")\r\nexcept:\r\n traceback.print_exc()\r\nelse:\r\n print(\"Success!\")\r\n```\r\n\r\nThe Windows laptop enters some sort of traceback recursion loop:\r\n\r\n> datasets.__version__='2.7.1'\r\n> Downloading and preparing dataset squad/plain_text to C:/Users/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453...\r\n> Downloading data files: 100%|██████████| 2/2 [00:00<?, ?it/s]\r\n> Traceback (most recent call last):\r\n> File \"<string>\", line 1, in <module>\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 116, in spawn_main\r\n> exitcode = _main(fd, parent_sentinel)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 125, in _main\r\n> prepare(preparation_data)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 236, in prepare\r\n> _fixup_main_from_path(data['init_main_from_path'])\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 287, in _fixup_main_from_path\r\n> main_content = runpy.run_path(main_path,\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 267, in run_path\r\n> code, fname = _get_code_from_file(run_name, path_name)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 237, in _get_code_from_file\r\n> with io.open_code(decoded_path) as f:\r\n> OSError: [Errno 22] Invalid argument: 'C:\\\\Users\\\\yr3g17\\\\OneDrive - University of Southampton\\\\Documents\\\\PhD-repository\\\\<input>'\r\n> Traceback (most recent call last):\r\n> File \"<string>\", line 1, in <module>\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 116, in spawn_main\r\n> exitcode = _main(fd, parent_sentinel)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 125, in _main\r\n> prepare(preparation_data)\r\n**this error traceback is endlessly recursive**\r\n\r\nThis is a brand new issue that started today and I didn't even realise was a thing, as I had been using my windows machine to follow tracebacks for the other machines...\r\n\r\nI suspect this issue had something to do with my filepath naming, but I couldn't confirm this when I spent time trying to debug this myself weeks ago, something to do with files being locked and never released. I'm not too concerned about my laptop not working here because I've had so many issues with Microsoft OneDrive and my filesystem.\r\n\r\nLinux machines 1 and 2 were working fine for months, but have all of a sudden stopped working. 
Trying to run linux machine 1 (login node), I get:\r\n\r\n> datasets.__version__='2.10.1'\r\n> Downloading and preparing dataset json/squad to /home/yr3g17/.cache/hugg\r\ningface/datasets/json/squad-d733af945be1d2c2/0.0.0/0f7e3662623656454fcd2\r\nb650f34e886a7db4b9104504885bd462096cc7a9f51...\r\n> Downloading data files: 100%|███████████████████████████████████████████\r\n█████████████████████████████████████████████| 2/2 [00:00<00:00, 4042.70\r\nit/s]\r\n>Extracting data files: 100%|███████████████████████████████████████\r\n███████████████████████████████████████████████████| 2/2 [00:00<00:00, 1\r\n11.15it/s]\r\n> Generating train split: 0 examples [00:00, ? examples/s]\r\n\r\n and hangs here. This has not happened to me before on the Linux machine. If I forcefully keyboard interrupt, I get:\r\n \r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 2, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/load.py\", line 1782, in load_dataset\r\n> builder_instance.download_and_prepare(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/builder.py\", line 793, in download_and_prepare\r\n> with FileLock(lock_path) if is_local else contextlib.nullcontext():\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/utils/filelock.py\", line 320, in __enter__\r\n> self.acquire()\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/utils/filelock.py\", line 282, in acquire\r\n> time.sleep(poll_intervall)\r\n\r\nWhich also appears to be file lock related! I resolved this by navigating to my ~/.cache/huggingface/datasets directory and wiping out anything to do with the squad dataset in *.lock files. Now I get:\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset_load(\"squad\")\r\n\r\n```\r\n> Downloading and preparing dataset squad/plain_text to /home/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb\r\n> 2511d223b9150cce08a837ef62ffea453...\r\n> Downloading data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 44.75it/s]\r\n> Extracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 8.54it/s]\r\n> Dataset squad downloaded and prepared to /home/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150\r\n> cce08a837ef62ffea453. Subsequent calls will reuse this data.\r\n> 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 19.77it/s]\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['id', 'title', 'context', 'question', 'answers'],\r\n> num_rows: 87599\r\n> })\r\n> validation: Dataset({\r\n> features: ['id', 'title', 'context', 'question', 'answers'],\r\n> num_rows: 10570\r\n> })\r\n> })\r\n> \r\n\r\nWhich all seems fine right, it's doing what it should be. But now, without ever leaving the IDE, I \"make a subsequent call\" to reuse the data by repeating the command. 
I encounter the following traceback\r\n\r\n`load_dataset(\"squad\")`\r\n\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1496, in load_dataset_builder\r\n> dataset_module = dataset_module_factory(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1151, in dataset_module_factory\r\n> ).get_module()\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 631, in get_module\r\n> data_files = DataFilesDict.from_local_or_remote(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 796, in from_local_or_remote\r\n> DataFilesList.from_local_or_remote(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 764, in from_local_or_remote\r\n> data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 369, in resolve_patterns_locally_or_by_urls\r\n> raise FileNotFoundError(error_msg)\r\n> FileNotFoundError: Unable to resolve any data file that matches '['train[-._ 0-9/]**', '**[-._ 0-9/]train[-._ 0-9/]**', 'training[-._ 0-9/]**', '**[-\r\n> ._ 0-9/]training[-._ 0-9/]**']' at /mainfs/home/yr3g17/.cache/huggingface/datasets/squad with any supported extension ['csv', 'tsv', 'json', 'jsonl',\r\n> 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'gr\r\n> ib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', '\r\n> mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', '\r\n> emf', 'xbm', 'xpm', 'BLP', 'BMP', 'DIB', 'BUFR', 'CUR', 'PCX', 'DCX', 'DDS', 'PS', 'EPS', 'FIT', 'FITS', 'FLI', 'FLC', 'FTC', 'FTU', 'GBR', 'GIF', 'G\r\n> RIB', 'H5', 'HDF', 'PNG', 'APNG', 'JP2', 'J2K', 'JPC', 'JPF', 'JPX', 'J2C', 'ICNS', 'ICO', 'IM', 'IIM', 'TIF', 'TIFF', 'JFIF', 'JPE', 'JPG', 'JPEG',\r\n> 'MPG', 'MPEG', 'MSP', 'PCD', 'PXR', 'PBM', 'PGM', 'PPM', 'PNM', 'PSD', 'BW', 'RGB', 'RGBA', 'SGI', 'RAS', 'TGA', 'ICB', 'VDA', 'VST', 'WEBP', 'WMF',\r\n> 'EMF', 'XBM', 'XPM', 'aiff', 'au', 'avr', 'caf', 'flac', 'htk', 'svx', 'mat4', 'mat5', 'mpc2k', 'ogg', 'paf', 'pvf', 'raw', 'rf64', 'sd2', 'sds', 'ir\r\n> cam', 'voc', 'w64', 'wav', 'nist', 'wavex', 'wve', 'xi', 'mp3', 'opus', 'AIFF', 'AU', 'AVR', 'CAF', 'FLAC', 'HTK', 'SVX', 'MAT4', 'MAT5', 'MPC2K', 'O\r\n> GG', 'PAF', 'PVF', 'RAW', 'RF64', 'SD2', 'SDS', 'IRCAM', 'VOC', 'W64', 'WAV', 'NIST', 'WAVEX', 'WVE', 'XI', 'MP3', 'OPUS', 'zip']\r\n\r\nIt doesn't even appear like I can reliably repeat this process. I'll nuke squad files in my dataset cache and run the Python code again (which downloads a new copy of the dataset to cache). 
It will either fail (as it just did in the quote above), or it will successfully recall the dataset.\r\n\r\nI repeated this nuking process a few times until calling load_dataset was reliably giving me the correct result (no filelocking issues or tracebacks). I then sent the test script as a job to the supercomputer compute nodes (which do not have internet access and therefore depend on cached data from Linux machine 1 login nodes)\r\n\r\n> Using the latest cached version of the module from /home/yr3g17/.cache/huggingface/modules/datasets_modules/datasets/squad/8730650fed465361f38ac4d810\r\n> ccdd16e8fc87b56498e52fb7e2cadaefc1f177 (last modified on Tue Feb 14 10:12:56 2023) since it couldn't be found locally at squad., or remotely on the Hugging Face Hub.\r\n> Traceback (most recent call last):\r\n> File \"/mainfs/scratch/yr3g17/squad_qanswering/3054408/0/../../main.py\", line 5, in <module>\r\n> dataset = load_dataset(\"squad\")\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1522, in load_dataset_builder\r\n> builder_instance: DatasetBuilder = builder_cls(\r\n> TypeError: 'NoneType' object is not callable\r\n\r\nand I have absolutely no idea why the second and third machines are producing different tracebacks. I have previously run these exact scripts successfully on the login and compute nodes of the supercomputer, this issue I'm raising has appeared fairly recently for me. This, is where I encounter the TypeError that I opened this issue with, which I was able to traceback (using my laptop before it too started not working) to whatever was dynamically importing \"builder_cls\". That bit of code wasn't doing importing builder_cls correctly and would effectively make the assignment \"builder_cls=None\" resulting in the TypeError. Does any of this help?",
"I'm back on linux machine 1 (login node) now. After submitting that as a job to machine 2 and it failing with TypeError, linux machine 1 now produces identical traceback to machine 2:\r\n\r\n> (arkroyal) [yr3g17@cyan52 squad_qanswering]$ python\r\n> Python 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] on linux\r\n> Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>\r\n> from datasets import load_dataset\r\n> load_dataset(\"squad\")\r\n>\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1522, in load_dataset_builder\r\n> builder_instance: DatasetBuilder = builder_cls(\r\n> TypeError: 'NoneType' object is not callable\r\n\r\nI thought it might be useful to provide you with my cache file structure:\r\n\r\n>_home_yr3g17_.cache_huggingface_datasets_casino_default_1.1.0_302c3b1ac78c48091deabe83a11f4003c7b472a4e11a8eb92799653785bd5da1.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_imdb_plain_text_1.0.0_2fdd8b9bcadd6e7055e742a706876ba43f19faee861df134affd7a3f60fc38a1.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_squad_plain_text_1.0.0_d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_yelp_review_full_yelp_review_full_1.0.0_e8e18e19d7be9e75642fc66b198abadb116f73599ec89a69ba5dd8d1e57ba0bf.lock\r\n> casino\r\n> downloads\r\n> imdb\r\n> json\r\n> squad\r\n> squad_v2\r\n> yelp_review_full\r\n\r\nThe inside of squad/plain_text/1.0.0/ looks like\r\n\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453.incomplete_info.lock\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453_builder.lock\r\n",
"I see this is quite a complex use case...\r\n\r\nLet's try multiple things:\r\n- First, update `datasets` and make sure you use the same version in all machines, so that we can easily compare different behaviors.\r\n ```\r\n pip install -U datasets\r\n ```\r\n- Second, wherever you run the `load_dataset(\"squad\")` command, make sure there is not a local directory named \"squad\". The datasets library gives priority to any local file/directory over the datasets on the Hugging Face Hub\r\n - I tell you this, because in one of your trace backs, it seems it refers to a local directory:\r\n ```\r\n Downloading and preparing dataset json/squad to /home/yr3g17/.cache/huggingface/datasets/json/squad-d733af945be1d2c2/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...\r\n ```\r\n- Third, to use the \"squad\" dataset from the Hub, you need to have internet connection, so that you can download the \"squad\" Python loading script from the Hub. Do all your machines have internet connection?\r\n - I ask this because of this error message:\r\n ```\r\n Using the latest cached version of the module from /home/yr3g17/.cache/huggingface/modules/datasets_modules/datasets/squad/8730650fed465361f38ac4d810ccdd16e8fc87b56498e52fb7e2cadaefc1f177 (last modified on Tue Feb 14 10:12:56 2023) since it couldn't be found locally at squad., or remotely on the Hugging Face Hub.\r\n ```\r\n- Fourth, to be sure that we avoid any issues with the cache, it is a good idea to remove it and regenerate it. Remove `.cache/huggingface/datasets` and also `.cache/huggingface/modules`\r\n- Fifth, as an additional debugging tool, let's be sure we use the latest \"squad\" Python loading script by passing the revision parameter:\r\n ```\r\n ds = load_dataset(\"squad\", revision=\"5fe18c4c680f9922d794e3f4dd673a751c74ee37\")\r\n ```",
"Additionally, we just had an infrastructure issue on the Hugging Face Hub at around 11:30 today. That might have contributed to the connectivity issue... It is fixed now.\r\n\r\nhttps://status.huggingface.co/",
"Hi again, thanks for your help and insight Albert Villanova.\r\n\r\nIt's all working now, so thank you for that. For the benefit of anyone else who ends up in this thread, I solved the problem by addressing Albert's advice:\r\n\r\n(1) Both Windows and Linux machine 1 (have internet access) and can now access the SQuAD dataset. The supercomputer login node can only access version 2.7.1, but my Windows laptop is running on datasets 2.11.0 just fine. I suspect it was just a perfect storm alongside the aforementioned \"infrastructure issue\".\r\n\r\n(2) I did have a local directory called squad, because I was using a local copy of evaluate's \"SQuAD\" metric. The supercomputer compute nodes do not have internet access and treat `metric = evaluate.load('<x>')` as a way of loading a metric at the local path `./<x>/<x>.py`, which could've been a related issue as I was storing the metric under `squad/squad.py`. Don't be lazy like me and store the evaluation code under a path with a name that can be misinterpreted.\r\n\r\n(3) I can't give internet access to the supercomputer compute nodes, so local files do just fine here.\r\n\r\n(4) The windows and Linux machine 1 can both access the internet and were getting fresh copies of the dataset from the huggingface hub. Linux machine 2 was working after I cleared the contents of ~/.cache/huggingface/....\r\n\r\nI feel silly now, knowing it was all so simple! Sorry about that Albert, and thanks again for the help. I've not raised a Github issue like this before, so I'm not sure if I should be close my own issues or if this is something you guys do?",
"Thanks for your detailed feedback which for sure will be useful to other community members."
] | 2023-04-18T07:10:56 | 2023-04-20T10:27:23 | 2023-04-20T10:27:22 | NONE | null | null | null | ### Describe the bug
There is an issue that seems to be unique to the "squad" dataset: it cannot be loaded using standard methods. The issue is most quickly reproduced from the command line, using the HF example command for verifying that a dataset loads properly.
This is not a problem with the "squad_v2" dataset, for example.
### Steps to reproduce the bug
cmd line
> $ python -c "from datasets import load_dataset; print(load_dataset('squad', split='train')[0])"
OR
Python IDE
> from datasets import load_dataset
> load_dataset("squad")
### Expected behavior
I expected either to see the output described here ([https://huggingface.co/docs/datasets/installation]) from running the very same command on the command line, or any output that does not raise Python's TypeError.
There is some funky behaviour in the dataset builder portion of the codebase: either it is trying to import the squad dataset from an incorrect path, or the squad dataset couldn't be downloaded. I'm not really sure what the problem is beyond that. Messing around with caching, I did manage to get it to load the dataset once, but couldn't repeat this.
### Environment info
datasets=2.7.1 **or** 2.10.1, python=3.10.8, Linux 3.10.0-1160.36.2.el7.x86_64 **or** Windows 10-64
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5768/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5767/comments | https://api.github.com/repos/huggingface/datasets/issues/5767/events | https://github.com/huggingface/datasets/issues/5767 | 1,672,433,979 | I_kwDODunzps5jr1E7 | 5,767 | How to use Distill-BERT with different datasets? | {
"login": "sauravtii",
"id": 109907638,
"node_id": "U_kgDOBo0Otg",
"avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sauravtii",
"html_url": "https://github.com/sauravtii",
"followers_url": "https://api.github.com/users/sauravtii/followers",
"following_url": "https://api.github.com/users/sauravtii/following{/other_user}",
"gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions",
"organizations_url": "https://api.github.com/users/sauravtii/orgs",
"repos_url": "https://api.github.com/users/sauravtii/repos",
"events_url": "https://api.github.com/users/sauravtii/events{/privacy}",
"received_events_url": "https://api.github.com/users/sauravtii/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Closing this one in favor of the same issue opened in the `transformers` repo."
] | 2023-04-18T06:25:12 | 2023-04-20T16:52:05 | 2023-04-20T16:52:05 | NONE | null | null | null | ### Describe the bug
- `transformers` version: 4.11.3
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Steps to reproduce the bug
I recently read [this](https://huggingface.co/docs/transformers/quicktour#train-with-tensorflow:~:text=The%20most%20important%20thing%20to%20remember%20is%20you%20need%20to%20instantiate%20a%20tokenizer%20with%20the%20same%20model%20name%20to%20ensure%20you%E2%80%99re%20using%20the%20same%20tokenization%20rules%20a%20model%20was%20pretrained%20with.) and was wondering how to use distill-BERT (which is pre-trained on the imdb dataset) with a different dataset (e.g. [this](https://huggingface.co/datasets/yhavinga/imdb_dutch) dataset)?
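A minimal sketch of what I have in mind (assuming the dataset exposes a `text` column; the checkpoint name is just an example):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Load a different dataset and tokenize it with the tokenizer that matches
# the pre-trained checkpoint, as the quicktour recommends.
dataset = load_dataset("yhavinga/imdb_dutch", split="train")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True),
    batched=True,
)
```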
### Expected behavior
Distill-BERT should work with different datasets.
### Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 11.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5767/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5764 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5764/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5764/comments | https://api.github.com/repos/huggingface/datasets/issues/5764/events | https://github.com/huggingface/datasets/issues/5764 | 1,670,740,198 | I_kwDODunzps5jlXjm | 5,764 | ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1 | {
"login": "sauravtii",
"id": 109907638,
"node_id": "U_kgDOBo0Otg",
"avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sauravtii",
"html_url": "https://github.com/sauravtii",
"followers_url": "https://api.github.com/users/sauravtii/followers",
"following_url": "https://api.github.com/users/sauravtii/following{/other_user}",
"gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions",
"organizations_url": "https://api.github.com/users/sauravtii/orgs",
"repos_url": "https://api.github.com/users/sauravtii/repos",
"events_url": "https://api.github.com/users/sauravtii/events{/privacy}",
"received_events_url": "https://api.github.com/users/sauravtii/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @sauravtii.\r\n\r\nUnfortunately, I'm not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"josianem/imdb\")\r\n\r\nIn [2]: ds\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 25799\r\n })\r\n test: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 25000\r\n })\r\n unsupervised: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 50000\r\n })\r\n})\r\n```\r\n\r\nCould you please retry to load the dataset? Maybe there was a temporary connection issue to Dropbox.",
"Thanks @albertvillanova. I am facing another issue now\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"sample.py\", line 4, in <module>\r\n dataset = load_dataset(\"josianem/imdb\")\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 738, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/info_utils.py\", line 74, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=34501348, num_examples=25799, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]\r\n```\r\n\r\nThis is my code\r\n\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\")\r\n```",
"Your connection didn't work and you got an empty dataset (`num_bytes=0, num_examples=0`):\r\n```\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: \r\n[\r\n {\r\n 'expected': SplitInfo(name='train', num_bytes=34501348, num_examples=25799, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }, \r\n {\r\n 'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }, \r\n {\r\n 'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }\r\n]\r\n```\r\n\r\nCould you please try the link in your browser and see if it works? https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1\r\n- If it does not work, you should contact the author of the dataset in their Community tab (https://huggingface.co/datasets/josianem/imdb/discussions) and inform them, so that they can host their data elsewhere, for example on the Hugging Face Hub itself\r\n\r\nIf the link works, you should try to load the dataset but forcing the re-download of the data files (so that the cache is refreshed with the actual data file), by passing `download_mode=\"force_redownload\"`:\r\n```python\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n```",
"After pasting the link in the browser, it did start the download so it seems that the link is working. But even after including the `download_mode` in my code I am facing the same issue:\r\n\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"sample.py\", line 4, in <module>\r\n dataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 704, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/home/saurav/.cache/huggingface/modules/datasets_modules/datasets/imdb/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f/imdb.py\", line 79, in _split_generators\r\n archive = dl_manager.download(_DOWNLOAD_URL)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 196, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 197, in map_nested\r\n return function(data_struct)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 217, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 289, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 606, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1\r\n```\r\n\r\nMy code:\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n```",
"I have tried again to reproduce your issue without success: the dataset loads perfectly, both in my local machine and in a Colab notebook.\r\n- See: https://colab.research.google.com/drive/1dky3T0XGFuldggy22NNQQN-UqOFqvnuY?usp=sharing\r\n\r\nI think the cause maight be that you are using a very old version of `datasets`. Please, could you update it and retry?\r\n```\r\npip install -U datasets\r\n```",
"That worked!! Thanks @albertvillanova : )\r\n\r\n```\r\nDownloading builder script: 100%|███████| 4.20k/4.20k [00:00<00:00, 6.69MB/s]\r\nDownloading metadata: 100%|█████████████| 2.60k/2.60k [00:00<00:00, 3.41MB/s]\r\nDownloading readme: 100%|███████████████| 7.52k/7.52k [00:00<00:00, 12.6MB/s]\r\nDownloading and preparing dataset imdb/plain_text to /home/saurav/.cache/huggingface/datasets/josianem___imdb/plain_text/1.0.0/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f...\r\nDownloading data: 100%|███████████████████| 301M/301M [01:32<00:00, 3.25MB/s]\r\nDataset imdb downloaded and prepared to /home/saurav/.cache/huggingface/datasets/josianem___imdb/plain_text/1.0.0/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f. Subsequent calls will reuse this data.\r\n100%|█████████████████████████████████████████| 3/3 [00:00<00:00, 794.83it/s]\r\n```\r\n\r\nThe code I used:\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n\r\n```\r\n\r\nBut when I remove `download_mode=\"force_redownload\"` I get the same error. Any guess on that?",
"That is because the cache got the \"empty\" download file the first time you tried and got the connection error.\r\n\r\nThen, once you no longer get the connection error, you need to refresh the cache by passing `download_mode=\"force_redownload\"`."
] | 2023-04-17T09:08:18 | 2023-04-18T07:18:20 | 2023-04-18T07:18:20 | NONE | null | null | null | ### Describe the bug
I want to use this (https://huggingface.co/datasets/josianem/imdb) dataset, therefore I am trying to load it using the following code:
```
dataset = load_dataset("josianem/imdb")
```
The dataset does not get loaded and gives the following error message:
```
Traceback (most recent call last):
File "sample.py", line 3, in <module>
dataset = load_dataset("josianem/imdb")
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py", line 704, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/saurav/.cache/huggingface/modules/datasets_modules/datasets/imdb/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f/imdb.py", line 79, in _split_generators
archive = dl_manager.download(_DOWNLOAD_URL)
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download
downloaded_path_or_paths = map_nested(
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 197, in map_nested
return function(data_struct)
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 289, in cached_path
output_path = get_from_cache(
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 606, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1
```
### Steps to reproduce the bug
You can reproduce the error by using the following code:
```
from datasets import load_dataset, load_metric
dataset = load_dataset("josianem/imdb")
```
### Expected behavior
The dataset should get loaded (I am using this dataset for the first time, so I am not fully aware of the exact behavior).
### Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 11.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5764/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5763 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5763/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5763/comments | https://api.github.com/repos/huggingface/datasets/issues/5763/events | https://github.com/huggingface/datasets/pull/5763 | 1,670,476,302 | PR_kwDODunzps5OcMI7 | 5,763 | fix typo: "mow" -> "now" | {
"login": "csris",
"id": 1967608,
"node_id": "MDQ6VXNlcjE5Njc2MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1967608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/csris",
"html_url": "https://github.com/csris",
"followers_url": "https://api.github.com/users/csris/followers",
"following_url": "https://api.github.com/users/csris/following{/other_user}",
"gists_url": "https://api.github.com/users/csris/gists{/gist_id}",
"starred_url": "https://api.github.com/users/csris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/csris/subscriptions",
"organizations_url": "https://api.github.com/users/csris/orgs",
"repos_url": "https://api.github.com/users/csris/repos",
"events_url": "https://api.github.com/users/csris/events{/privacy}",
"received_events_url": "https://api.github.com/users/csris/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006804 / 0.011353 (-0.004549) | 0.004984 / 0.011008 (-0.006024) | 0.096781 / 0.038508 (0.058273) | 0.033049 / 0.023109 (0.009939) | 0.297681 / 0.275898 (0.021783) | 0.329553 / 0.323480 (0.006073) | 0.005697 / 0.007986 (-0.002289) | 0.004019 / 0.004328 (-0.000310) | 0.072691 / 0.004250 (0.068441) | 0.046921 / 0.037052 (0.009868) | 0.311467 / 0.258489 (0.052978) | 0.337616 / 0.293841 (0.043775) | 0.042400 / 0.128546 (-0.086146) | 0.011919 / 0.075646 (-0.063727) | 0.331390 / 0.419271 (-0.087881) | 0.051004 / 0.043533 (0.007471) | 0.295317 / 0.255139 (0.040178) | 0.316570 / 0.283200 (0.033371) | 0.099283 / 0.141683 (-0.042400) | 1.430583 / 1.452155 (-0.021572) | 1.493550 / 1.492716 (0.000834) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213634 / 0.018006 (0.195628) | 0.432557 / 0.000490 (0.432067) | 0.001586 / 0.000200 (0.001386) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025249 / 0.037411 (-0.012162) | 0.105433 / 0.014526 (0.090908) | 0.113474 / 0.176557 (-0.063082) | 0.168799 / 0.737135 (-0.568336) | 0.119363 / 0.296338 (-0.176975) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412450 / 0.215209 (0.197241) | 4.117432 / 2.077655 (2.039777) | 1.935176 / 1.504120 (0.431056) | 1.745674 / 1.541195 (0.204479) | 1.853872 / 1.468490 
(0.385382) | 0.703429 / 4.584777 (-3.881348) | 3.756981 / 3.745712 (0.011269) | 3.730607 / 5.269862 (-1.539255) | 1.839052 / 4.565676 (-2.726624) | 0.087574 / 0.424275 (-0.336701) | 0.012293 / 0.007607 (0.004686) | 0.517234 / 0.226044 (0.291190) | 5.189759 / 2.268929 (2.920831) | 2.418739 / 55.444624 (-53.025885) | 2.081424 / 6.876477 (-4.795053) | 2.204464 / 2.142072 (0.062392) | 0.842768 / 4.805227 (-3.962459) | 0.169014 / 6.500664 (-6.331650) | 0.063711 / 0.075469 (-0.011758) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180636 / 1.841788 (-0.661152) | 14.816088 / 8.074308 (6.741779) | 14.290085 / 10.191392 (4.098693) | 0.165267 / 0.680424 (-0.515156) | 0.017290 / 0.534201 (-0.516911) | 0.419678 / 0.579283 (-0.159605) | 0.418164 / 0.434364 (-0.016200) | 0.492210 / 0.540337 (-0.048127) | 0.588528 / 1.386936 (-0.798408) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007144 / 0.011353 (-0.004209) | 0.005223 / 0.011008 (-0.005785) | 0.073583 / 0.038508 (0.035075) | 0.033534 / 0.023109 (0.010425) | 0.339020 / 0.275898 (0.063122) | 0.366546 / 0.323480 (0.043066) | 0.006245 / 0.007986 (-0.001741) | 0.004081 / 0.004328 (-0.000247) | 0.073089 / 0.004250 (0.068839) | 0.047024 / 0.037052 (0.009971) | 0.342540 / 0.258489 (0.084051) | 0.379743 / 0.293841 (0.085902) | 0.037551 / 0.128546 (-0.090995) | 0.012246 / 0.075646 (-0.063400) | 0.084796 / 0.419271 (-0.334476) | 0.052256 / 0.043533 (0.008723) | 0.342675 / 0.255139 (0.087536) | 0.367157 / 0.283200 (0.083957) | 0.102939 / 0.141683 (-0.038744) | 1.409039 / 1.452155 (-0.043115) | 1.526137 / 1.492716 (0.033420) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208143 / 0.018006 (0.190136) | 0.437940 / 0.000490 (0.437450) | 0.000424 / 0.000200 (0.000224) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028321 / 0.037411 (-0.009091) | 0.110417 / 0.014526 (0.095891) | 0.119449 / 0.176557 (-0.057107) | 0.168081 / 0.737135 (-0.569054) | 0.126658 / 0.296338 (-0.169681) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429302 / 0.215209 (0.214093) | 4.270547 / 2.077655 (2.192892) | 2.061323 / 1.504120 (0.557203) | 1.857877 / 1.541195 (0.316682) | 1.873317 / 1.468490 (0.404827) | 0.688750 / 4.584777 (-3.896027) | 3.767951 / 3.745712 (0.022239) | 2.011436 / 5.269862 (-3.258426) | 1.299965 / 4.565676 (-3.265712) | 0.084799 / 0.424275 (-0.339476) | 0.012082 / 0.007607 (0.004475) | 0.521981 / 0.226044 (0.295937) | 5.265333 / 2.268929 (2.996405) | 2.494326 / 55.444624 (-52.950298) | 2.144672 / 6.876477 (-4.731804) | 2.365624 / 2.142072 (0.223551) | 0.839868 / 4.805227 (-3.965359) | 0.166614 / 6.500664 (-6.334050) | 0.063804 / 0.075469 (-0.011665) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.264623 / 1.841788 (-0.577164) | 14.946515 / 8.074308 (6.872207) | 14.450115 / 10.191392 (4.258723) | 0.163878 / 0.680424 (-0.516546) | 0.017501 / 0.534201 (-0.516700) | 0.420992 / 0.579283 (-0.158291) | 0.423005 / 0.434364 (-0.011359) | 0.489505 / 0.540337 (-0.050832) | 0.594631 / 1.386936 (-0.792305) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fd893098627230cc734f6009ad04cf885c979ac4 \"CML watermark\")\n"
] | 2023-04-17T06:03:44 | 2023-04-17T15:01:53 | 2023-04-17T14:54:46 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5763",
"html_url": "https://github.com/huggingface/datasets/pull/5763",
"diff_url": "https://github.com/huggingface/datasets/pull/5763.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5763.patch",
"merged_at": "2023-04-17T14:54:46"
} | I noticed a typo as I was reading the datasets documentation. This PR contains a trivial fix changing "mow" to "now." | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5763/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5762 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5762/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5762/comments | https://api.github.com/repos/huggingface/datasets/issues/5762/events | https://github.com/huggingface/datasets/issues/5762 | 1,670,326,470 | I_kwDODunzps5jjyjG | 5,762 | Not able to load the pile | {
"login": "surya-narayanan",
"id": 17240858,
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surya-narayanan",
"html_url": "https://github.com/surya-narayanan",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @surya-narayanan.\r\n\r\nI see you already started a discussion about this on the Community tab of the corresponding dataset: https://huggingface.co/datasets/EleutherAI/the_pile/discussions/10\r\nLet's continue the discussion there!"
] | 2023-04-17T03:09:10 | 2023-04-17T09:37:27 | 2023-04-17T09:37:27 | NONE | null | null | null | ### Describe the bug
Got this error when trying to load the Pile dataset:
```
TypeError: Couldn't cast array of type
struct<file: string, id: string>
to
{'id': Value(dtype='string', id=None)}
```
### Steps to reproduce the bug
Please visit the following sample notebook
https://colab.research.google.com/drive/1JHcjawcHL6QHhi5VcqYd07W2QCEj2nWK#scrollTo=ulJP3eJCI-tB
### Expected behavior
The Pile dataset should load without errors.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.13.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5762/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5758/comments | https://api.github.com/repos/huggingface/datasets/issues/5758/events | https://github.com/huggingface/datasets/pull/5758 | 1,669,920,923 | PR_kwDODunzps5OaY9S | 5,758 | Fixes #5757 | {
"login": "eli-osherovich",
"id": 2437102,
"node_id": "MDQ6VXNlcjI0MzcxMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eli-osherovich",
"html_url": "https://github.com/eli-osherovich",
"followers_url": "https://api.github.com/users/eli-osherovich/followers",
"following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}",
"gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions",
"organizations_url": "https://api.github.com/users/eli-osherovich/orgs",
"repos_url": "https://api.github.com/users/eli-osherovich/repos",
"events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}",
"received_events_url": "https://api.github.com/users/eli-osherovich/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The CI can be fixed by merging `main` into your branch. Can you do that before we merge ?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Done.\n\nOn Thu, Apr 20, 2023 at 6:01 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> The CI can be fixed by merging main into your branch. Can you do that\n> before we merge ?\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/5758#issuecomment-1516488124>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AASS73QPLA735AMN4PFDYRTXCFFTJANCNFSM6AAAAAAXACBUQU>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n",
"Nice thanks !",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007161 / 0.011353 (-0.004192) | 0.005099 / 0.011008 (-0.005909) | 0.099301 / 0.038508 (0.060793) | 0.034144 / 0.023109 (0.011034) | 0.298273 / 0.275898 (0.022375) | 0.329009 / 0.323480 (0.005529) | 0.005486 / 0.007986 (-0.002500) | 0.003887 / 0.004328 (-0.000441) | 0.074769 / 0.004250 (0.070518) | 0.047505 / 0.037052 (0.010453) | 0.306550 / 0.258489 (0.048061) | 0.335380 / 0.293841 (0.041540) | 0.034796 / 0.128546 (-0.093750) | 0.012152 / 0.075646 (-0.063495) | 0.332194 / 0.419271 (-0.087077) | 0.049661 / 0.043533 (0.006128) | 0.296832 / 0.255139 (0.041693) | 0.316417 / 0.283200 (0.033218) | 0.098234 / 0.141683 (-0.043449) | 1.494114 / 1.452155 (0.041959) | 1.566468 / 1.492716 (0.073751) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221309 / 0.018006 (0.203303) | 0.440855 / 0.000490 (0.440365) | 0.003025 / 0.000200 (0.002825) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026594 / 0.037411 (-0.010817) | 0.110406 / 0.014526 (0.095880) | 0.116117 / 0.176557 (-0.060439) | 0.173502 / 0.737135 (-0.563633) | 0.121988 / 0.296338 (-0.174351) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403307 / 0.215209 (0.188098) | 4.034146 / 2.077655 (1.956492) | 1.852162 / 1.504120 (0.348042) | 1.675643 / 1.541195 (0.134448) | 1.748851 / 1.468490 
(0.280360) | 0.703458 / 4.584777 (-3.881319) | 3.809055 / 3.745712 (0.063343) | 2.118060 / 5.269862 (-3.151801) | 1.338394 / 4.565676 (-3.227282) | 0.086319 / 0.424275 (-0.337956) | 0.012195 / 0.007607 (0.004588) | 0.520814 / 0.226044 (0.294769) | 5.201074 / 2.268929 (2.932145) | 2.418384 / 55.444624 (-53.026240) | 2.085496 / 6.876477 (-4.790980) | 2.245638 / 2.142072 (0.103565) | 0.849042 / 4.805227 (-3.956185) | 0.171912 / 6.500664 (-6.328752) | 0.065691 / 0.075469 (-0.009778) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.159985 / 1.841788 (-0.681803) | 14.910867 / 8.074308 (6.836559) | 14.473926 / 10.191392 (4.282534) | 0.181532 / 0.680424 (-0.498891) | 0.017203 / 0.534201 (-0.516998) | 0.420805 / 0.579283 (-0.158479) | 0.426455 / 0.434364 (-0.007909) | 0.497086 / 0.540337 (-0.043251) | 0.593909 / 1.386936 (-0.793027) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007688 / 0.011353 (-0.003665) | 0.005353 / 0.011008 (-0.005656) | 0.076869 / 0.038508 (0.038361) | 0.035030 / 0.023109 (0.011921) | 0.344649 / 0.275898 (0.068751) | 0.387669 / 0.323480 (0.064190) | 0.005913 / 0.007986 (-0.002072) | 0.004107 / 0.004328 (-0.000221) | 0.074111 / 0.004250 (0.069860) | 0.049351 / 0.037052 (0.012299) | 0.346061 / 0.258489 (0.087572) | 0.395499 / 0.293841 (0.101658) | 0.035549 / 0.128546 (-0.092997) | 0.012340 / 0.075646 (-0.063307) | 0.087031 / 0.419271 (-0.332241) | 0.049088 / 0.043533 (0.005556) | 0.342774 / 0.255139 (0.087635) | 0.362037 / 0.283200 (0.078837) | 0.100329 / 0.141683 (-0.041354) | 1.442349 / 1.452155 (-0.009806) | 1.551079 / 1.492716 (0.058363) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228458 / 0.018006 (0.210452) | 0.446190 / 0.000490 (0.445701) | 0.000413 / 0.000200 (0.000213) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029884 / 0.037411 (-0.007527) | 0.117527 / 0.014526 (0.103002) | 0.123221 / 0.176557 (-0.053335) | 0.172290 / 0.737135 (-0.564845) | 0.128682 / 0.296338 (-0.167657) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420905 / 0.215209 (0.205696) | 4.199342 / 2.077655 (2.121687) | 2.007327 / 1.504120 (0.503207) | 1.814732 / 1.541195 (0.273537) | 1.893999 / 1.468490 (0.425509) | 0.712259 / 4.584777 (-3.872518) | 3.843402 / 3.745712 (0.097690) | 3.198514 / 5.269862 (-2.071348) | 1.678732 / 4.565676 (-2.886945) | 0.086435 / 0.424275 (-0.337840) | 0.012233 / 0.007607 (0.004626) | 0.526121 / 0.226044 (0.300077) | 5.190578 / 2.268929 (2.921650) | 2.473259 / 55.444624 (-52.971366) | 2.142795 / 6.876477 (-4.733682) | 2.277594 / 2.142072 (0.135521) | 0.846117 / 4.805227 (-3.959110) | 0.169458 / 6.500664 (-6.331206) | 0.065017 / 0.075469 (-0.010452) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272479 / 1.841788 (-0.569309) | 15.086473 / 8.074308 (7.012165) | 14.659728 / 10.191392 (4.468336) | 0.163915 / 0.680424 (-0.516509) | 0.017561 / 0.534201 (-0.516640) | 0.422074 / 0.579283 (-0.157209) | 0.421963 / 0.434364 (-0.012401) | 0.490321 / 0.540337 (-0.050016) | 0.586854 / 1.386936 (-0.800083) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e7ce0ac60c7efc10886471932854903a7c19f172 \"CML watermark\")\n"
] | 2023-04-16T11:56:01 | 2023-04-20T15:37:49 | 2023-04-20T15:30:48 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5758",
"html_url": "https://github.com/huggingface/datasets/pull/5758",
"diff_url": "https://github.com/huggingface/datasets/pull/5758.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5758.patch",
"merged_at": "2023-04-20T15:30:48"
} | Fixes the bug #5757 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5758/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5757 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5757/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5757/comments | https://api.github.com/repos/huggingface/datasets/issues/5757/events | https://github.com/huggingface/datasets/issues/5757 | 1,669,910,503 | I_kwDODunzps5jiM_n | 5,757 | Tilde (~) is not supported | {
"login": "eli-osherovich",
"id": 2437102,
"node_id": "MDQ6VXNlcjI0MzcxMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eli-osherovich",
"html_url": "https://github.com/eli-osherovich",
"followers_url": "https://api.github.com/users/eli-osherovich/followers",
"following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}",
"gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions",
"organizations_url": "https://api.github.com/users/eli-osherovich/orgs",
"repos_url": "https://api.github.com/users/eli-osherovich/repos",
"events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}",
"received_events_url": "https://api.github.com/users/eli-osherovich/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2023-04-16T11:48:10 | 2023-04-20T15:30:51 | 2023-04-20T15:30:51 | CONTRIBUTOR | null | null | null | ### Describe the bug
It seems that `~` is not recognized correctly in local paths. Whenever I try to use it, I get an exception.
### Steps to reproduce the bug
```python
load_dataset("imagefolder", data_dir="~/data/my_dataset")
```
This will generate the following error:
```
EmptyDatasetError: The directory at /path/to/cwd/~/data/datasets/clementine_tagged_per_cam doesn't contain any data files
```
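Until `~` is handled by `datasets` itself, a workaround sketch is to expand it manually before passing the path:
```python
import os
from datasets import load_dataset

# Workaround sketch: expand the tilde to the user's home directory first.
load_dataset("imagefolder", data_dir=os.path.expanduser("~/data/my_dataset"))
```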
### Expected behavior
Load the dataset.
### Environment info
datasets==2.11.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5757/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5756 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5756/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5756/comments | https://api.github.com/repos/huggingface/datasets/issues/5756/events | https://github.com/huggingface/datasets/issues/5756 | 1,669,678,080 | I_kwDODunzps5jhUQA | 5,756 | Calling shuffle on a IterableDataset with streaming=True, gives "ValueError: cannot reshape array" | {
"login": "rohfle",
"id": 21077341,
"node_id": "MDQ6VXNlcjIxMDc3MzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/21077341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rohfle",
"html_url": "https://github.com/rohfle",
"followers_url": "https://api.github.com/users/rohfle/followers",
"following_url": "https://api.github.com/users/rohfle/following{/other_user}",
"gists_url": "https://api.github.com/users/rohfle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rohfle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rohfle/subscriptions",
"organizations_url": "https://api.github.com/users/rohfle/orgs",
"repos_url": "https://api.github.com/users/rohfle/repos",
"events_url": "https://api.github.com/users/rohfle/events{/privacy}",
"received_events_url": "https://api.github.com/users/rohfle/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! I've merged a PR on the Hub with a fix: https://huggingface.co/datasets/fashion_mnist/discussions/3",
"Thanks, this appears to have fixed the issue.\r\n\r\nI've created a PR for the same change in the mnist dataset: https://huggingface.co/datasets/mnist/discussions/3/files"
] | 2023-04-16T04:59:47 | 2023-04-18T03:40:56 | 2023-04-18T03:40:56 | NONE | null | null | null | ### Describe the bug
When calling shuffle on an IterableDataset with streaming=True, I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/administrator/Documents/Projects/huggingface/jax-diffusers-sprint-consistency-models/virtualenv/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 937, in __iter__
for key, example in ex_iterable:
File "/home/administrator/Documents/Projects/huggingface/jax-diffusers-sprint-consistency-models/virtualenv/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 627, in __iter__
for x in self.ex_iterable:
File "/home/administrator/Documents/Projects/huggingface/jax-diffusers-sprint-consistency-models/virtualenv/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 138, in __iter__
yield from self.generate_examples_fn(**kwargs_with_shuffled_shards)
File "/home/administrator/.cache/huggingface/modules/datasets_modules/datasets/mnist/fda16c03c4ecfb13f165ba7e29cf38129ce035011519968cdaf74894ce91c9d4/mnist.py", line 111, in _generate_examples
images = np.frombuffer(f.read(), dtype=np.uint8).reshape(size, 28, 28)
ValueError: cannot reshape array of size 59992 into shape (60000,28,28)
```
Tested with the fashion_mnist and mnist datasets
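For context, the failure happens inside the dataset script's `_generate_examples`, where a single `f.read()` on the streamed file apparently returned fewer bytes than the full payload. A defensive sketch of reading until EOF (a hypothetical helper, not necessarily the exact Hub-side fix):
```python
import io

def read_full(f: io.BufferedIOBase) -> bytes:
    """Read a (possibly streamed) file to EOF; a single read() call may
    return only part of the payload."""
    chunks = []
    while True:
        chunk = f.read(1 << 20)  # read 1 MiB at a time until EOF
        if not chunk:
            break
        chunks.append(chunk)
    return b"".join(chunks)
```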
### Steps to reproduce the bug
Code to reproduce
```python
from datasets import load_dataset
SHUFFLE_SEED = 42
SHUFFLE_BUFFER_SIZE = 10_000
dataset = load_dataset('fashion_mnist', streaming=True).shuffle(seed=SHUFFLE_SEED, buffer_size=SHUFFLE_BUFFER_SIZE)
next(iter(dataset['train']))
```
### Expected behavior
A random item from the dataset and no error
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5756/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5755 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5755/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5755/comments | https://api.github.com/repos/huggingface/datasets/issues/5755/events | https://github.com/huggingface/datasets/issues/5755 | 1,669,048,438 | I_kwDODunzps5je6h2 | 5,755 | ImportError: cannot import name 'DeprecatedEnum' from 'datasets.utils.deprecation_utils' | {
"login": "fivejjs",
"id": 1405491,
"node_id": "MDQ6VXNlcjE0MDU0OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1405491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fivejjs",
"html_url": "https://github.com/fivejjs",
"followers_url": "https://api.github.com/users/fivejjs/followers",
"following_url": "https://api.github.com/users/fivejjs/following{/other_user}",
"gists_url": "https://api.github.com/users/fivejjs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fivejjs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fivejjs/subscriptions",
"organizations_url": "https://api.github.com/users/fivejjs/orgs",
"repos_url": "https://api.github.com/users/fivejjs/repos",
"events_url": "https://api.github.com/users/fivejjs/events{/privacy}",
"received_events_url": "https://api.github.com/users/fivejjs/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"update the version. fix"
] | 2023-04-14T23:28:54 | 2023-04-14T23:36:19 | 2023-04-14T23:36:19 | NONE | null | null | null | ### Describe the bug
Has the module moved to a new place?
### Steps to reproduce the bug
In the import step:
```python
from datasets.utils.deprecation_utils import DeprecatedEnum
```
the following error is raised:
```
ImportError: cannot import name 'DeprecatedEnum' from 'datasets.utils.deprecation_utils'
```
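Per the resolution above, this import path only exists in newer releases, so a quick check sketch is:
```python
import datasets

print(datasets.__version__)  # 1.18.3 here; the import below fails on it

# After upgrading (e.g. `pip install -U datasets`), the import resolves:
from datasets.utils.deprecation_utils import DeprecatedEnum
```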
### Expected behavior
The import should succeed.
### Environment info
python==3.9.16
datasets==1.18.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5755/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5754 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5754/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5754/comments | https://api.github.com/repos/huggingface/datasets/issues/5754/events | https://github.com/huggingface/datasets/pull/5754 | 1,668,755,035 | PR_kwDODunzps5OWozh | 5,754 | Minor tqdm fixes | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006479 / 0.011353 (-0.004874) | 0.004592 / 0.011008 (-0.006416) | 0.097239 / 0.038508 (0.058731) | 0.028609 / 0.023109 (0.005499) | 0.309225 / 0.275898 (0.033327) | 0.340015 / 0.323480 (0.016535) | 0.004857 / 0.007986 (-0.003129) | 0.004649 / 0.004328 (0.000320) | 0.074770 / 0.004250 (0.070520) | 0.038351 / 0.037052 (0.001299) | 0.313360 / 0.258489 (0.054871) | 0.350256 / 0.293841 (0.056416) | 0.030770 / 0.128546 (-0.097776) | 0.011591 / 0.075646 (-0.064055) | 0.322444 / 0.419271 (-0.096828) | 0.043704 / 0.043533 (0.000171) | 0.311790 / 0.255139 (0.056651) | 0.339183 / 0.283200 (0.055984) | 0.088041 / 0.141683 (-0.053642) | 1.490649 / 1.452155 (0.038494) | 1.561789 / 1.492716 (0.069072) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208984 / 0.018006 (0.190978) | 0.406105 / 0.000490 (0.405616) | 0.003152 / 0.000200 (0.002952) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022622 / 0.037411 (-0.014790) | 0.095819 / 0.014526 (0.081294) | 0.105132 / 0.176557 (-0.071424) | 0.165684 / 0.737135 (-0.571451) | 0.106706 / 0.296338 (-0.189632) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426126 / 0.215209 (0.210917) | 4.233864 / 2.077655 (2.156209) | 1.918727 / 1.504120 (0.414607) | 1.729905 / 1.541195 (0.188710) | 1.760342 / 1.468490 
(0.291852) | 0.695449 / 4.584777 (-3.889328) | 3.413531 / 3.745712 (-0.332181) | 1.904557 / 5.269862 (-3.365305) | 1.270604 / 4.565676 (-3.295072) | 0.083018 / 0.424275 (-0.341257) | 0.012760 / 0.007607 (0.005152) | 0.523991 / 0.226044 (0.297947) | 5.236132 / 2.268929 (2.967204) | 2.360959 / 55.444624 (-53.083665) | 1.996533 / 6.876477 (-4.879943) | 2.072934 / 2.142072 (-0.069138) | 0.804133 / 4.805227 (-4.001094) | 0.150976 / 6.500664 (-6.349688) | 0.065503 / 0.075469 (-0.009966) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211828 / 1.841788 (-0.629960) | 13.657743 / 8.074308 (5.583435) | 13.887148 / 10.191392 (3.695756) | 0.145996 / 0.680424 (-0.534428) | 0.016562 / 0.534201 (-0.517639) | 0.380359 / 0.579283 (-0.198924) | 0.388698 / 0.434364 (-0.045666) | 0.440373 / 0.540337 (-0.099965) | 0.531753 / 1.386936 (-0.855183) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006444 / 0.011353 (-0.004909) | 0.004569 / 0.011008 (-0.006439) | 0.076239 / 0.038508 (0.037731) | 0.028462 / 0.023109 (0.005352) | 0.365540 / 0.275898 (0.089642) | 0.398242 / 0.323480 (0.074762) | 0.005785 / 0.007986 (-0.002200) | 0.003346 / 0.004328 (-0.000982) | 0.076296 / 0.004250 (0.072046) | 0.039853 / 0.037052 (0.002800) | 0.367684 / 0.258489 (0.109195) | 0.409570 / 0.293841 (0.115730) | 0.030536 / 0.128546 (-0.098010) | 0.011534 / 0.075646 (-0.064112) | 0.084962 / 0.419271 (-0.334309) | 0.042708 / 0.043533 (-0.000825) | 0.344058 / 0.255139 (0.088919) | 0.389096 / 0.283200 (0.105897) | 0.090559 / 0.141683 (-0.051124) | 1.507101 / 1.452155 (0.054946) | 1.563977 / 1.492716 (0.071260) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228740 / 0.018006 (0.210734) | 0.396890 / 0.000490 (0.396400) | 0.000392 / 0.000200 (0.000192) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025052 / 0.037411 (-0.012360) | 0.099951 / 0.014526 (0.085426) | 0.106847 / 0.176557 (-0.069710) | 0.156666 / 0.737135 (-0.580469) | 0.110344 / 0.296338 (-0.185994) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442363 / 0.215209 (0.227154) | 4.429571 / 2.077655 (2.351917) | 2.076501 / 1.504120 (0.572381) | 1.875226 / 1.541195 (0.334031) | 1.909093 / 1.468490 (0.440603) | 0.703047 / 4.584777 (-3.881730) | 3.457036 / 3.745712 (-0.288676) | 2.866648 / 5.269862 (-2.403214) | 1.524430 / 4.565676 (-3.041246) | 0.083687 / 0.424275 (-0.340588) | 0.012251 / 0.007607 (0.004643) | 0.543945 / 0.226044 (0.317901) | 5.440559 / 2.268929 (3.171630) | 2.522924 / 55.444624 (-52.921700) | 2.188770 / 6.876477 (-4.687707) | 2.249632 / 2.142072 (0.107559) | 0.813499 / 4.805227 (-3.991728) | 0.152861 / 6.500664 (-6.347803) | 0.067189 / 0.075469 (-0.008280) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284255 / 1.841788 (-0.557533) | 14.207864 / 8.074308 (6.133556) | 14.279691 / 10.191392 (4.088299) | 0.167027 / 0.680424 (-0.513396) | 0.016455 / 0.534201 (-0.517746) | 0.380798 / 0.579283 (-0.198485) | 0.390013 / 0.434364 (-0.044351) | 0.445493 / 0.540337 (-0.094845) | 0.526278 / 1.386936 (-0.860658) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3fdb46c526b9d070df0eb2d56b0ecacdace7cb9a \"CML watermark\")\n"
] | 2023-04-14T18:15:14 | 2023-04-20T15:27:58 | 2023-04-20T15:21:00 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5754",
"html_url": "https://github.com/huggingface/datasets/pull/5754",
"diff_url": "https://github.com/huggingface/datasets/pull/5754.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5754.patch",
"merged_at": "2023-04-20T15:21:00"
} | `GeneratorBasedBuilder`'s TQDM bars were not used as context managers. This PR fixes that (missed these bars in https://github.com/huggingface/datasets/pull/5560).
Also, this PR modifies the single-proc `save_to_disk` to fix the issue with the TQDM bar not accumulating the progress in the multi-shard setting (again, this bug was introduced by me in the linked PR 😎) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5754/timeline | null | null | true |
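For illustration, a minimal sketch of the context-manager pattern the PR body above refers to — `shards` and `write_shard` are hypothetical stand-ins, not names from the actual diff. A single bar shared across shards accumulates progress correctly, and the `with` block guarantees it is closed even if writing raises:

```python
from tqdm.auto import tqdm

# Hypothetical sharded data and writer, standing in for the builder's real logic
shards = [list(range(10)) for _ in range(4)]

def write_shard(shard):
    pass  # write the shard to disk

# One bar for all shards; the context manager closes it on exit or error
with tqdm(total=sum(len(s) for s in shards), unit=" examples") as pbar:
    for shard in shards:
        write_shard(shard)
        pbar.update(len(shard))
```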
https://api.github.com/repos/huggingface/datasets/issues/5753 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5753/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5753/comments | https://api.github.com/repos/huggingface/datasets/issues/5753/events | https://github.com/huggingface/datasets/issues/5753 | 1,668,659,536 | I_kwDODunzps5jdblQ | 5,753 | [IterableDatasets] Add column followed by interleave datasets gives bogus outputs | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Problem with the code snippet! Using global vars and functions was not a good idea with iterable datasets!\r\n\r\nIf we update to:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\noriginal_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# now add a new column to our streaming dataset using our hack\r\nname = \"new_column\"\r\ncolumn_1 = [f\"new dataset 1, row {i}\" for i in range(50)]\r\n\r\nnew_features = original_dataset.features.copy()\r\nnew_features[name] = new_features[\"file\"] # I know that \"file\" has the right column type to match our new feature\r\n\r\ndef add_column_fn_1(example, idx):\r\n if name in example:\r\n raise ValueError(f\"Error when adding {name}: column {name} is already in the dataset.\")\r\n return {name: column_1[idx]}\r\n\r\nmodified_dataset_1 = original_dataset.map(add_column_fn_1, with_indices=True, features=new_features)\r\n\r\n# now create a second modified dataset using the same trick\r\ncolumn_2 = [f\"new dataset 2, row {i}\" for i in range(50)]\r\n\r\ndef add_column_fn_2(example, idx):\r\n if name in example:\r\n raise ValueError(f\"Error when adding {name}: column {name} is already in the dataset.\")\r\n return {name: column_2[idx]}\r\n\r\nmodified_dataset_2 = original_dataset.map(add_column_fn_2, with_indices=True, features=new_features)\r\n\r\ninterleaved_dataset = interleave_datasets([modified_dataset_1, modified_dataset_2])\r\n\r\nfor i, sample in enumerate(interleaved_dataset):\r\n print(sample[\"new_column\"])\r\n if i == 10:\r\n break\r\n```\r\nwe get the correct outputs:\r\n```python\r\nnew dataset 1, row 0\r\nnew dataset 2, row 0\r\nnew dataset 1, row 1\r\nnew dataset 2, row 1\r\nnew dataset 1, row 2\r\nnew dataset 2, row 2\r\nnew dataset 1, row 3\r\nnew dataset 2, row 3\r\nnew dataset 1, row 4\r\nnew dataset 2, row 4\r\nnew dataset 1, row 5\r\n```\r\n"
] | 2023-04-14T17:32:31 | 2023-04-14T17:45:52 | 2023-04-14T17:36:37 | CONTRIBUTOR | null | null | null | ### Describe the bug
If we add a new column to our iterable dataset using the hack described in #5752 and then interleave datasets, the new column is pinned to the values of a single source dataset.
### Steps to reproduce the bug
What we're going to do here is:
1. Load an iterable dataset in streaming mode (`original_dataset`)
2. Add a new column to this dataset using the hack in #5752 (`modified_dataset_1`)
3. Create another new dataset by adding a column with the same key but different values (`modified_dataset_2`)
4. Interleave our new datasets (`modified_dataset_1` + `modified_dataset_2`)
5. Check the value of our newly added column (`new_column`)
```python
from datasets import load_dataset, interleave_datasets
# load an iterable dataset
original_dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
# now add a new column to our streaming dataset using our hack from 5752
name = "new_column"
column = [f"new dataset 1, row {i}" for i in range(50)]
new_features = original_dataset.features.copy()
new_features[name] = new_features["file"] # I know that "file" has the right column type to match our new feature
def add_column_fn(example, idx):
if name in example:
raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.")
return {name: column[idx]}
modified_dataset_1 = original_dataset.map(add_column_fn, with_indices=True, features=new_features)
# now create a second modified dataset using the same trick
column = [f"new dataset 2, row {i}" for i in range(50)]
def add_column_fn(example, idx):
if name in example:
raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.")
return {name: column[idx]}
modified_dataset_2 = original_dataset.map(add_column_fn, with_indices=True, features=new_features)
# interleave these datasets
interleaved_dataset = interleave_datasets([modified_dataset_1, modified_dataset_2])
# now check what the value of the added column is
for i, sample in enumerate(interleaved_dataset):
print(sample["new_column"])
if i == 10:
break
```
**Print Output:**
```
new dataset 2, row 0
new dataset 2, row 0
new dataset 2, row 1
new dataset 2, row 1
new dataset 2, row 2
new dataset 2, row 2
new dataset 2, row 3
new dataset 2, row 3
new dataset 2, row 4
new dataset 2, row 4
new dataset 2, row 5
```
We see that the new column only takes values from our second dataset, for both halves of the interleaved stream.
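The likely root cause is ordinary Python late binding combined with `IterableDataset.map` being lazy: `map` stores the function object, but both stored functions read the global name `column` only at iteration time, by which point it has been rebound to the second list. A minimal sketch of the same pitfall, independent of `datasets`:

```python
# The lazy pipeline keeps the function object, but the function reads the
# *current* global binding of `column` only when it is finally called.
column = ["from dataset 1"]

def add_column_fn(example, idx):
    return {"new_column": column[idx]}

stored_fn = add_column_fn  # stands in for what .map() keeps internally

column = ["from dataset 2"]  # rebinding the global affects stored_fn too

print(stored_fn({}, 0)["new_column"])  # -> "from dataset 2", not "from dataset 1"
```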
### Expected behavior
We should alternate between dataset 1 and dataset 2, with the row index increasing:
```
new dataset 1, row 0
new dataset 2, row 0
new dataset 1, row 1
new dataset 2, row 1
new dataset 1, row 2
new dataset 2, row 2
...
```
### Environment info
- datasets version: 2.10.2.dev0
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.13.3
- PyArrow version: 10.0.1
- Pandas version: 1.5.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5753/timeline | null | completed | false |
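A more reusable variant of the fix shown in the resolving comment above (my own suggestion, not from the thread): bind the per-dataset values eagerly with `functools.partial` instead of duplicating the mapping function under two names.

```python
import functools

def add_column_fn(example, idx, *, name, column):
    if name in example:
        raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.")
    return {name: column[idx]}

column_1 = [f"new dataset 1, row {i}" for i in range(50)]
column_2 = [f"new dataset 2, row {i}" for i in range(50)]

# Each partial carries its own `column`, so nothing is left to late binding:
fn_1 = functools.partial(add_column_fn, name="new_column", column=column_1)
fn_2 = functools.partial(add_column_fn, name="new_column", column=column_2)
# modified_dataset_1 = original_dataset.map(fn_1, with_indices=True, features=new_features)
# modified_dataset_2 = original_dataset.map(fn_2, with_indices=True, features=new_features)
```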