Column schema (dataset-viewer statistics, reassembled from the flattened dump):

| Column | Type | Stats |
|--------|------|-------|
| state | stringclasses | 2 values |
| created_at | stringlengths | 20 |
| active_lock_reason | null | |
| url | stringlengths | 61 |
| assignee | dict | |
| reactions | dict | |
| draft | bool | 2 classes |
| labels_url | stringlengths | 75 |
| user | dict | |
| html_url | stringlengths | 49–51 |
| assignees | list | |
| locked | bool | 1 class |
| updated_at | stringlengths | 20 |
| closed_at | stringlengths | 20 |
| milestone | dict | |
| comments | sequence | |
| state_reason | stringclasses | 3 values |
| labels | list | |
| title | stringlengths | 1–290 |
| author_association | stringclasses | 3 values |
| timeline_url | stringlengths | 70 |
| body | stringlengths | 0–228k |
| repository_url | stringclasses | 1 value |
| pull_request | dict | |
| id | int64 | 773M–2.11B |
| comments_url | stringlengths | 70 |
| node_id | stringlengths | 18–32 |
| performed_via_github_app | null | |
| number | int64 | 1.62k–6.64k |
| events_url | stringlengths | 68 |
| is_pull_request | bool | 2 classes |
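To make the schema concrete, here is a minimal runnable sketch of how `datasets` infers these column types; it assumes only the `datasets` library, and the two toy records reuse values from the example rows below.

```python
from datasets import Dataset

# Two toy records using a subset of the columns in the schema table
# (values copied from the example rows below).
rows = [
    {"state": "closed", "number": 5603, "is_pull_request": True,
     "title": "Don't compute checksums if not necessary in `datasets-cli test`"},
    {"state": "closed", "number": 5601, "is_pull_request": False,
     "title": "Authorization error"},
]

ds = Dataset.from_list(rows)
print(ds.features)   # inferred column names and types, as in the schema table
print(ds.num_rows)   # 2
```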
Example rows (values reassembled into labeled records, one line per column in schema order):

Row 1 (pull request #5603)
state: closed
created_at: 2023-03-02T16:42:39Z
active_lock_reason: null
url: https://api.github.com/repos/huggingface/datasets/issues/5603
assignee: null
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5603/reactions" }
draft: false
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5603/labels{/name}
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
html_url: https://github.com/huggingface/datasets/pull/5603
assignees: []
locked: false
updated_at: 2023-03-03T15:45:32Z
closed_at: 2023-03-03T15:38:28Z
milestone: null
comments:
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008550 / 0.011353 (-0.002803) | 0.004476 / 0.011008 (-0.006532) | 0.100902 / 0.038508 (0.062394) | 0.029684 / 0.023109 (0.006575) | 0.308081 / 0.275898 (0.032183) | 0.363435 / 0.323480 (0.039955) | 0.006987 / 0.007986 (-0.000999) | 0.003401 / 0.004328 (-0.000927) | 0.078218 / 0.004250 (0.073967) | 0.036657 / 0.037052 (-0.000395) | 0.319670 / 0.258489 (0.061181) | 0.349952 / 0.293841 (0.056111) | 0.033416 / 0.128546 (-0.095130) | 0.011511 / 0.075646 (-0.064135) | 0.323888 / 0.419271 (-0.095384) | 0.042429 / 0.043533 (-0.001104) | 0.307310 / 0.255139 (0.052171) | 0.329459 / 0.283200 (0.046259) | 0.085209 / 0.141683 (-0.056474) | 1.475893 / 1.452155 (0.023739) | 1.502782 / 1.492716 (0.010065) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200137 / 0.018006 (0.182131) | 0.411269 / 0.000490 (0.410780) | 0.000415 / 0.000200 (0.000215) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022626 / 0.037411 (-0.014785) | 0.097045 / 0.014526 (0.082519) | 0.102955 / 0.176557 (-0.073602) | 0.148411 / 0.737135 (-0.588725) | 0.107238 / 0.296338 (-0.189100) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421683 / 0.215209 (0.206474) | 4.203031 / 2.077655 (2.125376) | 
1.908232 / 1.504120 (0.404112) | 1.698867 / 1.541195 (0.157672) | 1.743561 / 1.468490 (0.275071) | 0.693199 / 4.584777 (-3.891578) | 3.361022 / 3.745712 (-0.384690) | 2.989610 / 5.269862 (-2.280251) | 1.533036 / 4.565676 (-3.032641) | 0.082675 / 0.424275 (-0.341601) | 0.012419 / 0.007607 (0.004812) | 0.531543 / 0.226044 (0.305499) | 5.330595 / 2.268929 (3.061666) | 2.347519 / 55.444624 (-53.097105) | 1.975672 / 6.876477 (-4.900804) | 2.039541 / 2.142072 (-0.102532) | 0.810281 / 4.805227 (-3.994946) | 0.148917 / 6.500664 (-6.351747) | 0.065441 / 0.075469 (-0.010028) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266213 / 1.841788 (-0.575574) | 13.628106 / 8.074308 (5.553798) | 13.852191 / 10.191392 (3.660799) | 0.149004 / 0.680424 (-0.531420) | 0.028549 / 0.534201 (-0.505652) | 0.399824 / 0.579283 (-0.179459) | 0.401231 / 0.434364 (-0.033133) | 0.473251 / 0.540337 (-0.067086) | 0.561094 / 1.386936 (-0.825842) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006669 / 0.011353 (-0.004684) | 0.004477 / 0.011008 (-0.006532) | 0.077514 / 0.038508 (0.039006) | 0.027489 / 0.023109 (0.004380) | 0.341935 / 0.275898 (0.066037) | 0.377392 / 0.323480 (0.053912) | 0.004947 / 0.007986 (-0.003039) | 0.004600 / 0.004328 (0.000271) | 0.075938 / 0.004250 (0.071687) | 0.039586 / 0.037052 (0.002534) | 0.344966 / 0.258489 (0.086477) | 0.392181 / 0.293841 (0.098340) | 0.031838 / 0.128546 (-0.096708) | 0.011572 / 0.075646 (-0.064075) | 0.085811 / 0.419271 (-0.333461) | 0.042250 / 0.043533 (-0.001283) | 0.345605 / 0.255139 (0.090466) | 0.367814 / 0.283200 (0.084615) | 0.090683 / 0.141683 (-0.051000) | 1.483168 / 1.452155 (0.031014) | 1.559724 / 1.492716 (0.067008) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235655 / 0.018006 (0.217649) | 0.399016 / 0.000490 (0.398527) | 0.003096 / 0.000200 (0.002896) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024454 / 0.037411 (-0.012957) | 0.100710 / 0.014526 (0.086185) | 0.107950 / 0.176557 (-0.068606) | 0.161560 / 0.737135 (-0.575576) | 0.111840 / 0.296338 (-0.184498) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441362 / 0.215209 (0.226153) | 4.428105 / 2.077655 (2.350450) | 2.074501 / 1.504120 (0.570381) | 1.866672 / 1.541195 (0.325477) | 1.928266 / 1.468490 (0.459776) | 0.703561 / 4.584777 (-3.881216) | 3.396537 / 3.745712 (-0.349175) | 3.047369 / 5.269862 (-2.222492) | 1.595133 / 4.565676 (-2.970543) | 0.084028 / 0.424275 (-0.340247) | 0.012349 / 0.007607 (0.004741) | 0.539354 / 0.226044 (0.313310) | 5.401535 / 2.268929 (3.132606) | 2.499874 / 55.444624 (-52.944750) | 2.161406 / 6.876477 (-4.715071) | 2.197385 / 2.142072 (0.055313) | 0.810864 / 4.805227 (-3.994363) | 0.152277 / 6.500664 (-6.348387) | 0.067266 / 0.075469 (-0.008203) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.280900 / 1.841788 (-0.560887) | 13.815731 / 8.074308 (5.741423) | 13.007438 / 10.191392 (2.816046) | 0.129711 / 0.680424 (-0.550713) | 0.016852 / 0.534201 (-0.517349) | 0.380775 / 0.579283 (-0.198508) | 0.384143 / 0.434364 (-0.050221) | 0.459954 / 0.540337 (-0.080383) | 0.549335 / 1.386936 (-0.837601) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8805d67bd81ce48f481d5c1e56b84e6ebcaa2b2b \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009570 / 0.011353 (-0.001783) | 0.005219 / 0.011008 (-0.005789) | 0.098472 / 0.038508 (0.059964) | 0.035429 / 0.023109 (0.012320) | 0.303086 / 0.275898 (0.027188) | 0.365926 / 0.323480 (0.042446) | 0.008797 / 0.007986 (0.000811) | 0.004220 / 0.004328 (-0.000108) | 0.076670 / 0.004250 (0.072419) | 0.045596 / 0.037052 (0.008543) | 0.309476 / 0.258489 (0.050987) | 0.343958 / 0.293841 (0.050117) | 0.038741 / 0.128546 (-0.089805) | 0.011990 / 0.075646 (-0.063657) | 0.332326 / 0.419271 (-0.086945) | 0.048897 / 0.043533 (0.005364) | 0.296002 / 0.255139 (0.040863) | 0.322048 / 0.283200 (0.038849) | 0.104403 / 0.141683 (-0.037280) | 1.461777 / 1.452155 (0.009622) | 1.516362 / 1.492716 (0.023645) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201565 / 0.018006 (0.183559) | 0.435781 / 0.000490 (0.435291) | 0.004215 / 0.000200 (0.004015) | 0.000282 / 0.000054 (0.000227) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027272 / 0.037411 (-0.010139) | 0.106157 / 0.014526 (0.091631) | 0.116948 / 0.176557 (-0.059609) | 0.160404 / 0.737135 (-0.576731) | 0.122518 / 0.296338 (-0.173820) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397721 / 0.215209 (0.182512) | 3.966433 / 2.077655 (1.888778) | 1.755410 / 1.504120 (0.251290) | 1.566480 / 1.541195 (0.025285) | 1.623684 / 1.468490 (0.155194) | 0.696820 / 4.584777 (-3.887957) | 3.750437 / 3.745712 (0.004725) | 2.105875 / 5.269862 (-3.163986) | 1.442026 / 4.565676 (-3.123650) | 0.085026 / 0.424275 (-0.339249) | 0.012239 / 0.007607 (0.004632) | 0.502613 / 0.226044 (0.276569) | 5.049016 / 2.268929 (2.780087) | 2.314499 / 55.444624 (-53.130126) | 1.967943 / 6.876477 (-4.908534) | 2.033507 / 2.142072 (-0.108565) | 0.861908 / 4.805227 (-3.943319) | 0.167784 / 6.500664 (-6.332880) | 0.063022 / 0.075469 (-0.012447) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.210434 / 1.841788 (-0.631353) | 14.979319 / 8.074308 (6.905011) | 14.095263 / 10.191392 (3.903871) | 0.174203 / 0.680424 (-0.506221) | 0.028547 / 0.534201 (-0.505654) | 0.442509 / 0.579283 (-0.136774) | 0.445811 / 0.434364 (0.011447) | 0.531313 / 0.540337 
(-0.009024) | 0.636541 / 1.386936 (-0.750395) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007341 / 0.011353 (-0.004012) | 0.005197 / 0.011008 (-0.005811) | 0.075413 / 0.038508 (0.036905) | 0.033261 / 0.023109 (0.010152) | 0.339596 / 0.275898 (0.063698) | 0.376051 / 0.323480 (0.052571) | 0.005827 / 0.007986 (-0.002159) | 0.005473 / 0.004328 (0.001144) | 0.074851 / 0.004250 (0.070600) | 0.049059 / 0.037052 (0.012007) | 0.357182 / 0.258489 (0.098693) | 0.384589 / 0.293841 (0.090748) | 0.037122 / 0.128546 (-0.091424) | 0.012298 / 0.075646 (-0.063348) | 0.088191 / 0.419271 (-0.331081) | 0.052002 / 0.043533 (0.008469) | 0.343216 / 0.255139 (0.088077) | 0.364534 / 0.283200 (0.081334) | 0.105462 / 0.141683 (-0.036221) | 1.486717 / 1.452155 (0.034562) | 1.584725 / 1.492716 (0.092009) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199210 / 0.018006 (0.181203) | 0.439069 / 0.000490 (0.438580) | 0.000436 / 0.000200 (0.000236) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029931 / 0.037411 (-0.007480) | 0.109564 / 0.014526 (0.095038) | 0.122284 / 0.176557 (-0.054273) | 0.170819 / 0.737135 (-0.566317) | 0.125886 / 0.296338 (-0.170452) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422724 / 0.215209 (0.207515) | 4.210304 / 2.077655 (2.132650) | 2.001481 / 1.504120 (0.497361) | 1.810818 / 1.541195 (0.269623) | 1.901367 / 
1.468490 (0.432877) | 0.686004 / 4.584777 (-3.898773) | 3.768850 / 3.745712 (0.023138) | 2.079501 / 5.269862 (-3.190360) | 1.326970 / 4.565676 (-3.238706) | 0.085991 / 0.424275 (-0.338284) | 0.012298 / 0.007607 (0.004690) | 0.526878 / 0.226044 (0.300833) | 5.267241 / 2.268929 (2.998312) | 2.451781 / 55.444624 (-52.992843) | 2.109143 / 6.876477 (-4.767333) | 2.185426 / 2.142072 (0.043353) | 0.830165 / 4.805227 (-3.975063) | 0.166167 / 6.500664 (-6.334497) | 0.064077 / 0.075469 (-0.011392) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.270430 / 1.841788 (-0.571358) | 14.844852 / 8.074308 (6.770544) | 13.196672 / 10.191392 (3.005280) | 0.162853 / 0.680424 (-0.517571) | 0.017727 / 0.534201 (-0.516474) | 0.424803 / 0.579283 (-0.154480) | 0.439970 / 0.434364 (0.005606) | 0.530691 / 0.540337 (-0.009647) | 0.630474 / 1.386936 (-0.756462) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#24fb01b720ef4203d4ae6225f43cba912b1f6d55 \"CML watermark\")\n" ]
state_reason: null
labels: []
title: Don't compute checksums if not necessary in `datasets-cli test`
author_association: MEMBER
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5603/timeline
body: we only need them if there exists a `dataset_infos.json`
repository_url: https://api.github.com/repos/huggingface/datasets
pull_request:
{ "diff_url": "https://github.com/huggingface/datasets/pull/5603.diff", "html_url": "https://github.com/huggingface/datasets/pull/5603", "merged_at": "2023-03-03T15:38:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/5603.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5603" }
id: 1607143509
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5603/comments
node_id: PR_kwDODunzps5LJZzG
performed_via_github_app: null
number: 5603
events_url: https://api.github.com/repos/huggingface/datasets/issues/5603/events
is_pull_request: true
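The one-line body of this PR captures the whole change: checksums are only needed when a legacy `dataset_infos.json` exists to verify against. A hypothetical sketch of that condition, illustrative only and not the actual `datasets-cli` code:

```python
import os

# Hypothetical helper illustrating the PR's logic: only compute checksums
# when a legacy dataset_infos.json is present to verify them against.
def should_compute_checksums(dataset_dir: str) -> bool:
    return os.path.isfile(os.path.join(dataset_dir, "dataset_infos.json"))
```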
Row 2 (pull request #5602)
state: open
created_at: 2023-03-02T15:51:12Z
active_lock_reason: null
url: https://api.github.com/repos/huggingface/datasets/issues/5602
assignee: null
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5602/reactions" }
draft: false
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5602/labels{/name}
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/amyeroberts", "id": 22614925, "login": "amyeroberts", "node_id": "MDQ6VXNlcjIyNjE0OTI1", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "repos_url": "https://api.github.com/users/amyeroberts/repos", "site_admin": false, "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "type": "User", "url": "https://api.github.com/users/amyeroberts" }
html_url: https://github.com/huggingface/datasets/pull/5602
assignees: []
locked: false
updated_at: 2023-04-12T15:54:53Z
closed_at: null
milestone: null
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5602). All of your documentation changes will be reflected on that endpoint.", "This is a great PR! Thinking about the UX though, maybe we could do it without the extra argument? Before this PR, the logic in `to_tf_dataset` was that if the user passed a single column name in either `columns` or `label_cols`, we converted it to a length-1 list. Then, later in the code, we convert output dicts with only one key to naked Tensors.\r\n\r\nWould it be easier if we removed the argument, but instead treated the cases differently? Passing a column name as a string could yield a single naked Tensor in the output as before, but passing a list of length 1 would yield a full dict? That way if you wanted dict output with a single key you could just say `columns=[col_name]`.\r\n\r\n(I'm not totally convinced this is a good idea yet, it just seems like it might be more intuitive)", "@Rocketknight1 Happy to implement it that way - it's certainly cleaner to not have another arg. In this case, am I right in saying we'd effectively set `return_dict` [here](https://github.com/huggingface/datasets/blob/6569014a9948eab7d031a3587405e64ba92d6c59/src/datasets/arrow_dataset.py#L410) - where columns are made into a list if they were a string? \r\n\r\nThere only concern I have is this changes the default behaviour, which might break things for people who were happily using `columns=[\"my_col_str\"]` before. \r\n\r\n\r\n", "@amyeroberts That's correct! Probably the simplest way to implement it would be to just add the flag there.\r\n\r\nAnd yeah, I'm aware this might be a slightly breaking change, but we've mostly tried to move users to `prepare_tf_dataset` in `transformers` at this point, so hopefully as long as that method doesn't break then most users won't be negatively affected by the change.", "@lhoestq @Rocketknight1 - I've remove the `return_dict` argument and implemented @Rocketknight1 's suggestion. LMK what you think :) ", "@lhoestq Of course :) I've opened a draft PR here for the updates needed in transformers examples and docs to keep the returned data structure consistent: https://github.com/huggingface/transformers/pull/21935. Note: even with the different structure, `model.fit` can still successfully be called. \r\n\r\nFor the [link you shared](https://github.com/huggingface/datasets/pull/url) - for me it returns a 404 error. Is there another link I could follow to see how to run the transformers CI with this branch? \r\n\r\nCurrently looking into the failing tests 😭 ", "Oh sorry - I fixed the URL: https://github.com/huggingface/transformers/commit/4eb55bbd593adf2e49362613ee32a11ddc4a854d", "The error shows `There appear to be 80 leaked shared_memory objects to clean up at shutdown`. IIRC to_tf_dataset does some shared memory stuff for multiprocessing - maybe @Rocketknight1 you know what's going on ?", "@lhoestq That warning appears anytime you interrupt a process using Python `SharedMemory` objects - it's only a problem if you still get the error when the process finishes normally! Our implementation of `to_tf_dataset` should clean things up properly.", "Ok, not sure why it fails then :/", "Hmm, will investigate! 
Sorry, I misread - I thought that warning was coming up in the context of another error", "IMO outputing different types based on nuances in the input could confuse users.\r\n\r\nAlso, in the ideal scenario,`to_tf_function` should return a `tf.data.Dataset` that iterates over the underlying Arrow data and yields (unprocessed) dicts of TF tensors, and all the model-specific code should live in Transformers (e.g., in `prepare_tf_dataset`). So the goal would be to make `to_tf_dataset` more user-friendly, not more complex :).", "I think we agree @mariosasko :) \r\n\r\n> Also, in the ideal scenario,to_tf_function should return a tf.data.Dataset that iterates over the underlying Arrow data and yields (unprocessed) dicts of TF tensors\r\n\r\nThis I'll leave for another PR as it's outside the scope of this one and @Rocketknight1 will have far more knowledge and ideas about what is possible\r\n\r\n> all the model-specific code should live in Transformers (e.g., in prepare_tf_dataset\r\n\r\nAgreed! This PR isn't really a model specific change - although it was highlighted when trying to train a model. We definitely want to move model specific things out of datasets as much as possible. \r\n\r\n> IMO outputing different types based on nuances in the input could confuse users.\r\n> So the goal would be to make to_tf_dataset more user-friendly, not more complex :).\r\n\r\nThe aim was to move more towards being able to return the dict of TF tensors you suggest, whilst maintaining backwards compatibility. Personally, I found it surprising to be returned a tuple structure when I was using `to_tf_dataset`. The aim was to make `to_tf_dataset` more user friendly, but I agree that it has the potential to be confusing. \r\n\r\nFor context, the thought process behind this design was to: \r\n* Not add even more arguments to `to_tf_dataset`. \r\n* Have a feature selection -> return type logic in keeping with `datasets` e.g. `dataset['train'][:10]['feat1']` returns a list of values, whereas `dataset['train'][:10]['feat1', 'feat2']` returns a dictionary. \r\n\r\nVery happy to add any suggestions or changes you might have about how to make this design better! :) \r\n", "Hi ! Anything blocking here ? I'b be happy to help", "Hi @lhoestq - sorry this hasn't been very active for the past ~1.5 weeks. There's nothing specific blocking, other than not being able to replicate without running on CI, and still need to test a bit more to narrow down the issue. I should have time tomorrow to pick it up again :) ", "@lhoestq @Rocketknight1 Friendly ping for a review :) ", "Awesome ! What about showing a warning that this change is about to happen in the next version of `datasets`, and then apply this change in a subsequent major release ? This way folks at twitter won't hate us: https://github.com/twitter/the-algorithm/blob/138bb519975407d4ea0dc1478d897d451ef05dab/trust_and_safety_models/toxicity/data/mb_generator.py#L142-L148", "@lhoestq Sounds good! How would you like this warning to happen? I could open a PR to add a warning message within `to_tf_dataset`?", "Yup sounds good :)" ]
state_reason: null
labels: []
title: Return dict structure if columns are lists - to_tf_dataset
author_association: CONTRIBUTOR
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5602/timeline
body:
This PR introduces new logic to `to_tf_dataset` affecting the returned data structure, enabling a dictionary structure to be returned even if only one feature column is selected. If the `columns` or `label_cols` passed to `to_tf_dataset` are lists, they are returned as dictionaries; if they are strings, the bare tensor is returned. An outline of the behaviour:

```
dataset.to_tf_dataset(columns=["col_1"], label_cols="col_2")    # ({'col_1': tensor}, col_2)
dataset.to_tf_dataset(columns="col_1", label_cols="col_2")      # (col_1, col_2)
dataset.to_tf_dataset(columns="col_1")                          # col_1
dataset.to_tf_dataset(columns=["col_1"], label_cols=["col_2"])  # ({'col_1': tensor}, {'col_2': tensor})
dataset.to_tf_dataset(columns="col_1", label_cols=["col_2"])    # (col_1, {'col_2': tensor})
```

## Motivation

Currently, when calling `to_tf_dataset`, the returned dataset will always return a tuple structure if a single feature column is used. This can cause issues when calling `model.fit` on models which train without labels, e.g. [TFVitMAEForPreTraining](https://github.com/huggingface/transformers/blob/b6f47b539377ac1fd845c7adb4ccaa5eb514e126/src/transformers/models/vit_mae/modeling_vit_mae.py#L849). Specifically, [this line](https://github.com/huggingface/transformers/blob/d9e28d91a8b2d09b51a33155d3a03ad9fcfcbd1f/src/transformers/modeling_tf_utils.py#L1521) assumes the input `x` is a dictionary if there is no label.

## Example

Previous behaviour:

```python
In [1]: import tensorflow as tf
   ...: from datasets import load_dataset
   ...:
   ...:
   ...: def transform(batch):
   ...:     def _transform_img(img):
   ...:         img = img.convert("RGB")
   ...:         img = tf.keras.utils.img_to_array(img)
   ...:         img = tf.image.resize(img, (224, 224))
   ...:         img /= 255.0
   ...:         img = tf.transpose(img, perm=[2, 0, 1])
   ...:         return img
   ...:     batch['pixel_values'] = [_transform_img(pil_img) for pil_img in batch['img']]
   ...:     return batch
   ...:
   ...:
   ...: def collate_fn(examples):
   ...:     pixel_values = tf.stack([example["pixel_values"] for example in examples])
   ...:     return {"pixel_values": pixel_values}
   ...:
   ...:
   ...: dataset = load_dataset('cifar10')['train']
   ...: dataset = dataset.with_transform(transform)
   ...: dataset.to_tf_dataset(batch_size=8, columns=['pixel_values'], collate_fn=collate_fn)
Out[1]: <PrefetchDataset element_spec=TensorSpec(shape=(None, 3, 224, 224), dtype=tf.float32, name=None)>
```

New behaviour:

```python
In [1]: import tensorflow as tf
   ...: from datasets import load_dataset
   ...:
   ...:
   ...: def transform(batch):
   ...:     def _transform_img(img):
   ...:         img = img.convert("RGB")
   ...:         img = tf.keras.utils.img_to_array(img)
   ...:         img = tf.image.resize(img, (224, 224))
   ...:         img /= 255.0
   ...:         img = tf.transpose(img, perm=[2, 0, 1])
   ...:         return img
   ...:     batch['pixel_values'] = [_transform_img(pil_img) for pil_img in batch['img']]
   ...:     return batch
   ...:
   ...:
   ...: def collate_fn(examples):
   ...:     pixel_values = tf.stack([example["pixel_values"] for example in examples])
   ...:     return {"pixel_values": pixel_values}
   ...:
   ...:
   ...: dataset = load_dataset('cifar10')['train']
   ...: dataset = dataset.with_transform(transform)
   ...: dataset.to_tf_dataset(batch_size=8, columns=['pixel_values'], collate_fn=collate_fn)
Out[1]: <PrefetchDataset element_spec={'pixel_values': TensorSpec(shape=(None, 3, 224, 224), dtype=tf.float32, name=None)}>
```
repository_url: https://api.github.com/repos/huggingface/datasets
pull_request:
{ "diff_url": "https://github.com/huggingface/datasets/pull/5602.diff", "html_url": "https://github.com/huggingface/datasets/pull/5602", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5602.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5602" }
id: 1607054110
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5602/comments
node_id: PR_kwDODunzps5LJGfa
performed_via_github_app: null
number: 5602
events_url: https://api.github.com/repos/huggingface/datasets/issues/5602/events
is_pull_request: true
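The review thread on this PR settles on dispatching on the input type: a bare column-name string unwraps to a single tensor, while a list (even of length 1) keeps the dict structure. A minimal sketch of that rule with a hypothetical helper, not the actual `datasets` implementation:

```python
def resolve_columns(columns):
    """Return (column_list, return_dict): a bare string means "unwrap to a
    single tensor", while a list (even of length 1) keeps the dict output."""
    return_dict = not isinstance(columns, str)
    if isinstance(columns, str):
        columns = [columns]
    return columns, return_dict

print(resolve_columns("pixel_values"))    # (['pixel_values'], False)
print(resolve_columns(["pixel_values"]))  # (['pixel_values'], True)
```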
Row 3 (issue #5601)
state: closed
created_at: 2023-03-02T12:08:39Z
active_lock_reason: null
url: https://api.github.com/repos/huggingface/datasets/issues/5601
assignee: null
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5601/reactions" }
draft: null
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5601/labels{/name}
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/107404835?v=4", "events_url": "https://api.github.com/users/OleksandrKorovii/events{/privacy}", "followers_url": "https://api.github.com/users/OleksandrKorovii/followers", "following_url": "https://api.github.com/users/OleksandrKorovii/following{/other_user}", "gists_url": "https://api.github.com/users/OleksandrKorovii/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/OleksandrKorovii", "id": 107404835, "login": "OleksandrKorovii", "node_id": "U_kgDOBmbeIw", "organizations_url": "https://api.github.com/users/OleksandrKorovii/orgs", "received_events_url": "https://api.github.com/users/OleksandrKorovii/received_events", "repos_url": "https://api.github.com/users/OleksandrKorovii/repos", "site_admin": false, "starred_url": "https://api.github.com/users/OleksandrKorovii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OleksandrKorovii/subscriptions", "type": "User", "url": "https://api.github.com/users/OleksandrKorovii" }
html_url: https://github.com/huggingface/datasets/issues/5601
assignees: []
locked: false
updated_at: 2023-03-14T16:55:35Z
closed_at: 2023-03-14T16:55:34Z
milestone: null
comments:
[ "Hi! \r\n\r\nIt's better to report this kind of issue in the `huggingface_hub` repo, so if you still haven't resolved it, I suggest you open an issue there.", "Yeah, I solved it. Problem was in osxkeychain. When I do `hugginface-cli login` it's add token with default account (username)`hg_user` but my repo contain other username. When I changed username in keychain - it works now." ]
state_reason: completed
labels: []
title: Authorization error
author_association: NONE
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5601/timeline
body:
### Describe the bug

I get an `Authorization error` when trying to push data to the Hugging Face datasets hub.

### Steps to reproduce the bug

I did all the steps in the [tutorial](https://huggingface.co/docs/datasets/share):

1. `huggingface-cli login` with a WRITE token
2. `git lfs install`
3. `git clone https://huggingface.co/datasets/namespace/your_dataset_name`
4. ```
   cp /somewhere/data/*.json .
   git lfs track *.json
   git add .gitattributes
   git add *.json
   git commit -m "add json files"
   ```

But when I execute `git push` I get the error:

```
Uploading LFS objects: 0% (0/1), 0 B | 0 B/s, done.
batch response: Authorization error.
error: failed to push some refs to 'https://huggingface.co/datasets/zeusfsx/ukrainian-news'
```

The data is ~100 GB, in five JSON files (different parts).

### Expected behavior

All my data is pushed to the hub.

### Environment info

- `datasets` version: 2.10.1
- Platform: macOS-13.2.1-arm64-arm-64bit
- Python version: 3.10.10
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
repository_url: https://api.github.com/repos/huggingface/datasets
pull_request: null
id: 1606685976
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5601/comments
node_id: I_kwDODunzps5fxBUY
performed_via_github_app: null
number: 5601
events_url: https://api.github.com/repos/huggingface/datasets/issues/5601/events
is_pull_request: false
Row 4 (issue #5600)
state: closed
created_at: 2023-03-02T11:00:27Z
active_lock_reason: null
url: https://api.github.com/repos/huggingface/datasets/issues/5600
assignee: null
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5600/reactions" }
draft: null
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5600/labels{/name}
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/76955987?v=4", "events_url": "https://api.github.com/users/salahiguiliz/events{/privacy}", "followers_url": "https://api.github.com/users/salahiguiliz/followers", "following_url": "https://api.github.com/users/salahiguiliz/following{/other_user}", "gists_url": "https://api.github.com/users/salahiguiliz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/salahiguiliz", "id": 76955987, "login": "salahiguiliz", "node_id": "MDQ6VXNlcjc2OTU1OTg3", "organizations_url": "https://api.github.com/users/salahiguiliz/orgs", "received_events_url": "https://api.github.com/users/salahiguiliz/received_events", "repos_url": "https://api.github.com/users/salahiguiliz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/salahiguiliz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/salahiguiliz/subscriptions", "type": "User", "url": "https://api.github.com/users/salahiguiliz" }
html_url: https://github.com/huggingface/datasets/issues/5600
assignees: []
locked: false
updated_at: 2023-03-13T17:59:35Z
closed_at: 2023-03-13T17:59:35Z
milestone: null
comments:
[ "Hi! \r\n\r\n> (see example of DreamboothDatasets)\r\n\r\n\r\nCould you please provide a link to it? If you are referring to the example in the `diffusers` repo, your issue is unrelated to `datasets` as that example uses `Dataset` from PyTorch to load data." ]
state_reason: completed
labels: []
title: Dataloader getitem not working for DreamboothDatasets
author_association: NONE
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5600/timeline
body:
### Describe the bug

Dataloader `__getitem__` is not working as before (see the example of [DreamboothDatasets](https://github.com/huggingface/peft/blob/main/examples/lora_dreambooth/train_dreambooth.py#L451C14-L529)). Moving `datasets` back to 2.8.0 solved the issue.

### Steps to reproduce the bug

1. Use DreamBoothDataset to load some images.
2. An error occurs after loading, when trying to visualise the images.

### Expected behavior

I was expecting a numpy array of the image.

### Environment info

- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
repository_url: https://api.github.com/repos/huggingface/datasets
pull_request: null
id: 1606585596
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5600/comments
node_id: I_kwDODunzps5fwoz8
performed_via_github_app: null
number: 5600
events_url: https://api.github.com/repos/huggingface/datasets/issues/5600/events
is_pull_request: false
Row 5 (pull request #5598)
state: closed
created_at: 2023-03-01T13:54:06Z
active_lock_reason: null
url: https://api.github.com/repos/huggingface/datasets/issues/5598
assignee: null
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5598/reactions" }
draft: false
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5598/labels{/name}
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
html_url: https://github.com/huggingface/datasets/pull/5598
assignees: []
locked: false
updated_at: 2023-03-02T13:47:13Z
closed_at: 2023-03-02T13:40:17Z
milestone: null
comments:
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008823 / 0.011353 (-0.002529) | 0.004738 / 0.011008 (-0.006270) | 0.102338 / 0.038508 (0.063830) | 0.030603 / 0.023109 (0.007494) | 0.302995 / 0.275898 (0.027097) | 0.362080 / 0.323480 (0.038600) | 0.007096 / 0.007986 (-0.000889) | 0.003493 / 0.004328 (-0.000835) | 0.079129 / 0.004250 (0.074878) | 0.037966 / 0.037052 (0.000914) | 0.310412 / 0.258489 (0.051923) | 0.346740 / 0.293841 (0.052899) | 0.033795 / 0.128546 (-0.094751) | 0.011595 / 0.075646 (-0.064051) | 0.325189 / 0.419271 (-0.094083) | 0.041679 / 0.043533 (-0.001854) | 0.302339 / 0.255139 (0.047200) | 0.322519 / 0.283200 (0.039319) | 0.089058 / 0.141683 (-0.052625) | 1.496223 / 1.452155 (0.044068) | 1.512562 / 1.492716 (0.019845) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.009298 / 0.018006 (-0.008709) | 0.406726 / 0.000490 (0.406236) | 0.003753 / 0.000200 (0.003553) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023327 / 0.037411 (-0.014084) | 0.098175 / 0.014526 (0.083649) | 0.106040 / 0.176557 (-0.070516) | 0.151934 / 0.737135 (-0.585201) | 0.108465 / 0.296338 (-0.187873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419073 / 0.215209 (0.203864) | 4.188012 / 2.077655 (2.110358) | 
1.857667 / 1.504120 (0.353547) | 1.664124 / 1.541195 (0.122929) | 1.704341 / 1.468490 (0.235851) | 0.699671 / 4.584777 (-3.885106) | 3.391110 / 3.745712 (-0.354602) | 1.871136 / 5.269862 (-3.398725) | 1.176794 / 4.565676 (-3.388882) | 0.083322 / 0.424275 (-0.340953) | 0.012450 / 0.007607 (0.004843) | 0.525058 / 0.226044 (0.299014) | 5.265425 / 2.268929 (2.996497) | 2.320672 / 55.444624 (-53.123952) | 1.964806 / 6.876477 (-4.911671) | 2.027055 / 2.142072 (-0.115017) | 0.819768 / 4.805227 (-3.985459) | 0.149638 / 6.500664 (-6.351026) | 0.064774 / 0.075469 (-0.010695) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.204575 / 1.841788 (-0.637212) | 13.651878 / 8.074308 (5.577570) | 13.751973 / 10.191392 (3.560581) | 0.154781 / 0.680424 (-0.525643) | 0.028887 / 0.534201 (-0.505314) | 0.404905 / 0.579283 (-0.174379) | 0.411320 / 0.434364 (-0.023043) | 0.485026 / 0.540337 (-0.055311) | 0.579690 / 1.386936 (-0.807246) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006615 / 0.011353 (-0.004737) | 0.004606 / 0.011008 (-0.006402) | 0.076099 / 0.038508 (0.037591) | 0.027247 / 0.023109 (0.004137) | 0.360731 / 0.275898 (0.084833) | 0.393688 / 0.323480 (0.070208) | 0.005079 / 0.007986 (-0.002906) | 0.003345 / 0.004328 (-0.000984) | 0.077184 / 0.004250 (0.072934) | 0.037850 / 0.037052 (0.000797) | 0.379738 / 0.258489 (0.121249) | 0.400474 / 0.293841 (0.106633) | 0.031581 / 0.128546 (-0.096966) | 0.011508 / 0.075646 (-0.064138) | 0.084966 / 0.419271 (-0.334306) | 0.041740 / 0.043533 (-0.001793) | 0.349887 / 0.255139 (0.094748) | 0.384405 / 0.283200 (0.101205) | 0.089022 / 0.141683 (-0.052661) | 1.503448 / 1.452155 (0.051293) | 1.564870 / 1.492716 (0.072154) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233581 / 0.018006 (0.215574) | 0.413819 / 0.000490 (0.413330) | 0.000398 / 0.000200 (0.000198) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024805 / 0.037411 (-0.012607) | 0.101348 / 0.014526 (0.086822) | 0.108701 / 0.176557 (-0.067856) | 0.160011 / 0.737135 (-0.577124) | 0.111696 / 0.296338 (-0.184642) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436303 / 0.215209 (0.221094) | 4.368684 / 2.077655 (2.291029) | 2.082366 / 1.504120 (0.578247) | 1.888108 / 1.541195 (0.346913) | 1.958295 / 1.468490 (0.489804) | 0.700858 / 4.584777 (-3.883919) | 3.408321 / 3.745712 (-0.337391) | 1.872960 / 5.269862 (-3.396902) | 1.165116 / 4.565676 (-3.400560) | 0.083556 / 0.424275 (-0.340719) | 0.012348 / 0.007607 (0.004741) | 0.536551 / 0.226044 (0.310506) | 5.359974 / 2.268929 (3.091045) | 2.539043 / 55.444624 (-52.905581) | 2.200314 / 6.876477 (-4.676162) | 2.222051 / 2.142072 (0.079979) | 0.808567 / 4.805227 (-3.996661) | 0.151222 / 6.500664 (-6.349442) | 0.066351 / 0.075469 (-0.009118) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.265502 / 1.841788 (-0.576286) | 13.692066 / 8.074308 (5.617758) | 13.124507 / 10.191392 (2.933115) | 0.129545 / 0.680424 (-0.550879) | 0.016827 / 0.534201 (-0.517374) | 0.380326 / 0.579283 (-0.198957) | 0.387268 / 0.434364 (-0.047096) | 0.463722 / 0.540337 (-0.076616) | 0.553681 / 1.386936 (-0.833255) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6569014a9948eab7d031a3587405e64ba92d6c59 \"CML watermark\")\n" ]
state_reason: null
labels: []
title: Fix push_to_hub with no dataset_infos
author_association: MEMBER
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5598/timeline
body: As reported in https://github.com/vijaydwivedi75/lrgb/issues/10, `push_to_hub` fails if the remote repository already exists and has a README.md without `dataset_info` in the YAML tags. cc @clefourrier
repository_url: https://api.github.com/repos/huggingface/datasets
pull_request:
{ "diff_url": "https://github.com/huggingface/datasets/pull/5598.diff", "html_url": "https://github.com/huggingface/datasets/pull/5598", "merged_at": "2023-03-02T13:40:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/5598.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5598" }
id: 1605018478
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5598/comments
node_id: PR_kwDODunzps5LCMiX
performed_via_github_app: null
number: 5598
events_url: https://api.github.com/repos/huggingface/datasets/issues/5598/events
is_pull_request: true
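For context, a minimal sketch of the call this PR fixes, assuming a hypothetical repo id whose existing README.md lacks `dataset_info` YAML tags (authentication via `huggingface-cli login` is assumed):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})

# Before this fix, the call below failed when the target repo already
# existed with a README.md that had no `dataset_info` in its YAML tags.
ds.push_to_hub("user/demo-dataset")  # hypothetical repo id
```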
Row 6 (issue #5597)
state: closed
created_at: 2023-03-01T12:58:18Z
active_lock_reason: null
url: https://api.github.com/repos/huggingface/datasets/issues/5597
assignee: null
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5597/reactions" }
draft: null
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5597/labels{/name}
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4", "events_url": "https://api.github.com/users/speedcell4/events{/privacy}", "followers_url": "https://api.github.com/users/speedcell4/followers", "following_url": "https://api.github.com/users/speedcell4/following{/other_user}", "gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/speedcell4", "id": 3585459, "login": "speedcell4", "node_id": "MDQ6VXNlcjM1ODU0NTk=", "organizations_url": "https://api.github.com/users/speedcell4/orgs", "received_events_url": "https://api.github.com/users/speedcell4/received_events", "repos_url": "https://api.github.com/users/speedcell4/repos", "site_admin": false, "starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions", "type": "User", "url": "https://api.github.com/users/speedcell4" }
html_url: https://github.com/huggingface/datasets/issues/5597
assignees: []
locked: false
updated_at: 2023-03-02T13:30:41Z
closed_at: 2023-03-02T03:47:00Z
milestone: null
comments:
[ "We won't support in-place modifications since `datasets` is based on the Apache Arrow format which doesn't support in-place modifications.\r\n\r\nIn your case the old dataset is garbage collected pretty quickly so you won't have memory issues.\r\n\r\nNote that datasets loaded from disk (memory mapped) are not loaded in memory, and therefore the new dataset actually use the same buffers as the old one.", "Thank you for your detailed reply.\r\n\r\n> In your case the old dataset is garbage collected pretty quickly so you won't have memory issues.\r\n\r\nI understand this, but it still copies the old dataset to create the new one, is this correct? So maybe it is not memory-consuming, but time-consuming?", "Indeed, and because of that it is more efficient to add multiple rows at once instead of one by one, using `concatenate_datasets` for example." ]
state_reason: completed
labels:
[ { "color": "ffffff", "default": true, "description": "This will not be worked on", "id": 1935892913, "name": "wontfix", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix" } ]
title: in-place dataset update
author_association: NONE
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5597/timeline
body:
### Motivation

In the circumstance where I create an empty `Dataset` and keep appending new rows to it, I found that a new dataset is created at each call. It looks quite memory-consuming. I just wonder if there is a more efficient way to do this.

```python
from datasets import Dataset

ds = Dataset.from_list([])
ds.add_item({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>>     features: [],
>>>     num_rows: 0
>>> })

ds = ds.add_item({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>>     features: ['a', 'b'],
>>>     num_rows: 1
>>> })
```

### Feature request

A call for in-place dataset update functions that update the existing `Dataset` in place without creating a new copy. The interface is supposed to keep the same style as PyTorch, where the in-place version of a `function` is named `function_`. For example, the in-place version of `add_item`, i.e. `add_item_`, immediately updates the `Dataset`.

```python
from datasets import Dataset

ds = Dataset.from_list([])
ds.add_item({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>>     features: [],
>>>     num_rows: 0
>>> })

ds.add_item_({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>>     features: ['a', 'b'],
>>>     num_rows: 1
>>> })
```

### Related Functions

* `.map`
* `.filter`
* `.add_item`
repository_url: https://api.github.com/repos/huggingface/datasets
pull_request: null
id: 1604928721
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5597/comments
node_id: I_kwDODunzps5fqUTR
performed_via_github_app: null
number: 5597
events_url: https://api.github.com/repos/huggingface/datasets/issues/5597/events
is_pull_request: false
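The maintainer's closing comment in this thread recommends appending rows in bulk rather than one at a time. A small sketch of that pattern using `concatenate_datasets`:

```python
from datasets import Dataset, concatenate_datasets

base = Dataset.from_list([{"a": [1, 2, 3], "b": 4}])

# Row-by-row appends each return a brand-new Dataset object:
#     base = base.add_item({"a": [5, 6], "b": 7})

# Batching the new rows and concatenating once avoids the per-call overhead.
new_rows = Dataset.from_list([{"a": [5, 6], "b": 7}, {"a": [8], "b": 9}])
combined = concatenate_datasets([base, new_rows])
print(combined.num_rows)  # 3
```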
Row 7 (issue #5596)
state: closed
created_at: 2023-03-01T12:53:08Z
active_lock_reason: null
url: https://api.github.com/repos/huggingface/datasets/issues/5596
assignee: null
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5596/reactions" }
draft: null
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5596/labels{/name}
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4", "events_url": "https://api.github.com/users/loubnabnl/events{/privacy}", "followers_url": "https://api.github.com/users/loubnabnl/followers", "following_url": "https://api.github.com/users/loubnabnl/following{/other_user}", "gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/loubnabnl", "id": 44069155, "login": "loubnabnl", "node_id": "MDQ6VXNlcjQ0MDY5MTU1", "organizations_url": "https://api.github.com/users/loubnabnl/orgs", "received_events_url": "https://api.github.com/users/loubnabnl/received_events", "repos_url": "https://api.github.com/users/loubnabnl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions", "type": "User", "url": "https://api.github.com/users/loubnabnl" }
https://github.com/huggingface/datasets/issues/5596
[]
false
2023-12-05T03:22:00Z
2023-03-02T11:12:11Z
null
[ "Apparently some JSON objects have a `\"labels\"` field. Since this field is not present in every object, you must specify all the fields types in the README.md\r\n\r\nEDIT: actually specifying the feature types doesn’t solve the issue, it raises an error because “labels” is missing in the data", "We've updated the dataset to remove the extra `labels` field from some files, closing this issue. Thanks!", "A similar error occurs in the Pile dataset (EleutherAI/the_pile)\r\n\r\nLoading the dataset produces the following error.\r\n\r\n```\r\nTypeError: Couldn't cast array of type\r\nstruct<file: string, id: string>\r\nto\r\n{'id': Value(dtype='string', id=None)}\r\n```\r\n", "I think this was fixed in https://huggingface.co/datasets/EleutherAI/the_pile/discussions/11", "i have the same problem ,how to solve :\r\n raise TypeError(f\"Couldn't cast array of type\\n{array.type}\\nto\\n{feature}\")\r\nTypeError: Couldn't cast array of type\r\nlist<item: string>\r\nto\r\n{'content': Value(dtype='string', id=None), 'role': Value(dtype='string', id=None)}" ]
completed
[]
[TypeError: Couldn't cast array of type] Can only load a subset of the dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/5596/timeline
### Describe the bug I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error: ``` casted_values = _c(array.values, feature[0]) File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 1839, in wrapper return func(array, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 2132, in cast_array_to_feature raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") TypeError: Couldn't cast array of type struct<type: string, action: string, datetime: timestamp[s], author: string, title: string, description: string, comment_id: int64, comment: string, labels: list<item: string>> to {'type': Value(dtype='string', id=None), 'action': Value(dtype='string', id=None), 'datetime': Value(dtype='timestamp[s]', id=None), 'author': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'comment_id': Value(dtype='int64', id=None), 'comment': Value(dtype='string', id=None)} ``` But I can successfully load a subset of the dataset, for example this works: ```python ds = load_dataset('bigcode-data/the-stack-gh-issues', split="train", data_files=[f"data/data-{x}.jsonl" for x in range(10)]) ``` and `ds.features` returns: ``` {'repo': Value(dtype='string', id=None), 'org': Value(dtype='string', id=None), 'issue_id': Value(dtype='int64', id=None), 'issue_number': Value(dtype='int64', id=None), 'pull_request': {'user_login': Value(dtype='string', id=None), 'repo': Value(dtype='string', id=None), 'number': Value(dtype='int64', id=None)}, 'events': [{'type': Value(dtype='string', id=None), 'action': Value(dtype='string', id=None), 'datetime': Value(dtype='timestamp[s]', id=None), 'author': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'comment_id': Value(dtype='int64', id=None), 'comment': Value(dtype='string', id=None)}]} ``` So I'm not sure if there's an issue with just some of the files. I'd be grateful for any suggestions to fix the issue. Side note: I saw this related [issue](https://github.com/huggingface/datasets/issues/3637) and tried to write a loading script to have `events` as a `Sequence` and not `list` [here](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues/blob/main/loading.py) (the script was renamed). It worked with a subset locally but doesn't work for the remote dataset because it can't find https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues/resolve/main/data. ### Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('bigcode-data/the-stack-gh-issues', split="train") ``` ### Expected behavior Load the entire dataset successfully. ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-debian-10.13 - Python version: 3.7.12 - PyArrow version: 9.0.0 - Pandas version: 1.3.4
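Since a subset loads fine, one way to isolate the offending shard is to load the files one at a time and record which one fails to cast. This diagnostic sketch is built from the snippet in the report above; the shard count is a guess:

```python
from datasets import load_dataset

for x in range(100):  # the real number of shards may differ
    try:
        load_dataset(
            "bigcode-data/the-stack-gh-issues",
            split="train",
            data_files=[f"data/data-{x}.jsonl"],
        )
    except Exception as err:  # e.g. the cast TypeError from the traceback
        print(f"data/data-{x}.jsonl failed: {err}")
```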
https://api.github.com/repos/huggingface/datasets
null
1,604,919,993
https://api.github.com/repos/huggingface/datasets/issues/5596/comments
I_kwDODunzps5fqSK5
null
5,596
https://api.github.com/repos/huggingface/datasets/issues/5596/events
false
closed
2023-03-01T01:33:45Z
null
https://api.github.com/repos/huggingface/datasets/issues/5595
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5595/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5595/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/46943923?v=4", "events_url": "https://api.github.com/users/lazarust/events{/privacy}", "followers_url": "https://api.github.com/users/lazarust/followers", "following_url": "https://api.github.com/users/lazarust/following{/other_user}", "gists_url": "https://api.github.com/users/lazarust/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lazarust", "id": 46943923, "login": "lazarust", "node_id": "MDQ6VXNlcjQ2OTQzOTIz", "organizations_url": "https://api.github.com/users/lazarust/orgs", "received_events_url": "https://api.github.com/users/lazarust/received_events", "repos_url": "https://api.github.com/users/lazarust/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lazarust/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lazarust/subscriptions", "type": "User", "url": "https://api.github.com/users/lazarust" }
https://github.com/huggingface/datasets/pull/5595
[]
false
2023-04-04T08:20:19Z
2023-04-04T08:19:14Z
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5595). All of your documentation changes will be reflected on that endpoint.", "It looks like this issue hasn't been fixed yet, so let's wait a bit more.", "@lazarust thanks for your work, but unfortunately we cannot merge it.\r\n\r\nSee my comment in: https://github.com/huggingface/datasets/issues/5477#issuecomment-1495512688\r\n\r\nThe fix was released yesterday (2023-04-03) only in `pandas-2.0.0`:\r\n- https://github.com/pandas-dev/pandas/releases/tag/v2.0.0\r\n\r\nbut it will not be back-ported to `pandas-1`:\r\n- https://github.com/pandas-dev/pandas/pull/48576#issuecomment-1466467159\r\n\r\nAlso note that `pandas-2.0.0` dropped support for Python 3.7:\r\n- https://github.com/pandas-dev/pandas/issues/41678\r\n- https://github.com/pandas-dev/pandas/pull/41989\r\n\r\nTherefore, we cannot unpin `sqlalchemy` until we drop support for Python 3.7 (these Python users cannot use `pandas-2`). See our latest CI checks below:\r\n- \"CI / test\" fails because it runs on Python 3.7\r\n- \"CI / test_py310\" succeeds because it runs on Python 3.10 " ]
null
[]
Unpins SQLAlchemy
NONE
https://api.github.com/repos/huggingface/datasets/issues/5595/timeline
Closes #5477
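For context on why the pin exists at all: as the comments explain, the incompatibility sat in pandas-1's SQL reader, and the SQL support in `datasets` goes through pandas. A minimal sketch of the affected surface, with a hypothetical table name and connection string:

```python
from datasets import Dataset

# Reads a SQL table via pandas under the hood, which is where the
# sqlalchemy-2 / pandas-1 incompatibility surfaced.
ds = Dataset.from_sql("my_table", "sqlite:///my.db")  # illustrative names
print(ds.features)
```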
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5595.diff", "html_url": "https://github.com/huggingface/datasets/pull/5595", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5595.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5595" }
1,604,070,629
https://api.github.com/repos/huggingface/datasets/issues/5595/comments
PR_kwDODunzps5K--V9
null
5,595
https://api.github.com/repos/huggingface/datasets/issues/5595/events
true
closed
2023-02-28T23:40:53Z
null
https://api.github.com/repos/huggingface/datasets/issues/5594
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5594/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5594/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/24687672?v=4", "events_url": "https://api.github.com/users/simran-khanuja/events{/privacy}", "followers_url": "https://api.github.com/users/simran-khanuja/followers", "following_url": "https://api.github.com/users/simran-khanuja/following{/other_user}", "gists_url": "https://api.github.com/users/simran-khanuja/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/simran-khanuja", "id": 24687672, "login": "simran-khanuja", "node_id": "MDQ6VXNlcjI0Njg3Njcy", "organizations_url": "https://api.github.com/users/simran-khanuja/orgs", "received_events_url": "https://api.github.com/users/simran-khanuja/received_events", "repos_url": "https://api.github.com/users/simran-khanuja/repos", "site_admin": false, "starred_url": "https://api.github.com/users/simran-khanuja/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simran-khanuja/subscriptions", "type": "User", "url": "https://api.github.com/users/simran-khanuja" }
https://github.com/huggingface/datasets/issues/5594
[]
false
2023-11-04T20:45:56Z
2023-07-24T14:22:18Z
null
[ "Hi! I cannot reproduce this error on my machine.\r\n\r\nThe raised error could mean that one of the downloaded files is corrupted. To verify this is not the case, you can run `load_dataset` as follows:\r\n```python\r\ntrain_dataset = load_dataset('xtreme', 'udpos.English', split=\"train\", cache_dir=args.cache_dir, download_mode=\"force_redownload\", verification_mode=\"all_checks\")\r\n```", "Hi! Apologies for the delayed response! I tried the above and it doesn't solve the issue. Actually, the dataset gets downloaded most times, but sometimes this error occurs (at random afaik). Is it possible that there is a server issue for this particular dataset? I am able to download other datasets using the same code on the same machine with no issues :( I get this error now : \r\n```\r\nDownloading data: 16%|███████████████▌ | 55.9M/355M [04:45<25:25, 196kB/s]\r\nTraceback (most recent call last):\r\n File \"/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py\", line 1107, in <module>\r\n main()\r\n File \"/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py\", line 439, in main\r\n en_dataset = load_dataset(\"xtreme\", \"udpos.English\", split=\"train\", download_mode=\"force_redownload\", verification_mode=\"all_checks\")\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/load.py\", line 1782, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py\", line 872, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py\", line 1649, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py\", line 949, in _download_and_prepare\r\n verify_checksums(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/utils/info_utils.py\", line 62, in verify_checksums\r\n raise NonMatchingChecksumError(\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://lindat.mff.cuni.cz/repository/xmlui/bitstream/handle/11234/1-3105/ud-treebanks-v2.5.tgz']\r\nSet `verification_mode='no_checks'` to skip checksums verification and ignore this error\r\n```", "If this happens randomly, then this means the data file from the error message is not always downloaded correctly. \r\n\r\nThe only solution in this scenario is to download the dataset again by passing `download_mode=\"force_redownload\"` to the `load_dataset` call.", "Wow. I effectively have to redownload a dataset of 1TB because of this now?\r\nBecause 3% of its parts are broken?\r\n\r\nWhy is this downloader library so sh*t and badly documented also? I found almost nothing on the net, at least finally this issue about the problem here.\r\nNo words to express how disappointed I am by that dataset tool provided by Huggingface here, which I sadly have to use because HF is the only place where the Dataset I plan to work with is hosted....\r\n\r\nI mean... checksum check after download... or hitting timeout of a part... and redownload if not matching... that's content of every junior developer training session.\r\n\r\nI added `verification_mode=\"all_checks\"`. And it really calculated checksums for 4096 parts of ~350 MB... 
But then did nothing and tried to extract still, hitting the error again. \r\n\r\nEDIT: Apparently it is able to fix it by getting a little help: Just delete the broken parts and associated files from `~/.cache/huggingface/datasets/downloads`", "I'm getting it too, although just retrying fixed it. Nevertheless, the dataset is too large to have re-downloaded the whole thing, for it's probably just one file with an issue. It would be good to know if there's a way people could manually examine the files (first for sizes, then possibly checksums)... going to the web or elsewhere to compare and correct it by hand, if ever needed.", "Okay, no, it got further but it is repeatedly giving me:\r\n```/home/jaggz/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_11_0/3f27acf10f303eac5b6fbbbe02495aeddb46ecffdb0a2fe3507fcfbf89094631/common_voice_11_0.py\", line 195, in _generate_examples\r\nresult[\"audio\"] = {\"path\": path, \"bytes\": file.read()}\r\n^^^^^^^^^^^\r\nFile \"/usr/lib/python3.11/tarfile.py\", line 687, in read\r\nraise ReadError(\"unexpected end of data\")\r\ntarfile.ReadError: unexpected end of data\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\nFile \"/home/jaggz/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py\", line 625, in <module>\r\nmain()\r\nFile \"/home/jaggz/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py\", line 360, in main\r\nraw_datasets[\"train\"] = load_dataset(\r\n^^^^^^^^^^^^^\r\nFile \"/home/jaggz/venvs/pynow/lib/python3.11/site-packages/datasets/load.py\", line 2153, in load_dataset\r\nbuilder_instance.download_and_prepare(\r\nFile \"/home/jaggz/venvs/pynow/lib/python3.11/site-packages/datasets/builder.py\", line 954, in download_and_prepare\r\nself._download_and_prepare(\r\nFile \"/home/jaggz/venvs/pynow/lib/python3.11/site-packages/datasets/builder.py\", line 1717, in _download_and_prepare\r\nsuper()._download_and_prepare(\r\nFile \"/home/jaggz/venvs/pynow/lib/python3.11/site-packages/datasets/builder.py\", line 1049, in _download_and_prepare\r\nself._prepare_split(split_generator, **prepare_split_kwargs)\r\nFile \"/home/jaggz/venvs/pynow/lib/python3.11/site-packages/datasets/builder.py\", line 1555, in _prepare_split\r\nfor job_id, done, content in self._prepare_split_single(\r\nFile \"/home/jaggz/venvs/pynow/lib/python3.11/site-packages/datasets/builder.py\", line 1712, in _prepare_split_single\r\nraise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.builder.DatasetGenerationError: An error occurred while generating the datase\r\n", "@RuntimeRacer \r\n> EDIT: Apparently it is able to fix it by getting a little help: Just delete the broken parts and associated files from `~/.cache/huggingface/datasets/downloads`\r\n\r\nHow do you know the broken parts?\r\nMine's consistently erroring and.. 
yeah, really this thing should be able to check the files (but where's that even done)...\r\n\r\n2023-11-02 00:14:09.846055: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\r\nTo enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n/home/j/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py:299: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v4.34. Please use `token` instead.\r\n warnings.warn(\r\n11/02/2023 00:14:37 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: True\r\n11/02/2023 00:14:37 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(\r\n_n_gpu=1,\r\nadafactor=False,\r\nadam_beta1=0.9,\r\nadam_beta2=0.999,\r\n...\r\nlogging_dir=./whisper-tiny-en/runs/Nov02_00-14-28_jsys,\r\n...\r\nrun_name=./whisper-tiny-en,\r\n...\r\nweight_decay=0.0,\r\n)\r\n11/02/2023 00:14:37 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(\r\n_n_gpu=1,\r\nadafactor=False,\r\n...\r\nlogging_dir=./whisper-tiny-en/runs/Nov02_00-14-28_jsys,\r\n...\r\nweight_decay=0.0,\r\n)\r\n\r\nDownloading data files: 0%| | 0/5 [00:00<?, ?it/s]\r\nDownloading data files: 100%|██████████| 5/5 [00:00<00:00, 2426.42it/s]\r\n\r\nExtracting data files: 0%| | 0/5 [00:00<?, ?it/s]\r\nExtracting data files: 100%|██████████| 5/5 [00:00<00:00, 421.16it/s]\r\n\r\nDownloading data files: 0%| | 0/5 [00:00<?, ?it/s]\r\nDownloading data files: 100%|██████████| 5/5 [00:00<00:00, 18707.87it/s]\r\n\r\nExtracting data files: 0%| | 0/5 [00:00<?, ?it/s]\r\nExtracting data files: 100%|██████████| 5/5 [00:00<00:00, 3754.97it/s]\r\n\r\nGenerating train split: 0 examples [00:00, ? examples/s]\r\n\r\nReading metadata...: 0it [00:00, ?it/s]\u001b[A\r\n...\r\nReading metadata...: 948736it [00:23, 40632.92it/s] \r\n\r\nGenerating train split: 1 examples [00:23, 23.37s/ examples]\r\n...\r\nGenerating train split: 948736 examples [08:28, 1866.15 examples/s]\r\n\r\nGenerating validation split: 0 examples [00:00, ? examples/s]\r\n\r\nReading metadata...: 0it [00:00, ?it/s]\u001b[A\r\n\r\nReading metadata...: 16089it [00:00, 157411.88it/s]\u001b[A\r\nReading metadata...: 16354it [00:00, 158233.27it/s]\r\n\r\nGenerating validation split: 1 examples [00:00, 7.60 examples/s]\r\nGenerating validation split: 16354 examples [00:14, 1154.77 examples/s]\r\n\r\nGenerating test split: 0 examples [00:00, ? examples/s]\r\n\r\nReading metadata...: 0it [00:00, ?it/s]\u001b[A\r\nReading metadata...: 16354it [00:00, 194855.03it/s]\r\n\r\nGenerating test split: 1 examples [00:00, 4.53 examples/s]\r\nGenerating test split: 16354 examples [00:07, 2105.43 examples/s]\r\n\r\nGenerating other split: 0 examples [00:00, ? examples/s]\r\n\r\nReading metadata...: 0it [00:00, ?it/s]\u001b[A\r\nReading metadata...: 290846it [00:01, 235823.90it/s]\r\n\r\nGenerating other split: 1 examples [00:01, 1.27s/ examples]\r\n...\r\nGenerating other split: 290846 examples [02:12, 2196.96 examples/s]\r\nGenerating invalidated split: 0 examples [00:00, ? 
examples/s]\r\nReading metadata...: 252599it [00:01, 241965.85it/s]\r\n\r\nGenerating invalidated split: 1 examples [00:01, 1.08s/ examples]\r\n...\r\nGenerating invalidated split: 60130 examples [00:34, 1764.14 examples/s]\r\nTraceback (most recent call last):\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/builder.py\", line 1676, in _prepare_split_single\r\n for key, record in generator:\r\n File \"/home/j/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_11_0/3f27acf10f303eac5b6fbbbe02495aeddb46ecffdb0a2fe3507fcfbf89094631/common_voice_11_0.py\", line 195, in _generate_examples\r\n result[\"audio\"] = {\"path\": path, \"bytes\": file.read()}\r\n ^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/tarfile.py\", line 687, in read\r\n raise ReadError(\"unexpected end of data\")\r\ntarfile.ReadError: unexpected end of data\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/j/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py\", line 625, in <module>\r\n main()\r\n File \"/home/j/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py\", line 360, in main\r\n raw_datasets[\"train\"] = load_dataset(\r\n ^^^^^^^^^^^^^\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/load.py\", line 2153, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/builder.py\", line 954, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/builder.py\", line 1717, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/builder.py\", line 1049, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/builder.py\", line 1555, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/builder.py\", line 1712, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.builder.DatasetGenerationError: An error occurred while generating the dataset\r\n", "@jaggzh Hi, I actually came around with a fix for this, wasn't that easy to solve since there were a lot of hidden pitfalls in the code, and it's quite hacky, but I was able to download the full dataset.\r\n\r\nI just didn't create a PR for it yet since I was too lazy to create a fork and change my local repo's origin. 😅 \r\nLet me try to do this tonight, I'll give you a ping once it's up.\r\n\r\nEDIT: And no, what I wrote above about adding a param to the download config does NOT solve it apparently. A code fix is required here.", "@jaggzh PR is up: https://github.com/huggingface/datasets/pull/6380\r\n\r\n🤞 on approval for merge to the main repo.", "@mariosasko Can you re-open this? We really need some better diagnostics output, at the least, to locate which files are contributing, some checksum output, etc. I can't even tell if this is a mozilla...py issue or huggingface datasets or ....", "@RuntimeRacer \r\nBeautiful, thank you so much. 
I patched with your PR and am re-running now.\r\n(I'm running this script: https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py)\r\nOkay, actually it failed; so now I'm running with verification_mode='all_checks' added to the load_data() call and it's re-running now. Wish me luck.\r\n(Note: It's generating checksums; I don't see an option that handles anything between basic_checks and all_checks -- Something checking dl'ed files' lengths would be a good common fix I'd think; corruption is more rare nowadays than a short file (although maybe your patch helps prevent that in the first place.) :}", "@RuntimeRacer \r\nNo luck. Sigh.\r\n[Edit: My tmux copy didn't get some data. That was weird. I'm adding in the initial part of the output:]\r\n```\r\nDownloading data files: 100%|██████████| 5/5 [00:00<00:00, 2190.69it/s]\r\nComputing checksums: 100%|██████████| 41/41 [11:39<00:00, 17.05s/it] Extracting data files: 100%|██████████| 5/5 [00:00<00:00, 12.37it/s]\r\nDownloading data files: 100%|██████████| 5/5 [00:00<00:00, 107.64it/s]\r\nExtracting data files: 100%|██████████| 5/5 [00:00<00:00, 3149.82it/s]\r\nReading metadata...: 948736it [00:03, 243227.36it/s]s/s]\r\n...\r\n```\r\n```\r\n...\r\nReading metadata...: 252599it [00:01, 249267.71it/s]xamples/s]\r\nGenerating invalidated split: 60130 examples [00:31, 1916.33 examples/s]\r\nTraceback (most recent call last):\r\nFile \"/home/j/src/py/datasets/src/datasets/builder.py\", line 1676, in _prepare_split_single\r\nfor key, record in generator:\r\nFile \"/home/j/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_11_0/3f27acf10f303eac5b6fbbbe02495aeddb46ecffdb0a2fe3507fcfbf89094631/common_voice_11_0.py\", line 195, in _generate_examples\r\nresult[\"audio\"] = {\"path\": path, \"bytes\": file.read()}\r\n^^^^^^^^^^^\r\nFile \"/usr/lib/python3.11/tarfile.py\", line 687, in read\r\nraise ReadError(\"unexpected end of data\")\r\ntarfile.ReadError: unexpected end of data\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\nFile \"/home/j/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py\", line 627, in <module>\r\nmain()\r\nFile \"/home/j/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py\", line 360, in main\r\nraw_datasets[\"train\"] = load_dataset(\r\n^^^^^^^^^^^^^\r\nFile \"/home/j/src/py/datasets/src/datasets/load.py\", line 2153, in load_dataset\r\nbuilder_instance.download_and_prepare(\r\nFile \"/home/j/src/py/datasets/src/datasets/builder.py\", line 954, in download_and_prepare\r\nself._download_and_prepare(\r\nFile \"/home/j/src/py/datasets/src/datasets/builder.py\", line 1717, in _download_and_prepare\r\nsuper()._download_and_prepare(\r\nFile \"/home/j/src/py/datasets/src/datasets/builder.py\", line 1049, in _download_and_prepare\r\nself._prepare_split(split_generator, **prepare_split_kwargs)\r\nFile \"/home/j/src/py/datasets/src/datasets/builder.py\", line 1555, in _prepare_split\r\nfor job_id, done, content in self._prepare_split_single(\r\nFile \"/home/j/src/py/datasets/src/datasets/builder.py\", line 1712\r\n```", "I'm unable to reproduce this error. 
Based on https://github.com/psf/requests/issues/4956, newer releases of `urllib3` check the returned content length by default, so perhaps updating `requests` and `urllib3` to the latest versions (`pip install -U requests urllib3`) and loading the dataset with `datasets.load_dataset(\"xtreme\", \"udpos.English\", download_config=datasets.DownloadConfig(resume_download=True))` (re-run when it fails to resume the download) can fix the issue.", "@jaggzh I think you will need to re-download the whole dataset with my patched code. Files which have already been downloaded and marked as complete by the broken downloader won't be detected even on re-run (I described that in the PR).\r\nI also had to download reazonspeech, which is over 1TB, twice. 🙈 \r\nFor re-download, you need to manually delete the dataset files from your local machine's huggingface download cache.\r\n\r\n@mariosasko Not sure how you tested it, but it's not an issue in `requests` or `urllib`. The problem is the huggingface downloader, which generates a nested download thread for the actual download I think.\r\nThe issue I had with the reazonspeech dataset (https://huggingface.co/datasets/reazon-research/reazonspeech/tree/main) basically was that it started downloading a part, but sometimes the connection would 'starve' and only continue with a few kilobytes, and eventually stop receiving any data at all.\r\nSometimes it would even recover during the download and finish properly.\r\nHowever, if it did not recover, the request would hit the really generous default timeout (which is 100 seconds I think); however, the exception thrown by the failure inside `urllib` isn't captured or handled by the upper-level downloader code of the `datasets` library.\r\n`datasets` even has a retry mechanism, which would continue interrupted downloads if they have the `.incomplete` suffix, which isn't cleared if, for example, a manual `CTRL+C` is sent by the user to the python process.\r\nBut: If it runs into that edge case I described above (TL;DR: connection starves after minutes + timeout exception which isn't captured), the cache downloader will consider the download as successful and remove the `.incomplete` suffix nevertheless, leaving the archive file in a corrupted state.\r\n\r\nHonestly, I spent hours trying to figure out what was even going on and why the retry mechanics of the cache downloader didn't work at all.\r\nBut it is indeed an issue caused by the download process itself not receiving any info about the actual content size and file size on disk of the archive to be downloaded, thus having no direct control in case something fails on the request level.\r\n\r\nIMHO, this requires a major refactor of the way this part of the downloader works.\r\nYet I was able to quick-fix it by adding some synthetic Exception handling and explicit retry-handling in the code, as done in my PR.", "@RuntimeRacer \r\nUgh. It took a day. I'm seeing if I can get some debug code in here to examine the files myself. (I'm not sure why checksum tests would fail, so, yeah, I think you're right -- this stuff needs some work. Going through ipdb right now to try to get some idea of what's going on in the code).", "@RuntimeRacer Data can only be appended to the `.incomplete` files if `load_dataset` is called with `download_config=DownloadConfig(resume_download=True)`. \r\n\r\nWhere exactly does this exception happen (in the code)? The error stack trace would help a lot.", "@mariosasko I do not have a trace of this exception nor do I know which type it is. 
I am honestly not even sure if an exception is thrown, or the process just aborts without error.\r\n\r\n> @RuntimeRacer Data can only be appended to the .incomplete files if load_dataset is called with download_config=DownloadConfig(resume_download=True).\r\n\r\nWell, I think I gave a very clear explanation of the issue in the PR I shared, and the description above, but maybe I wasn't precise enough. Let me try to explain once more:\r\n\r\nWhat you mention here is the \"normal\" case, if the process is aborted. In this case, there will be files with `.incomplete` suffix, which the cache downloader can continue to download. That is correct.\r\n\r\nBUT: What I am talking about all the time is an edge case: if the download step crashes / times out internally, the cache downloader will NOT be aware of this, and REMOVES the `.incomplete` suffix.\r\nIt does NOT know that the file is incomplete when the `http_get` function returns and will remove the `.incomplete` suffix in any case once `http_get` returns.\r\nBut the problem is that `http_get` returns without failure, even if the download failed.\r\nAnd this is still a problem even with the latest `urllib` and `requests` libraries.\r\n", "@RuntimeRacer Updating `urllib3` and `requests` to the latest versions fixes the issue explained in this [blog](https://blog.petrzemek.net/2018/04/22/on-incomplete-http-reads-and-the-requests-library-in-python/) post. \r\n\r\nHowever, the issue explained above seems more similar to [this](https://stackoverflow.com/questions/52731196/python-3-6-5-requests-with-streaming-getting-stuck-in-iter-content-even-if-chun) one. To address it, we can reduce the default timeout to 10 seconds (btw, this was the initial value, but it was causing problems for some users) and expose a config variable so that users can easily control it. Additionally, we can re-run `http_get` similarly to https://github.com/huggingface/huggingface_hub/pull/1766 when the connection/timeout error happens to make the logic even more robust. Would this work for you? The last part is what you did in the PR, right?\r\n\r\n@jaggzh From all the datasets mentioned in this issue, `xtreme` is the only one that stores the data file checksums in the metadata. So, the checksum check has no effect when enabled for the rest of the datasets.", "(I don't have any .incomplete files, just the extraction errors.)\r\nI was going through the code to try to relate filenames to the hex/hash files, but realized I might not need to.\r\nSo instead I coded up a script in bash to examine the tar files for validity (had an issue with bash subshells not adding to my array so I had cgpt recode it in perl).\r\n\r\n```perl\r\n#!/usr/bin/perl\r\nuse strict;\r\nuse warnings;\r\n\r\n# Initialize the array to store tar files\r\nmy @tars;\r\n\r\n# Open the current directory\r\nopendir(my $dh, '.') or die \"Cannot open directory: $!\";\r\n\r\n# Read files in the current directory\r\nwhile (my $f = readdir($dh)) {\r\n # Skip files ending with lock, json, or py\r\n next if $f =~ /\\.(lock|json|py)$/;\r\n\r\n # Use the `file` command to determine the type of file\r\n my $ft = `file \"$f\"`;\r\n\r\n # If it's a tar archive, add it to the list\r\n if ($ft =~ /tar archive/) {\r\n push @tars, $f;\r\n }\r\n}\r\n\r\nclosedir($dh);\r\n\r\nprint \"Final Tars count: \" . scalar(@tars) . \"\\n\";\r\n\r\n# Iterate over the tar files and check them\r\nforeach my $i (0 .. 
$#tars) {\r\n my $f = $tars[$i];\r\n printf '%d/%d ', $i+1, scalar(@tars);\r\n \r\n # Use `ls -lgG` to list the files, similar to the original bash script\r\n system(\"ls -lgG '$f'\");\r\n\r\n # Check the integrity of the tar file\r\n my $errfn = \"/tmp/$f.tarerr\";\r\n if (system(\"tar tf '$f' > /dev/null 2> '$errfn'\") != 0) {\r\n print \" BAD $f\\n\";\r\n print \" ERR: \";\r\n system(\"cat '$errfn'\");\r\n }\r\n\r\n # Remove the error file if it exists\r\n unlink $errfn if -e $errfn;\r\n}\r\n```\r\n\r\nThis found one hash file that errored in the tar extraction, and one small tmp* file that also was supposedly a tar and was erroring. I removed those two and re-data loaded.. it grabbed just what it needed and I'm on my way. Yay!\r\n\r\nSo... is there a way for the datasets api to get file sizes? That would be a very easy and fast test, leaving checksum slowdowns for extra-messed-up situations.\r\n\r\n", "> @RuntimeRacer Updating `urllib3` and `requests` to the latest versions fixes the issue explained in this [blog](https://blog.petrzemek.net/2018/04/22/on-incomplete-http-reads-and-the-requests-library-in-python/) post.\r\n> \r\n> However, the issue explained above seems more similar to [this](https://stackoverflow.com/questions/52731196/python-3-6-5-requests-with-streaming-getting-stuck-in-iter-content-even-if-chun) one. To address it, we can reduce the default timeout to 10 seconds (btw, this was the initial value, but it was causing problems for some users) and expose a config variable so that users can easily control it. Additionally, we can re-run `http_get` similarly to [huggingface/huggingface_hub#1766](https://github.com/huggingface/huggingface_hub/pull/1766) when the connection/timeout error happens to make the logic even more robust. Would this work for you? The last part is what you did in the PR, right?\r\n> \r\n> @jaggzh From all the datasets mentioned in this issue, `xtreme` is the only one that stores the data file checksums in the metadata. So, the checksum check has no effect when enabled for the rest of the datasets.\r\n\r\n@mariosasko Well if you look at my commit date, you will see that I run into this problem still in October. The blog post you mention and the update in the pull request for `urllib` was from July: https://github.com/psf/requests/issues/4956#issuecomment-1648632935\r\n\r\nBut yeah the [issue on StackOverflow](https://stackoverflow.com/questions/52731196/python-3-6-5-requests-with-streaming-getting-stuck-in-iter-content-even-if-chun) you mentioned seems like that's the source issue I was running into there.\r\nI experimented with timeouts, but changing them didn't help to resolve the issue of the starving connection unfortunately.\r\nHowever, https://github.com/huggingface/huggingface_hub/pull/1766 seems like that could be working; it's very similar to my change. So yeah I think this would fix it probably.\r\n\r\nAlso I can confirm the checksum option did not work for [reazonspeech](https://huggingface.co/datasets/reazon-research/reazonspeech/tree/main) as well. So maybe it's a double edge case that only occurs for some datasets. 🤷‍♂️ ", "Also, the hf urls to files -- while I can't see a way of getting a listing from the hf site side -- do include the file size in the http header response. So we do have a quick way of just verifying lengths for resume. 
(This message may not be interesting to you all).\r\n\r\nFirst, a json clip (mozilla-foundation___common_voice_11_0/en/11.0.0/3f27acf10f303eac5b6fbbbe02495aeddb46ecffdb0a2fe3507fcfbf89094631/dataset_info.json):\r\n\r\n* I don't know how specific this .json is to mozilla common voice\r\n* Note that *dataset_size* is not the dataset size :) DatasetInfo class docs indicate it might be their \"combined size in bytes of the Arrow tables for all splits.\"\r\n* *num_bytes*: does match the individual file size though, and matches the http header (further down)\r\n```\r\n{\r\n \"builder_name\" : \"common_voice_11_0\",\r\n...\r\n \"config_name\" : \"en\",\r\n \"dataset_name\" : \"common_voice_11_0\",\r\n \"dataset_size\" : 1680793952,\r\n...\r\n \"download_checksums\" : {\r\n...\r\n \"https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/resolve/main/audio/en/invalidated/en_invalidated_3.tar\" : {\r\n \"checksum\" : null,\r\n \"num_bytes\" : 2110853120\r\n },\r\n...\r\n```\r\n\r\n```bash\r\n~/.cache/huggingface/datasets/downloads$ ls -lgG b45f82cb87bab2c35361857fcd46042ab658b42c37dc9a455248c2866c9b8f40* | cut -c 14-\r\n```\r\n```\r\n2110853120 Nov 1 16:28 b45f82cb87bab2c35361857fcd46042ab658b42c37dc9a455248c2866c9b8f40\r\n148 Nov 1 16:28 b45f82cb87bab2c35361857fcd46042ab658b42c37dc9a455248c2866c9b8f40.json\r\n0 Nov 1 16:07 b45f82cb87bab2c35361857fcd46042ab658b42c37dc9a455248c2866c9b8f40.lock\r\n```\r\n\r\n* Note the -L to follow redirects. Two headers are below:\r\n\r\n```bash\r\n$ curl -I -L https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/resolve/main/audio/en/invalidated/en_invalidated_3.tar\r\n```\r\n```\r\nHTTP/2 302 \r\ncontent-type: text/plain; charset=utf-8\r\ncontent-length: 1215\r\nlocation: https://cdn-lfs.huggingface.co/repos/00/ce/00ce867b4ae70bd23a10b60c32a8626d87b2666fc088ad03f86b94788faff554/984086fc250badece2992e8be4d7c4430f7c1208fb8bf37dc7c4aecdc803b220?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27en_invalidated_3.tar%3B+filename%3D%22en_invalidated_3.tar%22%3B&response-content-type=application%2Fx-tar&Expires=1699389040&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTY5OTM4OTA0MH19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy5odWdnaW5nZmFjZS5jby9yZXBvcy8wMC9jZS8wMGNlODY3YjRhZTcwYmQyM2ExMGI2MGMzMmE4NjI2ZDg3YjI2NjZmYzA4OGFkMDNmODZiOTQ3ODhmYWZmNTU0Lzk4NDA4NmZjMjUwYmFkZWNlMjk5MmU4YmU0ZDdjNDQzMGY3YzEyMDhmYjhiZjM3ZGM3YzRhZWNkYzgwM2IyMjA%7EcmVzcG9uc2UtY29udGVudC1kaXNwb3NpdGlvbj0qJnJlc3BvbnNlLWNvbnRlbnQtdHlwZT0qIn1dfQ__&Signature=WYc32e75PqbKSAv3KTpG86ooFT6oOyDDQpCt1i2B8gVS10J3qvpZlDmxaBgnGlCCl7SRiAvhIQctgwooNtWbUeDqK3T4bAo0-OOrGCuVi-%7EKWUBcoHce7nHWpl%7Ex9ubHS%7EFoYcGB2SCEqh5fIgGjNV-VKRX6TSXkRto5bclQq4VCJKHufDsJ114A1V4Qu%7EYiRIWKG4Gi93Xv4OFhyWY0uqykvP5c0x02F%7ELX0m3WbW-eXBk6Fw2xnV1XLrEkdR-9Ax2vHqMYIIw6yV0wWEc1hxE393P9mMG1TNDj%7EXDuCoOaA7LbrwBCxai%7Ew2MopdPamTXyOia5-FnSqEdsV29v4Q__&Key-Pair-Id=KVTP0A1DKRTAX\r\ndate: Sat, 04 Nov 2023 20:30:40 GMT\r\nx-powered-by: huggingface-moon\r\nx-request-id: Root=1-6546a9f0-5e7f729d09bdb38e35649a7e\r\naccess-control-allow-origin: https://huggingface.co\r\nvary: Origin, Accept\r\naccess-control-expose-headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range\r\nx-repo-commit: 23b4059922516c140711b91831aa3393a22e9b80\r\naccept-ranges: bytes\r\nx-linked-size: 2110853120\r\nx-linked-etag: \"984086fc250badece2992e8be4d7c4430f7c1208fb8bf37dc7c4aecdc803b220\"\r\nx-cache: Miss from cloudfront\r\nvia: 1.1 
f31a6426ebd75ce4393909b12f5cbdcc.cloudfront.net (CloudFront)\r\nx-amz-cf-pop: LAX53-P4\r\nx-amz-cf-id: BcYMFcHVcxPome2IjAvx0ZU90G41QlNI_HEHDGDqCQaEPvrOsnsGXw==\r\n\r\nHTTP/2 200 \r\ncontent-type: application/x-tar\r\ncontent-length: 2110853120\r\ndate: Sat, 04 Nov 2023 20:19:35 GMT\r\nlast-modified: Fri, 18 Nov 2022 15:08:22 GMT\r\netag: \"acac28988e2f7e73b68e865179fbd008\"\r\nx-amz-storage-class: INTELLIGENT_TIERING\r\nx-amz-version-id: LgTuOcd9FGN4JnAXp26O.1v2VW42GPtF\r\ncontent-disposition: attachment; filename*=UTF-8''en_invalidated_3.tar; filename=\"en_invalidated_3.tar\";\r\naccept-ranges: bytes\r\nserver: AmazonS3\r\nx-cache: Hit from cloudfront\r\nvia: 1.1 d07c8167eda81d307ca96358727f505e.cloudfront.net (CloudFront)\r\nx-amz-cf-pop: LAX50-P5\r\nx-amz-cf-id: 6oNZg_V8U1M_JXsMHQAPuRmDfxbY2BnMUWcVH0nz3VnfEZCzF5lgkQ==\r\nage: 666\r\ncache-control: public, max-age=604800, immutable, s-maxage=604800\r\nvary: Origin\r\n\r\n```\r\n" ]
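The manual checks traded back and forth above (tar listings, curl headers) suggest a quick length audit: ask the Hub for each file's size with a HEAD request that follows the 302 redirect, then compare against the cached file on disk. A hedged sketch; the URL and cache path are copied from the headers above, and the mapping itself is hypothetical rather than the library's real cache API:

```python
import os
import requests

# Hypothetical url -> cached-path mapping, taken from the output above.
downloads = {
    "https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/resolve/main/audio/en/invalidated/en_invalidated_3.tar":
        "~/.cache/huggingface/datasets/downloads/b45f82cb87bab2c35361857fcd46042ab658b42c37dc9a455248c2866c9b8f40",
}

for url, path in downloads.items():
    # Follow the redirect to the CDN so content-length describes the file.
    expected = int(requests.head(url, allow_redirects=True).headers["content-length"])
    actual = os.path.getsize(os.path.expanduser(path))
    if actual != expected:
        print(f"short download ({actual} of {expected} bytes): {path}")
```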
completed
[]
Error while downloading the xtreme udpos dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/5594/timeline
### Describe the bug Hi, I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed ```Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtreme/udpos.Arabic/1.0.0/29f5d57a48779f37ccb75cb8708d1095448aad0713b425bdc1ff9a4a128a56e4... Downloading data: 16%|██████████████▏ | 56.9M/355M [03:11<16:43, 297kB/s] Generating train split: 0%| | 0/6075 [00:00<?, ? examples/s]Traceback (most recent call last): File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1608, in _prepare_split_single for key, record in generator: File "/home/skhanuja/.cache/huggingface/modules/datasets_modules/datasets/xtreme/29f5d57a48779f37ccb75cb8708d1095448aad0713b425bdc1ff9a4a128a56e4/xtreme.py", line 732, in _generate_examples yield from UdposParser.generate_examples(config=self.config, filepath=filepath, **kwargs) File "/home/skhanuja/.cache/huggingface/modules/datasets_modules/datasets/xtreme/29f5d57a48779f37ccb75cb8708d1095448aad0713b425bdc1ff9a4a128a56e4/xtreme.py", line 921, in generate_examples for path, file in filepath: File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/download/download_manager.py", line 158, in __iter__ yield from self.generator(*self.args, **self.kwargs) File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/download/download_manager.py", line 211, in _iter_from_path yield from cls._iter_tar(f) File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/download/download_manager.py", line 167, in _iter_tar for tarinfo in stream: File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/tarfile.py", line 2475, in __iter__ tarinfo = self.next() File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/tarfile.py", line 2344, in next raise ReadError("unexpected end of data") tarfile.ReadError: unexpected end of data The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py", line 855, in <module> main() File "/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py", line 487, in main train_dataset = load_dataset(dataset_name, source_language, split="train", cache_dir=args.cache_dir, download_mode="force_redownload") File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/load.py", line 1782, in load_dataset builder_instance.download_and_prepare( File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 872, in download_and_prepare self._download_and_prepare( File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1649, in _download_and_prepare super()._download_and_prepare( File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 967, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1488, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1644, in _prepare_split_single 
raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug ``` train_dataset = load_dataset('xtreme', 'udpos.English', split="train", cache_dir=args.cache_dir, download_mode="force_redownload") ``` ### Expected behavior Download the udpos dataset ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-3.10.0-957.1.3.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.10.8 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
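A runnable form of the mitigation proposed in the comments above, which resumes a partial download instead of silently reusing it; it uses the reporter's own dataset and config, and may need several re-runs if the connection keeps dropping:

```python
import datasets

ds = datasets.load_dataset(
    "xtreme",
    "udpos.English",
    split="train",
    download_config=datasets.DownloadConfig(resume_download=True),
)
```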
https://api.github.com/repos/huggingface/datasets
null
1,603,980,995
https://api.github.com/repos/huggingface/datasets/issues/5594/comments
I_kwDODunzps5fms7D
null
5,594
https://api.github.com/repos/huggingface/datasets/issues/5594/events
false
closed
2023-02-28T18:42:37Z
null
https://api.github.com/repos/huggingface/datasets/issues/5592
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5592/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5592/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
https://github.com/huggingface/datasets/pull/5592
[]
false
2023-02-28T19:26:33Z
2023-02-28T19:19:15Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009526 / 0.011353 (-0.001827) | 0.005132 / 0.011008 (-0.005876) | 0.101312 / 0.038508 (0.062804) | 0.035703 / 0.023109 (0.012594) | 0.301788 / 0.275898 (0.025890) | 0.368411 / 0.323480 (0.044932) | 0.008163 / 0.007986 (0.000177) | 0.005462 / 0.004328 (0.001134) | 0.077282 / 0.004250 (0.073031) | 0.044139 / 0.037052 (0.007086) | 0.312280 / 0.258489 (0.053791) | 0.351870 / 0.293841 (0.058029) | 0.038266 / 0.128546 (-0.090281) | 0.012051 / 0.075646 (-0.063595) | 0.335109 / 0.419271 (-0.084163) | 0.047596 / 0.043533 (0.004064) | 0.300931 / 0.255139 (0.045792) | 0.325705 / 0.283200 (0.042505) | 0.100472 / 0.141683 (-0.041211) | 1.475037 / 1.452155 (0.022882) | 1.520059 / 1.492716 (0.027343) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211096 / 0.018006 (0.193089) | 0.442988 / 0.000490 (0.442498) | 0.003644 / 0.000200 (0.003444) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027492 / 0.037411 (-0.009919) | 0.108981 / 0.014526 (0.094455) | 0.117836 / 0.176557 (-0.058720) | 0.161220 / 0.737135 (-0.575915) | 0.124765 / 0.296338 (-0.171574) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413480 / 0.215209 (0.198271) | 4.111355 / 2.077655 (2.033700) | 1.933024 
/ 1.504120 (0.428904) | 1.727467 / 1.541195 (0.186272) | 1.827106 / 1.468490 (0.358616) | 0.688209 / 4.584777 (-3.896568) | 3.759672 / 3.745712 (0.013960) | 2.163806 / 5.269862 (-3.106056) | 1.473521 / 4.565676 (-3.092155) | 0.082859 / 0.424275 (-0.341416) | 0.012320 / 0.007607 (0.004713) | 0.515321 / 0.226044 (0.289277) | 5.158651 / 2.268929 (2.889722) | 2.489123 / 55.444624 (-52.955501) | 2.218910 / 6.876477 (-4.657566) | 2.257306 / 2.142072 (0.115233) | 0.861477 / 4.805227 (-3.943750) | 0.165857 / 6.500664 (-6.334807) | 0.063723 / 0.075469 (-0.011746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.195163 / 1.841788 (-0.646625) | 14.954518 / 8.074308 (6.880210) | 14.272289 / 10.191392 (4.080897) | 0.167420 / 0.680424 (-0.513004) | 0.028907 / 0.534201 (-0.505294) | 0.450117 / 0.579283 (-0.129166) | 0.448532 / 0.434364 (0.014168) | 0.534406 / 0.540337 (-0.005931) | 0.633468 / 1.386936 (-0.753468) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007658 / 0.011353 (-0.003694) | 0.005266 / 0.011008 (-0.005742) | 0.075293 / 0.038508 (0.036785) | 0.034442 / 0.023109 (0.011333) | 0.346558 / 0.275898 (0.070660) | 0.391496 / 0.323480 (0.068017) | 0.005852 / 0.007986 (-0.002133) | 0.004121 / 0.004328 (-0.000207) | 0.074254 / 0.004250 (0.070004) | 0.048361 / 0.037052 (0.011309) | 0.344613 / 0.258489 (0.086124) | 0.401497 / 0.293841 (0.107656) | 0.037243 / 0.128546 (-0.091303) | 0.012505 / 0.075646 (-0.063142) | 0.087188 / 0.419271 (-0.332084) | 0.050114 / 0.043533 (0.006581) | 0.340454 / 0.255139 (0.085315) | 0.361087 / 0.283200 (0.077887) | 0.104692 / 0.141683 (-0.036991) | 1.419432 / 1.452155 (-0.032722) | 1.524709 / 1.492716 (0.031993) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231820 / 0.018006 (0.213814) | 0.445791 / 0.000490 (0.445301) | 0.000442 / 0.000200 (0.000242) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030445 / 0.037411 (-0.006967) | 0.111183 / 0.014526 (0.096657) | 0.123494 / 0.176557 (-0.053063) | 0.173121 / 0.737135 (-0.564014) | 0.124968 / 0.296338 (-0.171371) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428854 / 0.215209 (0.213645) | 4.270262 / 2.077655 (2.192608) | 2.012075 / 1.504120 (0.507955) | 1.826564 / 1.541195 (0.285370) | 1.931699 / 1.468490 (0.463209) | 0.728762 / 4.584777 (-3.856015) | 3.879640 / 3.745712 (0.133928) | 3.325715 / 5.269862 (-1.944147) | 1.818573 / 4.565676 (-2.747104) | 0.087879 / 0.424275 (-0.336396) | 0.012530 / 0.007607 (0.004923) | 0.530249 / 0.226044 (0.304204) | 5.286110 / 2.268929 (3.017181) | 2.566649 / 55.444624 (-52.877975) | 2.210162 / 6.876477 (-4.666315) | 2.297562 / 2.142072 (0.155490) | 0.906161 / 4.805227 (-3.899066) | 0.171914 / 6.500664 (-6.328750) | 0.064182 / 0.075469 (-0.011287) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285781 / 1.841788 (-0.556006) | 16.159072 / 8.074308 (8.084763) | 14.087492 / 10.191392 (3.896100) | 0.148789 / 0.680424 (-0.531635) | 0.018078 / 0.534201 (-0.516123) | 0.427748 / 0.579283 (-0.151535) | 0.447079 / 0.434364 (0.012715) | 0.535917 / 0.540337 (-0.004421) | 0.627491 / 1.386936 (-0.759445) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#88fa043d08c12923709c0492e037130c99c029fb \"CML watermark\")\n" ]
null
[]
Fix docstring example
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5592/timeline
Fixes #5581 to use the correct output for the `set_format` method.
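For reference, a tiny self-contained illustration of the method whose docstring this PR corrects; the column names are made up:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3], "y": [0.1, 0.2, 0.3]})

# After set_format, indexing returns NumPy objects for the chosen columns.
ds.set_format(type="numpy", columns=["x", "y"])
print(type(ds[0]["x"]))  # <class 'numpy.int64'>
```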
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5592.diff", "html_url": "https://github.com/huggingface/datasets/pull/5592", "merged_at": "2023-02-28T19:19:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/5592.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5592" }
1,603,619,124
https://api.github.com/repos/huggingface/datasets/issues/5592/comments
PR_kwDODunzps5K9dWr
null
5,592
https://api.github.com/repos/huggingface/datasets/issues/5592/events
true
closed
2023-02-28T18:09:05Z
null
https://api.github.com/repos/huggingface/datasets/issues/5591
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5591/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5591/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/5591
[]
false
2023-02-28T18:16:31Z
2023-02-28T18:09:15Z
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5591). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008826 / 0.011353 (-0.002527) | 0.004595 / 0.011008 (-0.006413) | 0.103387 / 0.038508 (0.064879) | 0.030241 / 0.023109 (0.007132) | 0.351202 / 0.275898 (0.075303) | 0.417601 / 0.323480 (0.094121) | 0.007121 / 0.007986 (-0.000865) | 0.003497 / 0.004328 (-0.000831) | 0.079256 / 0.004250 (0.075006) | 0.037617 / 0.037052 (0.000564) | 0.380542 / 0.258489 (0.122053) | 0.397863 / 0.293841 (0.104022) | 0.034291 / 0.128546 (-0.094255) | 0.011767 / 0.075646 (-0.063879) | 0.323737 / 0.419271 (-0.095534) | 0.041502 / 0.043533 (-0.002031) | 0.352982 / 0.255139 (0.097843) | 0.378618 / 0.283200 (0.095418) | 0.091671 / 0.141683 (-0.050012) | 1.499278 / 1.452155 (0.047123) | 1.517489 / 1.492716 (0.024773) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.190108 / 0.018006 (0.172102) | 0.414404 / 0.000490 (0.413915) | 0.001064 / 0.000200 (0.000864) | 0.000066 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023214 / 0.037411 (-0.014198) | 0.099351 / 0.014526 (0.084825) | 0.105227 / 0.176557 (-0.071330) | 0.150620 / 0.737135 (-0.586516) | 0.109323 / 0.296338 (-0.187015) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.412463 / 0.215209 (0.197254) | 4.138123 / 2.077655 (2.060469) | 1.845163 / 1.504120 (0.341043) | 1.641108 / 1.541195 (0.099913) | 1.715471 / 1.468490 (0.246981) | 0.697397 / 4.584777 (-3.887380) | 3.449829 / 3.745712 (-0.295883) | 1.959309 / 5.269862 (-3.310553) | 1.285754 / 4.565676 (-3.279923) | 0.082746 / 0.424275 (-0.341529) | 0.012523 / 0.007607 (0.004916) | 0.524745 / 0.226044 (0.298700) | 5.257085 / 2.268929 (2.988156) | 2.293163 / 55.444624 (-53.151461) | 1.958309 / 6.876477 (-4.918168) | 2.016106 / 2.142072 (-0.125966) | 0.814359 / 4.805227 (-3.990869) | 0.149443 / 6.500664 (-6.351221) | 0.066013 / 0.075469 (-0.009456) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.248495 / 1.841788 (-0.593292) | 14.303301 / 8.074308 (6.228993) | 14.238533 / 10.191392 (4.047141) | 0.161421 / 0.680424 (-0.519003) | 0.028779 / 0.534201 (-0.505422) | 0.396511 / 0.579283 (-0.182772) | 0.412784 / 0.434364 (-0.021580) | 0.473984 / 0.540337 (-0.066353) | 0.569610 / 1.386936 (-0.817327) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007003 / 0.011353 (-0.004350) | 0.004621 / 0.011008 (-0.006387) | 0.079418 / 0.038508 (0.040910) | 0.028659 / 0.023109 (0.005550) | 0.340594 / 0.275898 (0.064696) | 0.377972 / 0.323480 (0.054492) | 0.005421 / 0.007986 (-0.002565) | 0.004852 / 0.004328 (0.000523) | 0.077579 / 0.004250 (0.073329) | 0.042662 / 0.037052 (0.005610) | 0.342264 / 0.258489 (0.083775) | 0.387255 / 0.293841 (0.093414) | 0.032574 / 0.128546 (-0.095972) | 0.011820 / 0.075646 (-0.063826) | 0.087960 / 0.419271 (-0.331312) | 0.045199 / 0.043533 (0.001667) | 0.341785 / 0.255139 (0.086646) | 0.365014 / 0.283200 (0.081814) | 0.096129 / 0.141683 (-0.045554) | 1.498962 / 1.452155 (0.046807) | 1.557331 / 1.492716 (0.064615) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236216 / 0.018006 (0.218210) | 0.440189 / 0.000490 (0.439699) | 0.000399 / 0.000200 
(0.000199) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026357 / 0.037411 (-0.011055) | 0.104485 / 0.014526 (0.089959) | 0.109616 / 0.176557 (-0.066941) | 0.163005 / 0.737135 (-0.574130) | 0.113859 / 0.296338 (-0.182479) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437452 / 0.215209 (0.222243) | 4.371854 / 2.077655 (2.294199) | 2.056845 / 1.504120 (0.552725) | 1.856071 / 1.541195 (0.314876) | 1.957978 / 1.468490 (0.489488) | 0.703171 / 4.584777 (-3.881606) | 3.433889 / 3.745712 (-0.311823) | 1.968321 / 5.269862 (-3.301541) | 1.204947 / 4.565676 (-3.360729) | 0.084499 / 0.424275 (-0.339777) | 0.012729 / 0.007607 (0.005122) | 0.537534 / 0.226044 (0.311490) | 5.383346 / 2.268929 (3.114417) | 2.522136 / 55.444624 (-52.922488) | 2.192715 / 6.876477 (-4.683762) | 2.243579 / 2.142072 (0.101507) | 0.811136 / 4.805227 (-3.994091) | 0.154015 / 6.500664 (-6.346649) | 0.069324 / 0.075469 (-0.006145) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294232 / 1.841788 (-0.547556) | 14.809448 / 8.074308 (6.735140) | 13.510074 / 10.191392 (3.318682) | 0.158033 / 0.680424 (-0.522391) | 0.016703 / 0.534201 (-0.517498) | 0.393976 / 0.579283 (-0.185307) | 0.385983 / 0.434364 (-0.048381) | 0.476691 / 0.540337 (-0.063646) | 0.565694 / 1.386936 (-0.821242) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b0dd3126196e8fcd9ba81a6602b46623b4e77e6e \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009155 / 0.011353 (-0.002198) | 0.005227 / 0.011008 (-0.005781) | 0.099767 / 0.038508 (0.061259) | 0.035338 / 0.023109 (0.012229) | 0.293913 / 0.275898 (0.018015) | 0.366976 / 0.323480 (0.043496) | 0.007802 / 0.007986 (-0.000184) | 0.005286 / 0.004328 (0.000958) | 0.075117 / 0.004250 (0.070867) | 0.042336 / 0.037052 (0.005284) | 0.304690 / 0.258489 (0.046201) | 0.343496 / 0.293841 (0.049655) | 0.038745 / 0.128546 (-0.089802) | 0.012275 / 0.075646 (-0.063371) | 0.334455 / 0.419271 (-0.084817) | 0.052611 / 0.043533 (0.009078) | 0.293229 / 0.255139 (0.038090) | 0.314340 / 0.283200 (0.031140) | 0.108676 / 0.141683 (-0.033007) | 1.444495 / 1.452155 (-0.007659) | 1.492244 / 1.492716 (-0.000472) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204852 / 0.018006 (0.186846) | 0.438202 / 0.000490 (0.437712) | 0.005043 / 0.000200 (0.004843) | 0.000282 / 0.000054 (0.000228) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027268 / 0.037411 (-0.010143) | 0.109497 / 0.014526 (0.094972) | 0.117187 / 0.176557 (-0.059369) | 0.162551 / 0.737135 (-0.574584) | 0.124175 / 0.296338 (-0.172164) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401667 / 0.215209 (0.186458) | 4.010274 / 2.077655 (1.932619) | 1.882617 / 1.504120 (0.378497) | 1.721960 / 1.541195 (0.180765) | 1.806874 / 1.468490 (0.338384) | 0.711253 / 4.584777 (-3.873524) | 3.806585 / 3.745712 (0.060873) | 3.713011 / 5.269862 (-1.556851) | 1.896558 / 4.565676 (-2.669119) | 0.086092 / 0.424275 (-0.338184) | 0.012129 / 0.007607 (0.004522) | 0.504905 / 0.226044 (0.278861) | 5.050794 / 2.268929 (2.781865) | 2.324331 / 55.444624 (-53.120293) | 2.020170 / 6.876477 (-4.856307) | 2.079685 / 2.142072 (-0.062388) | 0.854782 / 4.805227 (-3.950445) | 0.166754 / 6.500664 (-6.333910) | 0.062434 / 0.075469 (-0.013035) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.187897 / 1.841788 (-0.653891) | 14.618517 / 8.074308 (6.544209) | 13.205760 / 10.191392 (3.014368) | 0.154322 / 0.680424 (-0.526102) | 0.029243 / 0.534201 (-0.504958) | 0.442390 / 0.579283 (-0.136893) | 0.434651 / 
0.434364 (0.000287) | 0.523082 / 0.540337 (-0.017256) | 0.602675 / 1.386936 (-0.784261) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007214 / 0.011353 (-0.004139) | 0.005225 / 0.011008 (-0.005783) | 0.076497 / 0.038508 (0.037989) | 0.032761 / 0.023109 (0.009652) | 0.336005 / 0.275898 (0.060107) | 0.373547 / 0.323480 (0.050067) | 0.005460 / 0.007986 (-0.002526) | 0.003933 / 0.004328 (-0.000395) | 0.074540 / 0.004250 (0.070289) | 0.047785 / 0.037052 (0.010733) | 0.341917 / 0.258489 (0.083428) | 0.396978 / 0.293841 (0.103137) | 0.036763 / 0.128546 (-0.091783) | 0.012043 / 0.075646 (-0.063603) | 0.087632 / 0.419271 (-0.331640) | 0.049376 / 0.043533 (0.005843) | 0.335169 / 0.255139 (0.080030) | 0.354852 / 0.283200 (0.071652) | 0.100180 / 0.141683 (-0.041503) | 1.443422 / 1.452155 (-0.008733) | 1.518618 / 1.492716 (0.025901) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209593 / 0.018006 (0.191587) | 0.444028 / 0.000490 (0.443538) | 0.004545 / 0.000200 (0.004345) | 0.000100 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029676 / 0.037411 (-0.007735) | 0.115444 / 0.014526 (0.100918) | 0.121765 / 0.176557 (-0.054791) | 0.171037 / 0.737135 (-0.566098) | 0.128592 / 0.296338 (-0.167746) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428556 / 0.215209 (0.213347) | 4.228531 / 2.077655 (2.150877) | 2.039190 / 1.504120 (0.535070) | 
1.836518 / 1.541195 (0.295324) | 1.897040 / 1.468490 (0.428550) | 0.698893 / 4.584777 (-3.885884) | 3.753998 / 3.745712 (0.008286) | 2.097731 / 5.269862 (-3.172131) | 1.338315 / 4.565676 (-3.227361) | 0.087119 / 0.424275 (-0.337156) | 0.012149 / 0.007607 (0.004542) | 0.520774 / 0.226044 (0.294730) | 5.227420 / 2.268929 (2.958492) | 2.522235 / 55.444624 (-52.922389) | 2.194213 / 6.876477 (-4.682264) | 2.241406 / 2.142072 (0.099333) | 0.843119 / 4.805227 (-3.962109) | 0.169128 / 6.500664 (-6.331536) | 0.065071 / 0.075469 (-0.010398) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254490 / 1.841788 (-0.587298) | 15.037137 / 8.074308 (6.962829) | 13.115333 / 10.191392 (2.923941) | 0.181743 / 0.680424 (-0.498681) | 0.017748 / 0.534201 (-0.516453) | 0.425758 / 0.579283 (-0.153525) | 0.429926 / 0.434364 (-0.004438) | 0.524386 / 0.540337 (-0.015951) | 0.643044 / 1.386936 (-0.743892) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#09e820e79a3b879855b514e2a62d84b738013940 \"CML watermark\")\n" ]
null
[]
set dev version
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5591/timeline
null
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5591.diff", "html_url": "https://github.com/huggingface/datasets/pull/5591", "merged_at": "2023-02-28T18:09:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/5591.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5591" }
1603571407
https://api.github.com/repos/huggingface/datasets/issues/5591/comments
PR_kwDODunzps5K9S79
null
5591
https://api.github.com/repos/huggingface/datasets/issues/5591/events
true
closed
2023-02-28T17:58:11Z
null
https://api.github.com/repos/huggingface/datasets/issues/5590
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5590/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5590/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/5590
[]
false
2023-02-28T18:16:27Z
2023-02-28T18:06:08Z
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008717 / 0.011353 (-0.002636) | 0.004570 / 0.011008 (-0.006439) | 0.100228 / 0.038508 (0.061720) | 0.030076 / 0.023109 (0.006967) | 0.317919 / 0.275898 (0.042021) | 0.366360 / 0.323480 (0.042880) | 0.007008 / 0.007986 (-0.000978) | 0.003498 / 0.004328 (-0.000831) | 0.077607 / 0.004250 (0.073356) | 0.036106 / 0.037052 (-0.000946) | 0.314128 / 0.258489 (0.055639) | 0.351450 / 0.293841 (0.057609) | 0.033697 / 0.128546 (-0.094849) | 0.011424 / 0.075646 (-0.064222) | 0.323867 / 0.419271 (-0.095404) | 0.042073 / 0.043533 (-0.001460) | 0.304564 / 0.255139 (0.049425) | 0.334865 / 0.283200 (0.051665) | 0.087791 / 0.141683 (-0.053892) | 1.488075 / 1.452155 (0.035920) | 1.513676 / 1.492716 (0.020959) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.010936 / 0.018006 (-0.007070) | 0.409610 / 0.000490 (0.409121) | 0.004820 / 0.000200 (0.004620) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023931 / 0.037411 (-0.013481) | 0.096826 / 0.014526 (0.082300) | 0.105764 / 0.176557 (-0.070792) | 0.153241 / 0.737135 (-0.583895) | 0.108976 / 0.296338 (-0.187363) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412833 / 0.215209 (0.197624) | 4.129735 / 2.077655 (2.052081) | 1.819049 / 1.504120 (0.314929) | 1.617411 / 1.541195 (0.076216) | 1.682353 / 
1.468490 (0.213863) | 0.688987 / 4.584777 (-3.895790) | 3.388276 / 3.745712 (-0.357436) | 1.857452 / 5.269862 (-3.412410) | 1.158020 / 4.565676 (-3.407657) | 0.082161 / 0.424275 (-0.342114) | 0.012319 / 0.007607 (0.004712) | 0.523052 / 0.226044 (0.297008) | 5.237726 / 2.268929 (2.968797) | 2.275605 / 55.444624 (-53.169020) | 1.931664 / 6.876477 (-4.944813) | 1.970026 / 2.142072 (-0.172046) | 0.805240 / 4.805227 (-3.999988) | 0.148431 / 6.500664 (-6.352233) | 0.064707 / 0.075469 (-0.010762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196456 / 1.841788 (-0.645332) | 13.750113 / 8.074308 (5.675805) | 13.853543 / 10.191392 (3.662151) | 0.137892 / 0.680424 (-0.542532) | 0.028304 / 0.534201 (-0.505897) | 0.400128 / 0.579283 (-0.179155) | 0.410409 / 0.434364 (-0.023955) | 0.479165 / 0.540337 (-0.061172) | 0.575002 / 1.386936 (-0.811934) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006587 / 0.011353 (-0.004766) | 0.004526 / 0.011008 (-0.006482) | 0.075673 / 0.038508 (0.037165) | 0.027429 / 0.023109 (0.004320) | 0.341808 / 0.275898 (0.065910) | 0.379520 / 0.323480 (0.056040) | 0.004972 / 0.007986 (-0.003014) | 0.003354 / 0.004328 (-0.000975) | 0.075373 / 0.004250 (0.071123) | 0.038347 / 0.037052 (0.001294) | 0.343671 / 0.258489 (0.085181) | 0.389632 / 0.293841 (0.095791) | 0.031694 / 0.128546 (-0.096853) | 0.011458 / 0.075646 (-0.064188) | 0.084210 / 0.419271 (-0.335062) | 0.042662 / 0.043533 (-0.000871) | 0.339436 / 0.255139 (0.084297) | 0.367493 / 0.283200 (0.084294) | 0.091604 / 0.141683 (-0.050079) | 1.526762 / 1.452155 (0.074607) | 1.569110 / 1.492716 (0.076394) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211496 / 0.018006 (0.193489) | 0.404868 / 0.000490 (0.404379) | 0.004267 / 0.000200 (0.004067) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025189 / 0.037411 (-0.012222) | 0.099139 / 0.014526 (0.084613) | 0.105898 / 0.176557 (-0.070659) | 0.160997 / 0.737135 (-0.576138) | 0.110158 / 0.296338 (-0.186180) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444286 / 0.215209 (0.229077) | 4.445479 / 2.077655 (2.367824) | 2.118920 / 1.504120 (0.614800) | 1.908296 / 1.541195 (0.367102) | 1.947211 / 1.468490 (0.478721) | 0.704850 / 4.584777 (-3.879927) | 3.395990 / 3.745712 (-0.349723) | 1.892529 / 5.269862 (-3.377332) | 1.172190 / 4.565676 (-3.393486) | 0.084235 / 0.424275 (-0.340040) | 0.012588 / 0.007607 (0.004981) | 0.546962 / 0.226044 (0.320918) | 5.475842 / 2.268929 (3.206913) | 2.575280 / 55.444624 (-52.869344) | 2.245658 / 6.876477 (-4.630818) | 2.274767 / 2.142072 (0.132695) | 0.813755 / 4.805227 (-3.991473) | 0.151927 / 6.500664 (-6.348737) | 0.067167 / 0.075469 (-0.008302) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.267666 / 1.841788 (-0.574122) | 13.658905 / 8.074308 (5.584597) | 13.207249 / 10.191392 (3.015857) | 0.128590 / 0.680424 (-0.551833) | 0.016531 / 0.534201 (-0.517670) | 0.385050 / 0.579283 (-0.194233) | 0.388945 / 0.434364 (-0.045419) | 0.472378 / 0.540337 (-0.067959) | 0.568929 / 1.386936 (-0.818007) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#87cd5f7f7fda60d0f91f50424bcc3f327fe0d059 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009339 / 0.011353 (-0.002014) | 0.005197 / 0.011008 (-0.005811) | 0.100698 / 0.038508 (0.062190) | 0.035484 / 0.023109 (0.012375) | 0.299030 / 0.275898 (0.023132) | 0.366603 / 0.323480 (0.043124) | 0.007909 / 0.007986 (-0.000077) | 0.005683 / 0.004328 (0.001355) | 0.077719 / 0.004250 (0.073469) | 0.042147 / 0.037052 (0.005094) | 0.310174 / 0.258489 (0.051685) | 0.342720 / 0.293841 (0.048879) | 0.039679 / 0.128546 (-0.088867) | 0.012042 / 0.075646 (-0.063605) | 0.335663 / 0.419271 (-0.083609) | 0.051137 / 0.043533 (0.007604) | 0.298218 / 0.255139 (0.043079) | 0.316398 / 0.283200 (0.033198) | 0.108906 / 0.141683 (-0.032776) | 1.422823 / 1.452155 (-0.029331) | 1.472955 / 1.492716 (-0.019761) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205845 / 0.018006 (0.187839) | 0.445942 / 0.000490 (0.445453) | 0.003553 / 0.000200 (0.003353) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025506 / 0.037411 (-0.011906) | 0.107494 / 0.014526 (0.092969) | 0.116226 / 0.176557 (-0.060331) | 0.157313 / 0.737135 (-0.579822) | 0.123822 / 0.296338 (-0.172516) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400908 / 0.215209 (0.185699) | 3.980232 / 2.077655 (1.902578) | 1.805410 / 1.504120 (0.301290) | 1.615698 / 1.541195 (0.074503) | 1.677213 / 1.468490 (0.208723) | 0.697882 / 4.584777 (-3.886895) | 3.752781 / 3.745712 (0.007069) | 2.076062 / 5.269862 (-3.193800) | 1.446909 / 4.565676 (-3.118768) | 0.084572 / 0.424275 (-0.339703) | 0.011917 / 0.007607 (0.004310) | 0.511815 / 0.226044 (0.285771) | 5.121487 / 2.268929 (2.852558) | 2.277642 / 55.444624 (-53.166982) | 1.930393 / 6.876477 (-4.946084) | 1.965855 / 2.142072 (-0.176218) | 0.843391 / 4.805227 (-3.961837) | 0.163581 / 6.500664 (-6.337083) | 0.062547 / 0.075469 (-0.012922) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223930 / 1.841788 (-0.617858) | 14.354466 / 8.074308 (6.280158) | 14.015159 / 10.191392 (3.823767) | 0.148658 / 0.680424 (-0.531766) | 0.028469 / 0.534201 (-0.505732) | 0.437614 / 0.579283 (-0.141669) | 0.435452 / 0.434364 (0.001089) | 0.523623 / 0.540337 
(-0.016715) | 0.625109 / 1.386936 (-0.761827) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006917 / 0.011353 (-0.004436) | 0.005080 / 0.011008 (-0.005928) | 0.075806 / 0.038508 (0.037298) | 0.032402 / 0.023109 (0.009293) | 0.331105 / 0.275898 (0.055207) | 0.361226 / 0.323480 (0.037746) | 0.005694 / 0.007986 (-0.002292) | 0.003810 / 0.004328 (-0.000518) | 0.076886 / 0.004250 (0.072635) | 0.046158 / 0.037052 (0.009106) | 0.338791 / 0.258489 (0.080302) | 0.385733 / 0.293841 (0.091892) | 0.035590 / 0.128546 (-0.092956) | 0.011997 / 0.075646 (-0.063649) | 0.087854 / 0.419271 (-0.331417) | 0.048985 / 0.043533 (0.005452) | 0.331248 / 0.255139 (0.076109) | 0.354633 / 0.283200 (0.071434) | 0.101609 / 0.141683 (-0.040074) | 1.496899 / 1.452155 (0.044745) | 1.570469 / 1.492716 (0.077753) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.180871 / 0.018006 (0.162865) | 0.449417 / 0.000490 (0.448928) | 0.004300 / 0.000200 (0.004100) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029054 / 0.037411 (-0.008358) | 0.110888 / 0.014526 (0.096362) | 0.121736 / 0.176557 (-0.054821) | 0.172563 / 0.737135 (-0.564572) | 0.126565 / 0.296338 (-0.169773) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419545 / 0.215209 (0.204336) | 4.193685 / 2.077655 (2.116031) | 2.049967 / 1.504120 (0.545847) | 1.855038 / 1.541195 (0.313843) | 1.899822 
/ 1.468490 (0.431332) | 0.709123 / 4.584777 (-3.875654) | 3.795939 / 3.745712 (0.050227) | 2.076055 / 5.269862 (-3.193807) | 1.335864 / 4.565676 (-3.229812) | 0.085555 / 0.424275 (-0.338720) | 0.012197 / 0.007607 (0.004590) | 0.516164 / 0.226044 (0.290119) | 5.158983 / 2.268929 (2.890054) | 2.445581 / 55.444624 (-52.999044) | 2.122256 / 6.876477 (-4.754221) | 2.160011 / 2.142072 (0.017939) | 0.840251 / 4.805227 (-3.964976) | 0.165924 / 6.500664 (-6.334740) | 0.064080 / 0.075469 (-0.011389) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285292 / 1.841788 (-0.556495) | 14.561084 / 8.074308 (6.486776) | 12.899269 / 10.191392 (2.707877) | 0.185657 / 0.680424 (-0.494767) | 0.017866 / 0.534201 (-0.516335) | 0.425365 / 0.579283 (-0.153918) | 0.427183 / 0.434364 (-0.007181) | 0.529773 / 0.540337 (-0.010564) | 0.642061 / 1.386936 (-0.744875) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0628013d009dd5150e8a1c1a4ac9d93887b88a76 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008995 / 0.011353 (-0.002357) | 0.004540 / 0.011008 (-0.006469) | 0.099675 / 0.038508 (0.061167) | 0.030338 / 0.023109 (0.007229) | 0.307167 / 0.275898 (0.031269) | 0.338789 / 0.323480 (0.015309) | 0.007293 / 0.007986 (-0.000692) | 0.004681 / 0.004328 (0.000352) | 0.077475 / 0.004250 (0.073225) | 0.036399 / 0.037052 (-0.000654) | 0.304615 / 0.258489 (0.046126) | 0.351611 / 0.293841 (0.057770) | 0.034449 / 0.128546 (-0.094097) | 0.011565 / 0.075646 (-0.064082) | 0.322765 / 0.419271 (-0.096506) | 0.041971 / 0.043533 (-0.001562) | 0.307492 / 0.255139 (0.052354) | 0.327240 / 0.283200 (0.044040) | 0.087110 / 0.141683 (-0.054573) | 1.484600 / 1.452155 (0.032445) | 1.536651 / 1.492716 (0.043934) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185876 / 0.018006 (0.167869) | 0.404276 / 0.000490 (0.403787) | 0.001592 / 
0.000200 (0.001392) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023272 / 0.037411 (-0.014139) | 0.096273 / 0.014526 (0.081747) | 0.105400 / 0.176557 (-0.071157) | 0.149720 / 0.737135 (-0.587416) | 0.107807 / 0.296338 (-0.188532) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420072 / 0.215209 (0.204863) | 4.184108 / 2.077655 (2.106454) | 1.880690 / 1.504120 (0.376570) | 1.673103 / 1.541195 (0.131909) | 1.715792 / 1.468490 (0.247302) | 0.695771 / 4.584777 (-3.889006) | 3.450224 / 3.745712 (-0.295488) | 2.999218 / 5.269862 (-2.270644) | 1.585571 / 4.565676 (-2.980106) | 0.082105 / 0.424275 (-0.342170) | 0.012453 / 0.007607 (0.004846) | 0.528538 / 0.226044 (0.302494) | 5.287951 / 2.268929 (3.019023) | 2.289127 / 55.444624 (-53.155497) | 1.956503 / 6.876477 (-4.919974) | 2.004498 / 2.142072 (-0.137575) | 0.813547 / 4.805227 (-3.991681) | 0.151574 / 6.500664 (-6.349090) | 0.063763 / 0.075469 (-0.011706) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.239125 / 1.841788 (-0.602662) | 13.627676 / 8.074308 (5.553368) | 13.747815 / 10.191392 (3.556423) | 0.157745 / 0.680424 (-0.522679) | 0.028590 / 0.534201 (-0.505611) | 0.397472 / 0.579283 (-0.181811) | 0.405925 / 0.434364 (-0.028439) | 0.477942 / 0.540337 (-0.062396) | 0.572379 / 1.386936 (-0.814557) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006637 / 0.011353 (-0.004716) | 0.004657 / 0.011008 (-0.006351) | 0.082056 / 0.038508 (0.043548) | 0.027974 / 0.023109 (0.004865) | 0.342887 / 0.275898 (0.066989) | 0.375938 / 0.323480 (0.052458) | 0.004958 / 0.007986 (-0.003028) | 0.004738 / 0.004328 (0.000409) | 0.080449 / 0.004250 (0.076198) | 0.038138 / 0.037052 (0.001085) | 0.345636 / 0.258489 (0.087147) | 0.385992 / 0.293841 (0.092151) | 0.033265 / 0.128546 (-0.095281) | 0.011965 / 0.075646 (-0.063681) | 0.091441 / 0.419271 (-0.327830) | 0.051407 / 0.043533 (0.007874) | 0.353758 / 0.255139 (0.098619) | 0.372118 / 0.283200 (0.088919) | 0.093947 / 0.141683 (-0.047735) | 1.468197 / 1.452155 (0.016042) | 1.554677 / 1.492716 (0.061960) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222034 / 0.018006 (0.204027) | 0.403658 / 0.000490 (0.403169) | 0.003242 / 0.000200 (0.003042) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025335 / 0.037411 (-0.012076) | 0.100404 / 0.014526 (0.085878) | 0.107858 / 0.176557 (-0.068698) | 0.156115 / 0.737135 (-0.581021) | 0.113967 / 0.296338 (-0.182372) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437567 / 0.215209 (0.222358) | 4.362486 / 2.077655 (2.284832) | 2.067315 / 1.504120 (0.563195) | 1.857669 / 1.541195 (0.316475) | 1.926380 / 1.468490 (0.457890) | 0.703905 / 4.584777 (-3.880872) | 3.437139 / 3.745712 (-0.308573) | 3.051931 / 5.269862 (-2.217930) | 1.356494 / 4.565676 (-3.209182) | 0.083679 / 0.424275 (-0.340596) | 0.012507 / 0.007607 (0.004900) | 0.539572 / 0.226044 (0.313528) | 5.405790 / 2.268929 (3.136861) | 2.532769 / 55.444624 (-52.911855) | 2.181950 / 6.876477 (-4.694527) | 2.212627 / 2.142072 (0.070554) | 0.807468 / 4.805227 (-3.997759) | 0.152146 / 6.500664 (-6.348518) | 0.068891 / 0.075469 (-0.006578) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.286972 / 1.841788 (-0.554816) | 13.987186 / 8.074308 (5.912878) | 13.115065 / 10.191392 (2.923673) | 0.162143 / 0.680424 (-0.518281) | 0.016767 / 0.534201 (-0.517434) | 0.384766 / 0.579283 (-0.194517) | 0.397438 / 0.434364 (-0.036926) | 0.470850 / 0.540337 (-0.069487) | 0.562216 / 1.386936 (-0.824720) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2843fceabc428932754ba497f643d6e94173b91e \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010877 / 0.011353 (-0.000476) | 0.005739 / 0.011008 (-0.005269) | 0.118542 / 0.038508 (0.080034) | 0.042266 / 0.023109 (0.019157) | 0.359317 / 0.275898 (0.083419) | 0.412995 / 0.323480 (0.089515) | 0.009158 / 0.007986 (0.001173) | 0.006343 / 0.004328 (0.002014) | 0.089587 / 0.004250 (0.085336) | 0.047899 / 0.037052 (0.010847) | 0.358745 / 0.258489 (0.100256) | 0.421316 / 0.293841 (0.127476) | 0.044540 / 0.128546 (-0.084006) | 0.013872 / 0.075646 (-0.061774) | 0.399856 / 0.419271 (-0.019415) | 0.056484 / 0.043533 (0.012951) | 0.356922 / 0.255139 (0.101783) | 0.385598 / 0.283200 (0.102398) | 0.116039 / 0.141683 (-0.025644) | 1.726095 / 1.452155 (0.273940) | 1.888643 / 1.492716 (0.395927) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269517 / 0.018006 (0.251511) | 0.511204 / 0.000490 (0.510714) | 0.001906 / 0.000200 (0.001706) | 0.000103 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031133 / 0.037411 (-0.006278) | 0.128513 / 0.014526 (0.113987) | 0.139639 / 0.176557 (-0.036918) | 0.189778 / 0.737135 (-0.547358) | 0.145219 / 0.296338 (-0.151120) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.486693 / 0.215209 
(0.271484) | 4.851999 / 2.077655 (2.774344) | 2.255334 / 1.504120 (0.751214) | 2.052271 / 1.541195 (0.511077) | 2.143262 / 1.468490 (0.674772) | 0.835765 / 4.584777 (-3.749012) | 4.451280 / 3.745712 (0.705568) | 2.534392 / 5.269862 (-2.735469) | 1.747817 / 4.565676 (-2.817859) | 0.101186 / 0.424275 (-0.323089) | 0.014281 / 0.007607 (0.006674) | 0.616164 / 0.226044 (0.390120) | 6.161789 / 2.268929 (3.892860) | 2.815347 / 55.444624 (-52.629277) | 2.408305 / 6.876477 (-4.468172) | 2.508240 / 2.142072 (0.366167) | 1.017709 / 4.805227 (-3.787519) | 0.198272 / 6.500664 (-6.302392) | 0.075663 / 0.075469 (0.000194) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.435501 / 1.841788 (-0.406287) | 18.149581 / 8.074308 (10.075273) | 16.619011 / 10.191392 (6.427619) | 0.205080 / 0.680424 (-0.475344) | 0.033780 / 0.534201 (-0.500421) | 0.515768 / 0.579283 (-0.063515) | 0.542628 / 0.434364 (0.108264) | 0.634067 / 0.540337 (0.093730) | 0.757841 / 1.386936 (-0.629095) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008541 / 0.011353 (-0.002812) | 0.005733 / 0.011008 (-0.005275) | 0.089859 / 0.038508 (0.051351) | 0.039379 / 0.023109 (0.016270) | 0.402037 / 0.275898 (0.126139) | 0.454046 / 0.323480 (0.130566) | 0.006652 / 0.007986 (-0.001334) | 0.004555 / 0.004328 (0.000227) | 0.087651 / 0.004250 (0.083401) | 0.054934 / 0.037052 (0.017881) | 0.404468 / 0.258489 (0.145979) | 0.467127 / 0.293841 (0.173286) | 0.042034 / 0.128546 (-0.086512) | 0.014225 / 0.075646 (-0.061421) | 0.103281 / 0.419271 (-0.315990) | 0.057767 / 0.043533 (0.014234) | 0.396391 / 0.255139 (0.141252) | 0.429364 / 0.283200 (0.146165) | 0.120193 / 0.141683 (-0.021489) | 1.794029 / 1.452155 (0.341875) | 1.875431 / 1.492716 (0.382714) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.325707 / 0.018006 (0.307701) | 0.503841 / 0.000490 (0.503351) | 0.010224 / 0.000200 (0.010024) | 0.000137 / 0.000054 
(0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035289 / 0.037411 (-0.002123) | 0.139018 / 0.014526 (0.124492) | 0.145112 / 0.176557 (-0.031445) | 0.202616 / 0.737135 (-0.534519) | 0.152975 / 0.296338 (-0.143363) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.493110 / 0.215209 (0.277901) | 4.885713 / 2.077655 (2.808058) | 2.344417 / 1.504120 (0.840297) | 2.135734 / 1.541195 (0.594540) | 2.254118 / 1.468490 (0.785628) | 0.811516 / 4.584777 (-3.773261) | 4.484454 / 3.745712 (0.738742) | 2.459913 / 5.269862 (-2.809948) | 1.553106 / 4.565676 (-3.012570) | 0.100943 / 0.424275 (-0.323332) | 0.014848 / 0.007607 (0.007241) | 0.626214 / 0.226044 (0.400170) | 6.206925 / 2.268929 (3.937997) | 2.986549 / 55.444624 (-52.458076) | 2.521895 / 6.876477 (-4.354582) | 2.610917 / 2.142072 (0.468845) | 0.998496 / 4.805227 (-3.806731) | 0.199405 / 6.500664 (-6.301260) | 0.077355 / 0.075469 (0.001886) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.525135 / 1.841788 (-0.316653) | 18.708407 / 8.074308 (10.634099) | 16.049482 / 10.191392 (5.858090) | 0.170986 / 0.680424 (-0.509437) | 0.021090 / 0.534201 (-0.513111) | 0.511734 / 0.579283 (-0.067549) | 0.495507 / 0.434364 (0.061143) | 0.628578 / 0.540337 (0.088241) | 0.749546 / 1.386936 (-0.637390) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2843fceabc428932754ba497f643d6e94173b91e \"CML watermark\")\n" ]
null
[]
Release: 2.10.1
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5590/timeline
null
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5590.diff", "html_url": "https://github.com/huggingface/datasets/pull/5590", "merged_at": "2023-02-28T18:06:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/5590.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5590" }
1,603,549,504
https://api.github.com/repos/huggingface/datasets/issues/5590/comments
PR_kwDODunzps5K9N_H
null
5,590
https://api.github.com/repos/huggingface/datasets/issues/5590/events
true
closed
2023-02-28T17:52:04Z
null
https://api.github.com/repos/huggingface/datasets/issues/5589
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5589/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5589/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/5589
[]
false
2023-09-24T10:07:33Z
2023-03-21T14:18:18Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008442 / 0.011353 (-0.002911) | 0.004567 / 0.011008 (-0.006441) | 0.100688 / 0.038508 (0.062180) | 0.029568 / 0.023109 (0.006459) | 0.306993 / 0.275898 (0.031095) | 0.362626 / 0.323480 (0.039146) | 0.006983 / 0.007986 (-0.001002) | 0.003424 / 0.004328 (-0.000905) | 0.079050 / 0.004250 (0.074799) | 0.036087 / 0.037052 (-0.000966) | 0.318205 / 0.258489 (0.059716) | 0.353882 / 0.293841 (0.060041) | 0.033091 / 0.128546 (-0.095455) | 0.011468 / 0.075646 (-0.064178) | 0.321125 / 0.419271 (-0.098146) | 0.040645 / 0.043533 (-0.002888) | 0.309827 / 0.255139 (0.054688) | 0.344848 / 0.283200 (0.061648) | 0.087100 / 0.141683 (-0.054583) | 1.465123 / 1.452155 (0.012968) | 1.499457 / 1.492716 (0.006741) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.171619 / 0.018006 (0.153613) | 0.410198 / 0.000490 (0.409709) | 0.002391 / 0.000200 (0.002191) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022913 / 0.037411 (-0.014499) | 0.097275 / 0.014526 (0.082749) | 0.103902 / 0.176557 (-0.072655) | 0.148855 / 0.737135 (-0.588281) | 0.107247 / 0.296338 (-0.189092) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413139 / 0.215209 (0.197930) | 4.131760 / 2.077655 (2.054105) | 
1.854491 / 1.504120 (0.350371) | 1.625524 / 1.541195 (0.084329) | 1.666665 / 1.468490 (0.198175) | 0.687105 / 4.584777 (-3.897672) | 3.327124 / 3.745712 (-0.418588) | 1.830820 / 5.269862 (-3.439042) | 1.147930 / 4.565676 (-3.417746) | 0.081586 / 0.424275 (-0.342689) | 0.012422 / 0.007607 (0.004815) | 0.523723 / 0.226044 (0.297678) | 5.246977 / 2.268929 (2.978049) | 2.288350 / 55.444624 (-53.156275) | 1.933740 / 6.876477 (-4.942737) | 1.954356 / 2.142072 (-0.187716) | 0.804434 / 4.805227 (-4.000793) | 0.147621 / 6.500664 (-6.353043) | 0.064835 / 0.075469 (-0.010634) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244841 / 1.841788 (-0.596947) | 13.758465 / 8.074308 (5.684157) | 13.984576 / 10.191392 (3.793184) | 0.144860 / 0.680424 (-0.535564) | 0.028616 / 0.534201 (-0.505584) | 0.401928 / 0.579283 (-0.177355) | 0.415294 / 0.434364 (-0.019069) | 0.476483 / 0.540337 (-0.063854) | 0.569257 / 1.386936 (-0.817679) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006556 / 0.011353 (-0.004797) | 0.004502 / 0.011008 (-0.006507) | 0.074828 / 0.038508 (0.036319) | 0.027537 / 0.023109 (0.004427) | 0.339961 / 0.275898 (0.064063) | 0.372491 / 0.323480 (0.049011) | 0.005010 / 0.007986 (-0.002976) | 0.004624 / 0.004328 (0.000295) | 0.074459 / 0.004250 (0.070208) | 0.037539 / 0.037052 (0.000486) | 0.341031 / 0.258489 (0.082542) | 0.383397 / 0.293841 (0.089556) | 0.031706 / 0.128546 (-0.096840) | 0.011542 / 0.075646 (-0.064104) | 0.084882 / 0.419271 (-0.334389) | 0.041860 / 0.043533 (-0.001673) | 0.338699 / 0.255139 (0.083560) | 0.365666 / 0.283200 (0.082467) | 0.088966 / 0.141683 (-0.052717) | 1.502493 / 1.452155 (0.050339) | 1.570746 / 1.492716 (0.078030) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217547 / 0.018006 (0.199541) | 0.392407 / 0.000490 (0.391918) | 0.000388 / 0.000200 (0.000188) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024571 / 0.037411 (-0.012840) | 0.099259 / 0.014526 (0.084734) | 0.107850 / 0.176557 (-0.068707) | 0.157686 / 0.737135 (-0.579449) | 0.109761 / 0.296338 (-0.186578) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434791 / 0.215209 (0.219582) | 4.323099 / 2.077655 (2.245444) | 2.063610 / 1.504120 (0.559490) | 1.866136 / 1.541195 (0.324941) | 1.910185 / 1.468490 (0.441695) | 0.696584 / 4.584777 (-3.888193) | 3.398017 / 3.745712 (-0.347695) | 1.848473 / 5.269862 (-3.421388) | 1.168238 / 4.565676 (-3.397438) | 0.083222 / 0.424275 (-0.341053) | 0.012332 / 0.007607 (0.004725) | 0.538953 / 0.226044 (0.312909) | 5.421273 / 2.268929 (3.152344) | 2.499877 / 55.444624 (-52.944747) | 2.161853 / 6.876477 (-4.714624) | 2.183941 / 2.142072 (0.041868) | 0.803916 / 4.805227 (-4.001311) | 0.150266 / 6.500664 (-6.350398) | 0.067399 / 0.075469 (-0.008070) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.280479 / 1.841788 (-0.561309) | 13.728074 / 8.074308 (5.653766) | 12.946098 / 10.191392 (2.754706) | 0.128459 / 0.680424 (-0.551965) | 0.016567 / 0.534201 (-0.517634) | 0.374461 / 0.579283 (-0.204822) | 0.386973 / 0.434364 (-0.047391) | 0.459754 / 0.540337 (-0.080583) | 0.543870 / 1.386936 (-0.843066) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#595b3d47e1fc579f5db1cbc376f756edf32904dd \"CML watermark\")\n", "Instead of reverting the change, maybe we can use the same conversion in `to_iterable_dataset` as in `ArrowBasedBuilder._as_streaming_dataset` to avoid decoding images twice?", "True, let me take a look", "Closing in favor of https://github.com/huggingface/datasets/pull/5655" ]
null
[]
Revert "pass the dataset features to the IterableDataset.from_generator"
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5589/timeline
This reverts commit b91070b9c09673e2e148eec458036ab6a62ac042 (temporarily). It hurts iterable dataset performance a lot (e.g. 4x slower, because it encodes and decodes images unnecessarily). I think we need to fix this before re-adding it. cc @mariosasko @Hubert-Bonisseur
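Since the claim here is about decode overhead on image columns, a minimal timing sketch makes the regression measurable. This is a hedged harness, not the benchmark used in this PR; the toy in-memory dataset and image sizes are assumptions. Comparing this timing before and after the revert is how a 4x-style slowdown would show up:

```Python
import time

from PIL import Image as PILImage
from datasets import Dataset, Features, Image

# toy in-memory image dataset (assumption: a stand-in for a real image dataset)
imgs = [PILImage.new("RGB", (256, 256)) for _ in range(100)]
ds = Dataset.from_dict({"image": imgs}, features=Features({"image": Image()}))

t0 = time.perf_counter()
for _ in ds.to_iterable_dataset():  # images are decoded as examples are read
    pass
print(f"one pass over {len(ds)} images: {time.perf_counter() - t0:.2f}s")
```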
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5589.diff", "html_url": "https://github.com/huggingface/datasets/pull/5589", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5589.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5589" }
1,603,535,704
https://api.github.com/repos/huggingface/datasets/issues/5589/comments
PR_kwDODunzps5K9K1i
null
5,589
https://api.github.com/repos/huggingface/datasets/issues/5589/events
true
closed
2023-02-28T15:37:46Z
null
https://api.github.com/repos/huggingface/datasets/issues/5588
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5588/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5588/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/5588
[]
false
2023-02-28T17:28:35Z
2023-02-28T17:21:17Z
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009866 / 0.011353 (-0.001487) | 0.005334 / 0.011008 (-0.005675) | 0.101771 / 0.038508 (0.063263) | 0.037722 / 0.023109 (0.014613) | 0.301026 / 0.275898 (0.025128) | 0.336618 / 0.323480 (0.013138) | 0.008679 / 0.007986 (0.000693) | 0.005640 / 0.004328 (0.001312) | 0.077076 / 0.004250 (0.072825) | 0.045068 / 0.037052 (0.008016) | 0.302570 / 0.258489 (0.044081) | 0.359093 / 0.293841 (0.065252) | 0.038865 / 0.128546 (-0.089681) | 0.012318 / 0.075646 (-0.063328) | 0.334819 / 0.419271 (-0.084452) | 0.047980 / 0.043533 (0.004447) | 0.296999 / 0.255139 (0.041860) | 0.318855 / 0.283200 (0.035656) | 0.110633 / 0.141683 (-0.031050) | 1.464326 / 1.452155 (0.012172) | 1.537386 / 1.492716 (0.044670) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282906 / 0.018006 (0.264900) | 0.498418 / 0.000490 (0.497928) | 0.001507 / 0.000200 (0.001307) | 0.000087 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029948 / 0.037411 (-0.007463) | 0.114385 / 0.014526 (0.099859) | 0.125783 / 0.176557 (-0.050774) | 0.193458 / 0.737135 (-0.543678) | 0.129725 / 0.296338 (-0.166614) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403822 / 0.215209 (0.188613) | 4.034180 / 2.077655 (1.956525) | 1.768206 / 1.504120 (0.264086) | 1.579267 / 1.541195 (0.038072) | 1.725077 / 1.468490 
(0.256587) | 0.698743 / 4.584777 (-3.886034) | 3.723481 / 3.745712 (-0.022231) | 2.302374 / 5.269862 (-2.967488) | 1.497954 / 4.565676 (-3.067723) | 0.087360 / 0.424275 (-0.336915) | 0.012453 / 0.007607 (0.004846) | 0.523374 / 0.226044 (0.297329) | 5.244962 / 2.268929 (2.976033) | 2.272874 / 55.444624 (-53.171750) | 1.935570 / 6.876477 (-4.940907) | 2.043151 / 2.142072 (-0.098921) | 0.866298 / 4.805227 (-3.938929) | 0.169376 / 6.500664 (-6.331288) | 0.064578 / 0.075469 (-0.010892) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.217372 / 1.841788 (-0.624416) | 15.896050 / 8.074308 (7.821742) | 15.165190 / 10.191392 (4.973798) | 0.171168 / 0.680424 (-0.509256) | 0.029770 / 0.534201 (-0.504431) | 0.449030 / 0.579283 (-0.130253) | 0.454704 / 0.434364 (0.020340) | 0.550689 / 0.540337 (0.010351) | 0.651182 / 1.386936 (-0.735754) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008072 / 0.011353 (-0.003281) | 0.005533 / 0.011008 (-0.005475) | 0.076343 / 0.038508 (0.037835) | 0.037997 / 0.023109 (0.014888) | 0.350465 / 0.275898 (0.074567) | 0.391168 / 0.323480 (0.067688) | 0.006475 / 0.007986 (-0.001511) | 0.004299 / 0.004328 (-0.000029) | 0.074867 / 0.004250 (0.070617) | 0.055256 / 0.037052 (0.018204) | 0.363919 / 0.258489 (0.105430) | 0.396521 / 0.293841 (0.102680) | 0.037746 / 0.128546 (-0.090801) | 0.012556 / 0.075646 (-0.063091) | 0.087974 / 0.419271 (-0.331297) | 0.050850 / 0.043533 (0.007317) | 0.345857 / 0.255139 (0.090718) | 0.361019 / 0.283200 (0.077820) | 0.111007 / 0.141683 (-0.030676) | 1.444014 / 1.452155 (-0.008140) | 1.533154 / 1.492716 (0.040438) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.332114 / 0.018006 (0.314108) | 0.517232 / 0.000490 (0.516742) | 0.004459 / 0.000200 (0.004259) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033147 / 0.037411 (-0.004264) | 0.119983 / 0.014526 (0.105457) | 0.125970 / 0.176557 (-0.050586) | 0.196375 / 0.737135 (-0.540760) | 0.133849 / 0.296338 (-0.162489) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429477 / 0.215209 (0.214267) | 4.263750 / 2.077655 (2.186096) | 2.079409 / 1.504120 (0.575289) | 1.899831 / 1.541195 (0.358636) | 2.048472 / 1.468490 (0.579982) | 0.720945 / 4.584777 (-3.863832) | 3.813195 / 3.745712 (0.067483) | 2.250353 / 5.269862 (-3.019508) | 1.401496 / 4.565676 (-3.164181) | 0.090052 / 0.424275 (-0.334223) | 0.012552 / 0.007607 (0.004945) | 0.536839 / 0.226044 (0.310794) | 5.361089 / 2.268929 (3.092161) | 2.559710 / 55.444624 (-52.884914) | 2.226963 / 6.876477 (-4.649513) | 2.341898 / 2.142072 (0.199825) | 0.872115 / 4.805227 (-3.933112) | 0.173776 / 6.500664 (-6.326888) | 0.068567 / 0.075469 (-0.006902) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294583 / 1.841788 (-0.547205) | 16.624099 / 8.074308 (8.549791) | 13.698509 / 10.191392 (3.507117) | 0.161917 / 0.680424 (-0.518506) | 0.017744 / 0.534201 (-0.516457) | 0.428547 / 0.579283 (-0.150736) | 0.424687 / 0.434364 (-0.009677) | 0.525812 / 0.540337 (-0.014525) | 0.629075 / 1.386936 (-0.757861) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#33e4d6af919db17bf9a1eac544a0501b5972393b \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008667 / 0.011353 (-0.002686) | 0.004921 / 0.011008 (-0.006087) | 0.098352 / 0.038508 (0.059844) | 0.033983 / 0.023109 (0.010873) | 0.291640 / 0.275898 (0.015742) | 0.323388 / 0.323480 (-0.000092) | 0.007943 / 0.007986 (-0.000043) | 0.003922 / 0.004328 (-0.000407) | 0.075861 / 0.004250 (0.071610) | 0.042606 / 0.037052 (0.005554) | 0.298571 / 0.258489 (0.040081) | 0.345496 / 0.293841 (0.051655) | 0.037443 / 0.128546 (-0.091103) | 0.012114 / 0.075646 (-0.063532) | 0.333269 / 0.419271 (-0.086003) | 0.047762 / 0.043533 (0.004229) | 0.295452 / 0.255139 (0.040313) | 0.319641 / 0.283200 (0.036441) | 0.101083 / 0.141683 (-0.040600) | 1.432179 / 1.452155 (-0.019976) | 1.523976 / 1.492716 (0.031260) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241327 / 0.018006 (0.223321) | 0.538315 / 0.000490 (0.537825) | 0.003479 / 0.000200 (0.003279) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025857 / 0.037411 (-0.011554) | 0.104833 / 0.014526 (0.090307) | 0.116826 / 0.176557 (-0.059730) | 0.183460 / 0.737135 (-0.553675) | 0.119595 / 0.296338 (-0.176743) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397533 / 0.215209 (0.182324) | 3.968664 / 2.077655 (1.891010) | 1.774025 / 1.504120 (0.269905) | 1.577424 / 1.541195 (0.036229) | 1.623049 / 1.468490 (0.154559) | 0.701008 / 4.584777 (-3.883769) | 3.753278 / 3.745712 (0.007565) | 2.078313 / 5.269862 (-3.191549) | 1.335639 / 4.565676 (-3.230037) | 0.085216 / 0.424275 (-0.339059) | 0.012087 / 0.007607 (0.004480) | 0.513219 / 0.226044 (0.287174) | 5.097693 / 2.268929 (2.828765) | 2.275030 / 55.444624 (-53.169594) | 1.928037 / 6.876477 (-4.948439) | 1.941216 / 2.142072 (-0.200856) | 0.856720 / 4.805227 (-3.948507) | 0.166723 / 6.500664 (-6.333941) | 0.062263 / 0.075469 (-0.013206) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196054 / 1.841788 (-0.645734) | 14.190526 / 8.074308 (6.116218) | 14.053768 / 10.191392 (3.862376) | 0.179982 / 0.680424 (-0.500442) | 0.029024 / 0.534201 (-0.505177) | 0.440391 / 0.579283 (-0.138892) | 0.445627 / 0.434364 (0.011264) | 0.543098 / 0.540337 (0.002761) | 0.640577 / 
1.386936 (-0.746359) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007008 / 0.011353 (-0.004345) | 0.005015 / 0.011008 (-0.005993) | 0.073783 / 0.038508 (0.035274) | 0.032401 / 0.023109 (0.009292) | 0.343382 / 0.275898 (0.067484) | 0.358317 / 0.323480 (0.034837) | 0.005548 / 0.007986 (-0.002437) | 0.005188 / 0.004328 (0.000859) | 0.072867 / 0.004250 (0.068617) | 0.048555 / 0.037052 (0.011502) | 0.334516 / 0.258489 (0.076027) | 0.390263 / 0.293841 (0.096422) | 0.036343 / 0.128546 (-0.092203) | 0.012243 / 0.075646 (-0.063404) | 0.087067 / 0.419271 (-0.332205) | 0.049025 / 0.043533 (0.005492) | 0.333977 / 0.255139 (0.078838) | 0.354427 / 0.283200 (0.071227) | 0.104771 / 0.141683 (-0.036912) | 1.434588 / 1.452155 (-0.017567) | 1.519788 / 1.492716 (0.027072) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.264002 / 0.018006 (0.245996) | 0.547902 / 0.000490 (0.547412) | 0.000461 / 0.000200 (0.000261) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028916 / 0.037411 (-0.008496) | 0.110267 / 0.014526 (0.095741) | 0.119190 / 0.176557 (-0.057367) | 0.188599 / 0.737135 (-0.548537) | 0.126948 / 0.296338 (-0.169391) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422777 / 0.215209 (0.207568) | 4.209813 / 2.077655 (2.132158) | 2.001360 / 1.504120 (0.497240) | 1.802651 / 1.541195 (0.261456) | 1.860357 / 1.468490 (0.391867) | 
0.695006 / 4.584777 (-3.889771) | 3.741917 / 3.745712 (-0.003795) | 3.313071 / 5.269862 (-1.956791) | 1.726366 / 4.565676 (-2.839311) | 0.086185 / 0.424275 (-0.338090) | 0.012256 / 0.007607 (0.004649) | 0.536874 / 0.226044 (0.310830) | 5.253008 / 2.268929 (2.984079) | 2.457189 / 55.444624 (-52.987436) | 2.112199 / 6.876477 (-4.764278) | 2.117867 / 2.142072 (-0.024205) | 0.831914 / 4.805227 (-3.973314) | 0.168238 / 6.500664 (-6.332426) | 0.065075 / 0.075469 (-0.010394) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.280795 / 1.841788 (-0.560993) | 14.606608 / 8.074308 (6.532299) | 13.317597 / 10.191392 (3.126205) | 0.166590 / 0.680424 (-0.513834) | 0.017520 / 0.534201 (-0.516681) | 0.420978 / 0.579283 (-0.158305) | 0.415708 / 0.434364 (-0.018656) | 0.523619 / 0.540337 (-0.016718) | 0.625299 / 1.386936 (-0.761637) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a2a83a8ea4b3a87a925ef44b787e87b59bf68225 \"CML watermark\")\n" ]
null
[]
Flatten dataset on the fly in `save_to_disk`
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5588/timeline
Flatten a dataset on the fly in `save_to_disk` instead of doing it with `flatten_indices`, to avoid creating an additional cache file. (This is one of the sub-tasks in https://github.com/huggingface/datasets/issues/5507.)
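As a reminder of the user-facing scenario, any dataset produced by `select`/`filter` carries an indices mapping over the original Arrow table; the sketch below (toy data and output path are assumptions) shows the kind of save that previously required materializing a separate `flatten_indices` cache file first:

```Python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})
ds = ds.select([2, 4, 6])  # creates an indices mapping over the original table

# with this change the kept rows are flattened while writing,
# without first materializing a flatten_indices() cache file
ds.save_to_disk("tmp_dataset")
```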
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5588.diff", "html_url": "https://github.com/huggingface/datasets/pull/5588", "merged_at": "2023-02-28T17:21:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/5588.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5588" }
1,603,304,766
https://api.github.com/repos/huggingface/datasets/issues/5588/comments
PR_kwDODunzps5K8YYz
null
5,588
https://api.github.com/repos/huggingface/datasets/issues/5588/events
true
closed
2023-02-28T14:05:08Z
null
https://api.github.com/repos/huggingface/datasets/issues/5587
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5587/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5587/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/5587
[]
false
2023-02-28T17:28:57Z
2023-02-28T17:21:58Z
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008740 / 0.011353 (-0.002613) | 0.004501 / 0.011008 (-0.006507) | 0.100045 / 0.038508 (0.061537) | 0.029999 / 0.023109 (0.006890) | 0.303556 / 0.275898 (0.027658) | 0.335342 / 0.323480 (0.011863) | 0.006996 / 0.007986 (-0.000989) | 0.004183 / 0.004328 (-0.000145) | 0.076434 / 0.004250 (0.072183) | 0.033899 / 0.037052 (-0.003153) | 0.301312 / 0.258489 (0.042823) | 0.343136 / 0.293841 (0.049295) | 0.034062 / 0.128546 (-0.094484) | 0.011465 / 0.075646 (-0.064181) | 0.323134 / 0.419271 (-0.096137) | 0.040820 / 0.043533 (-0.002713) | 0.301708 / 0.255139 (0.046569) | 0.329528 / 0.283200 (0.046328) | 0.088393 / 0.141683 (-0.053290) | 1.460996 / 1.452155 (0.008842) | 1.531145 / 1.492716 (0.038429) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191918 / 0.018006 (0.173912) | 0.414099 / 0.000490 (0.413610) | 0.000411 / 0.000200 (0.000211) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022707 / 0.037411 (-0.014704) | 0.096991 / 0.014526 (0.082465) | 0.106070 / 0.176557 (-0.070487) | 0.151275 / 0.737135 (-0.585860) | 0.108909 / 0.296338 (-0.187430) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422499 / 0.215209 (0.207289) | 4.205551 / 2.077655 (2.127896) | 1.918960 / 1.504120 (0.414841) | 1.715421 / 1.541195 (0.174227) | 1.768969 / 1.468490 
(0.300479) | 0.692243 / 4.584777 (-3.892534) | 3.382452 / 3.745712 (-0.363260) | 1.943695 / 5.269862 (-3.326166) | 1.250482 / 4.565676 (-3.315195) | 0.082084 / 0.424275 (-0.342191) | 0.012446 / 0.007607 (0.004839) | 0.525584 / 0.226044 (0.299539) | 5.275530 / 2.268929 (3.006602) | 2.386207 / 55.444624 (-53.058418) | 2.043920 / 6.876477 (-4.832557) | 2.030932 / 2.142072 (-0.111140) | 0.810233 / 4.805227 (-3.994994) | 0.148139 / 6.500664 (-6.352525) | 0.064617 / 0.075469 (-0.010852) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.227352 / 1.841788 (-0.614436) | 13.527623 / 8.074308 (5.453315) | 14.018551 / 10.191392 (3.827159) | 0.140333 / 0.680424 (-0.540091) | 0.028349 / 0.534201 (-0.505852) | 0.394904 / 0.579283 (-0.184379) | 0.406532 / 0.434364 (-0.027831) | 0.471714 / 0.540337 (-0.068624) | 0.568517 / 1.386936 (-0.818419) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006623 / 0.011353 (-0.004730) | 0.004464 / 0.011008 (-0.006544) | 0.076342 / 0.038508 (0.037834) | 0.027451 / 0.023109 (0.004341) | 0.343851 / 0.275898 (0.067953) | 0.385723 / 0.323480 (0.062243) | 0.005624 / 0.007986 (-0.002362) | 0.004685 / 0.004328 (0.000356) | 0.075669 / 0.004250 (0.071419) | 0.037297 / 0.037052 (0.000244) | 0.343363 / 0.258489 (0.084874) | 0.396115 / 0.293841 (0.102274) | 0.031577 / 0.128546 (-0.096970) | 0.011557 / 0.075646 (-0.064090) | 0.085626 / 0.419271 (-0.333645) | 0.041699 / 0.043533 (-0.001834) | 0.340826 / 0.255139 (0.085687) | 0.377167 / 0.283200 (0.093967) | 0.088632 / 0.141683 (-0.053051) | 1.464500 / 1.452155 (0.012345) | 1.556686 / 1.492716 (0.063969) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231136 / 0.018006 (0.213130) | 0.402687 / 0.000490 (0.402197) | 0.000590 / 0.000200 (0.000390) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024926 / 0.037411 (-0.012485) | 0.101062 / 0.014526 (0.086536) | 0.106481 / 0.176557 (-0.070075) | 0.159167 / 0.737135 (-0.577968) | 0.110948 / 0.296338 (-0.185390) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441813 / 0.215209 (0.226603) | 4.416332 / 2.077655 (2.338677) | 2.080621 / 1.504120 (0.576501) | 1.877832 / 1.541195 (0.336637) | 1.944778 / 1.468490 (0.476288) | 0.704634 / 4.584777 (-3.880143) | 3.433955 / 3.745712 (-0.311758) | 1.863493 / 5.269862 (-3.406368) | 1.168869 / 4.565676 (-3.396807) | 0.084095 / 0.424275 (-0.340180) | 0.012440 / 0.007607 (0.004833) | 0.545122 / 0.226044 (0.319077) | 5.472214 / 2.268929 (3.203285) | 2.514580 / 55.444624 (-52.930044) | 2.164570 / 6.876477 (-4.711907) | 2.193467 / 2.142072 (0.051395) | 0.809056 / 4.805227 (-3.996171) | 0.152343 / 6.500664 (-6.348321) | 0.067610 / 0.075469 (-0.007859) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.280968 / 1.841788 (-0.560820) | 13.887674 / 8.074308 (5.813366) | 13.160405 / 10.191392 (2.969013) | 0.128601 / 0.680424 (-0.551823) | 0.016420 / 0.534201 (-0.517780) | 0.382810 / 0.579283 (-0.196473) | 0.394386 / 0.434364 (-0.039978) | 0.470254 / 0.540337 (-0.070083) | 0.566907 / 1.386936 (-0.820029) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8cc6950322337ea8873939541c53858b10c0f3b9 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008673 / 0.011353 (-0.002679) | 0.004475 / 0.011008 (-0.006533) | 0.102060 / 0.038508 (0.063552) | 0.029438 / 0.023109 (0.006329) | 0.351785 / 0.275898 (0.075887) | 0.388199 / 0.323480 (0.064719) | 0.007011 / 0.007986 (-0.000974) | 0.003317 / 0.004328 (-0.001012) | 0.080931 / 0.004250 (0.076681) | 0.033449 / 0.037052 (-0.003603) | 0.360329 / 0.258489 (0.101840) | 0.400069 / 0.293841 (0.106228) | 0.033628 / 0.128546 (-0.094918) | 0.011462 / 0.075646 (-0.064184) | 0.323781 / 0.419271 (-0.095490) | 0.040686 / 0.043533 (-0.002847) | 0.332715 / 0.255139 (0.077576) | 0.370339 / 0.283200 (0.087139) | 0.084633 / 0.141683 (-0.057050) | 1.459452 / 1.452155 (0.007297) | 1.547719 / 1.492716 (0.055003) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187051 / 0.018006 (0.169045) | 0.402625 / 0.000490 (0.402135) | 0.002218 / 0.000200 (0.002018) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025240 / 0.037411 (-0.012171) | 0.102201 / 0.014526 (0.087675) | 0.108629 / 0.176557 (-0.067927) | 0.156686 / 0.737135 (-0.580449) | 0.111383 / 0.296338 (-0.184955) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418099 / 0.215209 (0.202890) | 4.163345 / 2.077655 (2.085690) | 1.868419 / 1.504120 (0.364300) | 1.662066 / 1.541195 (0.120871) | 1.705912 / 1.468490 (0.237422) | 0.696391 / 4.584777 (-3.888386) | 3.338307 / 3.745712 (-0.407405) | 1.923255 / 5.269862 (-3.346607) | 1.249220 / 4.565676 (-3.316457) | 0.082037 / 0.424275 (-0.342238) | 0.012232 / 0.007607 (0.004624) | 0.523913 / 0.226044 (0.297869) | 5.290036 / 2.268929 (3.021107) | 2.319729 / 55.444624 (-53.124896) | 1.987345 / 6.876477 (-4.889132) | 2.044516 / 2.142072 (-0.097556) | 0.812098 / 4.805227 (-3.993129) | 0.147327 / 6.500664 (-6.353337) | 0.063838 / 0.075469 (-0.011631) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.219652 / 1.841788 (-0.622136) | 13.271513 / 8.074308 (5.197205) | 13.799982 / 10.191392 (3.608590) | 0.150055 / 0.680424 (-0.530369) | 0.028804 / 0.534201 (-0.505397) | 0.395452 / 0.579283 (-0.183831) | 0.398758 / 0.434364 (-0.035606) | 0.468575 / 0.540337 (-0.071763) | 0.553324 / 
1.386936 (-0.833612) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006498 / 0.011353 (-0.004855) | 0.004439 / 0.011008 (-0.006569) | 0.076525 / 0.038508 (0.038017) | 0.027184 / 0.023109 (0.004074) | 0.364705 / 0.275898 (0.088807) | 0.409481 / 0.323480 (0.086001) | 0.004831 / 0.007986 (-0.003154) | 0.004524 / 0.004328 (0.000196) | 0.075403 / 0.004250 (0.071153) | 0.039013 / 0.037052 (0.001960) | 0.364042 / 0.258489 (0.105553) | 0.413090 / 0.293841 (0.119249) | 0.032052 / 0.128546 (-0.096495) | 0.011514 / 0.075646 (-0.064132) | 0.085219 / 0.419271 (-0.334053) | 0.041448 / 0.043533 (-0.002085) | 0.350371 / 0.255139 (0.095232) | 0.386670 / 0.283200 (0.103470) | 0.089824 / 0.141683 (-0.051859) | 1.487392 / 1.452155 (0.035238) | 1.537201 / 1.492716 (0.044485) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231555 / 0.018006 (0.213549) | 0.407505 / 0.000490 (0.407016) | 0.000382 / 0.000200 (0.000182) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026665 / 0.037411 (-0.010747) | 0.105852 / 0.014526 (0.091326) | 0.108228 / 0.176557 (-0.068328) | 0.164164 / 0.737135 (-0.572972) | 0.114284 / 0.296338 (-0.182054) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448957 / 0.215209 (0.233748) | 4.500058 / 2.077655 (2.422403) | 2.331660 / 1.504120 (0.827541) | 2.119904 / 1.541195 (0.578710) | 2.101489 / 1.468490 (0.632999) | 
0.696580 / 4.584777 (-3.888197) | 3.364206 / 3.745712 (-0.381506) | 2.550157 / 5.269862 (-2.719704) | 1.496455 / 4.565676 (-3.069222) | 0.083289 / 0.424275 (-0.340986) | 0.012283 / 0.007607 (0.004676) | 0.555581 / 0.226044 (0.329537) | 5.556284 / 2.268929 (3.287355) | 2.595261 / 55.444624 (-52.849363) | 2.234793 / 6.876477 (-4.641683) | 2.280150 / 2.142072 (0.138078) | 0.817885 / 4.805227 (-3.987343) | 0.151481 / 6.500664 (-6.349183) | 0.066764 / 0.075469 (-0.008705) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.318875 / 1.841788 (-0.522913) | 14.220380 / 8.074308 (6.146072) | 13.922773 / 10.191392 (3.731381) | 0.154608 / 0.680424 (-0.525816) | 0.016343 / 0.534201 (-0.517858) | 0.380758 / 0.579283 (-0.198525) | 0.392595 / 0.434364 (-0.041769) | 0.468844 / 0.540337 (-0.071493) | 0.561047 / 1.386936 (-0.825889) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d57fdcf2c8110b4b599289695fa065d1fc4936d4 \"CML watermark\")\n" ]
null
[]
Fix `sort` with indices mapping
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5587/timeline
Fixes the `key` range in the `query_table` call in `sort` to account for an indices mapping. Fixes #5586
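For readers following along, a minimal sketch of the behavior being fixed; `sort_with_indices_mapping`, its arguments, and the list-based indices mapping are illustrative assumptions, not the actual patch:

```python
import pyarrow as pa
import pyarrow.compute as pc

# Hypothetical sketch: after .filter(), a dataset is the original Arrow table
# plus an indices mapping. `sort` must build its key range over the mapped
# (filtered) length, not the raw table length, or keys from the old, larger
# table leak through and raise "Invalid key: ... is out of bounds".
def sort_with_indices_mapping(table: pa.Table, indices: list, column: str) -> list:
    selected = table.take(indices)                   # only the rows the user sees
    order = pc.sort_indices(selected[column])        # sort within the filtered view
    return [indices[i] for i in order.to_pylist()]   # new indices mapping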
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5587.diff", "html_url": "https://github.com/huggingface/datasets/pull/5587", "merged_at": "2023-02-28T17:21:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/5587.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5587" }
1,603,139,420
https://api.github.com/repos/huggingface/datasets/issues/5587/comments
PR_kwDODunzps5K70pp
null
5,587
https://api.github.com/repos/huggingface/datasets/issues/5587/events
true
closed
2023-02-28T12:18:09Z
null
https://api.github.com/repos/huggingface/datasets/issues/5586
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5586/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5586/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/57797966?v=4", "events_url": "https://api.github.com/users/MattYoon/events{/privacy}", "followers_url": "https://api.github.com/users/MattYoon/followers", "following_url": "https://api.github.com/users/MattYoon/following{/other_user}", "gists_url": "https://api.github.com/users/MattYoon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MattYoon", "id": 57797966, "login": "MattYoon", "node_id": "MDQ6VXNlcjU3Nzk3OTY2", "organizations_url": "https://api.github.com/users/MattYoon/orgs", "received_events_url": "https://api.github.com/users/MattYoon/received_events", "repos_url": "https://api.github.com/users/MattYoon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MattYoon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MattYoon/subscriptions", "type": "User", "url": "https://api.github.com/users/MattYoon" }
https://github.com/huggingface/datasets/issues/5586
[]
false
2023-02-28T18:17:26Z
2023-02-28T17:21:59Z
null
[ "Thanks for reporting and thanks @mariosasko for fixing ! We just did a patch release `2.10.1` with the fix" ]
completed
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
.sort() is broken when used after .filter(), only in 2.10.0
NONE
https://api.github.com/repos/huggingface/datasets/issues/5586/timeline
### Describe the bug Hi, thank you for your support! It seems like the addition of multiple-key sort (#5502) in 2.10.0 broke the `.sort()` method. After filtering a dataset with `.filter()`, `.sort()` seems to refer to the `query_table` index of the previous unfiltered dataset, resulting in an IndexError. This only happens with the 2.10.0 release. ### Steps to reproduce the bug ```Python from datasets import load_dataset # dataset with length of 1104 ds = load_dataset('glue', 'ax')['test'] ds = ds.filter(lambda x: x['idx'] > 1100) ds.sort('premise') print('Done') ``` The resulting traceback: ``` File "/home/dongkeun/datasets_test/test.py", line 5, in <module> ds.sort('premise') File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 528, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/fingerprint.py", line 511, in wrapper out = func(dataset, *args, **kwargs) File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3959, in sort sort_table = query_table( File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 588, in query_table _check_valid_index_key(key, size) File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 537, in _check_valid_index_key _check_valid_index_key(max(key), size=size) File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 531, in _check_valid_index_key raise IndexError(f"Invalid key: {key} is out of bounds for size {size}") IndexError: Invalid key: 1103 is out of bounds for size 3 ``` ### Expected behavior It should sort the dataset and print "Done", which it does on 2.9.0. ### Environment info - `datasets` version: 2.10.0 - Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
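Until the fix landed, one way to sidestep the error on 2.10.0 was to materialize the filtered view before sorting, so that no indices mapping remains; a sketch using the public `flatten_indices` API:

```python
from datasets import load_dataset

ds = load_dataset("glue", "ax")["test"]
ds = ds.filter(lambda x: x["idx"] > 1100)
# Rewrite the filtered rows into a fresh Arrow table so the dataset no longer
# carries an indices mapping, then sort as usual.
ds = ds.flatten_indices()
ds = ds.sort("premise")
print("Done")
```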
https://api.github.com/repos/huggingface/datasets
null
1,602,961,544
https://api.github.com/repos/huggingface/datasets/issues/5586/comments
I_kwDODunzps5fi0CI
null
5,586
https://api.github.com/repos/huggingface/datasets/issues/5586/events
false
closed
2023-02-28T00:53:06Z
null
https://api.github.com/repos/huggingface/datasets/issues/5585
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5585/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5585/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4", "events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}", "followers_url": "https://api.github.com/users/davidgilbertson/followers", "following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}", "gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davidgilbertson", "id": 4443482, "login": "davidgilbertson", "node_id": "MDQ6VXNlcjQ0NDM0ODI=", "organizations_url": "https://api.github.com/users/davidgilbertson/orgs", "received_events_url": "https://api.github.com/users/davidgilbertson/received_events", "repos_url": "https://api.github.com/users/davidgilbertson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions", "type": "User", "url": "https://api.github.com/users/davidgilbertson" }
https://github.com/huggingface/datasets/issues/5585
[]
false
2023-02-28T21:26:52Z
2023-02-28T21:26:52Z
null
[ "Hi ! No the cache is not transportable in general. It will work on a shared filesystem if you use the same python environment, but not across machines/os/environments.\r\n\r\nIn particular, reloading cached datasets does work, but reloading cached processed datasets (e.g. from `map`) may not work. This is because some hashes used by caching are based on pickle dumps of the function you pass to `map`.\r\n\r\nFinally you may copy the cache to another machine, but all the `cached-*.arrow` files are unlikely to be reloaded.", "OK good to know. Thanks @lhoestq !" ]
completed
[]
Cache is not transportable
NONE
https://api.github.com/repos/huggingface/datasets/issues/5585/timeline
### Describe the bug I would like to share cache between two machines (a Windows host machine and a WSL instance). I run most of my code in WSL. I have just run out of space in the virtual drive. Rather than expand the drive size, I plan to move the cache to the host Windows machine, thereby sharing the downloads. I'm hoping that I can just copy/paste the cache files, but I notice that a lot of the file names start with the path name, e.g. `_home_davidg_.cache_huggingface_datasets_conll2003_default-451...98.lock` where `home/davidg` is where the cache is in WSL. This seems to suggest that the cache is not portable/cannot be centralised or shared. Is this the case, or are the files that start with path names not integral to the caching mechanism? Because copying the cache files _seems_ to work, but I'm not filled with confidence that something isn't going to break. A related issue: when trying to load a dataset that should come from cache (running in WSL, pointing to cache on the Windows host), it seemed to work fine, but it still uses a WSL directory for `.cache\huggingface\modules\datasets_modules`. I see nothing in the docs about this, or how to point it to a different place. I have asked a related question on the forum: https://discuss.huggingface.co/t/is-datasets-cache-operating-system-agnostic/32656 ### Steps to reproduce the bug View the cache directory in WSL/Windows. ### Expected behavior Cache can be shared between (virtual) machines and be transportable. It would be nice to have a simple way to say "Dear Hugging Face packages, please put ALL your cache in `blah/de/blah`" and have all the Hugging Face packages respect that single location. ### Environment info ``` - `datasets` version: 2.9.0 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31 - Python version: 3.10.8 - PyArrow version: 11.0.0 - Pandas version: 1.5.3 ```
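On the "put ALL your cache in one place" point, the environment variables below are the usual levers; the Windows-host mount path is an example, and whether every Hugging Face library honors `HF_HOME` depends on its version, so treat this as a sketch:

```python
import os

# Set these before importing any Hugging Face library. HF_HOME moves the
# whole cache root (including modules/datasets_modules); HF_DATASETS_CACHE
# narrows the override to the datasets cache specifically.
os.environ["HF_HOME"] = "/mnt/c/hf_cache"
os.environ["HF_DATASETS_CACHE"] = "/mnt/c/hf_cache/datasets"

from datasets import load_dataset

ds = load_dataset("conll2003")  # downloads and caches under the new root
```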
https://api.github.com/repos/huggingface/datasets
null
1,602,190,030
https://api.github.com/repos/huggingface/datasets/issues/5585/comments
I_kwDODunzps5ff3rO
null
5,585
https://api.github.com/repos/huggingface/datasets/issues/5585/events
false
closed
2023-02-27T19:35:03Z
null
https://api.github.com/repos/huggingface/datasets/issues/5584
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5584/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5584/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/3059998?v=4", "events_url": "https://api.github.com/users/manuaero/events{/privacy}", "followers_url": "https://api.github.com/users/manuaero/followers", "following_url": "https://api.github.com/users/manuaero/following{/other_user}", "gists_url": "https://api.github.com/users/manuaero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/manuaero", "id": 3059998, "login": "manuaero", "node_id": "MDQ6VXNlcjMwNTk5OTg=", "organizations_url": "https://api.github.com/users/manuaero/orgs", "received_events_url": "https://api.github.com/users/manuaero/received_events", "repos_url": "https://api.github.com/users/manuaero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/manuaero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manuaero/subscriptions", "type": "User", "url": "https://api.github.com/users/manuaero" }
https://github.com/huggingface/datasets/issues/5584
[]
false
2023-02-28T07:27:59Z
2023-02-28T07:27:58Z
null
[ "Hi @manuaero \r\n\r\nThank you for your interest in the COYO dataset.\r\n\r\nOur dataset provides the img-url and alt-text in the form of a parquet, so to utilize the coyo dataset you will need to download it directly.\r\n\r\nWe provide a [guide](https://github.com/kakaobrain/coyo-dataset/blob/main/download/README.md) to download, so check it out.\r\n\r\nThank you." ]
completed
[]
Unable to load coyo700M dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/5584/timeline
### Describe the bug Seeing this error when downloading https://huggingface.co/datasets/kakaobrain/coyo-700m: ```ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.``` Full stack trace: ```Downloading and preparing dataset parquet/kakaobrain--coyo-700m to /root/.cache/huggingface/datasets/kakaobrain___parquet/kakaobrain--coyo-700m-ae729692ae3e0073/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec... Downloading data files: 100% 1/1 [00:00<00:00, 63.35it/s] Extracting data files: 100% 1/1 [00:00<00:00, 5.00it/s] --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) [/usr/local/lib/python3.8/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1859 _time = time.time() -> 1860 for _, table in generator: 1861 if max_shard_size is not None and writer._num_bytes > max_shard_size: 9 frames ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) [/usr/local/lib/python3.8/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1890 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1891 e = e.__context__ -> 1892 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1893 1894 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset``` ### Steps to reproduce the bug ``` from datasets import load_dataset hf_dataset = load_dataset("kakaobrain/coyo-700m") ``` ### Expected behavior The above command loads the dataset successfully, or handles the exception and continues loading the remainder. ### Environment info Google Colab (any version).
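Following the maintainer's suggestion above, the shards can be downloaded with the tooling from the kakaobrain/coyo-dataset guide and then loaded as plain Parquet; the local glob pattern below is an assumed layout:

```python
from datasets import load_dataset

# Load locally downloaded COYO parquet shards directly with the generic
# parquet builder instead of the hub dataset id.
ds = load_dataset("parquet", data_files="coyo-700m/data/*.parquet", split="train")
```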
https://api.github.com/repos/huggingface/datasets
null
1,601,821,808
https://api.github.com/repos/huggingface/datasets/issues/5584/comments
I_kwDODunzps5fedxw
null
5,584
https://api.github.com/repos/huggingface/datasets/issues/5584/events
false
closed
2023-02-27T17:04:46Z
null
https://api.github.com/repos/huggingface/datasets/issues/5583
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5583/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5583/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/5583
[]
false
2023-02-28T13:52:15Z
2023-02-28T13:44:04Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009044 / 0.011353 (-0.002309) | 0.004244 / 0.011008 (-0.006765) | 0.106705 / 0.038508 (0.068197) | 0.029779 / 0.023109 (0.006670) | 0.289684 / 0.275898 (0.013786) | 0.347100 / 0.323480 (0.023620) | 0.007071 / 0.007986 (-0.000915) | 0.003734 / 0.004328 (-0.000595) | 0.077971 / 0.004250 (0.073720) | 0.035323 / 0.037052 (-0.001730) | 0.334520 / 0.258489 (0.076031) | 0.375804 / 0.293841 (0.081964) | 0.049211 / 0.128546 (-0.079335) | 0.016992 / 0.075646 (-0.058654) | 0.337208 / 0.419271 (-0.082064) | 0.053700 / 0.043533 (0.010167) | 0.295750 / 0.255139 (0.040611) | 0.330157 / 0.283200 (0.046958) | 0.097017 / 0.141683 (-0.044666) | 1.379353 / 1.452155 (-0.072802) | 1.402670 / 1.492716 (-0.090047) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012685 / 0.018006 (-0.005321) | 0.474541 / 0.000490 (0.474051) | 0.006752 / 0.000200 (0.006552) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025735 / 0.037411 (-0.011676) | 0.092507 / 0.014526 (0.077982) | 0.100275 / 0.176557 (-0.076281) | 0.180359 / 0.737135 (-0.556777) | 0.104312 / 0.296338 (-0.192026) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456558 / 0.215209 (0.241349) | 4.786667 / 2.077655 (2.709012) | 
1.873169 / 1.504120 (0.369050) | 1.640935 / 1.541195 (0.099741) | 1.614543 / 1.468490 (0.146053) | 0.936144 / 4.584777 (-3.648633) | 4.699886 / 3.745712 (0.954174) | 2.398545 / 5.269862 (-2.871317) | 1.642808 / 4.565676 (-2.922868) | 0.124803 / 0.424275 (-0.299472) | 0.011848 / 0.007607 (0.004241) | 0.631684 / 0.226044 (0.405639) | 6.096052 / 2.268929 (3.827124) | 2.463052 / 55.444624 (-52.981572) | 1.928551 / 6.876477 (-4.947926) | 1.927790 / 2.142072 (-0.214283) | 1.098912 / 4.805227 (-3.706315) | 0.196343 / 6.500664 (-6.304321) | 0.063296 / 0.075469 (-0.012173) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255032 / 1.841788 (-0.586755) | 13.853623 / 8.074308 (5.779315) | 16.303280 / 10.191392 (6.111888) | 0.227287 / 0.680424 (-0.453137) | 0.037527 / 0.534201 (-0.496674) | 0.449345 / 0.579283 (-0.129938) | 0.522054 / 0.434364 (0.087690) | 0.552848 / 0.540337 (0.012511) | 0.642994 / 1.386936 (-0.743942) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008470 / 0.011353 (-0.002883) | 0.005167 / 0.011008 (-0.005841) | 0.077794 / 0.038508 (0.039286) | 0.029228 / 0.023109 (0.006119) | 0.340828 / 0.275898 (0.064930) | 0.400170 / 0.323480 (0.076691) | 0.005485 / 0.007986 (-0.002500) | 0.003854 / 0.004328 (-0.000475) | 0.077597 / 0.004250 (0.073346) | 0.036519 / 0.037052 (-0.000533) | 0.335522 / 0.258489 (0.077033) | 0.412622 / 0.293841 (0.118781) | 0.044587 / 0.128546 (-0.083959) | 0.016024 / 0.075646 (-0.059623) | 0.092312 / 0.419271 (-0.326960) | 0.055660 / 0.043533 (0.012127) | 0.343140 / 0.255139 (0.088001) | 0.386403 / 0.283200 (0.103203) | 0.098634 / 0.141683 (-0.043049) | 1.326126 / 1.452155 (-0.126029) | 1.430316 / 1.492716 (-0.062400) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222807 / 0.018006 (0.204801) | 0.473622 / 0.000490 (0.473132) | 0.000376 / 0.000200 (0.000176) | 0.000066 / 0.000054 (0.000012) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024599 / 0.037411 (-0.012813) | 0.100743 / 0.014526 (0.086217) | 0.112086 / 0.176557 (-0.064471) | 0.198294 / 0.737135 (-0.538842) | 0.111210 / 0.296338 (-0.185129) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.494120 / 0.215209 (0.278911) | 5.117958 / 2.077655 (3.040303) | 2.305131 / 1.504120 (0.801011) | 2.015591 / 1.541195 (0.474396) | 2.027284 / 1.468490 (0.558794) | 1.014241 / 4.584777 (-3.570536) | 4.738836 / 3.745712 (0.993124) | 2.519718 / 5.269862 (-2.750143) | 1.706379 / 4.565676 (-2.859298) | 0.122452 / 0.424275 (-0.301824) | 0.011500 / 0.007607 (0.003893) | 0.632864 / 0.226044 (0.406820) | 6.295457 / 2.268929 (4.026529) | 2.824897 / 55.444624 (-52.619727) | 2.324359 / 6.876477 (-4.552117) | 2.281046 / 2.142072 (0.138974) | 1.173570 / 4.805227 (-3.631657) | 0.197195 / 6.500664 (-6.303469) | 0.064845 / 0.075469 (-0.010624) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.273224 / 1.841788 (-0.568563) | 14.531155 / 8.074308 (6.456847) | 15.892176 / 10.191392 (5.700784) | 0.208051 / 0.680424 (-0.472373) | 0.023119 / 0.534201 (-0.511082) | 0.422317 / 0.579283 (-0.156966) | 0.519946 / 0.434364 (0.085582) | 0.544517 / 0.540337 (0.004179) | 0.605955 / 1.386936 (-0.780981) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#337a4a91d0268c68f26760321c9b45bb4a98832a \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010806 / 0.011353 (-0.000547) | 0.005631 / 0.011008 (-0.005378) | 0.113166 / 0.038508 (0.074657) | 0.042980 / 0.023109 (0.019871) | 0.344856 / 0.275898 (0.068958) | 0.404417 / 0.323480 (0.080938) | 0.012222 / 0.007986 (0.004236) | 0.004470 / 0.004328 (0.000141) | 0.088072 / 0.004250 (0.083822) | 0.049815 / 0.037052 (0.012763) | 0.366532 / 0.258489 (0.108043) | 0.392558 / 0.293841 (0.098717) | 0.045411 / 0.128546 (-0.083135) | 0.014118 / 0.075646 (-0.061529) | 0.392894 / 0.419271 (-0.026378) | 0.067713 / 0.043533 (0.024181) | 0.353013 / 0.255139 (0.097874) | 0.378375 / 0.283200 (0.095175) | 0.123686 / 0.141683 (-0.017996) | 1.665272 / 1.452155 (0.213118) | 1.748383 / 1.492716 (0.255667) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011672 / 0.018006 (-0.006335) | 0.481667 / 0.000490 (0.481178) | 0.003644 / 0.000200 (0.003444) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030436 / 0.037411 (-0.006976) | 0.122577 / 0.014526 (0.108052) | 0.135409 / 0.176557 (-0.041148) | 0.220385 / 0.737135 (-0.516750) | 0.143140 / 0.296338 (-0.153199) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471146 / 0.215209 (0.255937) | 4.645023 / 2.077655 (2.567368) | 2.126783 / 1.504120 (0.622663) | 1.907905 / 1.541195 (0.366710) | 1.969561 / 1.468490 (0.501071) | 0.798670 / 4.584777 (-3.786107) | 4.394787 / 3.745712 (0.649075) | 2.353535 / 5.269862 (-2.916327) | 1.501013 / 4.565676 (-3.064664) | 0.097472 / 0.424275 (-0.326803) | 0.014015 / 0.007607 (0.006408) | 0.589365 / 0.226044 (0.363320) | 5.897331 / 2.268929 (3.628402) | 2.656198 / 55.444624 (-52.788427) | 2.256082 / 6.876477 (-4.620395) | 2.271122 / 2.142072 (0.129050) | 0.961566 / 4.805227 (-3.843661) | 0.188303 / 6.500664 (-6.312361) | 0.073258 / 0.075469 (-0.002211) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.445266 / 1.841788 (-0.396522) | 16.876710 / 8.074308 (8.802402) | 16.004287 / 10.191392 (5.812895) | 0.212252 / 0.680424 (-0.468172) | 0.033186 / 0.534201 (-0.501015) | 0.520564 / 0.579283 (-0.058719) | 0.516865 / 0.434364 (0.082501) | 0.638482 / 0.540337 
(0.098144) | 0.761959 / 1.386936 (-0.624977) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008101 / 0.011353 (-0.003252) | 0.005512 / 0.011008 (-0.005497) | 0.086138 / 0.038508 (0.047630) | 0.038605 / 0.023109 (0.015496) | 0.413082 / 0.275898 (0.137184) | 0.444016 / 0.323480 (0.120536) | 0.006196 / 0.007986 (-0.001790) | 0.005736 / 0.004328 (0.001408) | 0.086938 / 0.004250 (0.082688) | 0.052307 / 0.037052 (0.015255) | 0.415206 / 0.258489 (0.156717) | 0.481510 / 0.293841 (0.187669) | 0.041469 / 0.128546 (-0.087077) | 0.013481 / 0.075646 (-0.062165) | 0.101528 / 0.419271 (-0.317744) | 0.056507 / 0.043533 (0.012974) | 0.418166 / 0.255139 (0.163027) | 0.443834 / 0.283200 (0.160634) | 0.116434 / 0.141683 (-0.025249) | 1.651223 / 1.452155 (0.199068) | 1.746429 / 1.492716 (0.253713) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242381 / 0.018006 (0.224375) | 0.478826 / 0.000490 (0.478337) | 0.000463 / 0.000200 (0.000264) | 0.000067 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031743 / 0.037411 (-0.005668) | 0.126141 / 0.014526 (0.111616) | 0.134539 / 0.176557 (-0.042018) | 0.216546 / 0.737135 (-0.520590) | 0.143513 / 0.296338 (-0.152825) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.486915 / 0.215209 (0.271706) | 4.833812 / 2.077655 (2.756158) | 2.317785 / 1.504120 (0.813666) | 2.114181 / 1.541195 (0.572986) | 2.153896 / 
1.468490 (0.685406) | 0.797490 / 4.584777 (-3.787287) | 4.369950 / 3.745712 (0.624238) | 2.305492 / 5.269862 (-2.964370) | 1.488860 / 4.565676 (-3.076816) | 0.098071 / 0.424275 (-0.326204) | 0.014129 / 0.007607 (0.006522) | 0.611311 / 0.226044 (0.385266) | 6.087482 / 2.268929 (3.818554) | 2.837676 / 55.444624 (-52.606948) | 2.451819 / 6.876477 (-4.424657) | 2.456763 / 2.142072 (0.314690) | 0.957637 / 4.805227 (-3.847590) | 0.190974 / 6.500664 (-6.309690) | 0.074497 / 0.075469 (-0.000972) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.466214 / 1.841788 (-0.375574) | 17.063925 / 8.074308 (8.989617) | 14.630326 / 10.191392 (4.438934) | 0.170570 / 0.680424 (-0.509854) | 0.023794 / 0.534201 (-0.510407) | 0.509175 / 0.579283 (-0.070108) | 0.506485 / 0.434364 (0.072121) | 0.616965 / 0.540337 (0.076628) | 0.718176 / 1.386936 (-0.668760) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c4f14de325e26910d026f377756dd8a231150398 \"CML watermark\")\n" ]
null
[]
Do not write index by default when exporting a dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5583/timeline
Ensures all the writers that use Pandas for conversion (JSON, CSV, SQL) do not export `index` by default (https://github.com/huggingface/datasets/pull/5490 only did this for CSV)
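A quick sketch of the effect (the file name is illustrative):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})
# With this change, the pandas-backed exporters behave as if index=False were
# passed, so e.g. the CSV below has no unnamed leading index column.
ds.to_csv("out.csv")
```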
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5583.diff", "html_url": "https://github.com/huggingface/datasets/pull/5583", "merged_at": "2023-02-28T13:44:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/5583.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5583" }
1,601,583,625
https://api.github.com/repos/huggingface/datasets/issues/5583/comments
PR_kwDODunzps5K2mIz
null
5,583
https://api.github.com/repos/huggingface/datasets/issues/5583/events
true
closed
2023-02-27T10:50:07Z
null
https://api.github.com/repos/huggingface/datasets/issues/5582
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5582/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5582/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/50772274?v=4", "events_url": "https://api.github.com/users/patrickloeber/events{/privacy}", "followers_url": "https://api.github.com/users/patrickloeber/followers", "following_url": "https://api.github.com/users/patrickloeber/following{/other_user}", "gists_url": "https://api.github.com/users/patrickloeber/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickloeber", "id": 50772274, "login": "patrickloeber", "node_id": "MDQ6VXNlcjUwNzcyMjc0", "organizations_url": "https://api.github.com/users/patrickloeber/orgs", "received_events_url": "https://api.github.com/users/patrickloeber/received_events", "repos_url": "https://api.github.com/users/patrickloeber/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickloeber/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickloeber/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickloeber" }
https://github.com/huggingface/datasets/pull/5582
[]
false
2023-03-13T19:10:22Z
2023-03-13T19:03:32Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006362 / 0.011353 (-0.004991) | 0.004546 / 0.011008 (-0.006462) | 0.097003 / 0.038508 (0.058495) | 0.028007 / 0.023109 (0.004898) | 0.315097 / 0.275898 (0.039199) | 0.365128 / 0.323480 (0.041649) | 0.004819 / 0.007986 (-0.003167) | 0.003335 / 0.004328 (-0.000994) | 0.076665 / 0.004250 (0.072415) | 0.038285 / 0.037052 (0.001233) | 0.322100 / 0.258489 (0.063611) | 0.407466 / 0.293841 (0.113625) | 0.031580 / 0.128546 (-0.096966) | 0.011645 / 0.075646 (-0.064001) | 0.321789 / 0.419271 (-0.097483) | 0.051015 / 0.043533 (0.007483) | 0.331762 / 0.255139 (0.076623) | 0.369727 / 0.283200 (0.086527) | 0.090144 / 0.141683 (-0.051539) | 1.485480 / 1.452155 (0.033326) | 1.562032 / 1.492716 (0.069316) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201192 / 0.018006 (0.183186) | 0.409760 / 0.000490 (0.409270) | 0.002220 / 0.000200 (0.002020) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022361 / 0.037411 (-0.015050) | 0.096375 / 0.014526 (0.081849) | 0.101369 / 0.176557 (-0.075188) | 0.161568 / 0.737135 (-0.575568) | 0.105094 / 0.296338 (-0.191245) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426251 / 0.215209 (0.211042) | 4.261374 / 2.077655 (2.183720) | 
2.015688 / 1.504120 (0.511569) | 1.833708 / 1.541195 (0.292513) | 1.908994 / 1.468490 (0.440504) | 0.703108 / 4.584777 (-3.881669) | 3.420767 / 3.745712 (-0.324945) | 1.844776 / 5.269862 (-3.425086) | 1.158470 / 4.565676 (-3.407207) | 0.083324 / 0.424275 (-0.340951) | 0.013054 / 0.007607 (0.005447) | 0.521473 / 0.226044 (0.295429) | 5.245505 / 2.268929 (2.976576) | 2.349110 / 55.444624 (-53.095515) | 2.011119 / 6.876477 (-4.865358) | 2.217807 / 2.142072 (0.075734) | 0.808584 / 4.805227 (-3.996643) | 0.151337 / 6.500664 (-6.349327) | 0.065815 / 0.075469 (-0.009654) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.221839 / 1.841788 (-0.619949) | 13.634161 / 8.074308 (5.559853) | 13.915360 / 10.191392 (3.723968) | 0.126448 / 0.680424 (-0.553976) | 0.016614 / 0.534201 (-0.517587) | 0.379150 / 0.579283 (-0.200133) | 0.382134 / 0.434364 (-0.052230) | 0.442845 / 0.540337 (-0.097493) | 0.519578 / 1.386936 (-0.867358) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006238 / 0.011353 (-0.005115) | 0.004591 / 0.011008 (-0.006418) | 0.076652 / 0.038508 (0.038144) | 0.026882 / 0.023109 (0.003773) | 0.341948 / 0.275898 (0.066050) | 0.375244 / 0.323480 (0.051764) | 0.004770 / 0.007986 (-0.003215) | 0.004703 / 0.004328 (0.000374) | 0.075797 / 0.004250 (0.071547) | 0.035001 / 0.037052 (-0.002051) | 0.341670 / 0.258489 (0.083181) | 0.383028 / 0.293841 (0.089187) | 0.031756 / 0.128546 (-0.096791) | 0.011714 / 0.075646 (-0.063933) | 0.085552 / 0.419271 (-0.333720) | 0.047697 / 0.043533 (0.004164) | 0.340805 / 0.255139 (0.085666) | 0.365478 / 0.283200 (0.082278) | 0.093146 / 0.141683 (-0.048537) | 1.465100 / 1.452155 (0.012945) | 1.552708 / 1.492716 (0.059992) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209117 / 0.018006 (0.191111) | 0.402622 / 0.000490 (0.402132) | 0.003940 / 0.000200 (0.003740) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026027 / 0.037411 (-0.011385) | 0.098346 / 0.014526 (0.083820) | 0.107349 / 0.176557 (-0.069207) | 0.157846 / 0.737135 (-0.579289) | 0.109566 / 0.296338 (-0.186772) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445088 / 0.215209 (0.229879) | 4.450727 / 2.077655 (2.373072) | 2.237798 / 1.504120 (0.733678) | 2.026060 / 1.541195 (0.484866) | 2.020464 / 1.468490 (0.551974) | 0.700155 / 4.584777 (-3.884622) | 3.435497 / 3.745712 (-0.310215) | 2.851970 / 5.269862 (-2.417891) | 1.512689 / 4.565676 (-3.052988) | 0.083717 / 0.424275 (-0.340558) | 0.012466 / 0.007607 (0.004859) | 0.545130 / 0.226044 (0.319085) | 5.478228 / 2.268929 (3.209300) | 2.554169 / 55.444624 (-52.890456) | 2.214703 / 6.876477 (-4.661774) | 2.229997 / 2.142072 (0.087925) | 0.809851 / 4.805227 (-3.995376) | 0.151019 / 6.500664 (-6.349645) | 0.066354 / 0.075469 (-0.009115) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281016 / 1.841788 (-0.560772) | 14.071312 / 8.074308 (5.997004) | 14.682465 / 10.191392 (4.491073) | 0.144197 / 0.680424 (-0.536227) | 0.017088 / 0.534201 (-0.517113) | 0.379049 / 0.579283 (-0.200234) | 0.390713 / 0.434364 (-0.043650) | 0.435804 / 0.540337 (-0.104534) | 0.518895 / 1.386936 (-0.868041) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fc5c84f36684343bff3e424cb0fd1ac5ecdd66da \"CML watermark\")\n" ]
null
[]
Add column_names to IterableDataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5582/timeline
This PR closes #5383. * Add a `column_names` property to `IterableDataset` * Add multiple tests for this new property
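A usage sketch of the new property (the dataset choice is illustrative):

```python
from datasets import load_dataset

# Streaming datasets are IterableDataset instances; with this PR their column
# names can be inspected without downloading any data.
ds = load_dataset("glue", "ax", split="test", streaming=True)
print(ds.column_names)  # columns from the dataset's features, or None if unknown
```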
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5582.diff", "html_url": "https://github.com/huggingface/datasets/pull/5582", "merged_at": "2023-03-13T19:03:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/5582.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5582" }
1,600,932,092
https://api.github.com/repos/huggingface/datasets/issues/5582/comments
PR_kwDODunzps5K0ZcN
null
5,582
https://api.github.com/repos/huggingface/datasets/issues/5582/events
true
closed
2023-02-27T08:03:09Z
null
https://api.github.com/repos/huggingface/datasets/issues/5581
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5581/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5581/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4", "events_url": "https://api.github.com/users/NightMachinery/events{/privacy}", "followers_url": "https://api.github.com/users/NightMachinery/followers", "following_url": "https://api.github.com/users/NightMachinery/following{/other_user}", "gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NightMachinery", "id": 36224762, "login": "NightMachinery", "node_id": "MDQ6VXNlcjM2MjI0NzYy", "organizations_url": "https://api.github.com/users/NightMachinery/orgs", "received_events_url": "https://api.github.com/users/NightMachinery/received_events", "repos_url": "https://api.github.com/users/NightMachinery/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions", "type": "User", "url": "https://api.github.com/users/NightMachinery" }
https://github.com/huggingface/datasets/issues/5581
[]
false
2023-02-28T19:19:17Z
2023-02-28T19:19:17Z
null
[ "Thanks for reporting!" ]
completed
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
[DOC] Mistaken docs on set_format
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5581/timeline
### Describe the bug https://huggingface.co/docs/datasets/v2.10.0/en/package_reference/main_classes#datasets.Dataset.set_format <img width="700" alt="image" src="https://user-images.githubusercontent.com/36224762/221506973-ae2e3991-60a7-4d4e-99f8-965c6eb61e59.png"> Actually running it results in: <img width="1094" alt="image" src="https://user-images.githubusercontent.com/36224762/221507032-007dab82-8781-4319-b21a-e6e4d40d97b3.png"> ### Steps to reproduce the bug _ ### Expected behavior _ ### Environment info - `datasets` version: 2.10.0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 9.0.0 - Pandas version: 1.3.5
https://api.github.com/repos/huggingface/datasets
null
1,600,675,489
https://api.github.com/repos/huggingface/datasets/issues/5581/comments
I_kwDODunzps5faF6h
null
5,581
https://api.github.com/repos/huggingface/datasets/issues/5581/events
false
closed
2023-02-27T04:06:05Z
null
https://api.github.com/repos/huggingface/datasets/issues/5580
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5580/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5580/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4", "events_url": "https://api.github.com/users/dwyatte/events{/privacy}", "followers_url": "https://api.github.com/users/dwyatte/followers", "following_url": "https://api.github.com/users/dwyatte/following{/other_user}", "gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dwyatte", "id": 2512762, "login": "dwyatte", "node_id": "MDQ6VXNlcjI1MTI3NjI=", "organizations_url": "https://api.github.com/users/dwyatte/orgs", "received_events_url": "https://api.github.com/users/dwyatte/received_events", "repos_url": "https://api.github.com/users/dwyatte/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions", "type": "User", "url": "https://api.github.com/users/dwyatte" }
https://github.com/huggingface/datasets/pull/5580
[]
false
2023-03-11T01:02:49Z
2023-03-11T00:55:40Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Regarding the tests I think it should be possible to use the mockfs fixture, it allows to play with a dummy fsspec FileSystem with the \"mock://\" protocol.\r\n\r\n> However it requires some storage_options to be passed. Maybe it can be added to DownloadConfig which is passed to cached_path, so that fsspec_get and fsspec_head can use the user's storage_options ?\r\n\r\n@lhoestq I went ahead and tested this with a patch so that I could assign the mockfs as a return value. Let me know if I'm missing something though and we need to pass storage_options down", "> Instead of patching think it would be better to have a new filesystem TmpDirFileSystem (tmpfs) that doesn't need storage_options for the tests, and that is based on a temporary directory created just for the fixture. Maybe something like this ?\r\n\r\nThanks for the recommendation, this works great.", "Feel free to merge `main` into your PR to fix the CI :)", "Should be good to go. Thanks!", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006183 / 0.011353 (-0.005170) | 0.004180 / 0.011008 (-0.006829) | 0.095965 / 0.038508 (0.057457) | 0.026754 / 0.023109 (0.003645) | 0.339724 / 0.275898 (0.063826) | 0.381628 / 0.323480 (0.058149) | 0.004615 / 0.007986 (-0.003371) | 0.004469 / 0.004328 (0.000140) | 0.074035 / 0.004250 (0.069784) | 0.035089 / 0.037052 (-0.001963) | 0.352253 / 0.258489 (0.093764) | 0.389598 / 0.293841 (0.095757) | 0.032262 / 0.128546 (-0.096285) | 0.011392 / 0.075646 (-0.064254) | 0.323884 / 0.419271 (-0.095388) | 0.042658 / 0.043533 (-0.000874) | 0.331533 / 0.255139 (0.076394) | 0.364723 / 0.283200 (0.081523) | 0.086349 / 0.141683 (-0.055334) | 1.465687 / 1.452155 (0.013533) | 1.559782 / 1.492716 (0.067066) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198562 / 0.018006 (0.180556) | 0.457170 / 0.000490 (0.456680) | 0.000409 / 0.000200 (0.000209) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 
0.022439 / 0.037411 (-0.014973) | 0.096551 / 0.014526 (0.082025) | 0.102230 / 0.176557 (-0.074326) | 0.160878 / 0.737135 (-0.576257) | 0.109348 / 0.296338 (-0.186990) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456635 / 0.215209 (0.241426) | 4.563571 / 2.077655 (2.485916) | 2.313048 / 1.504120 (0.808928) | 2.117433 / 1.541195 (0.576239) | 2.127478 / 1.468490 (0.658988) | 0.699478 / 4.584777 (-3.885299) | 3.358955 / 3.745712 (-0.386757) | 1.821437 / 5.269862 (-3.448424) | 1.158239 / 4.565676 (-3.407438) | 0.083207 / 0.424275 (-0.341068) | 0.012925 / 0.007607 (0.005318) | 0.556526 / 0.226044 (0.330482) | 5.552364 / 2.268929 (3.283435) | 2.744696 / 55.444624 (-52.699928) | 2.374455 / 6.876477 (-4.502022) | 2.442021 / 2.142072 (0.299949) | 0.809393 / 4.805227 (-3.995834) | 0.152305 / 6.500664 (-6.348359) | 0.066164 / 0.075469 (-0.009305) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.258268 / 1.841788 (-0.583520) | 13.402391 / 8.074308 (5.328083) | 13.816927 / 10.191392 (3.625535) | 0.148466 / 0.680424 (-0.531958) | 0.016487 / 0.534201 (-0.517714) | 0.385888 / 0.579283 (-0.193395) | 0.378840 / 0.434364 (-0.055524) | 0.444527 / 0.540337 (-0.095810) | 0.531011 / 1.386936 (-0.855925) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006230 / 0.011353 (-0.005123) | 0.004488 / 0.011008 (-0.006520) | 0.077539 / 0.038508 (0.039031) | 0.026611 / 
0.023109 (0.003502) | 0.342093 / 0.275898 (0.066195) | 0.371555 / 0.323480 (0.048075) | 0.004665 / 0.007986 (-0.003321) | 0.003289 / 0.004328 (-0.001039) | 0.078378 / 0.004250 (0.074128) | 0.035223 / 0.037052 (-0.001829) | 0.339972 / 0.258489 (0.081483) | 0.378755 / 0.293841 (0.084914) | 0.031331 / 0.128546 (-0.097215) | 0.011406 / 0.075646 (-0.064241) | 0.086891 / 0.419271 (-0.332381) | 0.047713 / 0.043533 (0.004180) | 0.342678 / 0.255139 (0.087539) | 0.364536 / 0.283200 (0.081337) | 0.092132 / 0.141683 (-0.049551) | 1.537050 / 1.452155 (0.084895) | 1.639927 / 1.492716 (0.147211) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219933 / 0.018006 (0.201927) | 0.391627 / 0.000490 (0.391137) | 0.002238 / 0.000200 (0.002038) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024890 / 0.037411 (-0.012521) | 0.098989 / 0.014526 (0.084464) | 0.104505 / 0.176557 (-0.072052) | 0.156252 / 0.737135 (-0.580884) | 0.108027 / 0.296338 (-0.188312) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443957 / 0.215209 (0.228748) | 4.450850 / 2.077655 (2.373196) | 2.076043 / 1.504120 (0.571923) | 1.866396 / 1.541195 (0.325202) | 1.902692 / 1.468490 (0.434202) | 0.703160 / 4.584777 (-3.881617) | 3.373761 / 3.745712 (-0.371951) | 2.615649 / 5.269862 (-2.654213) | 1.340612 / 4.565676 (-3.225065) | 0.083836 / 0.424275 (-0.340439) | 0.012619 / 0.007607 (0.005012) | 0.553410 / 0.226044 (0.327365) | 5.526500 / 2.268929 (3.257571) | 2.513213 / 55.444624 (-52.931411) | 2.152701 / 6.876477 (-4.723776) | 2.165092 / 2.142072 (0.023019) | 0.818381 / 4.805227 (-3.986846) | 0.152118 / 6.500664 (-6.348546) | 0.066950 / 0.075469 (-0.008519) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291468 / 1.841788 (-0.550320) | 13.694828 / 8.074308 (5.620520) | 13.821019 / 10.191392 (3.629627) | 0.126077 / 0.680424 (-0.554347) | 0.016543 / 0.534201 (-0.517658) | 0.381399 / 0.579283 (-0.197884) | 0.377326 / 0.434364 (-0.057038) | 0.439275 / 0.540337 (-0.101063) | 0.524021 / 1.386936 (-0.862915) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3e6269979fc80ae8939294d26298897f0db5b84d \"CML watermark\")\n" ]
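The review thread above discusses testing the fsspec integration via a mockfs fixture (a dummy filesystem under the "mock://" protocol) and a `TmpDirFileSystem` that needs no `storage_options`. A rough, illustrative sketch of such a test-only filesystem, not the actual fixture from the repo, backed by fsspec's in-memory implementation:

```python
# Illustrative only: a test filesystem in the spirit of the mockfs /
# TmpDirFileSystem fixture discussed above; names are hypothetical.
import fsspec
from fsspec.implementations.memory import MemoryFileSystem


class MockFileSystem(MemoryFileSystem):
    protocol = "mock"


# Make "mock://" resolvable through the fsspec registry.
fsspec.register_implementation("mock", MockFileSystem, clobber=True)

fs = fsspec.filesystem("mock")
with fs.open("/hello.txt", "wb") as f:
    f.write(b"hello")
print(fs.cat("/hello.txt"))  # b'hello'
```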
null
[]
Support cloud storage in load_dataset via fsspec
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5580/timeline
Closes https://github.com/huggingface/datasets/issues/5281. This PR uses fsspec to support datasets on cloud storage (tested manually with GCS). ETags are currently unsupported for cloud storage. In general, a much larger refactor could be done to just use fsspec for all schemes (ftp, http/s, s3, gcs) to unify the interfaces here, but I ultimately opted to leave that out of this PR. I didn't create a GCS filesystem class in `datasets.filesystems`, since the S3 one appears to be a wrapper around `s3fs.S3FileSystem` and is mainly used to generate docs.
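For illustration, a minimal sketch of the kind of call this change enables, assuming a CSV object exists at the hypothetical `gs://my-bucket/data.csv` and that `gcsfs` is installed so fsspec can resolve the `gs://` scheme (passing credentials for private buckets via `storage_options`/`DownloadConfig` is the open question in the review thread above):

```python
# Sketch only: the bucket and file are hypothetical placeholders.
from datasets import load_dataset

# fsspec resolves the gs:// URL (requires gcsfs for the gcs scheme).
dataset = load_dataset("csv", data_files="gs://my-bucket/data.csv")
print(dataset["train"][0])
```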
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5580.diff", "html_url": "https://github.com/huggingface/datasets/pull/5580", "merged_at": "2023-03-11T00:55:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/5580.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5580" }
1,600,431,792
https://api.github.com/repos/huggingface/datasets/issues/5580/comments
PR_kwDODunzps5Kys1c
null
5,580
https://api.github.com/repos/huggingface/datasets/issues/5580/events
true
closed
2023-02-25T14:53:17Z
null
https://api.github.com/repos/huggingface/datasets/issues/5579
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5579/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5579/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4", "events_url": "https://api.github.com/users/Laurent2916/events{/privacy}", "followers_url": "https://api.github.com/users/Laurent2916/followers", "following_url": "https://api.github.com/users/Laurent2916/following{/other_user}", "gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Laurent2916", "id": 21087104, "login": "Laurent2916", "node_id": "MDQ6VXNlcjIxMDg3MTA0", "organizations_url": "https://api.github.com/users/Laurent2916/orgs", "received_events_url": "https://api.github.com/users/Laurent2916/received_events", "repos_url": "https://api.github.com/users/Laurent2916/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions", "type": "User", "url": "https://api.github.com/users/Laurent2916" }
https://github.com/huggingface/datasets/pull/5579
[]
false
2023-03-23T19:24:59Z
2023-03-23T19:24:50Z
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5579). All of your documentation changes will be reflected on that endpoint.", "I'm not sure we need this part as we provide a link to the notebook that shows how to train an object detection model, and this notebook instantiates a `DataLoader` before training the model. I'd like to hear what @stevhliu thinks.\r\n\r\nPS: Your `collate_fn` calls `torch.stack` on the `bbox` tensors, which don't have the same shape, so this will fail.", "I agree with @mariosasko; we also have a [Use with PyTorch](https://huggingface.co/docs/datasets/use_with_pytorch) guide that shows how you can create a `DataLoader`. " ]
null
[]
Add instructions to create `DataLoader` from augmented dataset in object detection guide
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5579/timeline
This PR adds instructions on how to create a `DataLoader` to the guide on how to use object detection with augmentations (#4710). I am open to any suggestions for improvement!
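A self-contained sketch of the `DataLoader` pattern under discussion, using toy data (the column names and shapes are illustrative, not the guide's actual columns). Note the `collate_fn` keeps the variable-length boxes as a plain list rather than calling `torch.stack` on them, which addresses the shape mismatch pointed out in the review above:

```python
import torch
from torch.utils.data import DataLoader
from datasets import Dataset

# Toy stand-in for the augmented dataset from the guide.
ds = Dataset.from_dict({
    "pixel_values": [[[0.0] * 4] * 4, [[1.0] * 4] * 4],
    "bbox": [[[0, 0, 2, 2]], [[1, 1, 3, 3], [0, 0, 1, 1]]],
}).with_format("torch")

def collate_fn(batch):
    # Images share a shape after augmentation, so they can be stacked;
    # box counts vary per image, so keep them as a list of tensors.
    pixel_values = torch.stack([ex["pixel_values"] for ex in batch])
    bboxes = [ex["bbox"] for ex in batch]
    return {"pixel_values": pixel_values, "bbox": bboxes}

loader = DataLoader(ds, batch_size=2, collate_fn=collate_fn)
batch = next(iter(loader))
print(batch["pixel_values"].shape)  # torch.Size([2, 4, 4])
```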
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5579.diff", "html_url": "https://github.com/huggingface/datasets/pull/5579", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5579.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5579" }
1,599,732,211
https://api.github.com/repos/huggingface/datasets/issues/5579/comments
PR_kwDODunzps5Kwgo4
null
5,579
https://api.github.com/repos/huggingface/datasets/issues/5579/events
true
closed
2023-02-24T15:37:43Z
null
https://api.github.com/repos/huggingface/datasets/issues/5578
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5578/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5578/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/5578
[]
false
2023-02-27T17:28:25Z
2023-02-27T17:21:09Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008124 / 0.011353 (-0.003229) | 0.004594 / 0.011008 (-0.006414) | 0.101575 / 0.038508 (0.063066) | 0.029074 / 0.023109 (0.005965) | 0.314641 / 0.275898 (0.038743) | 0.372006 / 0.323480 (0.048526) | 0.006882 / 0.007986 (-0.001103) | 0.003371 / 0.004328 (-0.000958) | 0.078800 / 0.004250 (0.074550) | 0.034030 / 0.037052 (-0.003023) | 0.326917 / 0.258489 (0.068428) | 0.357628 / 0.293841 (0.063788) | 0.033076 / 0.128546 (-0.095470) | 0.011552 / 0.075646 (-0.064094) | 0.321715 / 0.419271 (-0.097557) | 0.040426 / 0.043533 (-0.003107) | 0.315091 / 0.255139 (0.059952) | 0.339291 / 0.283200 (0.056091) | 0.087280 / 0.141683 (-0.054403) | 1.443445 / 1.452155 (-0.008710) | 1.489233 / 1.492716 (-0.003483) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.182643 / 0.018006 (0.164637) | 0.390205 / 0.000490 (0.389716) | 0.001361 / 0.000200 (0.001161) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022767 / 0.037411 (-0.014644) | 0.095744 / 0.014526 (0.081219) | 0.102763 / 0.176557 (-0.073794) | 0.166760 / 0.737135 (-0.570375) | 0.106393 / 0.296338 (-0.189945) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424649 / 0.215209 (0.209440) | 4.257982 / 2.077655 (2.180327) | 
2.135847 / 1.504120 (0.631727) | 1.924810 / 1.541195 (0.383615) | 1.813797 / 1.468490 (0.345307) | 0.695467 / 4.584777 (-3.889310) | 3.330164 / 3.745712 (-0.415548) | 2.665606 / 5.269862 (-2.604255) | 1.458619 / 4.565676 (-3.107058) | 0.082408 / 0.424275 (-0.341867) | 0.012259 / 0.007607 (0.004652) | 0.527737 / 0.226044 (0.301693) | 5.271119 / 2.268929 (3.002191) | 2.618655 / 55.444624 (-52.825970) | 2.312321 / 6.876477 (-4.564155) | 2.270096 / 2.142072 (0.128023) | 0.811563 / 4.805227 (-3.993664) | 0.148512 / 6.500664 (-6.352152) | 0.064562 / 0.075469 (-0.010907) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212483 / 1.841788 (-0.629304) | 13.471679 / 8.074308 (5.397371) | 13.691054 / 10.191392 (3.499662) | 0.137399 / 0.680424 (-0.543025) | 0.028489 / 0.534201 (-0.505711) | 0.398879 / 0.579283 (-0.180404) | 0.396712 / 0.434364 (-0.037652) | 0.458879 / 0.540337 (-0.081458) | 0.537143 / 1.386936 (-0.849793) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006911 / 0.011353 (-0.004442) | 0.004941 / 0.011008 (-0.006067) | 0.078606 / 0.038508 (0.040098) | 0.028411 / 0.023109 (0.005302) | 0.352172 / 0.275898 (0.076274) | 0.401155 / 0.323480 (0.077675) | 0.005433 / 0.007986 (-0.002552) | 0.003704 / 0.004328 (-0.000625) | 0.076615 / 0.004250 (0.072365) | 0.043814 / 0.037052 (0.006761) | 0.346928 / 0.258489 (0.088439) | 0.405587 / 0.293841 (0.111746) | 0.032176 / 0.128546 (-0.096370) | 0.011863 / 0.075646 (-0.063783) | 0.087209 / 0.419271 (-0.332063) | 0.042977 / 0.043533 (-0.000556) | 0.345366 / 0.255139 (0.090227) | 0.419664 / 0.283200 (0.136464) | 0.093862 / 0.141683 (-0.047821) | 1.490968 / 1.452155 (0.038813) | 1.566644 / 1.492716 (0.073927) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216703 / 0.018006 (0.198697) | 0.472411 / 0.000490 (0.471921) | 0.002234 / 0.000200 (0.002034) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027672 / 0.037411 (-0.009740) | 0.109793 / 0.014526 (0.095267) | 0.110720 / 0.176557 (-0.065837) | 0.182342 / 0.737135 (-0.554793) | 0.116150 / 0.296338 (-0.180188) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438165 / 0.215209 (0.222956) | 4.366213 / 2.077655 (2.288558) | 2.065036 / 1.504120 (0.560917) | 1.860105 / 1.541195 (0.318911) | 1.966885 / 1.468490 (0.498395) | 0.705194 / 4.584777 (-3.879583) | 3.389408 / 3.745712 (-0.356304) | 2.632155 / 5.269862 (-2.637707) | 1.471090 / 4.565676 (-3.094587) | 0.083579 / 0.424275 (-0.340697) | 0.012643 / 0.007607 (0.005036) | 0.542230 / 0.226044 (0.316186) | 5.416293 / 2.268929 (3.147365) | 2.517391 / 55.444624 (-52.927233) | 2.160159 / 6.876477 (-4.716317) | 2.167104 / 2.142072 (0.025031) | 0.807142 / 4.805227 (-3.998085) | 0.152249 / 6.500664 (-6.348415) | 0.067559 / 0.075469 (-0.007910) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.399516 / 1.841788 (-0.442272) | 15.289898 / 8.074308 (7.215590) | 14.188758 / 10.191392 (3.997366) | 0.161277 / 0.680424 (-0.519147) | 0.016854 / 0.534201 (-0.517347) | 0.382091 / 0.579283 (-0.197192) | 0.396639 / 0.434364 (-0.037725) | 0.467932 / 0.540337 (-0.072405) | 0.552017 / 1.386936 (-0.834919) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2e050273ec3d2a7e53d817544318b23fb51430d0 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011038 / 0.011353 (-0.000315) | 0.005878 / 0.011008 (-0.005130) | 0.118247 / 0.038508 (0.079739) | 0.043988 / 0.023109 (0.020879) | 0.350823 / 0.275898 (0.074925) | 0.430350 / 0.323480 (0.106870) | 0.009259 / 0.007986 (0.001274) | 0.004614 / 0.004328 (0.000286) | 0.089366 / 0.004250 (0.085116) | 0.049993 / 0.037052 (0.012941) | 0.367620 / 0.258489 (0.109131) | 0.404809 / 0.293841 (0.110968) | 0.044078 / 0.128546 (-0.084468) | 0.014226 / 0.075646 (-0.061421) | 0.397707 / 0.419271 (-0.021565) | 0.056631 / 0.043533 (0.013098) | 0.355942 / 0.255139 (0.100803) | 0.375537 / 0.283200 (0.092338) | 0.121956 / 0.141683 (-0.019727) | 1.757958 / 1.452155 (0.305803) | 1.822183 / 1.492716 (0.329466) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.024505 / 0.018006 (0.006499) | 0.488754 / 0.000490 (0.488265) | 0.011032 / 0.000200 (0.010832) | 0.000540 / 0.000054 (0.000486) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032895 / 0.037411 (-0.004516) | 0.132496 / 0.014526 (0.117970) | 0.140620 / 0.176557 (-0.035937) | 0.220628 / 0.737135 (-0.516507) | 0.147622 / 0.296338 (-0.148717) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471335 / 0.215209 (0.256126) | 4.699792 / 2.077655 (2.622137) | 2.119782 / 1.504120 (0.615662) | 1.894784 / 1.541195 (0.353590) | 2.002694 / 1.468490 (0.534204) | 0.822610 / 4.584777 (-3.762167) | 4.511510 / 3.745712 (0.765797) | 2.467017 / 5.269862 (-2.802845) | 1.568500 / 4.565676 (-2.997177) | 0.101488 / 0.424275 (-0.322787) | 0.014567 / 0.007607 (0.006960) | 0.603033 / 0.226044 (0.376989) | 6.041397 / 2.268929 (3.772468) | 2.759140 / 55.444624 (-52.685484) | 2.397192 / 6.876477 (-4.479285) | 2.491986 / 2.142072 (0.349914) | 1.021198 / 4.805227 (-3.784029) | 0.196415 / 6.500664 (-6.304249) | 0.076409 / 0.075469 (0.000939) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.406816 / 1.841788 (-0.434972) | 17.740263 / 8.074308 (9.665954) | 16.926489 / 10.191392 (6.735097) | 0.235302 / 0.680424 (-0.445122) | 0.036829 / 0.534201 (-0.497372) | 0.525326 / 0.579283 (-0.053957) | 0.530905 / 0.434364 (0.096541) | 0.650357 / 0.540337 (0.110019) 
| 0.770641 / 1.386936 (-0.616295) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008728 / 0.011353 (-0.002625) | 0.006023 / 0.011008 (-0.004985) | 0.088694 / 0.038508 (0.050186) | 0.040345 / 0.023109 (0.017236) | 0.408126 / 0.275898 (0.132228) | 0.461178 / 0.323480 (0.137698) | 0.007456 / 0.007986 (-0.000529) | 0.004722 / 0.004328 (0.000394) | 0.087340 / 0.004250 (0.083090) | 0.055826 / 0.037052 (0.018774) | 0.422432 / 0.258489 (0.163942) | 0.466308 / 0.293841 (0.172467) | 0.043637 / 0.128546 (-0.084909) | 0.014602 / 0.075646 (-0.061044) | 0.103610 / 0.419271 (-0.315662) | 0.069999 / 0.043533 (0.026466) | 0.410676 / 0.255139 (0.155537) | 0.434551 / 0.283200 (0.151351) | 0.127699 / 0.141683 (-0.013984) | 1.699858 / 1.452155 (0.247703) | 1.830331 / 1.492716 (0.337615) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235217 / 0.018006 (0.217211) | 0.494814 / 0.000490 (0.494325) | 0.004942 / 0.000200 (0.004742) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035996 / 0.037411 (-0.001416) | 0.139419 / 0.014526 (0.124893) | 0.146859 / 0.176557 (-0.029698) | 0.234793 / 0.737135 (-0.502343) | 0.152495 / 0.296338 (-0.143843) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.509812 / 0.215209 (0.294603) | 5.067227 / 2.077655 (2.989572) | 2.455505 / 1.504120 (0.951385) | 2.223516 / 1.541195 (0.682321) | 2.367783 / 1.468490 
(0.899293) | 0.852550 / 4.584777 (-3.732227) | 4.517284 / 3.745712 (0.771572) | 4.860399 / 5.269862 (-0.409462) | 2.175290 / 4.565676 (-2.390386) | 0.106155 / 0.424275 (-0.318120) | 0.015023 / 0.007607 (0.007416) | 0.633753 / 0.226044 (0.407708) | 6.316214 / 2.268929 (4.047285) | 3.021118 / 55.444624 (-52.423506) | 2.601317 / 6.876477 (-4.275160) | 2.807988 / 2.142072 (0.665916) | 1.028695 / 4.805227 (-3.776532) | 0.204387 / 6.500664 (-6.296277) | 0.077368 / 0.075469 (0.001899) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.540299 / 1.841788 (-0.301489) | 18.311957 / 8.074308 (10.237649) | 16.139892 / 10.191392 (5.948500) | 0.217231 / 0.680424 (-0.463193) | 0.020544 / 0.534201 (-0.513657) | 0.505589 / 0.579283 (-0.073694) | 0.506694 / 0.434364 (0.072330) | 0.622162 / 0.540337 (0.081824) | 0.739537 / 1.386936 (-0.647399) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0f595fc2aa4786720f7a21da56069a1c46b4552a \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009465 / 0.011353 (-0.001887) | 0.005307 / 0.011008 (-0.005701) | 0.104111 / 0.038508 (0.065603) | 0.036083 / 0.023109 (0.012974) | 0.296608 / 0.275898 (0.020710) | 0.351365 / 0.323480 (0.027885) | 0.008309 / 0.007986 (0.000323) | 0.004383 / 0.004328 (0.000055) | 0.078297 / 0.004250 (0.074047) | 0.044062 / 0.037052 (0.007009) | 0.295592 / 0.258489 (0.037103) | 0.354442 / 0.293841 (0.060602) | 0.038651 / 0.128546 (-0.089896) | 0.012311 / 0.075646 (-0.063335) | 0.337933 / 0.419271 (-0.081338) | 0.048179 / 0.043533 (0.004646) | 0.308320 / 0.255139 (0.053181) | 0.335028 / 0.283200 (0.051829) | 0.105394 / 0.141683 (-0.036289) | 1.444104 / 1.452155 (-0.008050) | 1.573953 / 1.492716 (0.081237) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236548 / 0.018006 (0.218542) | 0.552862 / 0.000490 (0.552372) | 0.003925 / 0.000200 (0.003726) | 
0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026386 / 0.037411 (-0.011025) | 0.108002 / 0.014526 (0.093476) | 0.118327 / 0.176557 (-0.058230) | 0.182861 / 0.737135 (-0.554274) | 0.126032 / 0.296338 (-0.170307) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397037 / 0.215209 (0.181827) | 3.960978 / 2.077655 (1.883323) | 1.771955 / 1.504120 (0.267835) | 1.575033 / 1.541195 (0.033839) | 1.696552 / 1.468490 (0.228062) | 0.679013 / 4.584777 (-3.905764) | 3.770136 / 3.745712 (0.024424) | 2.068323 / 5.269862 (-3.201539) | 1.310823 / 4.565676 (-3.254853) | 0.083752 / 0.424275 (-0.340523) | 0.012366 / 0.007607 (0.004759) | 0.512679 / 0.226044 (0.286635) | 5.127036 / 2.268929 (2.858108) | 2.313200 / 55.444624 (-53.131424) | 1.931007 / 6.876477 (-4.945470) | 2.018336 / 2.142072 (-0.123737) | 0.833033 / 4.805227 (-3.972194) | 0.163778 / 6.500664 (-6.336886) | 0.064053 / 0.075469 (-0.011417) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.234102 / 1.841788 (-0.607685) | 15.227921 / 8.074308 (7.153613) | 14.587146 / 10.191392 (4.395754) | 0.176236 / 0.680424 (-0.504187) | 0.028905 / 0.534201 (-0.505295) | 0.439758 / 0.579283 (-0.139525) | 0.439211 / 0.434364 (0.004848) | 0.544325 / 0.540337 (0.003988) | 0.633804 / 1.386936 (-0.753132) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007933 / 0.011353 (-0.003420) | 0.005446 / 0.011008 (-0.005563) | 0.077846 / 0.038508 (0.039338) | 0.036017 / 0.023109 (0.012907) | 0.358925 / 0.275898 (0.083027) | 0.402757 / 0.323480 (0.079277) | 0.006478 / 0.007986 (-0.001508) | 0.005708 / 0.004328 (0.001380) | 0.074833 / 0.004250 (0.070583) | 0.053412 / 0.037052 (0.016360) | 0.358587 / 0.258489 (0.100098) | 0.430904 / 0.293841 (0.137063) | 0.037778 / 0.128546 (-0.090768) | 0.012698 / 0.075646 (-0.062948) | 0.087615 / 0.419271 (-0.331657) | 0.050236 / 0.043533 (0.006703) | 0.344160 / 0.255139 (0.089021) | 0.390870 / 0.283200 (0.107670) | 0.111035 / 0.141683 (-0.030648) | 1.446963 / 1.452155 (-0.005192) | 1.566158 / 1.492716 (0.073442) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.302380 / 0.018006 (0.284373) | 0.554005 / 0.000490 (0.553515) | 0.007244 / 0.000200 (0.007044) | 0.000115 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032291 / 0.037411 (-0.005120) | 0.117117 / 0.014526 (0.102591) | 0.127513 / 0.176557 (-0.049044) | 0.204208 / 0.737135 (-0.532927) | 0.133730 / 0.296338 (-0.162608) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424597 / 0.215209 (0.209388) | 4.233852 / 2.077655 (2.156198) | 2.029731 / 1.504120 (0.525611) | 1.830075 / 1.541195 (0.288880) | 1.966198 / 1.468490 (0.497707) | 0.697881 / 4.584777 (-3.886896) | 3.758012 / 3.745712 (0.012299) | 3.405319 / 5.269862 (-1.864542) | 1.870816 / 4.565676 (-2.694860) | 0.086892 / 0.424275 (-0.337383) | 0.012438 / 0.007607 (0.004831) | 0.524252 / 0.226044 (0.298207) | 5.209534 / 2.268929 (2.940606) | 2.478608 / 55.444624 (-52.966017) | 2.151535 / 6.876477 (-4.724942) | 2.249260 / 2.142072 (0.107187) | 0.831955 / 4.805227 (-3.973273) | 0.165955 / 6.500664 (-6.334710) | 0.064663 / 0.075469 (-0.010806) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253327 / 1.841788 (-0.588460) | 15.904393 / 8.074308 (7.830085) | 13.253464 / 10.191392 (3.062072) | 0.162148 / 0.680424 (-0.518276) | 0.017643 / 0.534201 (-0.516558) | 0.425028 / 0.579283 (-0.154255) | 0.425615 / 0.434364 (-0.008749) | 0.521503 / 0.540337 (-0.018835) | 0.629473 / 1.386936 (-0.757463) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#939b2332115c7ec3dd56f58169800ed81cc4a982 \"CML watermark\")\n" ]
null
[]
Add `huggingface_hub` version to env cli command
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5578/timeline
Add the `huggingface_hub` version to the `env` command's output.
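As a rough manual equivalent of the extra line this adds to the `datasets-cli env` report, one can print the installed versions directly (assuming both packages are importable):

```python
# Prints the versions the extended `env` output now also reports.
import datasets
import huggingface_hub

print(f"datasets: {datasets.__version__}")
print(f"huggingface_hub: {huggingface_hub.__version__}")
```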
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5578.diff", "html_url": "https://github.com/huggingface/datasets/pull/5578", "merged_at": "2023-02-27T17:21:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/5578.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5578" }
1,598,863,119
https://api.github.com/repos/huggingface/datasets/issues/5578/comments
PR_kwDODunzps5Kto96
null
5,578
https://api.github.com/repos/huggingface/datasets/issues/5578/events
true
closed
2023-02-24T13:01:48Z
null
https://api.github.com/repos/huggingface/datasets/issues/5577
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5577/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5577/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/5126316?v=4", "events_url": "https://api.github.com/users/wjfwzzc/events{/privacy}", "followers_url": "https://api.github.com/users/wjfwzzc/followers", "following_url": "https://api.github.com/users/wjfwzzc/following{/other_user}", "gists_url": "https://api.github.com/users/wjfwzzc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wjfwzzc", "id": 5126316, "login": "wjfwzzc", "node_id": "MDQ6VXNlcjUxMjYzMTY=", "organizations_url": "https://api.github.com/users/wjfwzzc/orgs", "received_events_url": "https://api.github.com/users/wjfwzzc/received_events", "repos_url": "https://api.github.com/users/wjfwzzc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wjfwzzc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wjfwzzc/subscriptions", "type": "User", "url": "https://api.github.com/users/wjfwzzc" }
https://github.com/huggingface/datasets/issues/5577
[]
false
2023-02-24T14:01:09Z
2023-02-24T14:01:09Z
null
[ "Hi! I've merged a PR to use `int32` instead of `int8` for `reddit_scores`, so it should work now.\r\n\r\n" ]
completed
[]
Cannot load `the_pile_openwebtext2`
NONE
https://api.github.com/repos/huggingface/datasets/issues/5577/timeline
### Describe the bug I hit the same bug mentioned in #3053, which was never fixed: several `reddit_scores` values are too large for `int8`, and even for `int16`. https://huggingface.co/datasets/the_pile_openwebtext2/blob/main/the_pile_openwebtext2.py#L62 ### Steps to reproduce the bug ```python3 from datasets import load_dataset dataset = load_dataset("the_pile_openwebtext2") ``` ### Expected behavior The dataset loads normally. ### Environment info - `datasets` version: 2.10.0 - Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31 - Python version: 3.9.2 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
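For context, the comment above notes that the fix merged for this issue was to widen the declared type from `int8` to `int32`. A sketch of that corrected declaration using the `datasets` feature types, showing only the affected field:

```python
# Sketch of the widened declaration for the affected field.
from datasets import Features, Sequence, Value

features = Features({"reddit_scores": Sequence(Value("int32"))})
print(features)
```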
https://api.github.com/repos/huggingface/datasets
null
1,598,587,665
https://api.github.com/repos/huggingface/datasets/issues/5577/comments
I_kwDODunzps5fSIMR
null
5,577
https://api.github.com/repos/huggingface/datasets/issues/5577/events
false
closed
2023-02-24T12:57:49Z
null
https://api.github.com/repos/huggingface/datasets/issues/5576
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5576/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5576/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/5126316?v=4", "events_url": "https://api.github.com/users/wjfwzzc/events{/privacy}", "followers_url": "https://api.github.com/users/wjfwzzc/followers", "following_url": "https://api.github.com/users/wjfwzzc/following{/other_user}", "gists_url": "https://api.github.com/users/wjfwzzc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wjfwzzc", "id": 5126316, "login": "wjfwzzc", "node_id": "MDQ6VXNlcjUxMjYzMTY=", "organizations_url": "https://api.github.com/users/wjfwzzc/orgs", "received_events_url": "https://api.github.com/users/wjfwzzc/received_events", "repos_url": "https://api.github.com/users/wjfwzzc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wjfwzzc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wjfwzzc/subscriptions", "type": "User", "url": "https://api.github.com/users/wjfwzzc" }
https://github.com/huggingface/datasets/issues/5576
[]
false
2023-02-24T12:58:31Z
2023-02-24T12:58:18Z
null
[ "Duplicated issue." ]
not_planned
[]
I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers.
NONE
https://api.github.com/repos/huggingface/datasets/issues/5576/timeline
I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers. I worked around this by downloading `the_pile_openwebtext2.py` and editing it to use local files and drop reddit scores as a column (not needed for my purposes). _Originally posted by @tc-wolf in https://github.com/huggingface/datasets/issues/3053#issuecomment-1281392422_
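The quoted error is PyArrow's plain integer-range check and can be reproduced in isolation, outside of `datasets`:

```python
# Reproduces the quoted ArrowInvalid error directly in PyArrow.
import pyarrow as pa

try:
    pa.array([528], type=pa.int8())
except pa.lib.ArrowInvalid as err:
    print(err)  # Integer value 528 not in range: -128 to 127
```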
https://api.github.com/repos/huggingface/datasets
null
1,598,582,744
https://api.github.com/repos/huggingface/datasets/issues/5576/comments
I_kwDODunzps5fSG_Y
null
5,576
https://api.github.com/repos/huggingface/datasets/issues/5576/events
false
open
2023-02-24T10:53:44Z
null
https://api.github.com/repos/huggingface/datasets/issues/5575
null
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/5575/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5575/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/11356471?v=4", "events_url": "https://api.github.com/users/parsa-ra/events{/privacy}", "followers_url": "https://api.github.com/users/parsa-ra/followers", "following_url": "https://api.github.com/users/parsa-ra/following{/other_user}", "gists_url": "https://api.github.com/users/parsa-ra/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/parsa-ra", "id": 11356471, "login": "parsa-ra", "node_id": "MDQ6VXNlcjExMzU2NDcx", "organizations_url": "https://api.github.com/users/parsa-ra/orgs", "received_events_url": "https://api.github.com/users/parsa-ra/received_events", "repos_url": "https://api.github.com/users/parsa-ra/repos", "site_admin": false, "starred_url": "https://api.github.com/users/parsa-ra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/parsa-ra/subscriptions", "type": "User", "url": "https://api.github.com/users/parsa-ra" }
https://github.com/huggingface/datasets/issues/5575
[]
false
2024-01-05T21:48:35Z
null
{ "closed_at": null, "closed_issues": 0, "created_at": "2023-02-13T16:22:42Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }, "description": "Next major release", "due_on": null, "html_url": "https://github.com/huggingface/datasets/milestone/10", "id": 9038583, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels", "node_id": "MI_kwDODunzps4Aier3", "number": 10, "open_issues": 4, "state": "open", "title": "3.0", "updated_at": "2023-09-22T14:07:52Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/10" }
[ "Hi! Indeed it would be useful to support this. PyArrow natively supports schema-level and column-level metadata, so implementing this should be straightforward. The API I have in mind would work as follows:\r\n```python\r\ncol_feature = Value(\"string\", metadata=\"Some column-level metadata\")\r\n\r\nfeatures = Features({\"col\": col_feature}, metadata=\"Some schema-level metadata\")\r\n```\r\n\r\nWDYT?", "Sorry for the late reply, \r\nYes, I think this is the most straight-forward approach with the things that we already have.\r\n\r\n", "@mariosasko Let me know how I can help.", "Hi, is this feature to be implemented in the near future? It would be really nice if that would be the case! ", "Hi, I also need this feature for tell my customer if any of the feature is encrypted with a certain key. " ]
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Metadata for each column
NONE
https://api.github.com/repos/huggingface/datasets/issues/5575/timeline
### Feature request Being able to attach some metadata to each column, as a string or any other type. ### Motivation To motivate this with an example: let's say we are experimenting with embeddings produced by some image encoder network, and we want to iterate over a couple of preprocessing pipelines to see which one works better in our downstream task. As a workaround right now, I compute a hash of the preprocessing the images went through and make it part of the new column's name. It would be nice to attach some kind of metadata to each column in these scenarios. ### Your contribution Maybe we could map another relational-like database as the metadata?
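Arrow, which backs `datasets`, already carries field- and schema-level metadata natively, as the maintainer's comment above points out. A sketch of where such metadata could live today (the column name, keys, and values are illustrative):

```python
# Sketch of Arrow-level metadata that a datasets API could expose.
import pyarrow as pa

field = pa.field("embedding", pa.string(), metadata={"preproc_hash": "abc123"})
schema = pa.schema([field], metadata={"experiment": "resize-224"})

print(schema.field("embedding").metadata)  # {b'preproc_hash': b'abc123'}
print(schema.metadata)                     # {b'experiment': b'resize-224'}
```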
https://api.github.com/repos/huggingface/datasets
null
1,598,396,552
https://api.github.com/repos/huggingface/datasets/issues/5575/comments
I_kwDODunzps5fRZiI
null
5,575
https://api.github.com/repos/huggingface/datasets/issues/5575/events
false
closed
2023-02-24T07:57:32Z
null
https://api.github.com/repos/huggingface/datasets/issues/5574
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5574/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5574/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/202907?v=4", "events_url": "https://api.github.com/users/krasserm/events{/privacy}", "followers_url": "https://api.github.com/users/krasserm/followers", "following_url": "https://api.github.com/users/krasserm/following{/other_user}", "gists_url": "https://api.github.com/users/krasserm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/krasserm", "id": 202907, "login": "krasserm", "node_id": "MDQ6VXNlcjIwMjkwNw==", "organizations_url": "https://api.github.com/users/krasserm/orgs", "received_events_url": "https://api.github.com/users/krasserm/received_events", "repos_url": "https://api.github.com/users/krasserm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/krasserm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/krasserm/subscriptions", "type": "User", "url": "https://api.github.com/users/krasserm" }
https://github.com/huggingface/datasets/issues/5574
[]
false
2023-12-18T07:32:32Z
2023-02-27T04:03:38Z
null
[ "Also encountering this issue for every dataset I try to stream! Installed datasets from main:\r\n```\r\n- `datasets` version: 2.10.1.dev0\r\n- Platform: macOS-13.1-arm64-arm-64bit\r\n- Python version: 3.9.13\r\n- PyArrow version: 10.0.1\r\n- Pandas version: 1.5.2\r\n```\r\n\r\nRepro:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nspigi = load_dataset(\"kensho/spgispeech\", \"dev\", split=\"validation\", streaming=True, use_auth_token=True)\r\nsample = next(iter(spigi))\r\n```\r\n\r\n<details>\r\n<summary> Traceback </summary>\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nClientResponseError Traceback (most recent call last)\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/implementations/http.py:407, in HTTPFileSystem._info(self, url, **kwargs)\r\n 405 try:\r\n 406 info.update(\r\n--> 407 await _file_info(\r\n 408 self.encode_url(url),\r\n 409 size_policy=policy,\r\n 410 session=session,\r\n 411 **self.kwargs,\r\n 412 **kwargs,\r\n 413 )\r\n 414 )\r\n 415 if info.get(\"size\") is not None:\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/implementations/http.py:792, in _file_info(url, session, size_policy, **kwargs)\r\n 791 async with r:\r\n--> 792 r.raise_for_status()\r\n 794 # TODO:\r\n 795 # recognise lack of 'Accept-Ranges',\r\n 796 # or 'Accept-Ranges': 'none' (not 'bytes')\r\n 797 # to mean streaming only, no random access => return None\r\n\r\nFile ~/venv/lib/python3.9/site-packages/aiohttp/client_reqrep.py:1005, in ClientResponse.raise_for_status(self)\r\n 1004 self.release()\r\n-> 1005 raise ClientResponseError(\r\n 1006 self.request_info,\r\n 1007 self.history,\r\n 1008 status=self.status,\r\n 1009 message=self.reason,\r\n 1010 headers=self.headers,\r\n 1011 )\r\n\r\nClientResponseError: 403, message='Forbidden', 
url=URL('[https://cdn-lfs.huggingface.co/repos/e2/89/e28905247d6f48bb4edad5baf9b1bb4158e897a13fdf18bf3b8ee89ff8387ab8/46eca7431a7b6bad344bf451800e5b10cea1dd168f26d1027a6d9eb374b7fac3?response-content-disposition=attachment%3B+filename*%3DUTF-8''dev.csv%3B+filename%3D%22dev.csv%22%3B&response-content-type=text/csv&Expires=1677494732&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zL2UyLzg5L2UyODkwNTI0N2Q2ZjQ4YmI0ZWRhZDViYWY5YjFiYjQxNThlODk3YTEzZmRmMThiZjNiOGVlODlmZjgzODdhYjgvNDZlY2E3NDMxYTdiNmJhZDM0NGJmNDUxODAwZTViMTBjZWExZGQxNjhmMjZkMTAyN2E2ZDllYjM3NGI3ZmFjMz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPXRleHQlMkZjc3YiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2Nzc0OTQ3MzJ9fX1dfQ__&Signature=EzQB9f7xPckvqfFB6LzcyR-wzTnQCqtPDdWtQUzZ3QJ-gY-IHG5mxQITJgMr1nVTbJZrPmGAaDngMcPFUfSQa8RmCqYH~dZl-UGE8CO4neKNUT1DvA2WEvLDS4WaAJ3SN-9rX0uFb03~c1QS78cIgIRboYvf6ugKiJz86Bd7Vs~tcp201JFR0A6jIMseqApOnkb9d8dHMP3Ny~F6gO3Qf2QpEWM-QsDIyw2Kz2QV55nq8TsDpRYZCZo50~WwD~73Hej0PoDhEA1K37d19pa0CQhkaN-gjCrbT9xLabbvhJWa~ZkWcMdD0teCgjYqv1wKyvFXDAxukxLGEc7OBXVbYw__&Key-Pair-Id=KVTP0A1DKRTAX](https://cdn-lfs.huggingface.co/repos/e2/89/e28905247d6f48bb4edad5baf9b1bb4158e897a13fdf18bf3b8ee89ff8387ab8/46eca7431a7b6bad344bf451800e5b10cea1dd168f26d1027a6d9eb374b7fac3?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27dev.csv%3B+filename%3D%22dev.csv%22%3B&response-content-type=text/csv&Expires=1677494732&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zL2UyLzg5L2UyODkwNTI0N2Q2ZjQ4YmI0ZWRhZDViYWY5YjFiYjQxNThlODk3YTEzZmRmMThiZjNiOGVlODlmZjgzODdhYjgvNDZlY2E3NDMxYTdiNmJhZDM0NGJmNDUxODAwZTViMTBjZWExZGQxNjhmMjZkMTAyN2E2ZDllYjM3NGI3ZmFjMz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPXRleHQlMkZjc3YiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2Nzc0OTQ3MzJ9fX1dfQ__&Signature=EzQB9f7xPckvqfFB6LzcyR-wzTnQCqtPDdWtQUzZ3QJ-gY-IHG5mxQITJgMr1nVTbJZrPmGAaDngMcPFUfSQa8RmCqYH~dZl-UGE8CO4neKNUT1DvA2WEvLDS4WaAJ3SN-9rX0uFb03~c1QS78cIgIRboYvf6ugKiJz86Bd7Vs~tcp201JFR0A6jIMseqApOnkb9d8dHMP3Ny~F6gO3Qf2QpEWM-QsDIyw2Kz2QV55nq8TsDpRYZCZo50~WwD~73Hej0PoDhEA1K37d19pa0CQhkaN-gjCrbT9xLabbvhJWa~ZkWcMdD0teCgjYqv1wKyvFXDAxukxLGEc7OBXVbYw__&Key-Pair-Id=KVTP0A1DKRTAX)')\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nFileNotFoundError Traceback (most recent call last)\r\nCell In[5], line 4\r\n 1 from datasets import load_dataset\r\n 3 spigi = load_dataset(\"kensho/spgispeech\", \"dev\", split=\"validation\", streaming=True)\r\n----> 4 sample = next(iter(spigi))\r\n\r\nFile ~/datasets/src/datasets/iterable_dataset.py:937, in IterableDataset.__iter__(self)\r\n 934 yield from self._iter_pytorch(ex_iterable)\r\n 935 return\r\n--> 937 for key, example in ex_iterable:\r\n 938 if self.features:\r\n 939 # `IterableDataset` automatically fills missing columns with None.\r\n 940 # This is done with `_apply_feature_types_on_example`.\r\n 941 yield _apply_feature_types_on_example(\r\n 942 example, self.features, token_per_repo_id=self._token_per_repo_id\r\n 943 )\r\n\r\nFile ~/datasets/src/datasets/iterable_dataset.py:113, in ExamplesIterable.__iter__(self)\r\n 112 def __iter__(self):\r\n--> 113 yield from self.generate_examples_fn(**self.kwargs)\r\n\r\nFile ~/.cache/huggingface/modules/datasets_modules/datasets/kensho--spgispeech/5fbf75dd9ef795a9b5a673457d2cbaf0b8fa0de8fb62acbd1da338d83a41e2f0/spgispeech.py:186, in Spgispeech._generate_examples(self, 
local_extracted_archive_paths, archives, meta_path)\r\n 183 dict_keys = [\"wav_filename\", \"wav_filesize\", \"transcript\"]\r\n 185 logging.info(\"Reading metadata...\")\r\n--> 186 with open(meta_path, encoding=\"utf-8\") as f:\r\n 187 csvreader = csv.DictReader(f, delimiter=\"|\")\r\n 188 metadata = {x[\"wav_filename\"]: dict((k, x[k]) for k in dict_keys) for x in csvreader}\r\n\r\nFile ~/datasets/src/datasets/streaming.py:70, in extend_module_for_streaming.<locals>.wrap_auth.<locals>.wrapper(*args, **kwargs)\r\n 68 @wraps(function)\r\n 69 def wrapper(*args, **kwargs):\r\n---> 70 return function(*args, use_auth_token=use_auth_token, **kwargs)\r\n\r\nFile ~/datasets/src/datasets/download/streaming_download_manager.py:495, in xopen(file, mode, use_auth_token, *args, **kwargs)\r\n 493 kwargs = {**kwargs, **new_kwargs}\r\n 494 try:\r\n--> 495 file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()\r\n 496 except ValueError as e:\r\n 497 if str(e) == \"Cannot seek streaming HTTP file\":\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/core.py:135, in OpenFile.open(self)\r\n 128 def open(self):\r\n 129 \"\"\"Materialise this as a real open file without context\r\n 130 \r\n 131 The OpenFile object should be explicitly closed to avoid enclosed file\r\n 132 instances persisting. You must, therefore, keep a reference to the OpenFile\r\n 133 during the life of the file-like it generates.\r\n 134 \"\"\"\r\n--> 135 return self.__enter__()\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/core.py:103, in OpenFile.__enter__(self)\r\n 100 def __enter__(self):\r\n 101 mode = self.mode.replace(\"t\", \"\").replace(\"b\", \"\") + \"b\"\r\n--> 103 f = self.fs.open(self.path, mode=mode)\r\n 105 self.fobjects = [f]\r\n 107 if self.compression is not None:\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/spec.py:1106, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)\r\n 1104 else:\r\n 1105 ac = kwargs.pop(\"autocommit\", not self._intrans)\r\n-> 1106 f = self._open(\r\n 1107 path,\r\n 1108 mode=mode,\r\n 1109 block_size=block_size,\r\n 1110 autocommit=ac,\r\n 1111 cache_options=cache_options,\r\n 1112 **kwargs,\r\n 1113 )\r\n 1114 if compression is not None:\r\n 1115 from fsspec.compression import compr\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/implementations/http.py:346, in HTTPFileSystem._open(self, path, mode, block_size, autocommit, cache_type, cache_options, size, **kwargs)\r\n 344 kw[\"asynchronous\"] = self.asynchronous\r\n 345 kw.update(kwargs)\r\n--> 346 size = size or self.info(path, **kwargs)[\"size\"]\r\n 347 session = sync(self.loop, self.set_session)\r\n 348 if block_size and size:\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/asyn.py:113, in sync_wrapper.<locals>.wrapper(*args, **kwargs)\r\n 110 @functools.wraps(func)\r\n 111 def wrapper(*args, **kwargs):\r\n 112 self = obj or args[0]\r\n--> 113 return sync(self.loop, func, *args, **kwargs)\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/asyn.py:98, in sync(loop, func, timeout, *args, **kwargs)\r\n 96 raise FSTimeoutError from return_result\r\n 97 elif isinstance(return_result, BaseException):\r\n---> 98 raise return_result\r\n 99 else:\r\n 100 return return_result\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/asyn.py:53, in _runner(event, coro, result, timeout)\r\n 51 coro = asyncio.wait_for(coro, timeout=timeout)\r\n 52 try:\r\n---> 53 result[0] = await coro\r\n 54 except Exception as ex:\r\n 55 result[0] = ex\r\n\r\nFile 
~/venv/lib/python3.9/site-packages/fsspec/implementations/http.py:420, in HTTPFileSystem._info(self, url, **kwargs)\r\n 417 except Exception as exc:\r\n 418 if policy == \"get\":\r\n 419 # If get failed, then raise a FileNotFoundError\r\n--> 420 raise FileNotFoundError(url) from exc\r\n 421 logger.debug(str(exc))\r\n 423 return {\"name\": url, \"size\": None, **info, \"type\": \"file\"}\r\n\r\nFileNotFoundError: https://huggingface.co/datasets/kensho/spgispeech/resolve/main/data/meta/dev.csv\r\n```\r\n</details>", "Hi ! We're investigating this issue, sorry for the inconvenience", "This has been resolved ! Thanks for reporting", "Wow, thanks for the very quick fix!", "This problem now appears again, this time with an underlying HTTP 502 status code:\r\n\r\n```\r\naiohttp.client_exceptions.ClientResponseError: 502, message='Bad Gateway', url=URL('https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/en/c4-validation.00002-of-00008.json.gz')\r\n```", "Re-executing a minute later, the underlying cause is an HTTP 403 status code, as reported yesterday:\r\n\r\n```\r\naiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://cdn-lfs.huggingface.co/datasets/allenai/c4/4bf6b248b0f910dcde2cdf2118d6369d8208c8f9515ec29ab73e531f380b18e2?response-content-disposition=attachment%3B+filename*%3DUTF-8''c4-validation.00002-of-00008.json.gz%3B+filename%3D%22c4-validation.00002-of-00008.json.gz%22%3B&response-content-type=application/gzip&Expires=1677571273&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL2RhdGFzZXRzL2FsbGVuYWkvYzQvNGJmNmIyNDhiMGY5MTBkY2RlMmNkZjIxMThkNjM2OWQ4MjA4YzhmOTUxNWVjMjlhYjczZTUzMWYzODBiMThlMj9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPWFwcGxpY2F0aW9uJTJGZ3ppcCIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTY3NzU3MTI3M319fV19&Signature=WW42NOKkLuX~xVB1QfbkqzdvGo2AOXpgbF3PjTXy6iKd~ffilr1N9ScPXfvTXqy5yvdhJg1G0xJy1zYtUjGAL8GEx3Av-0vIhpWMGYTM8XKEU5gYA9qt30oVtNph6TkTYSABrsYTaj-hzQL9WCgyapmjvG69ETMh4wj44r2rcbk4T3j0l6l4u76Gh~lyRSll3aK4qycdUwcyL7FECDu~0W1mJIJwKkCrWHhSpHJSshb-0ElwG71pq4eyQ5g2uxHdK6JbRF7loxUpRQQJ1vlk0EHXdw0wTMaQ9tqHy6xcrQd8Ep0Yvx3tUD8MR0vWOcbQKnL6LwPQByc8tkChlpjnig__&Key-Pair-Id=KVTP0A1DKRTAX')\r\n```", "I'm facing the same problem. Interestingly using `wget` I can download the file. ", "It's been resolved again ;)", "> It's been resolved again ;)\r\n\r\nI'm experiencing the same issue when trying to load this dataset, `FileNotFoundError: https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/realnewslike/c4-train.00000-of-00512.json.gz`", "Experiencing the same issues as above : `FileNotFoundError: https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/en/c4-train.00000-of-01024.json.gz\r\nIf the repo is private or gated, make sure to log in with `huggingface-cli login`.`\r\n\r\nHave made sure to login as well, issue persists.", "> Experiencing the same issues as above : `FileNotFoundError: https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/en/c4-train.00000-of-01024.json.gz If the repo is private or gated, make sure to log in with `huggingface-cli login`.`\r\n> \r\n> Have made sure to login as well, issue persists.\r\n\r\nI meet the same issue", "I meet the same issue" ]
completed
[]
c4 dataset streaming fails with `FileNotFoundError`
NONE
https://api.github.com/repos/huggingface/datasets/issues/5574/timeline
### Describe the bug Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="train", streaming=True)` and then using it fails with a `FileNotFoundError`. ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("c4", "en", split="train", streaming=True) next(iter(dataset)) ``` causes a ``` FileNotFoundError: https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/en/c4-train.00000-of-01024.json.gz ``` I can download this file manually though, e.g. by entering this URL in a browser. There is an underlying HTTP 403 status code: ``` aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://cdn-lfs.huggingface.co/datasets/allenai/c4/8ef8d75b0e045dec4aa5123a671b4564466b0707086a7ed1ba8721626dfffbc9?response-content-disposition=attachment%3B+filename*%3DUTF-8''c4-train.00000-of-01024.json.gz%3B+filename%3D%22c4-train.00000-of-01024.json.gz%22%3B&response-content-type=application/gzip&Expires=1677483770&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL2RhdGFzZXRzL2FsbGVuYWkvYzQvOGVmOGQ3NWIwZTA0NWRlYzRhYTUxMjNhNjcxYjQ1NjQ0NjZiMDcwNzA4NmE3ZWQxYmE4NzIxNjI2ZGZmZmJjOT9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPWFwcGxpY2F0aW9uJTJGZ3ppcCIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTY3NzQ4Mzc3MH19fV19&Signature=yjL3UeY72cf2xpnvPvD68eAYOEe2qtaUJV55sB-jnPskBJEMwpMJcBZvg2~GqXZdM3O-GWV-Z3CI~d4u5VCb4YZ-HlmOjr3VBYkvox2EKiXnBIhjMecf2UVUPtxhTa9kBVlWjqu4qKzB9gKXZF2Cwpp5ctLzapEaT2nnqF84RAL-rsqMA3I~M8vWWfivQsbBK63hMfgZqqKMgdWM0iKMaItveDl0ufQ29azMFmsR7qd8V7sU2Z-F1fAeohS8HpN9OOnClW34yi~YJ2AbgZJJBXA~qsylfVA0Qp7Q~yX~q4P8JF1vmJ2BjkiSbGrj3bAXOGugpOVU5msI52DT88yMdA__&Key-Pair-Id=KVTP0A1DKRTAX') ``` ### Expected behavior This should retrieve the first example from the C4 train split. This worked a few days ago but has now stopped working. ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
https://api.github.com/repos/huggingface/datasets
null
1598104691
https://api.github.com/repos/huggingface/datasets/issues/5574/comments
I_kwDODunzps5fQSRz
null
5574
https://api.github.com/repos/huggingface/datasets/issues/5574/events
false
closed
2023-02-23T19:19:44Z
null
https://api.github.com/repos/huggingface/datasets/issues/5573
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 3, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/5573/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5573/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
https://github.com/huggingface/datasets/pull/5573
[]
false
2023-02-28T20:25:14Z
2023-02-28T20:16:02Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@mariosasko thank you for the review! do you have any idea why `test_hash_torch_tensor` fails on \"ubuntu-latest deps-minimum\"? I removed the `torchaudio<0.12.0` test dependency so it uses the latest `torch` now, might it be connected?", "@polinaeterna The failure is due to `torch.from_numpy` not being picklable in newer versions of PyTorch. You can replace the current definition of `_save_tensor` in `utils/py_utils.py` with the following one to fix it: \r\n\r\n```python\r\n@pklregister(obj_type)\r\ndef _save_tensor(pickler, obj):\r\n # `torch.from_numpy` is not picklable in `torch>=1.11.0`\r\n def _create_tensor(np_array):\r\n return torch.from_numpy(np_array)\r\n\r\n dill_log(pickler, f\"To: {obj}\")\r\n args = (obj.detach().cpu().numpy(),)\r\n pickler.save_reduce(_create_tensor, args, obj=obj)\r\n dill_log(pickler, \"# To\")\r\n return\r\n```", "(doing a patch release now - please wait before merging ^^)", "@mariosasko génial, merci!! i've integrated all your changes, can you pls take a look one more time?", "Patch release is done (I did it from another branch than `main` anyway)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010927 / 0.011353 (-0.000426) | 0.006232 / 0.011008 (-0.004776) | 0.119815 / 0.038508 (0.081307) | 0.034138 / 0.023109 (0.011029) | 0.349945 / 0.275898 (0.074047) | 0.404967 / 0.323480 (0.081487) | 0.008672 / 0.007986 (0.000687) | 0.005010 / 0.004328 (0.000681) | 0.091931 / 0.004250 (0.087680) | 0.042534 / 0.037052 (0.005482) | 0.374701 / 0.258489 (0.116212) | 0.401027 / 0.293841 (0.107186) | 0.053523 / 0.128546 (-0.075024) | 0.019704 / 0.075646 (-0.055942) | 0.384207 / 0.419271 (-0.035064) | 0.065350 / 0.043533 (0.021817) | 0.375074 / 0.255139 (0.119935) | 0.390458 / 0.283200 (0.107259) | 0.110549 / 0.141683 (-0.031134) | 1.719812 / 1.452155 (0.267657) | 1.748906 / 1.492716 (0.256190) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210051 / 0.018006 (0.192045) | 0.546503 / 0.000490 (0.546013) | 0.004078 / 0.000200 (0.003878) | 0.000111 / 0.000054 (0.000056) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030212 / 0.037411 (-0.007199) | 0.121845 / 0.014526 (0.107319) | 0.136309 / 0.176557 (-0.040247) | 0.204667 / 0.737135 (-0.532468) | 0.157327 / 0.296338 (-0.139012) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.672548 / 0.215209 (0.457339) | 6.239409 / 2.077655 (4.161754) | 2.462441 / 1.504120 (0.958322) | 2.063985 / 1.541195 (0.522791) | 2.098858 / 1.468490 (0.630368) | 1.262600 / 4.584777 (-3.322177) | 5.478462 / 3.745712 (1.732750) | 5.454672 / 5.269862 (0.184810) | 2.991866 / 4.565676 (-1.573810) | 0.153415 / 0.424275 (-0.270861) | 0.015061 / 0.007607 (0.007454) | 0.796115 / 0.226044 (0.570071) | 8.206858 / 2.268929 (5.937930) | 3.226395 / 55.444624 (-52.218229) | 2.503522 / 6.876477 (-4.372955) | 2.547489 / 2.142072 (0.405417) | 1.504776 / 4.805227 (-3.300451) | 0.256536 / 6.500664 (-6.244128) | 0.078543 / 0.075469 (0.003073) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.591109 / 1.841788 (-0.250678) | 18.153317 / 8.074308 (10.079008) | 20.465684 / 10.191392 (10.274292) | 0.229808 / 0.680424 (-0.450616) | 0.045263 / 0.534201 (-0.488938) | 0.556760 / 0.579283 (-0.022524) | 0.614985 / 0.434364 (0.180622) | 0.635675 / 0.540337 (0.095337) | 0.729817 / 1.386936 (-0.657119) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011247 / 0.011353 (-0.000106) | 0.006823 / 0.011008 (-0.004185) | 0.101989 / 0.038508 (0.063481) | 0.036077 / 0.023109 (0.012968) | 0.413469 / 0.275898 (0.137571) | 0.505560 / 0.323480 (0.182080) | 0.007506 / 0.007986 (-0.000480) | 0.006369 / 0.004328 (0.002040) | 0.099597 / 0.004250 (0.095346) | 0.058115 / 0.037052 (0.021063) | 0.414735 / 0.258489 (0.156246) | 0.466801 / 0.293841 (0.172960) | 0.064771 / 0.128546 (-0.063775) | 0.021100 / 0.075646 (-0.054546) | 0.135407 / 0.419271 (-0.283864) | 0.068784 / 0.043533 (0.025251) | 0.410467 / 0.255139 (0.155328) | 0.465993 / 0.283200 (0.182794) | 0.119404 / 0.141683 (-0.022279) | 1.767107 / 1.452155 (0.314952) | 1.938342 / 1.492716 (0.445626) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227038 / 0.018006 (0.209032) | 0.511389 / 0.000490 (0.510899) | 0.006723 / 0.000200 (0.006523) | 0.000118 / 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033078 / 0.037411 (-0.004333) | 0.133159 / 0.014526 (0.118633) | 0.147928 / 0.176557 (-0.028629) | 0.214005 / 0.737135 (-0.523130) | 0.151655 / 0.296338 (-0.144683) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.634829 / 0.215209 (0.419620) | 6.578640 / 2.077655 (4.500985) | 2.673598 / 1.504120 (1.169478) | 2.338671 / 1.541195 (0.797476) | 2.389104 / 1.468490 (0.920614) | 1.274938 / 4.584777 (-3.309839) | 5.746524 / 3.745712 (2.000812) | 5.992084 / 5.269862 (0.722222) | 3.092090 / 4.565676 (-1.473587) | 0.150375 / 0.424275 (-0.273900) | 0.015470 / 0.007607 (0.007863) | 0.792962 / 0.226044 (0.566918) | 8.057491 / 2.268929 (5.788563) | 3.483966 / 55.444624 (-51.960659) | 2.715038 / 6.876477 (-4.161438) | 2.747186 / 2.142072 (0.605114) | 1.532951 / 4.805227 (-3.272276) | 0.262214 / 6.500664 (-6.238450) | 0.081308 / 0.075469 (0.005839) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.698448 / 1.841788 (-0.143340) | 18.590002 / 8.074308 (10.515694) | 20.584508 / 10.191392 (10.393116) | 0.227237 / 0.680424 (-0.453187) | 0.028445 / 0.534201 (-0.505756) | 0.527874 / 0.579283 (-0.051409) | 0.602844 / 0.434364 (0.168480) | 0.672948 / 0.540337 (0.132611) | 0.788103 / 1.386936 (-0.598833) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f96547708a889c09ca8a02ed7aadd8c5690503c5 \"CML watermark\")\n" ]
null
[]
Use soundfile for mp3 decoding instead of torchaudio
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5573/timeline
I've removed `torchaudio` completely and switched to using `soundfile` for everything. With the new version of the `soundfile` package this should work smoothly, because the `libsndfile` C library is now bundled, in Linux wheels too. Let me know if you think it's too harsh and we should continue to support `torchaudio` decoding. I decided that we can drop it completely because: 1. there's always something wrong with `torchaudio` (for example recently https://github.com/huggingface/datasets/issues/5488 ) 2. the results of mp3 decoding differ depending on the `torchaudio` version 3. `soundfile` is slightly faster than the latest `torchaudio` 4. in any case, users can pass any custom decoding function with any library they want if needed (worth putting a snippet in the docs). cc @sanchit-gandhi @vaibhavad
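Point 4 above mentions a docs snippet; a minimal sketch of that idea (not the PR's actual implementation), assuming a `soundfile` release that bundles an mp3-capable `libsndfile` (0.12+) and a placeholder file `audio.mp3`:

```python
# Hedged sketch: decode an mp3 with soundfile directly.
# Assumes soundfile>=0.12 (bundled libsndfile with mp3 support);
# "audio.mp3" is a placeholder path, not a file from this repo.
import soundfile as sf

array, sampling_rate = sf.read("audio.mp3")  # float64 numpy array + sample rate
print(array.shape, sampling_rate)
```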
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5573.diff", "html_url": "https://github.com/huggingface/datasets/pull/5573", "merged_at": "2023-02-28T20:16:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/5573.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5573" }
1597400836
https://api.github.com/repos/huggingface/datasets/issues/5573/comments
PR_kwDODunzps5Kop7n
null
5573
https://api.github.com/repos/huggingface/datasets/issues/5573/events
true
closed
2023-02-23T17:28:11Z
null
https://api.github.com/repos/huggingface/datasets/issues/5572
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5572/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5572/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/45281?v=4", "events_url": "https://api.github.com/users/lsb/events{/privacy}", "followers_url": "https://api.github.com/users/lsb/followers", "following_url": "https://api.github.com/users/lsb/following{/other_user}", "gists_url": "https://api.github.com/users/lsb/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lsb", "id": 45281, "login": "lsb", "node_id": "MDQ6VXNlcjQ1Mjgx", "organizations_url": "https://api.github.com/users/lsb/orgs", "received_events_url": "https://api.github.com/users/lsb/received_events", "repos_url": "https://api.github.com/users/lsb/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lsb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lsb/subscriptions", "type": "User", "url": "https://api.github.com/users/lsb" }
https://github.com/huggingface/datasets/issues/5572
[]
false
2023-02-23T18:03:55Z
2023-02-23T18:03:55Z
null
[]
completed
[]
Datasets 2.10.0 does not reuse the dataset cache
NONE
https://api.github.com/repos/huggingface/datasets/issues/5572/timeline
### Describe the bug download_mode="reuse_dataset_if_exists" will always consider that a dataset doesn't exist. Specifically, after losing the internet connection and trying to load a dataset a second time within ten seconds, a connection error results, with the traceback ending at: ``` File ~/jupyterlab/.direnv/python-3.9.6/lib/python3.9/site-packages/datasets/load.py:1174, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1165 except Exception as e: # noqa: catch any exception of hf_hub and consider that the dataset doesn't exist 1166 if isinstance( 1167 e, 1168 ( (...) 1172 ), 1173 ): -> 1174 raise ConnectionError(f"Couldn't reach '{path}' on the Hub ({type(e).__name__})") 1175 elif "404" in str(e): 1176 msg = f"Dataset '{path}' doesn't exist on the Hub" ConnectionError: Couldn't reach 'lsb/tenk' on the Hub (ConnectionError) ``` This has been around since at least v2.0. ### Steps to reproduce the bug ``` from datasets import load_dataset import numpy as np tenk = load_dataset("lsb/tenk") # ten thousand integers print(np.average(tenk['train']['a'])) # prints 4999.5 ### now disconnect your internet tenk_too = load_dataset("lsb/tenk", download_mode="reuse_dataset_if_exists") # Raises ConnectionError: Couldn't reach 'lsb/tenk' on the Hub (ConnectionError) ``` ### Expected behavior I expected that I would be able to reuse the dataset I just downloaded. ### Environment info - `datasets` version: 2.10.0 - Platform: macOS-13.1-arm64-arm-64bit - Python version: 3.9.6 - PyArrow version: 7.0.0 - Pandas version: 1.5.2
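A possible workaround sketch while this is open, assuming the dataset is already in the local cache: `datasets` honors the documented `HF_DATASETS_OFFLINE` environment variable, which skips the Hub lookup entirely.

```python
# Workaround sketch: force offline mode before importing datasets so the
# cached copy is reused without contacting the Hub.
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

tenk = load_dataset("lsb/tenk")  # resolves from the local cache only
```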
https://api.github.com/repos/huggingface/datasets
null
1597257624
https://api.github.com/repos/huggingface/datasets/issues/5572/comments
I_kwDODunzps5fNDeY
null
5572
https://api.github.com/repos/huggingface/datasets/issues/5572/events
false
closed
2023-02-23T16:50:11Z
null
https://api.github.com/repos/huggingface/datasets/issues/5571
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5571/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5571/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/11876897?v=4", "events_url": "https://api.github.com/users/abinashsahu/events{/privacy}", "followers_url": "https://api.github.com/users/abinashsahu/followers", "following_url": "https://api.github.com/users/abinashsahu/following{/other_user}", "gists_url": "https://api.github.com/users/abinashsahu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abinashsahu", "id": 11876897, "login": "abinashsahu", "node_id": "MDQ6VXNlcjExODc2ODk3", "organizations_url": "https://api.github.com/users/abinashsahu/orgs", "received_events_url": "https://api.github.com/users/abinashsahu/received_events", "repos_url": "https://api.github.com/users/abinashsahu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abinashsahu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abinashsahu/subscriptions", "type": "User", "url": "https://api.github.com/users/abinashsahu" }
https://github.com/huggingface/datasets/issues/5571
[]
false
2023-02-24T13:21:47Z
2023-02-24T13:21:47Z
null
[ "Hi! \r\n\r\nYou need to pass an input json file explicitly as `data_files` to `load_dataset` to avoid this error:\r\n```python\r\n ds = load_dataset(\"json\", data_files=args.input_json)\r\n```\r\n\r\n", "Thanks it worked!" ]
completed
[]
load_dataset fails for JSON in windows
NONE
https://api.github.com/repos/huggingface/datasets/issues/5571/timeline
### Describe the bug Steps: 1. Created a dataset in a Linux VM and created a small sample using the dataset.to_json() method. 2. Downloaded the JSON file to my local Windows machine and saved it at, say, r"C:\Users\name\file.json". 3. I am reading the file in my local PyCharm - the location of the Python file is different from the location of the JSON. 4. When I read using load_dataset("json", args.input_json), it throws an error from builder.py: raise InvalidConfigName( f"Bad characters from black list '{invalid_windows_characters}' found in '{self.name}'. " f"They could create issues when creating a directory for this config on Windows filesystem." 5. When I bring the data to the current directory, it works fine. ### Steps to reproduce the bug Steps: 1. Created a dataset in a Linux VM and created a small sample using the dataset.to_json() method. 2. Downloaded the JSON file to my local Windows machine and saved it at, say, r"C:\Users\name\file.json". 3. I am reading the file in my local PyCharm - the location of the Python file is different from the location of the JSON. 4. When I read using load_dataset("json", args.input_json), it throws an error from builder.py: raise InvalidConfigName( f"Bad characters from black list '{invalid_windows_characters}' found in '{self.name}'. " f"They could create issues when creating a directory for this config on Windows filesystem." 5. When I bring the data to the current directory, it works fine. ### Expected behavior Should be able to read from a path outside the current directory on a Windows machine. ### Environment info datasets version: 2.3.1 python version: 3.8 Windows OS
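As the resolving comment above shows, the working pattern is to pass the file explicitly via `data_files`; a minimal sketch with a placeholder Windows path:

```python
# Sketch of the fix suggested in the thread: pass the JSON file as
# `data_files` rather than as the second positional argument.
# The path below is a placeholder.
from datasets import load_dataset

ds = load_dataset("json", data_files=r"C:\Users\name\file.json")
print(ds["train"][0])
```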
https://api.github.com/repos/huggingface/datasets
null
1597198953
https://api.github.com/repos/huggingface/datasets/issues/5571/comments
I_kwDODunzps5fM1Jp
null
5571
https://api.github.com/repos/huggingface/datasets/issues/5571/events
false
closed
2023-02-23T16:44:32Z
null
https://api.github.com/repos/huggingface/datasets/issues/5570
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5570/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5570/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/38630200?v=4", "events_url": "https://api.github.com/users/buoi/events{/privacy}", "followers_url": "https://api.github.com/users/buoi/followers", "following_url": "https://api.github.com/users/buoi/following{/other_user}", "gists_url": "https://api.github.com/users/buoi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/buoi", "id": 38630200, "login": "buoi", "node_id": "MDQ6VXNlcjM4NjMwMjAw", "organizations_url": "https://api.github.com/users/buoi/orgs", "received_events_url": "https://api.github.com/users/buoi/received_events", "repos_url": "https://api.github.com/users/buoi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/buoi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/buoi/subscriptions", "type": "User", "url": "https://api.github.com/users/buoi" }
https://github.com/huggingface/datasets/issues/5570
[]
false
2023-07-24T15:18:50Z
2023-07-24T15:18:50Z
null
[ "Hi, thanks for the feedback! Would it help to add a tip or note saying the dataset is gated and you need to accept the license before downloading it?", "The error is now more informative:\r\n```\r\nFileNotFoundError: Couldn't find a dataset script at /content/imagenet-1k/imagenet-1k.py or any data file in the same directory. Couldn't find 'imagenet-1k' on the Hugging Face Hub either: FileNotFoundError: Dataset 'imagenet-1k' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`.\r\n```\r\n\r\n" ]
completed
[]
load_dataset gives FileNotFoundError on imagenet-1k if license is not accepted on the hub
NONE
https://api.github.com/repos/huggingface/datasets/issues/5570/timeline
### Describe the bug When calling ```load_dataset('imagenet-1k')``` a FileNotFoundError is raised if you are not logged in, or if you are logged in with huggingface-cli but have not accepted the license on the Hub. There is no error once the license is accepted. ### Steps to reproduce the bug ``` from datasets import load_dataset imagenet = load_dataset("imagenet-1k", split="train", streaming=True) FileNotFoundError: Couldn't find a dataset script at /content/imagenet-1k/imagenet-1k.py or any data file in the same directory. Couldn't find 'imagenet-1k' on the Hugging Face Hub either: FileNotFoundError: Dataset 'imagenet-1k' doesn't exist on the Hub ``` ### Expected behavior I would expect a specific error indicating that I have to log in and then accept the dataset license. I find this bug very relevant, as this code appears in a guide in the [Huggingface documentation for Datasets](https://huggingface.co/docs/datasets/about_mapstyle_vs_iterable) ### Environment info google colab cpu-only instance
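A sketch of the flow that avoids the error, assuming the imagenet-1k license has already been accepted on its Hub page:

```python
# Sketch: authenticate first, then stream the gated dataset. `login()` is
# from huggingface_hub; running `huggingface-cli login` beforehand works too.
from huggingface_hub import login
from datasets import load_dataset

login()  # prompts for a Hub access token
imagenet = load_dataset("imagenet-1k", split="train", streaming=True)
print(next(iter(imagenet)).keys())
```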
https://api.github.com/repos/huggingface/datasets
null
1597190926
https://api.github.com/repos/huggingface/datasets/issues/5570/comments
I_kwDODunzps5fMzMO
null
5570
https://api.github.com/repos/huggingface/datasets/issues/5570/events
false
closed
2023-02-23T16:06:04Z
null
https://api.github.com/repos/huggingface/datasets/issues/5569
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5569/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5569/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4", "events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}", "followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers", "following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}", "gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Hubert-Bonisseur", "id": 48770768, "login": "Hubert-Bonisseur", "node_id": "MDQ6VXNlcjQ4NzcwNzY4", "organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs", "received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events", "repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions", "type": "User", "url": "https://api.github.com/users/Hubert-Bonisseur" }
https://github.com/huggingface/datasets/pull/5569
[]
false
2023-02-24T14:06:37Z
2023-02-23T18:15:16Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008753 / 0.011353 (-0.002600) | 0.004877 / 0.011008 (-0.006131) | 0.098320 / 0.038508 (0.059812) | 0.034123 / 0.023109 (0.011014) | 0.289539 / 0.275898 (0.013641) | 0.323584 / 0.323480 (0.000104) | 0.007455 / 0.007986 (-0.000531) | 0.004763 / 0.004328 (0.000434) | 0.074350 / 0.004250 (0.070100) | 0.039018 / 0.037052 (0.001966) | 0.294319 / 0.258489 (0.035830) | 0.348686 / 0.293841 (0.054845) | 0.037814 / 0.128546 (-0.090732) | 0.011808 / 0.075646 (-0.063838) | 0.333808 / 0.419271 (-0.085464) | 0.047758 / 0.043533 (0.004225) | 0.298533 / 0.255139 (0.043394) | 0.320790 / 0.283200 (0.037590) | 0.095909 / 0.141683 (-0.045774) | 1.434422 / 1.452155 (-0.017732) | 1.509703 / 1.492716 (0.016987) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201728 / 0.018006 (0.183722) | 0.432243 / 0.000490 (0.431753) | 0.002760 / 0.000200 (0.002560) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026090 / 0.037411 (-0.011321) | 0.105914 / 0.014526 (0.091388) | 0.115869 / 0.176557 (-0.060688) | 0.178291 / 0.737135 (-0.558844) | 0.121435 / 0.296338 (-0.174904) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402304 / 0.215209 (0.187095) | 3.995183 / 2.077655 (1.917529) | 
1.794548 / 1.504120 (0.290428) | 1.603034 / 1.541195 (0.061839) | 1.643836 / 1.468490 (0.175346) | 0.694934 / 4.584777 (-3.889843) | 3.695128 / 3.745712 (-0.050584) | 2.018582 / 5.269862 (-3.251279) | 1.294315 / 4.565676 (-3.271362) | 0.085346 / 0.424275 (-0.338929) | 0.012201 / 0.007607 (0.004594) | 0.510057 / 0.226044 (0.284012) | 5.123404 / 2.268929 (2.854476) | 2.319089 / 55.444624 (-53.125535) | 1.930935 / 6.876477 (-4.945542) | 1.939700 / 2.142072 (-0.202372) | 0.848282 / 4.805227 (-3.956945) | 0.165561 / 6.500664 (-6.335103) | 0.062028 / 0.075469 (-0.013441) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.220576 / 1.841788 (-0.621212) | 14.413853 / 8.074308 (6.339544) | 14.027156 / 10.191392 (3.835764) | 0.170109 / 0.680424 (-0.510315) | 0.029412 / 0.534201 (-0.504789) | 0.443898 / 0.579283 (-0.135386) | 0.433059 / 0.434364 (-0.001305) | 0.533465 / 0.540337 (-0.006872) | 0.626562 / 1.386936 (-0.760374) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007148 / 0.011353 (-0.004205) | 0.005019 / 0.011008 (-0.005989) | 0.073132 / 0.038508 (0.034624) | 0.032763 / 0.023109 (0.009654) | 0.329309 / 0.275898 (0.053411) | 0.361658 / 0.323480 (0.038178) | 0.005683 / 0.007986 (-0.002302) | 0.003793 / 0.004328 (-0.000536) | 0.071858 / 0.004250 (0.067608) | 0.045160 / 0.037052 (0.008107) | 0.335852 / 0.258489 (0.077363) | 0.384274 / 0.293841 (0.090433) | 0.036647 / 0.128546 (-0.091899) | 0.012217 / 0.075646 (-0.063430) | 0.086265 / 0.419271 (-0.333007) | 0.049223 / 0.043533 (0.005690) | 0.331460 / 0.255139 (0.076321) | 0.353175 / 0.283200 (0.069975) | 0.102214 / 0.141683 (-0.039469) | 1.491451 / 1.452155 (0.039296) | 1.553702 / 1.492716 (0.060985) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222972 / 0.018006 (0.204966) | 0.432862 / 0.000490 (0.432372) | 0.000421 / 0.000200 (0.000221) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028401 / 0.037411 (-0.009010) | 0.109331 / 0.014526 (0.094805) | 0.119246 / 0.176557 (-0.057311) | 0.187997 / 0.737135 (-0.549138) | 0.124212 / 0.296338 (-0.172127) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427240 / 0.215209 (0.212031) | 4.271619 / 2.077655 (2.193964) | 2.104948 / 1.504120 (0.600828) | 1.910624 / 1.541195 (0.369430) | 1.943812 / 1.468490 (0.475322) | 0.711466 / 4.584777 (-3.873311) | 3.778987 / 3.745712 (0.033275) | 2.976258 / 5.269862 (-2.293604) | 1.807591 / 4.565676 (-2.758086) | 0.088286 / 0.424275 (-0.335989) | 0.012461 / 0.007607 (0.004854) | 0.527554 / 0.226044 (0.301509) | 5.279461 / 2.268929 (3.010532) | 2.517911 / 55.444624 (-52.926713) | 2.176557 / 6.876477 (-4.699920) | 2.205322 / 2.142072 (0.063249) | 0.855012 / 4.805227 (-3.950215) | 0.170007 / 6.500664 (-6.330658) | 0.065273 / 0.075469 (-0.010196) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.282785 / 1.841788 (-0.559003) | 14.819500 / 8.074308 (6.745192) | 13.282211 / 10.191392 (3.090819) | 0.161804 / 0.680424 (-0.518620) | 0.017615 / 0.534201 (-0.516586) | 0.420159 / 0.579283 (-0.159124) | 0.441304 / 0.434364 (0.006940) | 0.531704 / 0.540337 (-0.008634) | 0.627477 / 1.386936 (-0.759459) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b91070b9c09673e2e148eec458036ab6a62ac042 \"CML watermark\")\n", "Hmm I think we need to add more tests. Not sure what would happen with :\r\n- decodable features that may end up decoded twice \r\n- formatted datasets \r\n\r\nI'd be in favor of reverting this until we checked those" ]
null
[]
pass the dataset features to the IterableDataset.from_generator function
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5569/timeline
[#5568](https://github.com/huggingface/datasets/issues/5568)
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5569.diff", "html_url": "https://github.com/huggingface/datasets/pull/5569", "merged_at": "2023-02-23T18:15:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/5569.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5569" }
1597132383
https://api.github.com/repos/huggingface/datasets/issues/5569/comments
PR_kwDODunzps5KnwHD
null
5569
https://api.github.com/repos/huggingface/datasets/issues/5569/events
true
closed
2023-02-23T13:45:33Z
null
https://api.github.com/repos/huggingface/datasets/issues/5568
{ "avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4", "events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}", "followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers", "following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}", "gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Hubert-Bonisseur", "id": 48770768, "login": "Hubert-Bonisseur", "node_id": "MDQ6VXNlcjQ4NzcwNzY4", "organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs", "received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events", "repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions", "type": "User", "url": "https://api.github.com/users/Hubert-Bonisseur" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5568/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5568/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4", "events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}", "followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers", "following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}", "gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Hubert-Bonisseur", "id": 48770768, "login": "Hubert-Bonisseur", "node_id": "MDQ6VXNlcjQ4NzcwNzY4", "organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs", "received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events", "repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions", "type": "User", "url": "https://api.github.com/users/Hubert-Bonisseur" }
https://github.com/huggingface/datasets/issues/5568
[ { "avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4", "events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}", "followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers", "following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}", "gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Hubert-Bonisseur", "id": 48770768, "login": "Hubert-Bonisseur", "node_id": "MDQ6VXNlcjQ4NzcwNzY4", "organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs", "received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events", "repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions", "type": "User", "url": "https://api.github.com/users/Hubert-Bonisseur" } ]
false
2023-02-24T13:22:36Z
2023-02-24T13:22:36Z
null
[ "Hi ! Oh good catch. I think the features should be passed to `IterableDataset.from_generator()` in `to_iterable_dataset()` indeed.\r\n\r\nSetting this as a good first issue if someone would like to contribute, otherwise we can take care of it :)", "#self-assign", "seems like the feature parameter is missing from `return IterableDataset.from_generator(Dataset._iter_shards, gen_kwargs={\"shards\": shards})` hence it defaults to None." ]
completed
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
dataset.to_iterable_dataset() loses useful info like dataset features
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5568/timeline
### Describe the bug Hello, I like the new `to_iterable_dataset` feature, but I noticed something that seems to be missing. When using `to_iterable_dataset` to transform your map-style dataset into an iterable dataset, you lose valuable metadata like the features. This metadata is useful if you want to interleave iterable datasets, cast columns, etc. ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("lhoestq/demo1")["train"] print(dataset.features) # {'id': Value(dtype='string', id=None), 'package_name': Value(dtype='string', id=None), 'review': Value(dtype='string', id=None), 'date': Value(dtype='string', id=None), 'star': Value(dtype='int64', id=None), 'version_id': Value(dtype='int64', id=None)} dataset = dataset.to_iterable_dataset() print(dataset.features) # None ``` ### Expected behavior Keep the relevant information ### Environment info datasets==2.10.0
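Until this is fixed, a possible workaround sketch is to rebuild the iterable dataset yourself and pass `features` explicitly to `IterableDataset.from_generator` (the parameter this issue's fix wires through):

```python
# Workaround sketch: recreate the iterable dataset with features attached.
# Caveat noted in the PR review: decodable columns (e.g. Audio/Image) may
# end up decoded while iterating this way.
from datasets import load_dataset, IterableDataset

ds = load_dataset("lhoestq/demo1")["train"]
it = IterableDataset.from_generator(lambda: iter(ds), features=ds.features)
print(it.features)  # no longer None
```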
https://api.github.com/repos/huggingface/datasets
null
1596900532
https://api.github.com/repos/huggingface/datasets/issues/5568/comments
I_kwDODunzps5fLsS0
null
5568
https://api.github.com/repos/huggingface/datasets/issues/5568/events
false
open
2023-02-22T22:13:40Z
null
https://api.github.com/repos/huggingface/datasets/issues/5566
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5566/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5566/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez" }
https://github.com/huggingface/datasets/issues/5566
[]
false
2023-02-23T11:03:29Z
null
null
[ "Hi ! I think is in the scope of this other issue: to https://github.com/huggingface/datasets/issues/5281 " ]
null
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" }, { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Directly reading parquet files in a s3 bucket from the load_dataset method
NONE
https://api.github.com/repos/huggingface/datasets/issues/5566/timeline
### Feature request Right now, we have to download the parquet file to local storage first. Having the ability to read directly from the bucket address would be beneficial. ### Motivation In a production setup, this feature can help us a lot, since we would not need to move training data files between storage locations. ### Your contribution I am willing to help if there's any way.
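A workaround sketch under these assumptions: `s3fs` is installed with AWS credentials configured, and the bucket/key below are placeholders. It reads the parquet file through PyArrow without copying it to local disk first, then wraps the resulting Arrow table:

```python
# Workaround sketch: read parquet from S3 via s3fs/PyArrow, then wrap the
# table as a datasets.Dataset. "my-bucket/data/train.parquet" is a placeholder.
import s3fs
import pyarrow.parquet as pq
from datasets import Dataset

fs = s3fs.S3FileSystem()  # picks up AWS credentials from the environment
table = pq.read_table("my-bucket/data/train.parquet", filesystem=fs)
ds = Dataset(table)
print(ds)
```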
https://api.github.com/repos/huggingface/datasets
null
1595916674
https://api.github.com/repos/huggingface/datasets/issues/5566/comments
I_kwDODunzps5fH8GC
null
5566
https://api.github.com/repos/huggingface/datasets/issues/5566/events
false
closed
2023-02-22T15:09:30Z
null
https://api.github.com/repos/huggingface/datasets/issues/5565
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5565/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5565/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/5565
[]
false
2023-03-10T13:53:03Z
2023-03-10T13:45:43Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008745 / 0.011353 (-0.002608) | 0.004651 / 0.011008 (-0.006357) | 0.099678 / 0.038508 (0.061170) | 0.029441 / 0.023109 (0.006332) | 0.300314 / 0.275898 (0.024416) | 0.342022 / 0.323480 (0.018542) | 0.006965 / 0.007986 (-0.001021) | 0.003382 / 0.004328 (-0.000946) | 0.078195 / 0.004250 (0.073945) | 0.033308 / 0.037052 (-0.003744) | 0.300857 / 0.258489 (0.042368) | 0.356763 / 0.293841 (0.062922) | 0.033919 / 0.128546 (-0.094627) | 0.011436 / 0.075646 (-0.064210) | 0.319581 / 0.419271 (-0.099691) | 0.041303 / 0.043533 (-0.002229) | 0.299387 / 0.255139 (0.044248) | 0.327783 / 0.283200 (0.044583) | 0.087210 / 0.141683 (-0.054473) | 1.498757 / 1.452155 (0.046603) | 1.560417 / 1.492716 (0.067701) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191806 / 0.018006 (0.173800) | 0.407044 / 0.000490 (0.406554) | 0.005116 / 0.000200 (0.004916) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023760 / 0.037411 (-0.013652) | 0.096844 / 0.014526 (0.082318) | 0.104710 / 0.176557 (-0.071847) | 0.168161 / 0.737135 (-0.568974) | 0.107808 / 0.296338 (-0.188531) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417707 / 0.215209 (0.202498) | 4.155952 / 2.077655 (2.078297) | 
1.864934 / 1.504120 (0.360814) | 1.654925 / 1.541195 (0.113730) | 1.731341 / 1.468490 (0.262851) | 0.692014 / 4.584777 (-3.892763) | 3.407318 / 3.745712 (-0.338394) | 3.394791 / 5.269862 (-1.875071) | 1.650429 / 4.565676 (-2.915247) | 0.082177 / 0.424275 (-0.342098) | 0.012463 / 0.007607 (0.004856) | 0.523681 / 0.226044 (0.297637) | 5.249426 / 2.268929 (2.980498) | 2.327443 / 55.444624 (-53.117181) | 1.982160 / 6.876477 (-4.894317) | 2.019822 / 2.142072 (-0.122250) | 0.804820 / 4.805227 (-4.000408) | 0.148423 / 6.500664 (-6.352241) | 0.064938 / 0.075469 (-0.010531) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.225722 / 1.841788 (-0.616066) | 13.774257 / 8.074308 (5.699949) | 14.090298 / 10.191392 (3.898906) | 0.152489 / 0.680424 (-0.527935) | 0.028595 / 0.534201 (-0.505606) | 0.399011 / 0.579283 (-0.180272) | 0.399546 / 0.434364 (-0.034818) | 0.485513 / 0.540337 (-0.054824) | 0.564055 / 1.386936 (-0.822881) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006891 / 0.011353 (-0.004462) | 0.004557 / 0.011008 (-0.006451) | 0.077868 / 0.038508 (0.039360) | 0.028767 / 0.023109 (0.005657) | 0.344127 / 0.275898 (0.068229) | 0.377097 / 0.323480 (0.053617) | 0.005119 / 0.007986 (-0.002866) | 0.003547 / 0.004328 (-0.000782) | 0.077047 / 0.004250 (0.072796) | 0.043037 / 0.037052 (0.005984) | 0.341900 / 0.258489 (0.083410) | 0.384570 / 0.293841 (0.090729) | 0.032606 / 0.128546 (-0.095940) | 0.011752 / 0.075646 (-0.063894) | 0.086731 / 0.419271 (-0.332540) | 0.045459 / 0.043533 (0.001926) | 0.339308 / 0.255139 (0.084169) | 0.370498 / 0.283200 (0.087298) | 0.096237 / 0.141683 (-0.045446) | 1.499253 / 1.452155 (0.047098) | 1.583871 / 1.492716 (0.091154) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245471 / 0.018006 (0.227465) | 0.408750 / 0.000490 (0.408260) | 0.008992 / 0.000200 (0.008792) | 0.000249 / 0.000054 (0.000194) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025508 / 0.037411 (-0.011903) | 0.102103 / 0.014526 (0.087578) | 0.109247 / 0.176557 (-0.067310) | 0.176369 / 0.737135 (-0.560766) | 0.111241 / 0.296338 (-0.185097) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437209 / 0.215209 (0.222000) | 4.354386 / 2.077655 (2.276731) | 2.064008 / 1.504120 (0.559888) | 1.855518 / 1.541195 (0.314323) | 1.931647 / 1.468490 (0.463157) | 0.704913 / 4.584777 (-3.879864) | 3.397913 / 3.745712 (-0.347800) | 1.871524 / 5.269862 (-3.398338) | 1.176492 / 4.565676 (-3.389185) | 0.083976 / 0.424275 (-0.340299) | 0.012806 / 0.007607 (0.005199) | 0.539138 / 0.226044 (0.313094) | 5.401493 / 2.268929 (3.132564) | 2.539185 / 55.444624 (-52.905440) | 2.186445 / 6.876477 (-4.690031) | 2.222170 / 2.142072 (0.080097) | 0.815641 / 4.805227 (-3.989586) | 0.153033 / 6.500664 (-6.347631) | 0.069168 / 0.075469 (-0.006301) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.283530 / 1.841788 (-0.558258) | 14.075831 / 8.074308 (6.001523) | 13.649137 / 10.191392 (3.457745) | 0.127517 / 0.680424 (-0.552907) | 0.016619 / 0.534201 (-0.517582) | 0.377400 / 0.579283 (-0.201883) | 0.410796 / 0.434364 (-0.023568) | 0.463996 / 0.540337 (-0.076342) | 0.551867 / 1.386936 (-0.835069) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1135285d80ff9cd65fc51905f08343b4d7c2fa9c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009161 / 0.011353 (-0.002192) | 0.004987 / 0.011008 (-0.006022) | 0.098553 / 0.038508 (0.060045) | 0.034326 / 0.023109 (0.011216) | 0.295325 / 0.275898 (0.019427) | 0.326361 / 0.323480 (0.002881) | 0.007827 / 0.007986 (-0.000159) | 0.004933 / 0.004328 (0.000604) | 0.074236 / 0.004250 (0.069986) | 0.040410 / 0.037052 (0.003357) | 0.295644 / 0.258489 (0.037155) | 0.355050 / 0.293841 (0.061209) | 0.038384 / 0.128546 (-0.090162) | 0.011845 / 0.075646 (-0.063801) | 0.340678 / 0.419271 (-0.078594) | 0.047615 / 0.043533 (0.004082) | 0.292429 / 0.255139 (0.037290) | 0.312610 / 0.283200 (0.029410) | 0.100106 / 0.141683 (-0.041577) | 1.446186 / 1.452155 (-0.005969) | 1.534763 / 1.492716 (0.042046) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213667 / 0.018006 (0.195661) | 0.447310 / 0.000490 (0.446820) | 0.000402 / 0.000200 (0.000202) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027604 / 0.037411 (-0.009807) | 0.112785 / 0.014526 (0.098259) | 0.119450 / 0.176557 (-0.057106) | 0.185728 / 0.737135 (-0.551407) | 0.122860 / 0.296338 (-0.173478) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399162 / 0.215209 (0.183953) | 3.992701 / 2.077655 (1.915046) | 1.773881 / 1.504120 (0.269761) | 1.589842 / 1.541195 (0.048647) | 1.670065 / 1.468490 (0.201575) | 0.707669 / 4.584777 (-3.877107) | 3.719657 / 3.745712 (-0.026055) | 2.139629 / 5.269862 (-3.130232) | 1.467224 / 4.565676 (-3.098453) | 0.086033 / 0.424275 (-0.338242) | 0.012151 / 0.007607 (0.004544) | 0.519700 / 0.226044 (0.293656) | 5.150254 / 2.268929 (2.881325) | 2.305076 / 55.444624 (-53.139548) | 1.927914 / 6.876477 (-4.948563) | 1.999461 / 2.142072 (-0.142612) | 0.851819 / 4.805227 (-3.953408) | 0.165513 / 6.500664 (-6.335151) | 0.061898 / 0.075469 (-0.013571) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.226251 / 1.841788 (-0.615536) | 14.990253 / 8.074308 (6.915945) | 14.658720 / 10.191392 (4.467328) | 0.191665 / 0.680424 (-0.488759) | 0.028768 / 0.534201 (-0.505433) | 0.443907 / 0.579283 (-0.135376) | 0.455183 / 0.434364 (0.020819) | 0.552760 / 0.540337 
(0.012422) | 0.653927 / 1.386936 (-0.733009) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007677 / 0.011353 (-0.003675) | 0.005340 / 0.011008 (-0.005668) | 0.075644 / 0.038508 (0.037136) | 0.035046 / 0.023109 (0.011937) | 0.341437 / 0.275898 (0.065538) | 0.377782 / 0.323480 (0.054302) | 0.006091 / 0.007986 (-0.001895) | 0.004170 / 0.004328 (-0.000158) | 0.074294 / 0.004250 (0.070044) | 0.049851 / 0.037052 (0.012798) | 0.351691 / 0.258489 (0.093202) | 0.386020 / 0.293841 (0.092179) | 0.036884 / 0.128546 (-0.091662) | 0.012475 / 0.075646 (-0.063172) | 0.087267 / 0.419271 (-0.332005) | 0.058623 / 0.043533 (0.015090) | 0.347186 / 0.255139 (0.092047) | 0.355869 / 0.283200 (0.072669) | 0.112022 / 0.141683 (-0.029661) | 1.451798 / 1.452155 (-0.000357) | 1.553262 / 1.492716 (0.060546) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233451 / 0.018006 (0.215445) | 0.444384 / 0.000490 (0.443895) | 0.003695 / 0.000200 (0.003495) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029686 / 0.037411 (-0.007725) | 0.113736 / 0.014526 (0.099210) | 0.123998 / 0.176557 (-0.052559) | 0.197847 / 0.737135 (-0.539288) | 0.129936 / 0.296338 (-0.166403) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421904 / 0.215209 (0.206695) | 4.203533 / 2.077655 (2.125878) | 2.038199 / 1.504120 (0.534079) | 1.832402 / 1.541195 (0.291208) | 1.930765 
/ 1.468490 (0.462274) | 0.709775 / 4.584777 (-3.875002) | 3.760893 / 3.745712 (0.015181) | 2.091185 / 5.269862 (-3.178677) | 1.342248 / 4.565676 (-3.223428) | 0.087770 / 0.424275 (-0.336505) | 0.012357 / 0.007607 (0.004750) | 0.519605 / 0.226044 (0.293560) | 5.215883 / 2.268929 (2.946954) | 2.510200 / 55.444624 (-52.934425) | 2.192482 / 6.876477 (-4.683995) | 2.290214 / 2.142072 (0.148141) | 0.872067 / 4.805227 (-3.933160) | 0.168491 / 6.500664 (-6.332173) | 0.064707 / 0.075469 (-0.010762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291956 / 1.841788 (-0.549832) | 15.244530 / 8.074308 (7.170222) | 13.594895 / 10.191392 (3.403503) | 0.172669 / 0.680424 (-0.507755) | 0.017765 / 0.534201 (-0.516436) | 0.426946 / 0.579283 (-0.152337) | 0.442843 / 0.434364 (0.008479) | 0.549683 / 0.540337 (0.009346) | 0.653433 / 1.386936 (-0.733503) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b54a6d21795cf6cc50a13ff870648241a60fd2e0 \"CML watermark\")\n", "Can you review this @mariosasko ? since Albert is off", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008396 / 0.011353 (-0.002957) | 0.004556 / 0.011008 (-0.006452) | 0.101343 / 0.038508 (0.062835) | 0.029137 / 0.023109 (0.006027) | 0.298553 / 0.275898 (0.022655) | 0.334050 / 0.323480 (0.010570) | 0.006746 / 0.007986 (-0.001239) | 0.005050 / 0.004328 (0.000721) | 0.076055 / 0.004250 (0.071804) | 0.031988 / 0.037052 (-0.005064) | 0.301324 / 0.258489 (0.042835) | 0.340121 / 0.293841 (0.046280) | 0.033827 / 0.128546 (-0.094720) | 0.011447 / 0.075646 (-0.064200) | 0.321827 / 0.419271 (-0.097445) | 0.040846 / 0.043533 (-0.002687) | 0.296957 / 0.255139 (0.041818) | 0.324178 / 0.283200 (0.040979) | 0.083852 / 0.141683 (-0.057831) | 1.456123 / 1.452155 (0.003968) | 1.538311 / 1.492716 (0.045595) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208897 / 0.018006 
(0.190891) | 0.430560 / 0.000490 (0.430070) | 0.002917 / 0.000200 (0.002717) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024332 / 0.037411 (-0.013080) | 0.101659 / 0.014526 (0.087133) | 0.107636 / 0.176557 (-0.068920) | 0.168805 / 0.737135 (-0.568330) | 0.111404 / 0.296338 (-0.184934) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412704 / 0.215209 (0.197495) | 4.124852 / 2.077655 (2.047197) | 1.843555 / 1.504120 (0.339435) | 1.641636 / 1.541195 (0.100441) | 1.755783 / 1.468490 (0.287293) | 0.693212 / 4.584777 (-3.891565) | 3.391803 / 3.745712 (-0.353909) | 1.954473 / 5.269862 (-3.315389) | 1.274395 / 4.565676 (-3.291282) | 0.082536 / 0.424275 (-0.341739) | 0.012335 / 0.007607 (0.004728) | 0.523720 / 0.226044 (0.297676) | 5.268339 / 2.268929 (2.999411) | 2.318163 / 55.444624 (-53.126461) | 1.978503 / 6.876477 (-4.897974) | 2.046689 / 2.142072 (-0.095384) | 0.806735 / 4.805227 (-3.998492) | 0.148010 / 6.500664 (-6.352654) | 0.065305 / 0.075469 (-0.010164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266950 / 1.841788 (-0.574838) | 13.870803 / 8.074308 (5.796495) | 14.272556 / 10.191392 (4.081164) | 0.151703 / 0.680424 (-0.528720) | 0.028991 / 0.534201 (-0.505210) | 0.400831 / 0.579283 (-0.178452) | 0.400891 / 0.434364 (-0.033473) | 0.476225 / 0.540337 (-0.064113) | 0.564925 / 1.386936 (-0.822011) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | 
write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006810 / 0.011353 (-0.004543) | 0.004544 / 0.011008 (-0.006464) | 0.076516 / 0.038508 (0.038008) | 0.027705 / 0.023109 (0.004596) | 0.343215 / 0.275898 (0.067317) | 0.379136 / 0.323480 (0.055656) | 0.005227 / 0.007986 (-0.002758) | 0.003527 / 0.004328 (-0.000801) | 0.074775 / 0.004250 (0.070524) | 0.041700 / 0.037052 (0.004648) | 0.343612 / 0.258489 (0.085123) | 0.385657 / 0.293841 (0.091817) | 0.032082 / 0.128546 (-0.096464) | 0.011567 / 0.075646 (-0.064079) | 0.083814 / 0.419271 (-0.335458) | 0.042173 / 0.043533 (-0.001360) | 0.340261 / 0.255139 (0.085122) | 0.364778 / 0.283200 (0.081578) | 0.093401 / 0.141683 (-0.048282) | 1.513475 / 1.452155 (0.061320) | 1.599393 / 1.492716 (0.106677) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237117 / 0.018006 (0.219111) | 0.424241 / 0.000490 (0.423751) | 0.002900 / 0.000200 (0.002700) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031122 / 0.037411 (-0.006289) | 0.107530 / 0.014526 (0.093004) | 0.117777 / 0.176557 (-0.058780) | 0.188300 / 0.737135 (-0.548836) | 0.119989 / 0.296338 (-0.176349) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438563 / 0.215209 (0.223354) | 4.404969 / 2.077655 (2.327315) | 2.260182 / 1.504120 (0.756062) | 2.035472 / 1.541195 (0.494277) | 2.045685 / 1.468490 (0.577195) | 0.706758 / 4.584777 (-3.878019) | 3.434843 / 3.745712 (-0.310869) | 1.909533 / 5.269862 (-3.360328) | 1.175374 / 4.565676 (-3.390303) | 0.084831 / 0.424275 (-0.339444) | 0.012441 / 0.007607 (0.004833) | 0.551818 / 0.226044 (0.325774) | 5.509005 / 2.268929 (3.240077) | 2.576545 / 55.444624 (-52.868080) | 2.226204 / 6.876477 (-4.650272) | 2.276544 / 2.142072 (0.134471) | 0.818069 / 4.805227 (-3.987158) | 0.152797 / 6.500664 (-6.347867) | 0.067896 / 0.075469 (-0.007573) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276859 / 1.841788 (-0.564929) | 14.312914 / 8.074308 (6.238606) | 13.406602 / 10.191392 (3.215210) | 0.157466 / 0.680424 (-0.522958) | 0.016709 / 0.534201 (-0.517492) | 0.390951 / 0.579283 (-0.188333) | 0.395525 / 0.434364 (-0.038839) | 0.484486 / 0.540337 (-0.055852) | 0.576125 / 1.386936 (-0.810811) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b951e1b6cdd927604599f1aa5dadfb8ee8e62e05 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007316 / 0.011353 (-0.004037) | 0.005041 / 0.011008 (-0.005968) | 0.100477 / 0.038508 (0.061969) | 0.034068 / 0.023109 (0.010959) | 0.351156 / 0.275898 (0.075258) | 0.373892 / 0.323480 (0.050412) | 0.005748 / 0.007986 (-0.002237) | 0.003959 / 0.004328 (-0.000370) | 0.075540 / 0.004250 (0.071290) | 0.045282 / 0.037052 (0.008230) | 0.362364 / 0.258489 (0.103874) | 0.376461 / 0.293841 (0.082620) | 0.036724 / 0.128546 (-0.091822) | 0.012008 / 0.075646 (-0.063638) | 0.333802 / 0.419271 (-0.085470) | 0.050107 / 0.043533 (0.006574) | 0.348003 / 0.255139 (0.092864) | 0.367187 / 0.283200 (0.083988) | 0.103171 / 0.141683 (-0.038511) | 1.448281 / 1.452155 (-0.003874) | 1.516231 / 1.492716 (0.023514) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203651 / 0.018006 (0.185645) | 0.438103 / 0.000490 (0.437613) | 0.004165 / 0.000200 (0.003966) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027068 / 0.037411 (-0.010343) | 0.111728 / 0.014526 (0.097202) | 0.116963 / 0.176557 (-0.059594) | 0.172652 / 0.737135 (-0.564483) | 0.124257 / 0.296338 (-0.172082) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407937 / 0.215209 
(0.192728) | 4.066008 / 2.077655 (1.988353) | 1.895000 / 1.504120 (0.390880) | 1.698422 / 1.541195 (0.157227) | 1.872446 / 1.468490 (0.403956) | 0.688888 / 4.584777 (-3.895889) | 3.743635 / 3.745712 (-0.002077) | 2.161507 / 5.269862 (-3.108354) | 1.485218 / 4.565676 (-3.080458) | 0.085959 / 0.424275 (-0.338316) | 0.012554 / 0.007607 (0.004947) | 0.510953 / 0.226044 (0.284909) | 5.103241 / 2.268929 (2.834312) | 2.439670 / 55.444624 (-53.004955) | 2.057089 / 6.876477 (-4.819387) | 2.240137 / 2.142072 (0.098065) | 0.847750 / 4.805227 (-3.957477) | 0.172952 / 6.500664 (-6.327712) | 0.066023 / 0.075469 (-0.009446) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.190677 / 1.841788 (-0.651110) | 14.593162 / 8.074308 (6.518854) | 14.254983 / 10.191392 (4.063591) | 0.155811 / 0.680424 (-0.524613) | 0.017698 / 0.534201 (-0.516503) | 0.420455 / 0.579283 (-0.158828) | 0.412146 / 0.434364 (-0.022218) | 0.493113 / 0.540337 (-0.047225) | 0.582097 / 1.386936 (-0.804839) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007319 / 0.011353 (-0.004033) | 0.005102 / 0.011008 (-0.005906) | 0.073760 / 0.038508 (0.035252) | 0.033496 / 0.023109 (0.010387) | 0.338778 / 0.275898 (0.062880) | 0.371870 / 0.323480 (0.048391) | 0.005804 / 0.007986 (-0.002182) | 0.004142 / 0.004328 (-0.000186) | 0.073203 / 0.004250 (0.068953) | 0.046568 / 0.037052 (0.009516) | 0.343544 / 0.258489 (0.085055) | 0.381188 / 0.293841 (0.087347) | 0.036391 / 0.128546 (-0.092155) | 0.012046 / 0.075646 (-0.063600) | 0.086007 / 0.419271 (-0.333265) | 0.048706 / 0.043533 (0.005173) | 0.330836 / 0.255139 (0.075697) | 0.355328 / 0.283200 (0.072128) | 0.100104 / 0.141683 (-0.041579) | 1.434237 / 1.452155 (-0.017917) | 1.549380 / 1.492716 (0.056663) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231099 / 0.018006 (0.213093) | 0.450650 / 0.000490 (0.450160) | 0.000404 / 0.000200 (0.000204) | 0.000059 / 0.000054 
(0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030534 / 0.037411 (-0.006877) | 0.119005 / 0.014526 (0.104479) | 0.125362 / 0.176557 (-0.051195) | 0.176823 / 0.737135 (-0.560313) | 0.132044 / 0.296338 (-0.164295) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431004 / 0.215209 (0.215795) | 4.318969 / 2.077655 (2.241315) | 1.994941 / 1.504120 (0.490821) | 1.791870 / 1.541195 (0.250675) | 1.904134 / 1.468490 (0.435644) | 0.723493 / 4.584777 (-3.861284) | 3.823670 / 3.745712 (0.077958) | 2.118892 / 5.269862 (-3.150969) | 1.375088 / 4.565676 (-3.190588) | 0.088875 / 0.424275 (-0.335400) | 0.013137 / 0.007607 (0.005530) | 0.530523 / 0.226044 (0.304479) | 5.341438 / 2.268929 (3.072509) | 2.459044 / 55.444624 (-52.985580) | 2.150119 / 6.876477 (-4.726357) | 2.228567 / 2.142072 (0.086494) | 0.877549 / 4.805227 (-3.927678) | 0.175040 / 6.500664 (-6.325625) | 0.068188 / 0.075469 (-0.007281) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.273780 / 1.841788 (-0.568008) | 15.206331 / 8.074308 (7.132023) | 14.963058 / 10.191392 (4.771666) | 0.184543 / 0.680424 (-0.495881) | 0.017612 / 0.534201 (-0.516589) | 0.426248 / 0.579283 (-0.153035) | 0.437889 / 0.434364 (0.003525) | 0.508979 / 0.540337 (-0.031359) | 0.602040 / 1.386936 (-0.784896) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c5ca1d86949ec3a5fdaec03b80500fb822bcfab4 \"CML watermark\")\n" ]
null
[]
Add writer_batch_size for ArrowBasedBuilder
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5565/timeline
This way we can control the size of the record_batches/row_groups of arrow/parquet files. This can be useful for `datasets-server` to keep control of the row group size, which can affect random access performance for audio/image/video datasets. Right now, having 1,000 examples per row group causes some image datasets to be pretty slow for random access (e.g. 4 seconds for `beans` to get 20 rows)
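The row-group mechanics behind this description can be illustrated with plain pyarrow. This is a minimal sketch, assuming only the public `pyarrow.parquet` API; the file names are illustrative, and the commented `load_dataset` call at the end is an assumed way to pass the builder's `writer_batch_size`, not taken from this PR's diff:

```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"idx": list(range(10_000))})

# 1,000 rows per row group: fetching one row decodes a 1,000-row group.
pq.write_table(table, "coarse.parquet", row_group_size=1_000)
# 100 rows per row group: random access decodes 10x less data per lookup.
pq.write_table(table, "fine.parquet", row_group_size=100)

pf = pq.ParquetFile("fine.parquet")
print(pf.metadata.num_row_groups)      # 100 row groups
print(pf.read_row_group(42).num_rows)  # only 100 rows decoded, not 1,000

# With this PR, an ArrowBasedBuilder can likewise be given a
# writer_batch_size to control the record batch / row group size it
# writes, e.g. (assumed usage, since config kwargs are forwarded to
# the builder):
# from datasets import load_dataset
# ds = load_dataset("parquet", data_files="fine.parquet", writer_batch_size=100)
```

Smaller row groups trade some file size and sequential-read throughput for much cheaper random access, which matches the `beans` timing quoted above.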
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5565.diff", "html_url": "https://github.com/huggingface/datasets/pull/5565", "merged_at": "2023-03-10T13:45:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/5565.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5565" }
1,595,281,752
https://api.github.com/repos/huggingface/datasets/issues/5565/comments
PR_kwDODunzps5KhfTH
null
5,565
https://api.github.com/repos/huggingface/datasets/issues/5565/events
true
closed
2023-02-22T13:00:09Z
null
https://api.github.com/repos/huggingface/datasets/issues/5564
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5564/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5564/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/5564
[]
false
2023-02-22T13:09:26Z
2023-02-22T13:00:25Z
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5564). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008810 / 0.011353 (-0.002543) | 0.004583 / 0.011008 (-0.006425) | 0.100787 / 0.038508 (0.062279) | 0.030170 / 0.023109 (0.007061) | 0.301749 / 0.275898 (0.025851) | 0.386958 / 0.323480 (0.063478) | 0.007211 / 0.007986 (-0.000775) | 0.004939 / 0.004328 (0.000611) | 0.078046 / 0.004250 (0.073796) | 0.035672 / 0.037052 (-0.001380) | 0.314403 / 0.258489 (0.055914) | 0.348547 / 0.293841 (0.054706) | 0.034242 / 0.128546 (-0.094304) | 0.011599 / 0.075646 (-0.064047) | 0.321936 / 0.419271 (-0.097336) | 0.043214 / 0.043533 (-0.000319) | 0.298782 / 0.255139 (0.043643) | 0.334513 / 0.283200 (0.051313) | 0.091630 / 0.141683 (-0.050053) | 1.518194 / 1.452155 (0.066039) | 1.553665 / 1.492716 (0.060949) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196322 / 0.018006 (0.178316) | 0.427280 / 0.000490 (0.426790) | 0.001933 / 0.000200 (0.001733) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023190 / 0.037411 (-0.014221) | 0.097387 / 0.014526 (0.082862) | 0.104532 / 0.176557 (-0.072024) | 0.166670 / 0.737135 (-0.570465) | 0.108787 / 0.296338 (-0.187552) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.415776 / 0.215209 (0.200567) | 4.135899 / 2.077655 (2.058244) | 1.857600 / 1.504120 (0.353480) | 1.654099 / 1.541195 (0.112904) | 1.729102 / 1.468490 (0.260612) | 0.695946 / 4.584777 (-3.888831) | 3.352776 / 3.745712 (-0.392936) | 2.754443 / 5.269862 (-2.515418) | 1.517181 / 4.565676 (-3.048495) | 0.082782 / 0.424275 (-0.341493) | 0.012431 / 0.007607 (0.004824) | 0.526593 / 0.226044 (0.300548) | 5.263051 / 2.268929 (2.994123) | 2.290713 / 55.444624 (-53.153911) | 1.953017 / 6.876477 (-4.923460) | 1.998419 / 2.142072 (-0.143653) | 0.817055 / 4.805227 (-3.988173) | 0.148213 / 6.500664 (-6.352451) | 0.065527 / 0.075469 (-0.009942) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254275 / 1.841788 (-0.587513) | 13.618962 / 8.074308 (5.544654) | 14.057134 / 10.191392 (3.865742) | 0.137180 / 0.680424 (-0.543244) | 0.028460 / 0.534201 (-0.505741) | 0.393836 / 0.579283 (-0.185447) | 0.406665 / 0.434364 (-0.027699) | 0.476812 / 0.540337 (-0.063526) | 0.561047 / 1.386936 (-0.825889) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006483 / 0.011353 (-0.004870) | 0.004525 / 0.011008 (-0.006483) | 0.075696 / 0.038508 (0.037188) | 0.027306 / 0.023109 (0.004197) | 0.359141 / 0.275898 (0.083243) | 0.394595 / 0.323480 (0.071115) | 0.004907 / 0.007986 (-0.003079) | 0.003403 / 0.004328 (-0.000925) | 0.074473 / 0.004250 (0.070223) | 0.037801 / 0.037052 (0.000749) | 0.359350 / 0.258489 (0.100861) | 0.411902 / 0.293841 (0.118061) | 0.032280 / 0.128546 (-0.096267) | 0.011728 / 0.075646 (-0.063918) | 0.085692 / 0.419271 (-0.333580) | 0.047779 / 0.043533 (0.004246) | 0.348820 / 0.255139 (0.093681) | 0.389396 / 0.283200 (0.106197) | 0.094923 / 0.141683 (-0.046760) | 1.507137 / 1.452155 (0.054982) | 1.556873 / 1.492716 (0.064157) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197510 / 0.018006 (0.179504) | 0.413885 / 0.000490 (0.413395) | 0.002527 / 0.000200 
(0.002327) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024571 / 0.037411 (-0.012840) | 0.099845 / 0.014526 (0.085319) | 0.108130 / 0.176557 (-0.068426) | 0.176153 / 0.737135 (-0.560982) | 0.111907 / 0.296338 (-0.184432) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436393 / 0.215209 (0.221184) | 4.343296 / 2.077655 (2.265642) | 2.056062 / 1.504120 (0.551942) | 1.855372 / 1.541195 (0.314177) | 1.946429 / 1.468490 (0.477939) | 0.701862 / 4.584777 (-3.882915) | 3.337115 / 3.745712 (-0.408597) | 2.755416 / 5.269862 (-2.514446) | 1.335596 / 4.565676 (-3.230081) | 0.083938 / 0.424275 (-0.340337) | 0.012914 / 0.007607 (0.005307) | 0.530272 / 0.226044 (0.304228) | 5.307739 / 2.268929 (3.038810) | 2.506435 / 55.444624 (-52.938189) | 2.170830 / 6.876477 (-4.705646) | 2.224641 / 2.142072 (0.082568) | 0.804416 / 4.805227 (-4.000811) | 0.151594 / 6.500664 (-6.349070) | 0.067221 / 0.075469 (-0.008248) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257063 / 1.841788 (-0.584725) | 14.054346 / 8.074308 (5.980038) | 13.490649 / 10.191392 (3.299257) | 0.139320 / 0.680424 (-0.541104) | 0.016501 / 0.534201 (-0.517700) | 0.382655 / 0.579283 (-0.196629) | 0.383305 / 0.434364 (-0.051059) | 0.465091 / 0.540337 (-0.075247) | 0.552552 / 1.386936 (-0.834384) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c480083958126c40bb7bdba8e1eeb3945a8fe6ea \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011278 / 0.011353 (-0.000075) | 0.007351 / 0.011008 (-0.003657) | 0.131145 / 0.038508 (0.092637) | 0.041585 / 0.023109 (0.018476) | 0.410230 / 0.275898 (0.134332) | 0.464069 / 0.323480 (0.140589) | 0.010228 / 0.007986 (0.002242) | 0.005324 / 0.004328 (0.000996) | 0.102680 / 0.004250 (0.098430) | 0.041644 / 0.037052 (0.004592) | 0.439127 / 0.258489 (0.180638) | 0.467828 / 0.293841 (0.173987) | 0.054373 / 0.128546 (-0.074173) | 0.019495 / 0.075646 (-0.056152) | 0.432425 / 0.419271 (0.013153) | 0.056863 / 0.043533 (0.013331) | 0.405883 / 0.255139 (0.150744) | 0.452786 / 0.283200 (0.169586) | 0.109888 / 0.141683 (-0.031795) | 1.797015 / 1.452155 (0.344860) | 1.985937 / 1.492716 (0.493221) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275121 / 0.018006 (0.257115) | 0.587585 / 0.000490 (0.587095) | 0.005557 / 0.000200 (0.005357) | 0.000118 / 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032968 / 0.037411 (-0.004443) | 0.135886 / 0.014526 (0.121360) | 0.154000 / 0.176557 (-0.022557) | 0.233345 / 0.737135 (-0.503790) | 0.144125 / 0.296338 (-0.152214) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.613056 / 0.215209 (0.397847) | 6.206135 / 2.077655 (4.128480) | 2.686989 / 1.504120 (1.182869) | 2.389946 / 1.541195 (0.848751) | 2.437506 / 1.468490 (0.969016) | 1.255900 / 4.584777 (-3.328877) | 5.654803 / 3.745712 (1.909091) | 5.467693 / 5.269862 (0.197832) | 2.872397 / 4.565676 (-1.693279) | 0.145658 / 0.424275 (-0.278617) | 0.016883 / 0.007607 (0.009276) | 0.793820 / 0.226044 (0.567775) | 7.961881 / 2.268929 (5.692952) | 3.617422 / 55.444624 (-51.827203) | 2.795185 / 6.876477 (-4.081292) | 2.881726 / 2.142072 (0.739653) | 1.434543 / 4.805227 (-3.370685) | 0.252206 / 6.500664 (-6.248458) | 0.094694 / 0.075469 (0.019225) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.552401 / 1.841788 (-0.289386) | 18.436068 / 8.074308 (10.361760) | 22.539049 / 10.191392 (12.347657) | 0.269471 / 0.680424 (-0.410953) | 0.053242 / 0.534201 (-0.480959) | 0.568325 / 0.579283 (-0.010958) | 0.660339 / 
0.434364 (0.225975) | 0.689507 / 0.540337 (0.149169) | 0.836785 / 1.386936 (-0.550151) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009853 / 0.011353 (-0.001500) | 0.009752 / 0.011008 (-0.001256) | 0.095422 / 0.038508 (0.056914) | 0.037760 / 0.023109 (0.014651) | 0.450898 / 0.275898 (0.175000) | 0.501671 / 0.323480 (0.178191) | 0.006748 / 0.007986 (-0.001237) | 0.005054 / 0.004328 (0.000725) | 0.099382 / 0.004250 (0.095131) | 0.058078 / 0.037052 (0.021026) | 0.447606 / 0.258489 (0.189116) | 0.503887 / 0.293841 (0.210046) | 0.054579 / 0.128546 (-0.073967) | 0.026150 / 0.075646 (-0.049496) | 0.113042 / 0.419271 (-0.306230) | 0.061049 / 0.043533 (0.017516) | 0.437831 / 0.255139 (0.182692) | 0.480830 / 0.283200 (0.197630) | 0.121199 / 0.141683 (-0.020484) | 1.795409 / 1.452155 (0.343254) | 1.911207 / 1.492716 (0.418491) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.311774 / 0.018006 (0.293768) | 0.602027 / 0.000490 (0.601537) | 0.000651 / 0.000200 (0.000451) | 0.000136 / 0.000054 (0.000081) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035185 / 0.037411 (-0.002227) | 0.149574 / 0.014526 (0.135048) | 0.153672 / 0.176557 (-0.022884) | 0.241720 / 0.737135 (-0.495416) | 0.153543 / 0.296338 (-0.142795) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.678508 / 0.215209 (0.463299) | 6.535313 / 2.077655 (4.457658) | 2.840175 / 1.504120 (1.336055) | 
2.458141 / 1.541195 (0.916947) | 2.551369 / 1.468490 (1.082879) | 1.339117 / 4.584777 (-3.245660) | 5.844429 / 3.745712 (2.098717) | 3.221100 / 5.269862 (-2.048762) | 2.114844 / 4.565676 (-2.450833) | 0.149263 / 0.424275 (-0.275012) | 0.016101 / 0.007607 (0.008494) | 0.830650 / 0.226044 (0.604605) | 8.096655 / 2.268929 (5.827727) | 3.445947 / 55.444624 (-51.998677) | 2.826874 / 6.876477 (-4.049603) | 2.812765 / 2.142072 (0.670693) | 1.453789 / 4.805227 (-3.351438) | 0.263911 / 6.500664 (-6.236753) | 0.082609 / 0.075469 (0.007139) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.651624 / 1.841788 (-0.190163) | 18.703020 / 8.074308 (10.628712) | 21.360445 / 10.191392 (11.169053) | 0.249718 / 0.680424 (-0.430706) | 0.028373 / 0.534201 (-0.505828) | 0.576237 / 0.579283 (-0.003046) | 0.620574 / 0.434364 (0.186210) | 0.684155 / 0.540337 (0.143817) | 0.758950 / 1.386936 (-0.627986) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f51ef325602bb297a18a75680575cbe9b940b1d9 \"CML watermark\")\n" ]
null
[]
Set dev version
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5564/timeline
null
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5564.diff", "html_url": "https://github.com/huggingface/datasets/pull/5564", "merged_at": "2023-02-22T13:00:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/5564.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5564" }
1,595,064,698
https://api.github.com/repos/huggingface/datasets/issues/5564/comments
PR_kwDODunzps5KgwzU
null
5,564
https://api.github.com/repos/huggingface/datasets/issues/5564/events
true
closed
2023-02-22T12:48:52Z
null
https://api.github.com/repos/huggingface/datasets/issues/5563
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5563/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5563/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/5563
[]
false
2023-02-22T13:05:55Z
2023-02-22T12:56:48Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009437 / 0.011353 (-0.001916) | 0.004999 / 0.011008 (-0.006010) | 0.098839 / 0.038508 (0.060331) | 0.035496 / 0.023109 (0.012386) | 0.300726 / 0.275898 (0.024828) | 0.359793 / 0.323480 (0.036313) | 0.007694 / 0.007986 (-0.000292) | 0.003980 / 0.004328 (-0.000348) | 0.075240 / 0.004250 (0.070989) | 0.041149 / 0.037052 (0.004097) | 0.313185 / 0.258489 (0.054696) | 0.344111 / 0.293841 (0.050270) | 0.037775 / 0.128546 (-0.090772) | 0.011901 / 0.075646 (-0.063745) | 0.332631 / 0.419271 (-0.086641) | 0.047194 / 0.043533 (0.003661) | 0.306902 / 0.255139 (0.051763) | 0.321725 / 0.283200 (0.038525) | 0.101031 / 0.141683 (-0.040652) | 1.458778 / 1.452155 (0.006623) | 1.530196 / 1.492716 (0.037480) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203241 / 0.018006 (0.185235) | 0.447147 / 0.000490 (0.446657) | 0.004159 / 0.000200 (0.003959) | 0.000131 / 0.000054 (0.000076) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025845 / 0.037411 (-0.011566) | 0.106966 / 0.014526 (0.092440) | 0.115876 / 0.176557 (-0.060681) | 0.179052 / 0.737135 (-0.558084) | 0.123012 / 0.296338 (-0.173327) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408766 / 0.215209 (0.193557) | 4.080400 / 2.077655 (2.002745) | 
1.893747 / 1.504120 (0.389627) | 1.709389 / 1.541195 (0.168194) | 1.768071 / 1.468490 (0.299581) | 0.689717 / 4.584777 (-3.895059) | 3.760897 / 3.745712 (0.015185) | 2.017050 / 5.269862 (-3.252811) | 1.333027 / 4.565676 (-3.232650) | 0.083559 / 0.424275 (-0.340716) | 0.011951 / 0.007607 (0.004344) | 0.512313 / 0.226044 (0.286268) | 5.162696 / 2.268929 (2.893767) | 2.418559 / 55.444624 (-53.026065) | 2.110178 / 6.876477 (-4.766299) | 2.113635 / 2.142072 (-0.028437) | 0.835171 / 4.805227 (-3.970056) | 0.164222 / 6.500664 (-6.336442) | 0.061955 / 0.075469 (-0.013515) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198336 / 1.841788 (-0.643452) | 14.531468 / 8.074308 (6.457160) | 13.882133 / 10.191392 (3.690741) | 0.154524 / 0.680424 (-0.525900) | 0.028782 / 0.534201 (-0.505419) | 0.441808 / 0.579283 (-0.137475) | 0.433096 / 0.434364 (-0.001268) | 0.518229 / 0.540337 (-0.022108) | 0.603201 / 1.386936 (-0.783735) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007385 / 0.011353 (-0.003967) | 0.005193 / 0.011008 (-0.005815) | 0.075517 / 0.038508 (0.037009) | 0.033192 / 0.023109 (0.010083) | 0.332299 / 0.275898 (0.056401) | 0.363043 / 0.323480 (0.039563) | 0.006368 / 0.007986 (-0.001617) | 0.004003 / 0.004328 (-0.000326) | 0.073710 / 0.004250 (0.069460) | 0.046916 / 0.037052 (0.009863) | 0.336307 / 0.258489 (0.077818) | 0.384910 / 0.293841 (0.091069) | 0.038132 / 0.128546 (-0.090414) | 0.012283 / 0.075646 (-0.063364) | 0.088036 / 0.419271 (-0.331235) | 0.049699 / 0.043533 (0.006166) | 0.333953 / 0.255139 (0.078814) | 0.352961 / 0.283200 (0.069762) | 0.101905 / 0.141683 (-0.039778) | 1.470480 / 1.452155 (0.018325) | 1.498212 / 1.492716 (0.005496) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275067 / 0.018006 (0.257061) | 0.452589 / 0.000490 (0.452099) | 0.047067 / 0.000200 (0.046867) | 0.000983 / 0.000054 (0.000929) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028649 / 0.037411 (-0.008762) | 0.108385 / 0.014526 (0.093859) | 0.121213 / 0.176557 (-0.055343) | 0.192236 / 0.737135 (-0.544899) | 0.124620 / 0.296338 (-0.171719) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428742 / 0.215209 (0.213533) | 4.264893 / 2.077655 (2.187238) | 2.061650 / 1.504120 (0.557530) | 1.873267 / 1.541195 (0.332072) | 1.961012 / 1.468490 (0.492522) | 0.708904 / 4.584777 (-3.875873) | 3.821289 / 3.745712 (0.075577) | 3.287231 / 5.269862 (-1.982631) | 1.903539 / 4.565676 (-2.662137) | 0.086474 / 0.424275 (-0.337801) | 0.012101 / 0.007607 (0.004494) | 0.531411 / 0.226044 (0.305367) | 5.216785 / 2.268929 (2.947857) | 2.575209 / 55.444624 (-52.869416) | 2.264902 / 6.876477 (-4.611574) | 2.291225 / 2.142072 (0.149153) | 0.853486 / 4.805227 (-3.951741) | 0.168550 / 6.500664 (-6.332114) | 0.064158 / 0.075469 (-0.011311) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.295830 / 1.841788 (-0.545958) | 14.419524 / 8.074308 (6.345216) | 13.397985 / 10.191392 (3.206593) | 0.181367 / 0.680424 (-0.499057) | 0.017666 / 0.534201 (-0.516535) | 0.420645 / 0.579283 (-0.158638) | 0.421025 / 0.434364 (-0.013339) | 0.527369 / 0.540337 (-0.012969) | 0.627175 / 1.386936 (-0.759761) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#589b49dfaffa729bc9997a38d4cedafb107ea2e4 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008717 / 0.011353 (-0.002635) | 0.004573 / 0.011008 (-0.006435) | 0.103660 / 0.038508 (0.065151) | 0.035274 / 0.023109 (0.012165) | 0.298563 / 0.275898 (0.022665) | 0.384397 / 0.323480 (0.060917) | 0.006932 / 0.007986 (-0.001053) | 0.003422 / 0.004328 (-0.000907) | 0.080193 / 0.004250 (0.075943) | 0.039767 / 0.037052 (0.002714) | 0.310296 / 0.258489 (0.051807) | 0.351361 / 0.293841 (0.057520) | 0.033532 / 0.128546 (-0.095014) | 0.011543 / 0.075646 (-0.064104) | 0.374816 / 0.419271 (-0.044456) | 0.046046 / 0.043533 (0.002513) | 0.306918 / 0.255139 (0.051779) | 0.382242 / 0.283200 (0.099042) | 0.098945 / 0.141683 (-0.042738) | 1.456929 / 1.452155 (0.004775) | 1.535763 / 1.492716 (0.043046) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011759 / 0.018006 (-0.006247) | 0.405345 / 0.000490 (0.404855) | 0.002667 / 0.000200 (0.002467) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023924 / 0.037411 (-0.013487) | 0.095537 / 0.014526 (0.081011) | 0.106959 / 0.176557 (-0.069598) | 0.170782 / 0.737135 (-0.566353) | 0.109169 / 0.296338 (-0.187170) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437521 / 0.215209 (0.222312) | 4.383556 / 2.077655 (2.305902) | 2.092055 / 1.504120 (0.587935) | 1.889316 / 1.541195 (0.348121) | 1.937436 / 1.468490 (0.468946) | 0.700175 / 4.584777 (-3.884602) | 3.358107 / 3.745712 (-0.387605) | 3.243226 / 5.269862 (-2.026636) | 1.620497 / 4.565676 (-2.945180) | 0.083063 / 0.424275 (-0.341212) | 0.012970 / 0.007607 (0.005363) | 0.544226 / 0.226044 (0.318181) | 5.483315 / 2.268929 (3.214386) | 2.555183 / 55.444624 (-52.889441) | 2.204230 / 6.876477 (-4.672247) | 2.230551 / 2.142072 (0.088478) | 0.816121 / 4.805227 (-3.989106) | 0.151356 / 6.500664 (-6.349308) | 0.068564 / 0.075469 (-0.006905) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.208420 / 1.841788 (-0.633367) | 13.652597 / 8.074308 (5.578289) | 14.096318 / 10.191392 (3.904926) | 0.154473 / 0.680424 (-0.525951) | 0.028436 / 0.534201 (-0.505765) | 0.399949 / 0.579283 (-0.179334) | 0.398961 / 0.434364 (-0.035403) | 0.488703 / 0.540337 
(-0.051634) | 0.572640 / 1.386936 (-0.814296) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006373 / 0.011353 (-0.004979) | 0.004368 / 0.011008 (-0.006640) | 0.076410 / 0.038508 (0.037902) | 0.027055 / 0.023109 (0.003945) | 0.336969 / 0.275898 (0.061071) | 0.374533 / 0.323480 (0.051053) | 0.004781 / 0.007986 (-0.003204) | 0.003317 / 0.004328 (-0.001011) | 0.076099 / 0.004250 (0.071849) | 0.038414 / 0.037052 (0.001361) | 0.339578 / 0.258489 (0.081089) | 0.384138 / 0.293841 (0.090297) | 0.031581 / 0.128546 (-0.096965) | 0.011666 / 0.075646 (-0.063981) | 0.085690 / 0.419271 (-0.333582) | 0.042277 / 0.043533 (-0.001256) | 0.337931 / 0.255139 (0.082792) | 0.365827 / 0.283200 (0.082628) | 0.088713 / 0.141683 (-0.052970) | 1.519789 / 1.452155 (0.067635) | 1.583097 / 1.492716 (0.090381) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223472 / 0.018006 (0.205466) | 0.392474 / 0.000490 (0.391984) | 0.002739 / 0.000200 (0.002539) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024373 / 0.037411 (-0.013038) | 0.099822 / 0.014526 (0.085296) | 0.106128 / 0.176557 (-0.070428) | 0.174688 / 0.737135 (-0.562447) | 0.112660 / 0.296338 (-0.183678) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436317 / 0.215209 (0.221108) | 4.358277 / 2.077655 (2.280622) | 2.089746 / 1.504120 (0.585626) | 1.881040 / 1.541195 (0.339845) | 1.923653 
/ 1.468490 (0.455163) | 0.698176 / 4.584777 (-3.886601) | 3.346460 / 3.745712 (-0.399252) | 3.301429 / 5.269862 (-1.968433) | 1.391042 / 4.565676 (-3.174634) | 0.083025 / 0.424275 (-0.341250) | 0.012459 / 0.007607 (0.004851) | 0.533011 / 0.226044 (0.306967) | 5.334984 / 2.268929 (3.066056) | 2.534105 / 55.444624 (-52.910520) | 2.206295 / 6.876477 (-4.670181) | 2.231752 / 2.142072 (0.089680) | 0.798650 / 4.805227 (-4.006577) | 0.150070 / 6.500664 (-6.350594) | 0.066898 / 0.075469 (-0.008571) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.310527 / 1.841788 (-0.531261) | 13.920492 / 8.074308 (5.846184) | 13.359382 / 10.191392 (3.167990) | 0.154561 / 0.680424 (-0.525863) | 0.016387 / 0.534201 (-0.517814) | 0.379892 / 0.579283 (-0.199391) | 0.376746 / 0.434364 (-0.057618) | 0.462606 / 0.540337 (-0.077732) | 0.550895 / 1.386936 (-0.836041) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cac733fdaef84cfee92856bd259ce024ec157c91 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009373 / 0.011353 (-0.001980) | 0.005212 / 0.011008 (-0.005797) | 0.099287 / 0.038508 (0.060779) | 0.035175 / 0.023109 (0.012066) | 0.307012 / 0.275898 (0.031114) | 0.335105 / 0.323480 (0.011625) | 0.008006 / 0.007986 (0.000020) | 0.004017 / 0.004328 (-0.000311) | 0.075519 / 0.004250 (0.071269) | 0.040276 / 0.037052 (0.003223) | 0.302615 / 0.258489 (0.044126) | 0.361742 / 0.293841 (0.067901) | 0.038773 / 0.128546 (-0.089773) | 0.011892 / 0.075646 (-0.063754) | 0.334199 / 0.419271 (-0.085073) | 0.048035 / 0.043533 (0.004503) | 0.301361 / 0.255139 (0.046222) | 0.321996 / 0.283200 (0.038796) | 0.101818 / 0.141683 (-0.039865) | 1.442601 / 1.452155 (-0.009554) | 1.530669 / 1.492716 (0.037953) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201470 / 0.018006 (0.183464) | 0.496305 / 0.000490 (0.495815) | 0.003794 / 
0.000200 (0.003594) | 0.000149 / 0.000054 (0.000094) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028401 / 0.037411 (-0.009010) | 0.107924 / 0.014526 (0.093398) | 0.121716 / 0.176557 (-0.054840) | 0.187407 / 0.737135 (-0.549728) | 0.124755 / 0.296338 (-0.171583) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395667 / 0.215209 (0.180457) | 3.939079 / 2.077655 (1.861424) | 1.776308 / 1.504120 (0.272188) | 1.583487 / 1.541195 (0.042292) | 1.682957 / 1.468490 (0.214467) | 0.677322 / 4.584777 (-3.907455) | 3.796987 / 3.745712 (0.051275) | 3.406199 / 5.269862 (-1.863663) | 1.905467 / 4.565676 (-2.660210) | 0.083189 / 0.424275 (-0.341086) | 0.012156 / 0.007607 (0.004549) | 0.507078 / 0.226044 (0.281033) | 5.031293 / 2.268929 (2.762365) | 2.228403 / 55.444624 (-53.216221) | 1.885760 / 6.876477 (-4.990717) | 1.962340 / 2.142072 (-0.179732) | 0.824979 / 4.805227 (-3.980248) | 0.162107 / 6.500664 (-6.338557) | 0.062324 / 0.075469 (-0.013145) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.205104 / 1.841788 (-0.636683) | 15.368896 / 8.074308 (7.294588) | 14.757540 / 10.191392 (4.566148) | 0.177544 / 0.680424 (-0.502880) | 0.029097 / 0.534201 (-0.505104) | 0.445252 / 0.579283 (-0.134031) | 0.456521 / 0.434364 (0.022157) | 0.544166 / 0.540337 (0.003829) | 0.640675 / 1.386936 (-0.746261) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007438 / 0.011353 (-0.003914) | 0.005236 / 0.011008 (-0.005772) | 0.075379 / 0.038508 (0.036871) | 0.033274 / 0.023109 (0.010165) | 0.344584 / 0.275898 (0.068686) | 0.372161 / 0.323480 (0.048681) | 0.005914 / 0.007986 (-0.002071) | 0.004176 / 0.004328 (-0.000152) | 0.073311 / 0.004250 (0.069061) | 0.050845 / 0.037052 (0.013793) | 0.338978 / 0.258489 (0.080489) | 0.391563 / 0.293841 (0.097722) | 0.037559 / 0.128546 (-0.090987) | 0.012455 / 0.075646 (-0.063192) | 0.086224 / 0.419271 (-0.333047) | 0.052956 / 0.043533 (0.009423) | 0.338529 / 0.255139 (0.083390) | 0.356752 / 0.283200 (0.073553) | 0.105864 / 0.141683 (-0.035819) | 1.467727 / 1.452155 (0.015572) | 1.588727 / 1.492716 (0.096010) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215959 / 0.018006 (0.197953) | 0.440619 / 0.000490 (0.440129) | 0.000397 / 0.000200 (0.000197) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028855 / 0.037411 (-0.008556) | 0.114239 / 0.014526 (0.099713) | 0.121726 / 0.176557 (-0.054830) | 0.190377 / 0.737135 (-0.546759) | 0.127858 / 0.296338 (-0.168480) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415399 / 0.215209 (0.200190) | 4.159012 / 2.077655 (2.081357) | 1.987593 / 1.504120 (0.483474) | 1.794785 / 1.541195 (0.253591) | 1.924819 / 1.468490 (0.456329) | 0.696082 / 4.584777 (-3.888694) | 3.820461 / 3.745712 (0.074749) | 2.139236 / 5.269862 (-3.130626) | 1.348593 / 4.565676 (-3.217084) | 0.086536 / 0.424275 (-0.337739) | 0.012510 / 0.007607 (0.004902) | 0.518804 / 0.226044 (0.292760) | 5.188659 / 2.268929 (2.919730) | 2.501303 / 55.444624 (-52.943322) | 2.138831 / 6.876477 (-4.737646) | 2.220451 / 2.142072 (0.078378) | 0.836277 / 4.805227 (-3.968950) | 0.170940 / 6.500664 (-6.329724) | 0.067326 / 0.075469 (-0.008143) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.307848 / 1.841788 (-0.533940) | 15.995785 / 8.074308 (7.921477) | 13.646285 / 10.191392 (3.454893) | 0.181120 / 0.680424 (-0.499304) | 0.017500 / 0.534201 (-0.516701) | 0.426697 / 0.579283 (-0.152586) | 0.436702 / 0.434364 (0.002338) | 0.518060 / 0.540337 (-0.022278) | 0.632577 / 1.386936 (-0.754359) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cac733fdaef84cfee92856bd259ce024ec157c91 \"CML watermark\")\n" ]
null
[]
Release: 2.10.0
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5563/timeline
null
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5563.diff", "html_url": "https://github.com/huggingface/datasets/pull/5563", "merged_at": "2023-02-22T12:56:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/5563.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5563" }
1595049025
https://api.github.com/repos/huggingface/datasets/issues/5563/comments
PR_kwDODunzps5KgtbL
null
5563
https://api.github.com/repos/huggingface/datasets/issues/5563/events
true
closed
2023-02-22T07:56:10Z
null
https://api.github.com/repos/huggingface/datasets/issues/5562
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5562/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5562/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/54279069?v=4", "events_url": "https://api.github.com/users/XDoubleU/events{/privacy}", "followers_url": "https://api.github.com/users/XDoubleU/followers", "following_url": "https://api.github.com/users/XDoubleU/following{/other_user}", "gists_url": "https://api.github.com/users/XDoubleU/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/XDoubleU", "id": 54279069, "login": "XDoubleU", "node_id": "MDQ6VXNlcjU0Mjc5MDY5", "organizations_url": "https://api.github.com/users/XDoubleU/orgs", "received_events_url": "https://api.github.com/users/XDoubleU/received_events", "repos_url": "https://api.github.com/users/XDoubleU/repos", "site_admin": false, "starred_url": "https://api.github.com/users/XDoubleU/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XDoubleU/subscriptions", "type": "User", "url": "https://api.github.com/users/XDoubleU" }
https://github.com/huggingface/datasets/pull/5562
[]
false
2023-02-23T11:07:49Z
2023-02-23T11:00:58Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Removed it :)", "Changed it :)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008358 / 0.011353 (-0.002995) | 0.004555 / 0.011008 (-0.006453) | 0.100935 / 0.038508 (0.062427) | 0.029473 / 0.023109 (0.006364) | 0.336165 / 0.275898 (0.060266) | 0.420397 / 0.323480 (0.096917) | 0.006609 / 0.007986 (-0.001376) | 0.003338 / 0.004328 (-0.000991) | 0.078639 / 0.004250 (0.074388) | 0.034051 / 0.037052 (-0.003001) | 0.342820 / 0.258489 (0.084331) | 0.399392 / 0.293841 (0.105551) | 0.033935 / 0.128546 (-0.094611) | 0.011555 / 0.075646 (-0.064092) | 0.323467 / 0.419271 (-0.095804) | 0.040675 / 0.043533 (-0.002858) | 0.321247 / 0.255139 (0.066108) | 0.370967 / 0.283200 (0.087767) | 0.085766 / 0.141683 (-0.055917) | 1.461158 / 1.452155 (0.009003) | 1.504641 / 1.492716 (0.011925) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.180060 / 0.018006 (0.162053) | 0.403623 / 0.000490 (0.403134) | 0.002253 / 0.000200 (0.002053) | 0.000072 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022793 / 0.037411 (-0.014618) | 0.098869 / 0.014526 (0.084343) | 0.104512 / 0.176557 (-0.072045) | 0.167721 / 0.737135 (-0.569414) | 0.107969 / 0.296338 (-0.188370) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411179 / 0.215209 (0.195969) | 
4.095345 / 2.077655 (2.017690) | 1.825992 / 1.504120 (0.321872) | 1.624386 / 1.541195 (0.083192) | 1.654903 / 1.468490 (0.186413) | 0.695041 / 4.584777 (-3.889736) | 3.319087 / 3.745712 (-0.426625) | 1.881945 / 5.269862 (-3.387917) | 1.250360 / 4.565676 (-3.315316) | 0.082405 / 0.424275 (-0.341870) | 0.012499 / 0.007607 (0.004892) | 0.522846 / 0.226044 (0.296801) | 5.241103 / 2.268929 (2.972175) | 2.293100 / 55.444624 (-53.151524) | 1.942937 / 6.876477 (-4.933540) | 1.957434 / 2.142072 (-0.184638) | 0.809782 / 4.805227 (-3.995445) | 0.148290 / 6.500664 (-6.352374) | 0.064157 / 0.075469 (-0.011312) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.185616 / 1.841788 (-0.656172) | 13.616791 / 8.074308 (5.542483) | 13.741806 / 10.191392 (3.550414) | 0.137396 / 0.680424 (-0.543028) | 0.028751 / 0.534201 (-0.505450) | 0.397636 / 0.579283 (-0.181647) | 0.403594 / 0.434364 (-0.030770) | 0.484039 / 0.540337 (-0.056299) | 0.568398 / 1.386936 (-0.818538) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006712 / 0.011353 (-0.004640) | 0.004511 / 0.011008 (-0.006497) | 0.076946 / 0.038508 (0.038438) | 0.027219 / 0.023109 (0.004110) | 0.350769 / 0.275898 (0.074871) | 0.408539 / 0.323480 (0.085059) | 0.005014 / 0.007986 (-0.002971) | 0.003361 / 0.004328 (-0.000968) | 0.077106 / 0.004250 (0.072856) | 0.040105 / 0.037052 (0.003053) | 0.342041 / 0.258489 (0.083552) | 0.426355 / 0.293841 (0.132514) | 0.031684 / 0.128546 (-0.096863) | 0.011575 / 0.075646 (-0.064072) | 0.085797 / 0.419271 (-0.333474) | 0.041575 / 0.043533 (-0.001958) | 0.340837 / 0.255139 (0.085698) | 0.390461 / 0.283200 (0.107262) | 0.089531 / 0.141683 (-0.052152) | 1.504600 / 1.452155 (0.052445) | 1.538712 / 1.492716 (0.045996) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236679 / 0.018006 (0.218673) | 0.396258 / 0.000490 (0.395768) | 0.006479 / 0.000200 (0.006279) | 0.000081 / 0.000054 (0.000026) 
|\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024682 / 0.037411 (-0.012729) | 0.100167 / 0.014526 (0.085641) | 0.106627 / 0.176557 (-0.069929) | 0.174592 / 0.737135 (-0.562543) | 0.109499 / 0.296338 (-0.186839) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444702 / 0.215209 (0.229493) | 4.462779 / 2.077655 (2.385125) | 2.087711 / 1.504120 (0.583591) | 1.874900 / 1.541195 (0.333705) | 1.918609 / 1.468490 (0.450119) | 0.705867 / 4.584777 (-3.878910) | 3.355483 / 3.745712 (-0.390229) | 2.808348 / 5.269862 (-2.461514) | 1.253319 / 4.565676 (-3.312358) | 0.083747 / 0.424275 (-0.340528) | 0.012491 / 0.007607 (0.004884) | 0.542885 / 0.226044 (0.316841) | 5.453921 / 2.268929 (3.184993) | 2.545688 / 55.444624 (-52.898937) | 2.185022 / 6.876477 (-4.691455) | 2.215351 / 2.142072 (0.073279) | 0.808201 / 4.805227 (-3.997027) | 0.151754 / 6.500664 (-6.348910) | 0.066886 / 0.075469 (-0.008583) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.298583 / 1.841788 (-0.543205) | 14.014276 / 8.074308 (5.939968) | 13.505338 / 10.191392 (3.313946) | 0.142033 / 0.680424 (-0.538391) | 0.016863 / 0.534201 (-0.517338) | 0.381195 / 0.579283 (-0.198088) | 0.384455 / 0.434364 (-0.049909) | 0.465765 / 0.540337 (-0.074572) | 0.552571 / 1.386936 (-0.834366) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a29cca79ce64a5c64ad7047e57845b22154d7b8d \"CML watermark\")\n" ]
null
[]
Update csv.py
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5562/timeline
Removed mangle_dupe_cols=True from BuilderConfig. It triggered the following deprecation warning: /usr/local/lib/python3.8/dist-packages/datasets/download/streaming_download_manager.py:776: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols' return pd.read_csv(xopen(filepath_or_buffer, "rb", use_auth_token=use_auth_token), **kwargs) Further pandas documentation: https://pandas.pydata.org/docs/whatsnew/v1.4.0.html#mangle-dupe-cols-in-read-csv-no-longer-renames-unique-columns-conflicting-with-target-names At first sight it seems like this flag is resolved internally; it might need some more research.
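Below is a minimal sketch (an editor's illustration, not part of the PR) of the pandas behavior the linked changelog entry describes: duplicate CSV headers are de-duplicated by `read_csv` unconditionally in modern pandas, which is why a `mangle_dupe_cols=True` default in the config was redundant. The pandas version notes in the comments are assumptions drawn from the pandas changelogs, not from the `datasets` codebase.

```python
# Sketch only: why passing mangle_dupe_cols=True had become a no-op.
# Assumption: modern pandas (>= 1.4) always mangles duplicate headers;
# passing the keyword explicitly is what emits the FutureWarning quoted
# above (pandas 1.5.x), and the keyword was removed entirely in pandas 2.0.
import io

import pandas as pd

csv_payload = io.StringIO("a,a,b\n1,2,3\n")

# No mangle_dupe_cols argument needed: the second "a" becomes "a.1".
df = pd.read_csv(csv_payload)
print(df.columns.tolist())  # ['a', 'a.1', 'b']
```

Dropping the keyword from the builder config therefore should not change the resulting column names; it only silences the deprecation warning.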
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5562.diff", "html_url": "https://github.com/huggingface/datasets/pull/5562", "merged_at": "2023-02-23T11:00:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/5562.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5562" }
1594625539
https://api.github.com/repos/huggingface/datasets/issues/5562/comments
PR_kwDODunzps5KfTUT
null
5562
https://api.github.com/repos/huggingface/datasets/issues/5562/events
true
closed
2023-02-21T17:35:07Z
null
https://api.github.com/repos/huggingface/datasets/issues/5561
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5561/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5561/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
https://github.com/huggingface/datasets/pull/5561
[]
false
2023-02-28T15:37:22Z
2023-02-23T18:23:29Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Better yet have someone enable pre-commit CI https://pre-commit.ci/ and it will apply the pre-commit fixes to the PR automatically as an additional commit.", "@Skylion007 hi! I agree with @nateraw here, I'd better not force to use pre-commit so I'm not setting it up in the CI for now. And regarding end-of-file - currently it's being done by `black`. \r\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008704 / 0.011353 (-0.002649) | 0.004448 / 0.011008 (-0.006560) | 0.099530 / 0.038508 (0.061022) | 0.029739 / 0.023109 (0.006629) | 0.329267 / 0.275898 (0.053369) | 0.368805 / 0.323480 (0.045325) | 0.006852 / 0.007986 (-0.001133) | 0.004575 / 0.004328 (0.000246) | 0.076838 / 0.004250 (0.072588) | 0.033885 / 0.037052 (-0.003167) | 0.336340 / 0.258489 (0.077851) | 0.384880 / 0.293841 (0.091039) | 0.034051 / 0.128546 (-0.094495) | 0.011638 / 0.075646 (-0.064009) | 0.321650 / 0.419271 (-0.097622) | 0.041202 / 0.043533 (-0.002330) | 0.330841 / 0.255139 (0.075702) | 0.361329 / 0.283200 (0.078130) | 0.084864 / 0.141683 (-0.056819) | 1.454005 / 1.452155 (0.001850) | 1.542167 / 1.492716 (0.049451) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196207 / 0.018006 (0.178200) | 0.400675 / 0.000490 (0.400185) | 0.000403 / 0.000200 (0.000203) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022694 / 0.037411 (-0.014717) | 0.095139 / 0.014526 (0.080613) | 0.104129 / 0.176557 (-0.072427) | 0.168688 / 0.737135 (-0.568447) | 0.109243 / 0.296338 (-0.187096) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled 
read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427520 / 0.215209 (0.212311) | 4.237726 / 2.077655 (2.160071) | 2.191887 / 1.504120 (0.687767) | 1.987750 / 1.541195 (0.446555) | 1.996540 / 1.468490 (0.528050) | 0.696416 / 4.584777 (-3.888361) | 3.454536 / 3.745712 (-0.291176) | 2.023600 / 5.269862 (-3.246261) | 1.336394 / 4.565676 (-3.229282) | 0.082933 / 0.424275 (-0.341342) | 0.012572 / 0.007607 (0.004965) | 0.534330 / 0.226044 (0.308285) | 5.347588 / 2.268929 (3.078659) | 2.640397 / 55.444624 (-52.804228) | 2.338266 / 6.876477 (-4.538211) | 2.431969 / 2.142072 (0.289897) | 0.821335 / 4.805227 (-3.983893) | 0.151905 / 6.500664 (-6.348759) | 0.067983 / 0.075469 (-0.007486) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.228841 / 1.841788 (-0.612947) | 13.660437 / 8.074308 (5.586128) | 13.729442 / 10.191392 (3.538050) | 0.165835 / 0.680424 (-0.514589) | 0.028753 / 0.534201 (-0.505448) | 0.400143 / 0.579283 (-0.179140) | 0.403714 / 0.434364 (-0.030650) | 0.492168 / 0.540337 (-0.048170) | 0.581151 / 1.386936 (-0.805785) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006289 / 0.011353 (-0.005064) | 0.004419 / 0.011008 (-0.006589) | 0.077220 / 0.038508 (0.038712) | 0.027170 / 0.023109 (0.004060) | 0.344988 / 0.275898 (0.069090) | 0.374150 / 0.323480 (0.050670) | 0.004842 / 0.007986 (-0.003144) | 0.003289 / 0.004328 (-0.001039) | 0.076200 / 0.004250 (0.071950) | 0.036287 / 0.037052 (-0.000766) | 0.345764 / 0.258489 (0.087275) | 0.387439 / 0.293841 (0.093599) | 0.031547 / 0.128546 (-0.096999) | 0.011586 / 0.075646 (-0.064060) | 0.086599 / 0.419271 (-0.332672) | 0.042338 / 0.043533 (-0.001195) | 0.355384 / 0.255139 (0.100246) | 0.369474 / 0.283200 (0.086275) | 0.090945 / 0.141683 (-0.050738) | 1.488632 / 1.452155 (0.036477) | 1.554606 / 1.492716 (0.061890) |\n\n### 
Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212962 / 0.018006 (0.194956) | 0.399647 / 0.000490 (0.399157) | 0.003055 / 0.000200 (0.002856) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024349 / 0.037411 (-0.013062) | 0.100342 / 0.014526 (0.085817) | 0.105657 / 0.176557 (-0.070899) | 0.175139 / 0.737135 (-0.561997) | 0.110014 / 0.296338 (-0.186324) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434785 / 0.215209 (0.219575) | 4.346950 / 2.077655 (2.269295) | 2.045411 / 1.504120 (0.541291) | 1.844258 / 1.541195 (0.303064) | 1.889503 / 1.468490 (0.421013) | 0.704530 / 4.584777 (-3.880247) | 3.362435 / 3.745712 (-0.383277) | 2.797205 / 5.269862 (-2.472656) | 1.504431 / 4.565676 (-3.061245) | 0.083331 / 0.424275 (-0.340945) | 0.012274 / 0.007607 (0.004666) | 0.531123 / 0.226044 (0.305078) | 5.322588 / 2.268929 (3.053660) | 2.483875 / 55.444624 (-52.960750) | 2.147218 / 6.876477 (-4.729258) | 2.164024 / 2.142072 (0.021952) | 0.807191 / 4.805227 (-3.998036) | 0.151189 / 6.500664 (-6.349475) | 0.068027 / 0.075469 (-0.007442) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.316001 / 1.841788 (-0.525787) | 13.892785 / 8.074308 (5.818477) | 13.485982 / 10.191392 (3.294590) | 0.138904 / 0.680424 (-0.541520) | 0.016748 / 0.534201 (-0.517453) | 0.379840 / 0.579283 (-0.199443) | 0.384854 / 0.434364 (-0.049510) | 0.464275 / 0.540337 (-0.076063) | 0.553622 / 1.386936 (-0.833314) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a940972a9a38543b2066129dc6e7987e08dca082 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after 
write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009179 / 0.011353 (-0.002174) | 0.005080 / 0.011008 (-0.005929) | 0.099061 / 0.038508 (0.060553) | 0.035252 / 0.023109 (0.012143) | 0.293496 / 0.275898 (0.017598) | 0.360365 / 0.323480 (0.036886) | 0.007757 / 0.007986 (-0.000229) | 0.003985 / 0.004328 (-0.000343) | 0.076021 / 0.004250 (0.071771) | 0.042286 / 0.037052 (0.005233) | 0.316542 / 0.258489 (0.058053) | 0.341711 / 0.293841 (0.047870) | 0.037970 / 0.128546 (-0.090576) | 0.011977 / 0.075646 (-0.063670) | 0.333341 / 0.419271 (-0.085931) | 0.049211 / 0.043533 (0.005678) | 0.297401 / 0.255139 (0.042262) | 0.313424 / 0.283200 (0.030224) | 0.105719 / 0.141683 (-0.035964) | 1.487879 / 1.452155 (0.035724) | 1.529785 / 1.492716 (0.037068) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201062 / 0.018006 (0.183056) | 0.438024 / 0.000490 (0.437534) | 0.002129 / 0.000200 (0.001929) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026422 / 0.037411 (-0.010989) | 0.104863 / 0.014526 (0.090337) | 0.114934 / 0.176557 (-0.061623) | 0.179173 / 0.737135 (-0.557962) | 0.119734 / 0.296338 (-0.176604) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397195 / 0.215209 (0.181986) | 3.959945 / 2.077655 (1.882290) | 1.794059 / 1.504120 (0.289939) | 1.606814 / 1.541195 (0.065619) | 1.674681 / 1.468490 (0.206191) | 0.680130 / 4.584777 (-3.904646) | 3.742730 / 3.745712 (-0.002982) | 2.021793 / 5.269862 (-3.248069) | 1.322726 / 4.565676 (-3.242950) | 0.084519 / 0.424275 (-0.339756) | 0.012012 / 0.007607 (0.004405) | 0.510076 / 0.226044 (0.284032) | 5.084163 / 2.268929 (2.815234) | 2.241032 / 55.444624 (-53.203592) | 1.911936 / 6.876477 (-4.964540) | 1.947992 / 2.142072 (-0.194080) | 0.838779 / 4.805227 (-3.966448) | 0.165103 / 6.500664 (-6.335561) | 0.060722 / 0.075469 (-0.014747) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched 
tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180274 / 1.841788 (-0.661514) | 14.285364 / 8.074308 (6.211056) | 12.941205 / 10.191392 (2.749813) | 0.153815 / 0.680424 (-0.526609) | 0.028554 / 0.534201 (-0.505647) | 0.441551 / 0.579283 (-0.137732) | 0.434906 / 0.434364 (0.000542) | 0.516120 / 0.540337 (-0.024217) | 0.603062 / 1.386936 (-0.783874) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007287 / 0.011353 (-0.004066) | 0.004998 / 0.011008 (-0.006010) | 0.074997 / 0.038508 (0.036489) | 0.033209 / 0.023109 (0.010100) | 0.336836 / 0.275898 (0.060938) | 0.365562 / 0.323480 (0.042082) | 0.005739 / 0.007986 (-0.002246) | 0.003942 / 0.004328 (-0.000387) | 0.074681 / 0.004250 (0.070430) | 0.049530 / 0.037052 (0.012478) | 0.335642 / 0.258489 (0.077153) | 0.388874 / 0.293841 (0.095033) | 0.037198 / 0.128546 (-0.091349) | 0.011983 / 0.075646 (-0.063664) | 0.087601 / 0.419271 (-0.331671) | 0.053761 / 0.043533 (0.010228) | 0.334142 / 0.255139 (0.079003) | 0.351348 / 0.283200 (0.068148) | 0.107462 / 0.141683 (-0.034221) | 1.497015 / 1.452155 (0.044860) | 1.608287 / 1.492716 (0.115571) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255395 / 0.018006 (0.237389) | 0.439141 / 0.000490 (0.438651) | 0.021391 / 0.000200 (0.021191) | 0.000230 / 0.000054 (0.000176) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028331 / 0.037411 (-0.009080) | 0.108744 / 0.014526 (0.094218) | 0.118201 / 0.176557 (-0.058355) | 0.189556 / 0.737135 (-0.547579) | 0.123112 / 0.296338 (-0.173226) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | 
shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431394 / 0.215209 (0.216185) | 4.296121 / 2.077655 (2.218466) | 2.126371 / 1.504120 (0.622251) | 1.978178 / 1.541195 (0.436983) | 2.082674 / 1.468490 (0.614184) | 0.701789 / 4.584777 (-3.882988) | 3.791495 / 3.745712 (0.045783) | 2.115267 / 5.269862 (-3.154594) | 1.342159 / 4.565676 (-3.223517) | 0.088132 / 0.424275 (-0.336143) | 0.011903 / 0.007607 (0.004295) | 0.528398 / 0.226044 (0.302354) | 5.270077 / 2.268929 (3.001148) | 2.498860 / 55.444624 (-52.945765) | 2.155515 / 6.876477 (-4.720962) | 2.192866 / 2.142072 (0.050793) | 0.859596 / 4.805227 (-3.945631) | 0.170544 / 6.500664 (-6.330120) | 0.063883 / 0.075469 (-0.011587) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.240679 / 1.841788 (-0.601109) | 14.497379 / 8.074308 (6.423071) | 12.881417 / 10.191392 (2.690025) | 0.147295 / 0.680424 (-0.533129) | 0.017465 / 0.534201 (-0.516736) | 0.424695 / 0.579283 (-0.154588) | 0.414929 / 0.434364 (-0.019435) | 0.536079 / 0.540337 (-0.004259) | 0.638245 / 1.386936 (-0.748691) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a940972a9a38543b2066129dc6e7987e08dca082 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008806 / 0.011353 (-0.002547) | 0.004712 / 0.011008 (-0.006297) | 0.102383 / 0.038508 (0.063875) | 0.030260 / 0.023109 (0.007151) | 0.330175 / 0.275898 (0.054277) | 0.376816 / 0.323480 (0.053337) | 0.008065 / 0.007986 (0.000079) | 0.003534 / 0.004328 (-0.000794) | 0.078824 / 0.004250 (0.074573) | 0.036704 / 0.037052 (-0.000349) | 0.331848 / 0.258489 (0.073359) | 0.351031 / 0.293841 (0.057190) | 0.033406 / 0.128546 (-0.095140) | 0.011543 / 0.075646 (-0.064103) | 0.322114 / 0.419271 (-0.097157) | 0.041249 / 0.043533 (-0.002284) | 0.309413 / 0.255139 (0.054274) | 0.329156 / 0.283200 (0.045956) | 0.088636 / 0.141683 (-0.053047) | 1.508226 / 
1.452155 (0.056071) | 1.557203 / 1.492716 (0.064487) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196696 / 0.018006 (0.178690) | 0.426360 / 0.000490 (0.425870) | 0.001263 / 0.000200 (0.001064) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023747 / 0.037411 (-0.013664) | 0.100756 / 0.014526 (0.086230) | 0.105817 / 0.176557 (-0.070739) | 0.172573 / 0.737135 (-0.564562) | 0.110705 / 0.296338 (-0.185634) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436913 / 0.215209 (0.221704) | 4.365753 / 2.077655 (2.288099) | 2.201346 / 1.504120 (0.697226) | 1.978800 / 1.541195 (0.437605) | 1.951585 / 1.468490 (0.483094) | 0.699208 / 4.584777 (-3.885569) | 3.381492 / 3.745712 (-0.364220) | 2.966174 / 5.269862 (-2.303687) | 1.487521 / 4.565676 (-3.078156) | 0.082673 / 0.424275 (-0.341602) | 0.012436 / 0.007607 (0.004829) | 0.553276 / 0.226044 (0.327232) | 5.554081 / 2.268929 (3.285153) | 2.653286 / 55.444624 (-52.791339) | 2.404788 / 6.876477 (-4.471689) | 2.484610 / 2.142072 (0.342537) | 0.817073 / 4.805227 (-3.988154) | 0.151619 / 6.500664 (-6.349045) | 0.068259 / 0.075469 (-0.007210) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.273481 / 1.841788 (-0.568306) | 13.908825 / 8.074308 (5.834517) | 13.106695 / 10.191392 (2.915303) | 0.139609 / 0.680424 (-0.540815) | 0.028425 / 0.534201 (-0.505776) | 0.395626 / 0.579283 (-0.183657) | 0.405526 / 0.434364 (-0.028838) | 0.465628 / 0.540337 (-0.074709) | 0.542824 / 1.386936 (-0.844112) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy 
after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006821 / 0.011353 (-0.004532) | 0.004570 / 0.011008 (-0.006438) | 0.076568 / 0.038508 (0.038060) | 0.028109 / 0.023109 (0.004999) | 0.342768 / 0.275898 (0.066870) | 0.390680 / 0.323480 (0.067200) | 0.005056 / 0.007986 (-0.002930) | 0.003359 / 0.004328 (-0.000970) | 0.075835 / 0.004250 (0.071584) | 0.038888 / 0.037052 (0.001836) | 0.343489 / 0.258489 (0.085000) | 0.400766 / 0.293841 (0.106925) | 0.031816 / 0.128546 (-0.096730) | 0.011637 / 0.075646 (-0.064009) | 0.085474 / 0.419271 (-0.333797) | 0.041740 / 0.043533 (-0.001793) | 0.342501 / 0.255139 (0.087362) | 0.377467 / 0.283200 (0.094267) | 0.091532 / 0.141683 (-0.050151) | 1.457368 / 1.452155 (0.005213) | 1.537187 / 1.492716 (0.044471) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187507 / 0.018006 (0.169501) | 0.415706 / 0.000490 (0.415217) | 0.001816 / 0.000200 (0.001616) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026251 / 0.037411 (-0.011161) | 0.106609 / 0.014526 (0.092083) | 0.109822 / 0.176557 (-0.066735) | 0.180462 / 0.737135 (-0.556674) | 0.114647 / 0.296338 (-0.181691) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438804 / 0.215209 (0.223595) | 4.387960 / 2.077655 (2.310306) | 2.056804 / 1.504120 (0.552684) | 1.848584 / 1.541195 (0.307389) | 1.939470 / 1.468490 (0.470980) | 0.702539 / 4.584777 (-3.882238) | 3.419535 / 3.745712 (-0.326177) | 1.933889 / 5.269862 (-3.335973) | 1.189631 / 4.565676 (-3.376045) | 0.084105 / 0.424275 (-0.340170) | 0.012520 / 0.007607 (0.004913) | 0.538125 / 0.226044 (0.312081) | 5.370000 / 2.268929 (3.101072) | 2.497487 / 55.444624 (-52.947137) | 2.156054 / 6.876477 (-4.720423) | 2.225909 / 2.142072 (0.083837) | 0.811456 / 4.805227 (-3.993771) | 0.151461 / 6.500664 (-6.349203) | 0.066940 / 0.075469 (-0.008530) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.301246 / 1.841788 
(-0.540542) | 14.459755 / 8.074308 (6.385447) | 13.147151 / 10.191392 (2.955759) | 0.129236 / 0.680424 (-0.551188) | 0.016427 / 0.534201 (-0.517774) | 0.380047 / 0.579283 (-0.199236) | 0.392217 / 0.434364 (-0.042147) | 0.470338 / 0.540337 (-0.069999) | 0.559800 / 1.386936 (-0.827136) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a940972a9a38543b2066129dc6e7987e08dca082 \"CML watermark\")\n" ]
null
[]
Add pre-commit config yaml file to enable automatic code formatting
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5561/timeline
@huggingface/datasets do you think it would be useful? Motivation: sometimes PRs are ~30% "fix: style" commits :) If so, I still need to double-check the config, but for me locally it works as expected.
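For context, a pre-commit config is a small YAML file at the repository root that pins formatting hooks so they run automatically on every commit. The sketch below is only an illustration of what such a file typically looks like; the PR body above does not show the actual config, so the hook repos and pinned `rev` values here are assumptions, not the merged contents.

```yaml
# .pre-commit-config.yaml — minimal sketch, NOT the config merged in this PR.
# The hook repos and revs below are illustrative assumptions.
repos:
  - repo: https://github.com/psf/black
    rev: 23.1.0        # pin the formatter version so CI and local runs agree
    hooks:
      - id: black
  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: v0.0.252      # lint fixes that would otherwise land as "fix: style" commits
    hooks:
      - id: ruff
```

Once enabled with `pre-commit install`, the hooks reformat staged files before each commit, which is what removes the separate style-fix commits mentioned in the motivation above.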
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5561.diff", "html_url": "https://github.com/huggingface/datasets/pull/5561", "merged_at": "2023-02-23T18:23:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/5561.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5561" }
1593862388
https://api.github.com/repos/huggingface/datasets/issues/5561/comments
PR_kwDODunzps5Kcxw_
null
5561
https://api.github.com/repos/huggingface/datasets/issues/5561/events
true
closed
2023-02-21T16:56:17Z
null
https://api.github.com/repos/huggingface/datasets/issues/5560
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5560/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5560/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/5560
[]
false
2023-02-21T18:26:23Z
2023-02-21T18:19:09Z
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011060 / 0.011353 (-0.000293) | 0.005752 / 0.011008 (-0.005256) | 0.120349 / 0.038508 (0.081841) | 0.045303 / 0.023109 (0.022194) | 0.359196 / 0.275898 (0.083298) | 0.406351 / 0.323480 (0.082871) | 0.009474 / 0.007986 (0.001489) | 0.004524 / 0.004328 (0.000195) | 0.091990 / 0.004250 (0.087739) | 0.050034 / 0.037052 (0.012982) | 0.372479 / 0.258489 (0.113990) | 0.418907 / 0.293841 (0.125067) | 0.044300 / 0.128546 (-0.084247) | 0.013989 / 0.075646 (-0.061657) | 0.397406 / 0.419271 (-0.021866) | 0.056070 / 0.043533 (0.012537) | 0.357597 / 0.255139 (0.102458) | 0.382938 / 0.283200 (0.099738) | 0.117060 / 0.141683 (-0.024623) | 1.670869 / 1.452155 (0.218714) | 1.780944 / 1.492716 (0.288227) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229578 / 0.018006 (0.211572) | 0.493711 / 0.000490 (0.493222) | 0.008413 / 0.000200 (0.008213) | 0.000118 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033364 / 0.037411 (-0.004047) | 0.135953 / 0.014526 (0.121427) | 0.141942 / 0.176557 (-0.034614) | 0.225891 / 0.737135 (-0.511244) | 0.151010 / 0.296338 (-0.145328) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.470937 / 0.215209 (0.255728) | 4.710258 / 2.077655 (2.632603) | 2.132025 / 1.504120 (0.627905) | 1.913134 / 1.541195 (0.371939) | 2.025993 / 1.468490 
(0.557503) | 0.835993 / 4.584777 (-3.748784) | 4.446678 / 3.745712 (0.700965) | 4.260014 / 5.269862 (-1.009847) | 2.193078 / 4.565676 (-2.372598) | 0.100132 / 0.424275 (-0.324143) | 0.014163 / 0.007607 (0.006556) | 0.599252 / 0.226044 (0.373208) | 5.976377 / 2.268929 (3.707448) | 2.678116 / 55.444624 (-52.766508) | 2.309311 / 6.876477 (-4.567166) | 2.410284 / 2.142072 (0.268212) | 1.002415 / 4.805227 (-3.802813) | 0.194588 / 6.500664 (-6.306076) | 0.074921 / 0.075469 (-0.000548) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.432389 / 1.841788 (-0.409399) | 17.915288 / 8.074308 (9.840980) | 17.190906 / 10.191392 (6.999514) | 0.238469 / 0.680424 (-0.441955) | 0.036270 / 0.534201 (-0.497931) | 0.537320 / 0.579283 (-0.041963) | 0.512876 / 0.434364 (0.078512) | 0.629022 / 0.540337 (0.088685) | 0.750109 / 1.386936 (-0.636827) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008544 / 0.011353 (-0.002809) | 0.005933 / 0.011008 (-0.005075) | 0.088879 / 0.038508 (0.050371) | 0.040387 / 0.023109 (0.017278) | 0.406392 / 0.275898 (0.130494) | 0.449572 / 0.323480 (0.126092) | 0.006623 / 0.007986 (-0.001362) | 0.004727 / 0.004328 (0.000398) | 0.086745 / 0.004250 (0.082495) | 0.054335 / 0.037052 (0.017283) | 0.405652 / 0.258489 (0.147163) | 0.473934 / 0.293841 (0.180093) | 0.042157 / 0.128546 (-0.086390) | 0.014249 / 0.075646 (-0.061397) | 0.102130 / 0.419271 (-0.317141) | 0.056815 / 0.043533 (0.013282) | 0.407945 / 0.255139 (0.152806) | 0.431720 / 0.283200 (0.148521) | 0.119901 / 0.141683 (-0.021781) | 1.738381 / 1.452155 (0.286227) | 1.838981 / 1.492716 (0.346265) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.251926 / 0.018006 (0.233919) | 0.498117 / 0.000490 (0.497627) | 0.000439 / 0.000200 (0.000239) | 0.000065 / 0.000054 (0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034526 / 0.037411 (-0.002886) | 0.133038 / 0.014526 (0.118512) | 0.147494 / 0.176557 (-0.029063) | 0.234392 / 0.737135 (-0.502743) | 0.152361 / 0.296338 (-0.143978) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.495144 / 0.215209 (0.279935) | 4.936646 / 2.077655 (2.858991) | 2.385549 / 1.504120 (0.881429) | 2.173817 / 1.541195 (0.632622) | 2.327508 / 1.468490 (0.859018) | 0.851899 / 4.584777 (-3.732878) | 4.820388 / 3.745712 (1.074676) | 2.500304 / 5.269862 (-2.769558) | 1.621246 / 4.565676 (-2.944430) | 0.102858 / 0.424275 (-0.321417) | 0.014719 / 0.007607 (0.007112) | 0.611880 / 0.226044 (0.385836) | 6.100737 / 2.268929 (3.831808) | 2.955681 / 55.444624 (-52.488943) | 2.563533 / 6.876477 (-4.312943) | 2.659030 / 2.142072 (0.516958) | 1.004737 / 4.805227 (-3.800490) | 0.198379 / 6.500664 (-6.302285) | 0.078705 / 0.075469 (0.003236) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.501155 / 1.841788 (-0.340633) | 18.381513 / 8.074308 (10.307205) | 16.173893 / 10.191392 (5.982501) | 0.209497 / 0.680424 (-0.470927) | 0.021640 / 0.534201 (-0.512561) | 0.505905 / 0.579283 (-0.073378) | 0.513446 / 0.434364 (0.079082) | 0.652704 / 0.540337 (0.112366) | 0.761038 / 1.386936 (-0.625898) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b8235c92b46b6a63286fcee1a56adae4c0a751d3 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009085 / 0.011353 (-0.002268) | 0.004589 / 0.011008 (-0.006419) | 0.100820 / 0.038508 (0.062312) | 0.030677 / 0.023109 (0.007568) | 0.306702 / 0.275898 (0.030804) | 0.360623 / 0.323480 (0.037144) | 0.007377 / 0.007986 (-0.000608) | 0.003480 / 0.004328 (-0.000848) | 0.077813 / 0.004250 (0.073562) | 0.037293 / 0.037052 (0.000241) | 0.314137 / 0.258489 (0.055648) | 0.343394 / 0.293841 (0.049554) | 0.034202 / 0.128546 (-0.094344) | 0.011417 / 0.075646 (-0.064230) | 0.322584 / 0.419271 (-0.096687) | 0.041524 / 0.043533 (-0.002009) | 0.308116 / 0.255139 (0.052977) | 0.324527 / 0.283200 (0.041327) | 0.090973 / 0.141683 (-0.050710) | 1.515941 / 1.452155 (0.063787) | 1.548975 / 1.492716 (0.056259) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185901 / 0.018006 (0.167895) | 0.420742 / 0.000490 (0.420252) | 0.002958 / 0.000200 (0.002758) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024242 / 0.037411 (-0.013170) | 0.098827 / 0.014526 (0.084302) | 0.107609 / 0.176557 (-0.068947) | 0.172228 / 0.737135 (-0.564908) | 0.110042 / 0.296338 (-0.186296) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429647 / 0.215209 (0.214438) | 4.265406 / 2.077655 (2.187751) | 1.924514 / 1.504120 (0.420394) | 1.709881 / 1.541195 (0.168686) | 1.764872 / 1.468490 (0.296382) | 0.698089 / 4.584777 (-3.886688) | 3.439154 / 3.745712 (-0.306558) | 1.925058 / 5.269862 (-3.344804) | 1.267506 / 4.565676 (-3.298171) | 0.082167 / 0.424275 (-0.342108) | 0.012450 / 0.007607 (0.004843) | 0.523077 / 0.226044 (0.297033) | 5.240422 / 2.268929 (2.971494) | 2.363666 / 55.444624 (-53.080959) | 2.021903 / 6.876477 (-4.854574) | 2.136430 / 2.142072 (-0.005643) | 0.816377 / 4.805227 (-3.988850) | 0.151516 / 6.500664 (-6.349148) | 0.066590 / 0.075469 (-0.008879) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.216477 / 1.841788 (-0.625310) | 13.685044 / 8.074308 (5.610736) | 14.082620 / 10.191392 (3.891228) | 0.148399 / 0.680424 (-0.532025) | 0.028337 / 0.534201 (-0.505864) | 0.405379 / 0.579283 (-0.173904) | 0.405650 / 0.434364 (-0.028714) | 0.492658 / 0.540337 (-0.047679) | 0.578836 / 
1.386936 (-0.808100) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006863 / 0.011353 (-0.004490) | 0.004746 / 0.011008 (-0.006262) | 0.075802 / 0.038508 (0.037294) | 0.027950 / 0.023109 (0.004840) | 0.347613 / 0.275898 (0.071715) | 0.401201 / 0.323480 (0.077721) | 0.005765 / 0.007986 (-0.002221) | 0.003567 / 0.004328 (-0.000762) | 0.074188 / 0.004250 (0.069937) | 0.041209 / 0.037052 (0.004157) | 0.346541 / 0.258489 (0.088052) | 0.425729 / 0.293841 (0.131888) | 0.032430 / 0.128546 (-0.096116) | 0.011708 / 0.075646 (-0.063938) | 0.084667 / 0.419271 (-0.334604) | 0.042155 / 0.043533 (-0.001378) | 0.341210 / 0.255139 (0.086071) | 0.389759 / 0.283200 (0.106559) | 0.092640 / 0.141683 (-0.049042) | 1.526093 / 1.452155 (0.073938) | 1.556277 / 1.492716 (0.063561) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232383 / 0.018006 (0.214377) | 0.412353 / 0.000490 (0.411863) | 0.004009 / 0.000200 (0.003809) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025854 / 0.037411 (-0.011557) | 0.102660 / 0.014526 (0.088134) | 0.108420 / 0.176557 (-0.068137) | 0.175834 / 0.737135 (-0.561301) | 0.113472 / 0.296338 (-0.182867) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443595 / 0.215209 (0.228386) | 4.420959 / 2.077655 (2.343305) | 2.112790 / 1.504120 (0.608670) | 1.908836 / 1.541195 (0.367641) | 1.998340 / 1.468490 (0.529850) | 
0.706096 / 4.584777 (-3.878681) | 3.400871 / 3.745712 (-0.344841) | 2.803315 / 5.269862 (-2.466547) | 1.539392 / 4.565676 (-3.026284) | 0.083523 / 0.424275 (-0.340752) | 0.012541 / 0.007607 (0.004934) | 0.543428 / 0.226044 (0.317383) | 5.467416 / 2.268929 (3.198488) | 2.551970 / 55.444624 (-52.892654) | 2.212708 / 6.876477 (-4.663768) | 2.266169 / 2.142072 (0.124096) | 0.809943 / 4.805227 (-3.995284) | 0.152300 / 6.500664 (-6.348364) | 0.068591 / 0.075469 (-0.006878) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330141 / 1.841788 (-0.511646) | 14.292734 / 8.074308 (6.218426) | 13.556157 / 10.191392 (3.364765) | 0.155949 / 0.680424 (-0.524475) | 0.016464 / 0.534201 (-0.517737) | 0.377906 / 0.579283 (-0.201377) | 0.390385 / 0.434364 (-0.043979) | 0.471867 / 0.540337 (-0.068471) | 0.557794 / 1.386936 (-0.829142) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ba50512b76ef315f73bf821b0487296cdb373850 \"CML watermark\")\n", "I just tried on colab and it didn't finish the progress bar for some reason.\r\n\r\nMaybe we need to call `pbar.close()` before `return`\r\n\r\n<img width=\"729\" alt=\"image\" src=\"https://user-images.githubusercontent.com/42851186/220417517-919438a4-5462-4e87-8f84-e9399a9be27c.png\">\r\n", "(just added .close() - let me try quickly if it works now)", "it worked ! :)\r\n\r\n<img width=\"575\" alt=\"image\" src=\"https://user-images.githubusercontent.com/42851186/220419220-8108f225-13cb-4968-acff-fe4543d5a324.png\">\r\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008465 / 0.011353 (-0.002888) | 0.004622 / 0.011008 (-0.006387) | 0.100365 / 0.038508 (0.061857) | 0.029453 / 0.023109 (0.006344) | 0.358041 / 0.275898 (0.082143) | 0.424777 / 0.323480 (0.101298) | 0.006930 / 0.007986 (-0.001055) | 0.004756 / 0.004328 (0.000428) | 0.077128 / 0.004250 (0.072878) | 0.036338 / 0.037052 (-0.000715) | 0.367613 / 0.258489 (0.109124) | 0.397798 / 0.293841 (0.103957) | 0.033500 / 0.128546 (-0.095047) | 0.011427 / 0.075646 (-0.064219) | 0.321617 / 
0.419271 (-0.097654) | 0.040937 / 0.043533 (-0.002596) | 0.345358 / 0.255139 (0.090219) | 0.366932 / 0.283200 (0.083733) | 0.086506 / 0.141683 (-0.055177) | 1.482434 / 1.452155 (0.030280) | 1.522773 / 1.492716 (0.030057) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.188815 / 0.018006 (0.170809) | 0.404689 / 0.000490 (0.404200) | 0.000390 / 0.000200 (0.000190) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023165 / 0.037411 (-0.014246) | 0.095934 / 0.014526 (0.081408) | 0.105788 / 0.176557 (-0.070769) | 0.169908 / 0.737135 (-0.567227) | 0.107871 / 0.296338 (-0.188467) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457543 / 0.215209 (0.242334) | 4.563209 / 2.077655 (2.485554) | 2.172272 / 1.504120 (0.668152) | 1.965064 / 1.541195 (0.423870) | 2.020811 / 1.468490 (0.552321) | 0.705138 / 4.584777 (-3.879638) | 3.353430 / 3.745712 (-0.392283) | 1.861970 / 5.269862 (-3.407892) | 1.159201 / 4.565676 (-3.406476) | 0.083187 / 0.424275 (-0.341088) | 0.012750 / 0.007607 (0.005143) | 0.566377 / 0.226044 (0.340333) | 5.662645 / 2.268929 (3.393717) | 2.609565 / 55.444624 (-52.835059) | 2.244519 / 6.876477 (-4.631957) | 2.284111 / 2.142072 (0.142038) | 0.821974 / 4.805227 (-3.983253) | 0.151080 / 6.500664 (-6.349584) | 0.065373 / 0.075469 (-0.010096) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.230960 / 1.841788 (-0.610828) | 13.930408 / 8.074308 (5.856100) | 13.989082 / 10.191392 (3.797690) | 0.151961 / 0.680424 (-0.528462) | 0.028770 / 0.534201 (-0.505431) | 0.392269 / 0.579283 (-0.187015) | 0.400490 / 0.434364 (-0.033874) | 0.459770 / 0.540337 (-0.080568) | 0.534174 / 1.386936 (-0.852762) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | 
read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006740 / 0.011353 (-0.004613) | 0.004496 / 0.011008 (-0.006512) | 0.076886 / 0.038508 (0.038377) | 0.027593 / 0.023109 (0.004484) | 0.339570 / 0.275898 (0.063672) | 0.379915 / 0.323480 (0.056435) | 0.004999 / 0.007986 (-0.002987) | 0.004253 / 0.004328 (-0.000076) | 0.074973 / 0.004250 (0.070722) | 0.037321 / 0.037052 (0.000269) | 0.344720 / 0.258489 (0.086230) | 0.398919 / 0.293841 (0.105078) | 0.032146 / 0.128546 (-0.096400) | 0.011694 / 0.075646 (-0.063952) | 0.085134 / 0.419271 (-0.334138) | 0.042328 / 0.043533 (-0.001205) | 0.339384 / 0.255139 (0.084245) | 0.368031 / 0.283200 (0.084831) | 0.092088 / 0.141683 (-0.049595) | 1.492313 / 1.452155 (0.040158) | 1.538406 / 1.492716 (0.045690) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265619 / 0.018006 (0.247613) | 0.415478 / 0.000490 (0.414988) | 0.030221 / 0.000200 (0.030021) | 0.000277 / 0.000054 (0.000223) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024489 / 0.037411 (-0.012922) | 0.099920 / 0.014526 (0.085395) | 0.108301 / 0.176557 (-0.068256) | 0.179525 / 0.737135 (-0.557610) | 0.111492 / 0.296338 (-0.184847) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440759 / 0.215209 (0.225550) | 4.382754 / 2.077655 (2.305100) | 2.088686 / 1.504120 (0.584566) | 1.890557 / 1.541195 (0.349363) | 1.947461 / 1.468490 (0.478971) | 0.701751 / 4.584777 (-3.883025) | 3.368896 / 3.745712 (-0.376816) | 1.867238 / 5.269862 (-3.402624) | 1.166787 / 4.565676 (-3.398890) | 0.083427 / 0.424275 (-0.340848) | 0.012406 / 0.007607 (0.004799) | 0.539467 / 0.226044 (0.313423) | 5.376083 / 2.268929 (3.107154) | 2.516566 / 55.444624 (-52.928058) | 2.177991 / 6.876477 (-4.698486) | 2.207438 / 2.142072 (0.065366) | 0.803316 / 4.805227 (-4.001911) | 0.150900 / 6.500664 (-6.349764) | 0.066328 / 0.075469 (-0.009141) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op 
batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.295308 / 1.841788 (-0.546480) | 14.081343 / 8.074308 (6.007035) | 13.516853 / 10.191392 (3.325461) | 0.160530 / 0.680424 (-0.519894) | 0.016516 / 0.534201 (-0.517685) | 0.380160 / 0.579283 (-0.199123) | 0.443484 / 0.434364 (0.009120) | 0.466645 / 0.540337 (-0.073692) | 0.555339 / 1.386936 (-0.831597) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e8a12313cd728e37b4dc4ce67864621ffc79fedb \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011321 / 0.011353 (-0.000031) | 0.006365 / 0.011008 (-0.004643) | 0.125613 / 0.038508 (0.087105) | 0.035327 / 0.023109 (0.012218) | 0.391998 / 0.275898 (0.116100) | 0.475402 / 0.323480 (0.151923) | 0.009579 / 0.007986 (0.001593) | 0.005621 / 0.004328 (0.001293) | 0.106097 / 0.004250 (0.101846) | 0.042774 / 0.037052 (0.005722) | 0.420850 / 0.258489 (0.162361) | 0.454501 / 0.293841 (0.160660) | 0.056885 / 0.128546 (-0.071661) | 0.021718 / 0.075646 (-0.053928) | 0.419422 / 0.419271 (0.000150) | 0.056690 / 0.043533 (0.013157) | 0.405375 / 0.255139 (0.150236) | 0.444404 / 0.283200 (0.161204) | 0.136912 / 0.141683 (-0.004771) | 1.846363 / 1.452155 (0.394208) | 1.747433 / 1.492716 (0.254717) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282260 / 0.018006 (0.264254) | 0.615813 / 0.000490 (0.615323) | 0.000515 / 0.000200 (0.000315) | 0.000106 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029913 / 0.037411 (-0.007499) | 0.135568 / 0.014526 (0.121042) | 0.134476 / 0.176557 (-0.042081) | 0.206974 / 0.737135 (-0.530161) | 0.136976 / 0.296338 (-0.159362) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 
| read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.605241 / 0.215209 (0.390032) | 6.125097 / 2.077655 (4.047442) | 2.390102 / 1.504120 (0.885982) | 2.082196 / 1.541195 (0.541001) | 2.226527 / 1.468490 (0.758037) | 1.244807 / 4.584777 (-3.339970) | 5.476437 / 3.745712 (1.730725) | 3.014970 / 5.269862 (-2.254891) | 1.963428 / 4.565676 (-2.602249) | 0.137813 / 0.424275 (-0.286462) | 0.013794 / 0.007607 (0.006187) | 0.766149 / 0.226044 (0.540104) | 7.566103 / 2.268929 (5.297175) | 3.048958 / 55.444624 (-52.395666) | 2.394819 / 6.876477 (-4.481658) | 2.416021 / 2.142072 (0.273949) | 1.369896 / 4.805227 (-3.435331) | 0.245159 / 6.500664 (-6.255506) | 0.076848 / 0.075469 (0.001379) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.530448 / 1.841788 (-0.311340) | 18.580227 / 8.074308 (10.505919) | 20.108470 / 10.191392 (9.917078) | 0.227124 / 0.680424 (-0.453300) | 0.052050 / 0.534201 (-0.482151) | 0.604565 / 0.579283 (0.025282) | 0.686475 / 0.434364 (0.252111) | 0.672298 / 0.540337 (0.131960) | 0.770552 / 1.386936 (-0.616384) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010043 / 0.011353 (-0.001310) | 0.006445 / 0.011008 (-0.004563) | 0.099486 / 0.038508 (0.060978) | 0.037720 / 0.023109 (0.014610) | 0.425571 / 0.275898 (0.149673) | 0.467031 / 0.323480 (0.143551) | 0.007394 / 0.007986 (-0.000591) | 0.005008 / 0.004328 (0.000679) | 0.096176 / 0.004250 (0.091926) | 0.053694 / 0.037052 (0.016641) | 0.418653 / 0.258489 (0.160164) | 0.492441 / 0.293841 (0.198600) | 0.054593 / 0.128546 (-0.073953) | 0.023410 / 0.075646 (-0.052236) | 0.113825 / 0.419271 (-0.305446) | 0.066000 / 0.043533 (0.022467) | 0.418127 / 0.255139 (0.162988) | 0.457416 / 0.283200 
(0.174217) | 0.119911 / 0.141683 (-0.021771) | 1.733805 / 1.452155 (0.281651) | 1.961252 / 1.492716 (0.468536) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296126 / 0.018006 (0.278120) | 0.602169 / 0.000490 (0.601680) | 0.000454 / 0.000200 (0.000254) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032970 / 0.037411 (-0.004442) | 0.124071 / 0.014526 (0.109545) | 0.143800 / 0.176557 (-0.032757) | 0.227168 / 0.737135 (-0.509967) | 0.142817 / 0.296338 (-0.153521) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.626239 / 0.215209 (0.411030) | 6.438629 / 2.077655 (4.360974) | 2.760747 / 1.504120 (1.256627) | 2.355419 / 1.541195 (0.814224) | 2.384924 / 1.468490 (0.916434) | 1.210543 / 4.584777 (-3.374234) | 5.440389 / 3.745712 (1.694677) | 5.047939 / 5.269862 (-0.221922) | 2.759618 / 4.565676 (-1.806059) | 0.132757 / 0.424275 (-0.291518) | 0.013163 / 0.007607 (0.005556) | 0.745721 / 0.226044 (0.519677) | 7.660327 / 2.268929 (5.391398) | 3.559385 / 55.444624 (-51.885240) | 2.764344 / 6.876477 (-4.112133) | 2.975274 / 2.142072 (0.833202) | 1.460346 / 4.805227 (-3.344881) | 0.257222 / 6.500664 (-6.243443) | 0.081106 / 0.075469 (0.005637) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.698245 / 1.841788 (-0.143543) | 18.754129 / 8.074308 (10.679821) | 19.065596 / 10.191392 (8.874204) | 0.228237 / 0.680424 (-0.452187) | 0.030688 / 0.534201 (-0.503513) | 0.532561 / 0.579283 (-0.046722) | 0.601133 / 0.434364 (0.166769) | 0.620218 / 0.540337 (0.079881) | 0.751392 / 1.386936 (-0.635545) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5f293ff23853fea210388bbef11d1621e54f22e7 \"CML watermark\")\n", "(the BadZipFile error is unrelated to the changes)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | 
read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009368 / 0.011353 (-0.001984) | 0.005143 / 0.011008 (-0.005865) | 0.100675 / 0.038508 (0.062167) | 0.036033 / 0.023109 (0.012924) | 0.297391 / 0.275898 (0.021493) | 0.362230 / 0.323480 (0.038750) | 0.008041 / 0.007986 (0.000055) | 0.004041 / 0.004328 (-0.000287) | 0.075395 / 0.004250 (0.071144) | 0.043020 / 0.037052 (0.005968) | 0.308936 / 0.258489 (0.050447) | 0.343723 / 0.293841 (0.049883) | 0.038416 / 0.128546 (-0.090131) | 0.012086 / 0.075646 (-0.063560) | 0.335102 / 0.419271 (-0.084170) | 0.047718 / 0.043533 (0.004185) | 0.297856 / 0.255139 (0.042717) | 0.317326 / 0.283200 (0.034126) | 0.101462 / 0.141683 (-0.040221) | 1.459965 / 1.452155 (0.007810) | 1.491194 / 1.492716 (-0.001522) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211311 / 0.018006 (0.193305) | 0.443663 / 0.000490 (0.443174) | 0.003654 / 0.000200 (0.003454) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027316 / 0.037411 (-0.010095) | 0.109929 / 0.014526 (0.095403) | 0.117170 / 0.176557 (-0.059387) | 0.182494 / 0.737135 (-0.554641) | 0.124693 / 0.296338 (-0.171646) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395904 / 0.215209 (0.180695) | 3.950906 / 2.077655 (1.873251) | 1.768807 / 1.504120 (0.264687) | 1.578979 / 1.541195 (0.037784) | 1.689976 / 1.468490 (0.221486) | 0.696458 / 4.584777 (-3.888319) | 3.750491 / 3.745712 (0.004778) | 2.117863 / 5.269862 (-3.151998) | 1.340403 / 4.565676 (-3.225274) | 0.085752 / 0.424275 (-0.338523) | 0.012206 / 0.007607 (0.004599) | 0.505561 / 0.226044 (0.279517) | 5.048721 / 2.268929 (2.779792) | 2.256623 / 55.444624 (-53.188001) | 1.905912 / 6.876477 (-4.970565) | 1.988400 / 2.142072 (-0.153672) | 0.843066 / 4.805227 (-3.962161) | 0.165717 / 6.500664 (-6.334947) | 0.062910 / 0.075469 (-0.012559) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map 
fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.225668 / 1.841788 (-0.616120) | 14.660082 / 8.074308 (6.585773) | 14.295369 / 10.191392 (4.103977) | 0.171075 / 0.680424 (-0.509348) | 0.029279 / 0.534201 (-0.504922) | 0.441559 / 0.579283 (-0.137724) | 0.445382 / 0.434364 (0.011018) | 0.525350 / 0.540337 (-0.014987) | 0.608493 / 1.386936 (-0.778443) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007288 / 0.011353 (-0.004065) | 0.004999 / 0.011008 (-0.006009) | 0.074656 / 0.038508 (0.036147) | 0.033897 / 0.023109 (0.010788) | 0.345826 / 0.275898 (0.069928) | 0.390891 / 0.323480 (0.067411) | 0.005811 / 0.007986 (-0.002174) | 0.003976 / 0.004328 (-0.000353) | 0.073546 / 0.004250 (0.069295) | 0.047245 / 0.037052 (0.010193) | 0.351851 / 0.258489 (0.093362) | 0.403217 / 0.293841 (0.109376) | 0.036771 / 0.128546 (-0.091775) | 0.012240 / 0.075646 (-0.063407) | 0.086720 / 0.419271 (-0.332552) | 0.049440 / 0.043533 (0.005907) | 0.339520 / 0.255139 (0.084381) | 0.372160 / 0.283200 (0.088961) | 0.100813 / 0.141683 (-0.040870) | 1.436436 / 1.452155 (-0.015718) | 1.514723 / 1.492716 (0.022007) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231394 / 0.018006 (0.213388) | 0.440825 / 0.000490 (0.440336) | 0.000994 / 0.000200 (0.000794) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028999 / 0.037411 (-0.008412) | 0.111391 / 0.014526 (0.096865) | 0.123058 / 0.176557 (-0.053498) | 0.194348 / 0.737135 (-0.542787) | 0.125730 / 0.296338 (-0.170609) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 
| shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431950 / 0.215209 (0.216741) | 4.298724 / 2.077655 (2.221069) | 2.064116 / 1.504120 (0.559996) | 1.892062 / 1.541195 (0.350867) | 1.985441 / 1.468490 (0.516951) | 0.707028 / 4.584777 (-3.877749) | 3.812976 / 3.745712 (0.067264) | 3.078704 / 5.269862 (-2.191158) | 1.832737 / 4.565676 (-2.732939) | 0.086182 / 0.424275 (-0.338093) | 0.012289 / 0.007607 (0.004681) | 0.530265 / 0.226044 (0.304220) | 5.283122 / 2.268929 (3.014194) | 2.558491 / 55.444624 (-52.886134) | 2.237046 / 6.876477 (-4.639431) | 2.354548 / 2.142072 (0.212475) | 0.848947 / 4.805227 (-3.956280) | 0.167907 / 6.500664 (-6.332757) | 0.064998 / 0.075469 (-0.010471) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.248287 / 1.841788 (-0.593500) | 14.976327 / 8.074308 (6.902019) | 13.596143 / 10.191392 (3.404751) | 0.145730 / 0.680424 (-0.534694) | 0.017340 / 0.534201 (-0.516861) | 0.430111 / 0.579283 (-0.149172) | 0.433462 / 0.434364 (-0.000902) | 0.540365 / 0.540337 (0.000028) | 0.650586 / 1.386936 (-0.736350) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1875c8a4c928aeaccc826f13ffdbf7543112024d \"CML watermark\")\n" ]
null
[]
Ensure last tqdm update in `map`
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5560/timeline
This PR modifies `map` to:
* ensure the tqdm bar gets the last progress update
* avoid throwing a chained exception in single-process mode when a map function fails
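For illustration, a minimal sketch of the two behaviors described above, assuming a hypothetical `run_map` helper standing in for the single-process inner loop of `map` (this is not the PR's actual code):

```python
from tqdm.auto import tqdm

def run_map(examples, function, batch_size=1000):
    # Hypothetical stand-in for the single-process inner loop of `map`.
    pbar = tqdm(total=len(examples), desc="Map")
    for start in range(0, len(examples), batch_size):
        batch = examples[start : start + batch_size]
        try:
            function(batch)
        except Exception as err:
            pbar.close()
            # `raise ... from None` suppresses the implicit exception chain,
            # so the traceback shows only the error from the user's function.
            raise err from None
        # Update after every batch, including the final (possibly partial)
        # one, so the bar is guaranteed to reach `total`.
        pbar.update(len(batch))
    pbar.close()
```

With e.g. 2500 examples and `batch_size=1000`, the final `update(500)` is what brings the bar to 100% instead of stopping at the last full batch boundary.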
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5560.diff", "html_url": "https://github.com/huggingface/datasets/pull/5560", "merged_at": "2023-02-21T18:19:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/5560.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5560" }
1,593,809,978
https://api.github.com/repos/huggingface/datasets/issues/5560/comments
PR_kwDODunzps5Kcml6
null
5,560
https://api.github.com/repos/huggingface/datasets/issues/5560/events
true
closed
2023-02-21T15:26:26Z
null
https://api.github.com/repos/huggingface/datasets/issues/5559
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/5559/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5559/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/5559
[]
false
2023-02-21T17:21:37Z
2023-02-21T17:14:29Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011596 / 0.011353 (0.000244) | 0.005845 / 0.011008 (-0.005164) | 0.121302 / 0.038508 (0.082794) | 0.034306 / 0.023109 (0.011196) | 0.355973 / 0.275898 (0.080075) | 0.419903 / 0.323480 (0.096423) | 0.009049 / 0.007986 (0.001064) | 0.004245 / 0.004328 (-0.000084) | 0.092004 / 0.004250 (0.087753) | 0.042782 / 0.037052 (0.005730) | 0.355805 / 0.258489 (0.097316) | 0.407298 / 0.293841 (0.113457) | 0.052481 / 0.128546 (-0.076066) | 0.020880 / 0.075646 (-0.054766) | 0.379948 / 0.419271 (-0.039324) | 0.061337 / 0.043533 (0.017804) | 0.359829 / 0.255139 (0.104690) | 0.379244 / 0.283200 (0.096044) | 0.116692 / 0.141683 (-0.024990) | 1.733717 / 1.452155 (0.281562) | 1.700246 / 1.492716 (0.207530) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.014622 / 0.018006 (-0.003384) | 0.518777 / 0.000490 (0.518288) | 0.004086 / 0.000200 (0.003886) | 0.000136 / 0.000054 (0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031208 / 0.037411 (-0.006204) | 0.143003 / 0.014526 (0.128477) | 0.132625 / 0.176557 (-0.043932) | 0.187681 / 0.737135 (-0.549455) | 0.136576 / 0.296338 (-0.159763) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.626516 / 0.215209 (0.411307) | 6.282558 / 2.077655 (4.204904) | 2.702686 
/ 1.504120 (1.198566) | 2.287445 / 1.541195 (0.746250) | 2.333014 / 1.468490 (0.864524) | 1.227815 / 4.584777 (-3.356962) | 5.545640 / 3.745712 (1.799928) | 4.953226 / 5.269862 (-0.316635) | 2.774549 / 4.565676 (-1.791128) | 0.145257 / 0.424275 (-0.279018) | 0.014887 / 0.007607 (0.007280) | 0.812226 / 0.226044 (0.586182) | 8.002727 / 2.268929 (5.733798) | 3.314852 / 55.444624 (-52.129773) | 2.602348 / 6.876477 (-4.274128) | 2.593511 / 2.142072 (0.451438) | 1.440498 / 4.805227 (-3.364730) | 0.254849 / 6.500664 (-6.245815) | 0.077020 / 0.075469 (0.001551) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.487633 / 1.841788 (-0.354155) | 17.385773 / 8.074308 (9.311465) | 21.775511 / 10.191392 (11.584118) | 0.273514 / 0.680424 (-0.406910) | 0.059644 / 0.534201 (-0.474557) | 0.578710 / 0.579283 (-0.000573) | 0.630221 / 0.434364 (0.195857) | 0.632089 / 0.540337 (0.091752) | 0.762367 / 1.386936 (-0.624569) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009513 / 0.011353 (-0.001840) | 0.006009 / 0.011008 (-0.004999) | 0.087589 / 0.038508 (0.049081) | 0.037487 / 0.023109 (0.014378) | 0.397660 / 0.275898 (0.121762) | 0.474438 / 0.323480 (0.150958) | 0.007373 / 0.007986 (-0.000613) | 0.005839 / 0.004328 (0.001511) | 0.092759 / 0.004250 (0.088509) | 0.052128 / 0.037052 (0.015075) | 0.382378 / 0.258489 (0.123889) | 0.458244 / 0.293841 (0.164403) | 0.057232 / 0.128546 (-0.071314) | 0.020662 / 0.075646 (-0.054984) | 0.110314 / 0.419271 (-0.308957) | 0.063014 / 0.043533 (0.019481) | 0.386020 / 0.255139 (0.130881) | 0.476169 / 0.283200 (0.192970) | 0.118081 / 0.141683 (-0.023602) | 1.724158 / 1.452155 (0.272003) | 1.862257 / 1.492716 (0.369541) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224288 / 0.018006 (0.206281) | 0.523631 / 0.000490 (0.523141) | 0.004420 / 0.000200 (0.004220) | 0.000127 / 0.000054 (0.000073) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032359 / 0.037411 (-0.005052) | 0.140045 / 0.014526 (0.125519) | 0.138164 / 0.176557 (-0.038393) | 0.181068 / 0.737135 (-0.556067) | 0.143965 / 0.296338 (-0.152374) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.573809 / 0.215209 (0.358600) | 6.083247 / 2.077655 (4.005592) | 2.671258 / 1.504120 (1.167138) | 2.277062 / 1.541195 (0.735868) | 2.299544 / 1.468490 (0.831054) | 1.267351 / 4.584777 (-3.317425) | 5.494461 / 3.745712 (1.748749) | 5.083169 / 5.269862 (-0.186692) | 2.531738 / 4.565676 (-2.033938) | 0.151834 / 0.424275 (-0.272441) | 0.014123 / 0.007607 (0.006516) | 0.800222 / 0.226044 (0.574177) | 7.637624 / 2.268929 (5.368695) | 3.325574 / 55.444624 (-52.119050) | 2.563008 / 6.876477 (-4.313468) | 2.596259 / 2.142072 (0.454187) | 1.459206 / 4.805227 (-3.346021) | 0.237771 / 6.500664 (-6.262893) | 0.071854 / 0.075469 (-0.003615) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.605504 / 1.841788 (-0.236284) | 17.593594 / 8.074308 (9.519285) | 20.618005 / 10.191392 (10.426612) | 0.270938 / 0.680424 (-0.409486) | 0.026205 / 0.534201 (-0.507996) | 0.562223 / 0.579283 (-0.017060) | 0.617571 / 0.434364 (0.183207) | 0.616398 / 0.540337 (0.076060) | 0.715293 / 1.386936 (-0.671643) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#673dc0dd7d063b2313f7adcc9e0be53d4718f5cf \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.013213 / 0.011353 (0.001860) | 0.006253 / 0.011008 (-0.004756) | 0.125175 / 0.038508 (0.086667) | 0.037491 / 0.023109 (0.014382) | 0.401379 / 0.275898 (0.125481) | 0.395826 / 0.323480 (0.072346) | 0.009224 / 0.007986 (0.001238) | 0.005163 / 0.004328 (0.000835) | 0.096490 / 0.004250 (0.092239) | 0.042473 / 0.037052 (0.005420) | 0.383713 / 0.258489 (0.125224) | 0.429234 / 0.293841 (0.135393) | 0.063261 / 0.128546 (-0.065285) | 0.020114 / 0.075646 (-0.055532) | 0.401687 / 0.419271 (-0.017585) | 0.062831 / 0.043533 (0.019298) | 0.405211 / 0.255139 (0.150072) | 0.380810 / 0.283200 (0.097610) | 0.109166 / 0.141683 (-0.032517) | 1.869580 / 1.452155 (0.417426) | 1.949947 / 1.492716 (0.457231) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207481 / 0.018006 (0.189475) | 0.504161 / 0.000490 (0.503671) | 0.008429 / 0.000200 (0.008229) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029182 / 0.037411 (-0.008229) | 0.126284 / 0.014526 (0.111758) | 0.140381 / 0.176557 (-0.036175) | 0.175878 / 0.737135 (-0.561257) | 0.138824 / 0.296338 (-0.157514) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.643658 / 0.215209 (0.428449) | 6.396224 / 2.077655 (4.318569) | 2.600702 / 1.504120 (1.096582) | 2.176721 / 1.541195 (0.635526) | 2.216116 / 1.468490 (0.747626) | 1.235069 / 4.584777 (-3.349708) | 5.457228 / 3.745712 (1.711516) | 3.060455 / 5.269862 (-2.209407) | 2.028123 / 4.565676 (-2.537554) | 0.141617 / 0.424275 (-0.282658) | 0.016596 / 0.007607 (0.008989) | 0.804915 / 0.226044 (0.578870) | 7.968821 / 2.268929 (5.699893) | 3.340650 / 55.444624 (-52.103974) | 2.533620 / 6.876477 (-4.342856) | 2.457388 / 2.142072 (0.315315) | 1.486527 / 4.805227 (-3.318700) | 0.253767 / 6.500664 (-6.246897) | 0.082192 / 0.075469 (0.006723) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.470896 / 1.841788 (-0.370892) | 17.566637 / 8.074308 (9.492329) | 23.144148 / 10.191392 (12.952756) | 0.235510 / 0.680424 (-0.444913) | 0.046051 / 0.534201 (-0.488150) | 0.559954 / 0.579283 (-0.019329) | 0.645390 / 0.434364 (0.211026) | 0.690983 / 0.540337 (0.150646) 
| 0.776252 / 1.386936 (-0.610684) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010564 / 0.011353 (-0.000789) | 0.006150 / 0.011008 (-0.004858) | 0.100030 / 0.038508 (0.061522) | 0.036873 / 0.023109 (0.013764) | 0.448508 / 0.275898 (0.172610) | 0.492593 / 0.323480 (0.169113) | 0.007337 / 0.007986 (-0.000648) | 0.004804 / 0.004328 (0.000475) | 0.099218 / 0.004250 (0.094967) | 0.055513 / 0.037052 (0.018461) | 0.462147 / 0.258489 (0.203658) | 0.510229 / 0.293841 (0.216388) | 0.055307 / 0.128546 (-0.073239) | 0.021989 / 0.075646 (-0.053657) | 0.118487 / 0.419271 (-0.300785) | 0.071752 / 0.043533 (0.028219) | 0.456572 / 0.255139 (0.201433) | 0.475160 / 0.283200 (0.191961) | 0.117472 / 0.141683 (-0.024211) | 1.813212 / 1.452155 (0.361058) | 1.908413 / 1.492716 (0.415696) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.352929 / 0.018006 (0.334923) | 0.543874 / 0.000490 (0.543384) | 0.078529 / 0.000200 (0.078329) | 0.000669 / 0.000054 (0.000614) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033157 / 0.037411 (-0.004254) | 0.162503 / 0.014526 (0.147977) | 0.146424 / 0.176557 (-0.030132) | 0.201781 / 0.737135 (-0.535354) | 0.168110 / 0.296338 (-0.128229) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.644205 / 0.215209 (0.428996) | 6.327519 / 2.077655 (4.249865) | 2.728102 / 1.504120 (1.223982) | 2.306426 / 1.541195 (0.765232) | 2.373125 / 1.468490 
(0.904635) | 1.350649 / 4.584777 (-3.234128) | 5.652714 / 3.745712 (1.907002) | 3.175335 / 5.269862 (-2.094526) | 2.222902 / 4.565676 (-2.342775) | 0.160609 / 0.424275 (-0.263666) | 0.015596 / 0.007607 (0.007989) | 0.790357 / 0.226044 (0.564313) | 8.289758 / 2.268929 (6.020830) | 3.479215 / 55.444624 (-51.965410) | 2.860063 / 6.876477 (-4.016413) | 2.806720 / 2.142072 (0.664648) | 1.639046 / 4.805227 (-3.166181) | 0.267017 / 6.500664 (-6.233648) | 0.083990 / 0.075469 (0.008521) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.632262 / 1.841788 (-0.209525) | 17.794357 / 8.074308 (9.720049) | 21.203547 / 10.191392 (11.012155) | 0.250899 / 0.680424 (-0.429525) | 0.024502 / 0.534201 (-0.509699) | 0.519960 / 0.579283 (-0.059323) | 0.615412 / 0.434364 (0.181048) | 0.641914 / 0.540337 (0.101577) | 0.772355 / 1.386936 (-0.614581) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#32cc4d10243b0feb69650f007d010971fd861dc1 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009501 / 0.011353 (-0.001852) | 0.005262 / 0.011008 (-0.005747) | 0.100809 / 0.038508 (0.062301) | 0.036601 / 0.023109 (0.013492) | 0.299612 / 0.275898 (0.023714) | 0.366970 / 0.323480 (0.043490) | 0.007879 / 0.007986 (-0.000107) | 0.004216 / 0.004328 (-0.000113) | 0.076749 / 0.004250 (0.072498) | 0.042081 / 0.037052 (0.005029) | 0.299572 / 0.258489 (0.041083) | 0.339687 / 0.293841 (0.045846) | 0.038706 / 0.128546 (-0.089840) | 0.012295 / 0.075646 (-0.063352) | 0.336172 / 0.419271 (-0.083100) | 0.047524 / 0.043533 (0.003992) | 0.296800 / 0.255139 (0.041661) | 0.331592 / 0.283200 (0.048393) | 0.101191 / 0.141683 (-0.040491) | 1.486200 / 1.452155 (0.034046) | 1.509955 / 1.492716 (0.017239) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204735 / 0.018006 (0.186728) | 0.446381 / 0.000490 (0.445891) | 0.005177 / 0.000200 (0.004977) 
| 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028655 / 0.037411 (-0.008756) | 0.116559 / 0.014526 (0.102033) | 0.122551 / 0.176557 (-0.054006) | 0.189764 / 0.737135 (-0.547372) | 0.126446 / 0.296338 (-0.169892) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400104 / 0.215209 (0.184895) | 4.001524 / 2.077655 (1.923869) | 1.779267 / 1.504120 (0.275147) | 1.580168 / 1.541195 (0.038974) | 1.684100 / 1.468490 (0.215610) | 0.703354 / 4.584777 (-3.881423) | 3.828131 / 3.745712 (0.082419) | 2.098500 / 5.269862 (-3.171362) | 1.331161 / 4.565676 (-3.234516) | 0.085417 / 0.424275 (-0.338858) | 0.012380 / 0.007607 (0.004772) | 0.504189 / 0.226044 (0.278144) | 5.094672 / 2.268929 (2.825743) | 2.264352 / 55.444624 (-53.180272) | 1.909573 / 6.876477 (-4.966904) | 2.005425 / 2.142072 (-0.136648) | 0.840893 / 4.805227 (-3.964335) | 0.164689 / 6.500664 (-6.335975) | 0.062754 / 0.075469 (-0.012715) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.250001 / 1.841788 (-0.591786) | 14.993313 / 8.074308 (6.919005) | 14.880601 / 10.191392 (4.689209) | 0.175141 / 0.680424 (-0.505283) | 0.028952 / 0.534201 (-0.505249) | 0.447073 / 0.579283 (-0.132210) | 0.445993 / 0.434364 (0.011629) | 0.525527 / 0.540337 (-0.014811) | 0.613156 / 1.386936 (-0.773780) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007796 / 0.011353 (-0.003557) | 0.005399 / 0.011008 (-0.005609) | 0.078240 / 0.038508 (0.039732) | 0.035303 / 0.023109 (0.012193) | 0.364603 / 0.275898 (0.088705) | 0.400794 / 0.323480 (0.077314) | 0.006152 / 0.007986 (-0.001834) | 0.004324 / 0.004328 (-0.000004) | 0.074949 / 0.004250 (0.070698) | 0.051939 / 0.037052 (0.014887) | 0.377079 / 0.258489 (0.118590) | 0.413630 / 0.293841 (0.119789) | 0.037567 / 0.128546 (-0.090979) | 0.012793 / 0.075646 (-0.062854) | 0.089013 / 0.419271 (-0.330258) | 0.050748 / 0.043533 (0.007215) | 0.370100 / 0.255139 (0.114961) | 0.384838 / 0.283200 (0.101638) | 0.105840 / 0.141683 (-0.035843) | 1.476490 / 1.452155 (0.024335) | 1.544688 / 1.492716 (0.051972) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220987 / 0.018006 (0.202981) | 0.443801 / 0.000490 (0.443311) | 0.005747 / 0.000200 (0.005547) | 0.000106 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030187 / 0.037411 (-0.007225) | 0.118230 / 0.014526 (0.103704) | 0.126810 / 0.176557 (-0.049746) | 0.200482 / 0.737135 (-0.536654) | 0.130831 / 0.296338 (-0.165507) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423231 / 0.215209 (0.208022) | 4.196576 / 2.077655 (2.118921) | 1.992919 / 1.504120 (0.488799) | 1.809172 / 1.541195 (0.267977) | 1.932706 / 1.468490 (0.464216) | 0.727319 / 4.584777 (-3.857458) | 3.833295 / 3.745712 (0.087583) | 3.527005 / 5.269862 (-1.742857) | 1.937348 / 4.565676 (-2.628329) | 0.088713 / 0.424275 (-0.335562) | 0.012711 / 0.007607 (0.005104) | 0.531385 / 0.226044 (0.305341) | 5.308051 / 2.268929 (3.039123) | 2.493494 / 55.444624 (-52.951131) | 2.168359 / 6.876477 (-4.708118) | 2.258160 / 2.142072 (0.116088) | 0.865629 / 4.805227 (-3.939598) | 0.171281 / 6.500664 (-6.329383) | 0.065746 / 0.075469 (-0.009723) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.290378 / 1.841788 (-0.551409) | 15.900804 / 8.074308 (7.826496) | 14.809614 / 10.191392 (4.618222) | 0.177287 / 0.680424 (-0.503137) | 0.017875 / 0.534201 (-0.516326) | 0.429646 / 0.579283 (-0.149637) | 0.451646 / 0.434364 (0.017282) | 0.545669 / 0.540337 (0.005332) | 0.633215 / 1.386936 (-0.753721) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2c67b5f4bc9cea088e977a135644d38da8c144ff \"CML watermark\")\n" ]
null
[]
Fix map suffix_template
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5559/timeline
#5455 introduced a small bug that led `map` to ignore the `suffix_template` argument and not add suffixes to cached files in multiprocessing. I fixed this and also improved a few things:
- regarding logging: "Loading cached processed dataset" is now logged only once even in multiprocessing (it used to be logged `num_proc` times)
- regarding `new_fingerprint`: I made sure that the returned dataset satisfies `ds._fingerprint == new_fingerprint` if `new_fingerprint` is passed to `map`
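As a usage sketch of the arguments involved (the cache path and fingerprint value are illustrative, and the `suffix_template` shown is the documented default, which formats `rank` and `num_proc` into each worker's cache file name):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"]})

# With num_proc > 1, each worker writes its own cache shard; the suffix
# rendered from `suffix_template` is appended to `cache_file_name`.
mapped = ds.map(
    lambda batch: {"text": [t.upper() for t in batch["text"]]},
    batched=True,
    num_proc=2,
    cache_file_name="/tmp/mapped.arrow",              # illustrative path
    suffix_template="_{rank:05d}_of_{num_proc:05d}",  # the default template
    new_fingerprint="my-custom-fingerprint",          # illustrative value
)
# Per this PR, the returned dataset carries the requested fingerprint:
assert mapped._fingerprint == "my-custom-fingerprint"
```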
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5559.diff", "html_url": "https://github.com/huggingface/datasets/pull/5559", "merged_at": "2023-02-21T17:14:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/5559.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5559" }
1,593,676,489
https://api.github.com/repos/huggingface/datasets/issues/5559/comments
PR_kwDODunzps5KcKSb
null
5,559
https://api.github.com/repos/huggingface/datasets/issues/5559/events
true
closed
2023-02-21T15:13:36Z
null
https://api.github.com/repos/huggingface/datasets/issues/5558
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5558/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5558/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
https://github.com/huggingface/datasets/pull/5558
[]
false
2023-03-01T13:46:04Z
2023-02-23T13:50:27Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.014525 / 0.011353 (0.003172) | 0.006871 / 0.011008 (-0.004137) | 0.135577 / 0.038508 (0.097069) | 0.039620 / 0.023109 (0.016511) | 0.499829 / 0.275898 (0.223931) | 0.571000 / 0.323480 (0.247520) | 0.009726 / 0.007986 (0.001740) | 0.005654 / 0.004328 (0.001325) | 0.104732 / 0.004250 (0.100482) | 0.046849 / 0.037052 (0.009796) | 0.486667 / 0.258489 (0.228178) | 0.543611 / 0.293841 (0.249770) | 0.056414 / 0.128546 (-0.072133) | 0.019974 / 0.075646 (-0.055672) | 0.484878 / 0.419271 (0.065606) | 0.059244 / 0.043533 (0.015711) | 0.490046 / 0.255139 (0.234907) | 0.517427 / 0.283200 (0.234227) | 0.114692 / 0.141683 (-0.026991) | 1.935935 / 1.452155 (0.483780) | 1.990253 / 1.492716 (0.497537) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271008 / 0.018006 (0.253002) | 0.610964 / 0.000490 (0.610474) | 0.013423 / 0.000200 (0.013223) | 0.000523 / 0.000054 (0.000468) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031940 / 0.037411 (-0.005472) | 0.130755 / 0.014526 (0.116229) | 0.146616 / 0.176557 (-0.029941) | 0.239386 / 0.737135 (-0.497749) | 0.146612 / 0.296338 (-0.149726) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.675383 / 0.215209 (0.460174) | 6.656828 / 2.077655 (4.579174) | 2.741231 / 
1.504120 (1.237111) | 2.232921 / 1.541195 (0.691726) | 2.172116 / 1.468490 (0.703626) | 1.221623 / 4.584777 (-3.363154) | 5.683653 / 3.745712 (1.937941) | 5.344137 / 5.269862 (0.074275) | 2.969670 / 4.565676 (-1.596006) | 0.142107 / 0.424275 (-0.282168) | 0.015808 / 0.007607 (0.008201) | 0.767366 / 0.226044 (0.541321) | 8.059605 / 2.268929 (5.790676) | 3.333535 / 55.444624 (-52.111089) | 2.669619 / 6.876477 (-4.206857) | 2.652989 / 2.142072 (0.510917) | 1.526397 / 4.805227 (-3.278830) | 0.265609 / 6.500664 (-6.235055) | 0.082759 / 0.075469 (0.007290) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631086 / 1.841788 (-0.210701) | 18.701351 / 8.074308 (10.627043) | 22.843802 / 10.191392 (12.652410) | 0.240134 / 0.680424 (-0.440290) | 0.046683 / 0.534201 (-0.487518) | 0.576488 / 0.579283 (-0.002795) | 0.650123 / 0.434364 (0.215759) | 0.661190 / 0.540337 (0.120853) | 0.759563 / 1.386936 (-0.627373) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009883 / 0.011353 (-0.001470) | 0.006692 / 0.011008 (-0.004316) | 0.098550 / 0.038508 (0.060042) | 0.035188 / 0.023109 (0.012078) | 0.463535 / 0.275898 (0.187637) | 0.472762 / 0.323480 (0.149282) | 0.007199 / 0.007986 (-0.000787) | 0.007961 / 0.004328 (0.003632) | 0.093140 / 0.004250 (0.088890) | 0.051752 / 0.037052 (0.014700) | 0.453412 / 0.258489 (0.194922) | 0.502741 / 0.293841 (0.208900) | 0.056006 / 0.128546 (-0.072540) | 0.020164 / 0.075646 (-0.055482) | 0.116828 / 0.419271 (-0.302444) | 0.067205 / 0.043533 (0.023672) | 0.442715 / 0.255139 (0.187576) | 0.472525 / 0.283200 (0.189326) | 0.122767 / 0.141683 (-0.018915) | 1.881366 / 1.452155 (0.429212) | 1.978786 / 1.492716 (0.486069) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284180 / 0.018006 (0.266174) | 0.601556 / 0.000490 (0.601067) | 0.008455 / 0.000200 (0.008255) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033515 / 0.037411 (-0.003896) | 0.136407 / 0.014526 (0.121881) | 0.143341 / 0.176557 (-0.033215) | 0.225394 / 0.737135 (-0.511741) | 0.153343 / 0.296338 (-0.142995) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.688202 / 0.215209 (0.472993) | 6.576502 / 2.077655 (4.498847) | 2.839175 / 1.504120 (1.335055) | 2.481152 / 1.541195 (0.939957) | 2.617227 / 1.468490 (1.148736) | 1.314854 / 4.584777 (-3.269922) | 5.805950 / 3.745712 (2.060238) | 3.188930 / 5.269862 (-2.080932) | 2.141719 / 4.565676 (-2.423957) | 0.145069 / 0.424275 (-0.279206) | 0.014567 / 0.007607 (0.006960) | 0.780000 / 0.226044 (0.553955) | 7.898016 / 2.268929 (5.629088) | 3.549060 / 55.444624 (-51.895564) | 2.856569 / 6.876477 (-4.019907) | 3.117719 / 2.142072 (0.975647) | 1.512560 / 4.805227 (-3.292668) | 0.262689 / 6.500664 (-6.237975) | 0.085979 / 0.075469 (0.010509) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.623550 / 1.841788 (-0.218238) | 19.597063 / 8.074308 (11.522755) | 21.293369 / 10.191392 (11.101977) | 0.263780 / 0.680424 (-0.416643) | 0.027289 / 0.534201 (-0.506912) | 0.560361 / 0.579283 (-0.018922) | 0.646288 / 0.434364 (0.211924) | 0.712699 / 0.540337 (0.172361) | 0.818332 / 1.386936 (-0.568604) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b304de5dde30c945ec1397d3b4fe86f3b323ca8b \"CML watermark\")\n" ]
null
[]
Remove instructions for `ffmpeg` system package installation on Colab
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5558/timeline
Colab now runs Ubuntu 20.04, which already ships the required `ffmpeg` version (>4), so the system-package installation instructions are no longer needed.
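A minimal sketch of how one might confirm this in a Colab cell (the output parsing is an assumption about `ffmpeg -version`'s format, not code from the docs being changed):

```python
import shutil
import subprocess

# Sanity-check that a recent enough ffmpeg is already on the PATH
# before decoding audio, instead of apt-installing it.
assert shutil.which("ffmpeg") is not None, "ffmpeg not found on PATH"
first_line = subprocess.run(
    ["ffmpeg", "-version"], capture_output=True, text=True
).stdout.splitlines()[0]
version = first_line.split()[2]  # e.g. "4.2.7-0ubuntu0.20.04.1"
assert int(version.split(".")[0]) >= 4, f"ffmpeg >= 4 required, got {version}"
print("ffmpeg", version, "is recent enough")
```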
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5558.diff", "html_url": "https://github.com/huggingface/datasets/pull/5558", "merged_at": "2023-02-23T13:50:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/5558.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5558" }
1,593,655,815
https://api.github.com/repos/huggingface/datasets/issues/5558/comments
PR_kwDODunzps5KcF5E
null
5,558
https://api.github.com/repos/huggingface/datasets/issues/5558/events
true
closed
2023-02-21T14:04:42Z
null
https://api.github.com/repos/huggingface/datasets/issues/5557
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5557/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5557/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/5557
[]
false
2023-02-21T14:19:54Z
2023-02-21T14:12:39Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008477 / 0.011353 (-0.002875) | 0.004565 / 0.011008 (-0.006443) | 0.101640 / 0.038508 (0.063132) | 0.029581 / 0.023109 (0.006472) | 0.296524 / 0.275898 (0.020625) | 0.363175 / 0.323480 (0.039695) | 0.006961 / 0.007986 (-0.001024) | 0.003365 / 0.004328 (-0.000963) | 0.079689 / 0.004250 (0.075439) | 0.034881 / 0.037052 (-0.002171) | 0.310979 / 0.258489 (0.052489) | 0.348663 / 0.293841 (0.054822) | 0.034549 / 0.128546 (-0.093997) | 0.011463 / 0.075646 (-0.064184) | 0.326218 / 0.419271 (-0.093053) | 0.041393 / 0.043533 (-0.002140) | 0.297604 / 0.255139 (0.042465) | 0.335751 / 0.283200 (0.052551) | 0.086521 / 0.141683 (-0.055162) | 1.478906 / 1.452155 (0.026752) | 1.512777 / 1.492716 (0.020060) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.008767 / 0.018006 (-0.009239) | 0.397386 / 0.000490 (0.396897) | 0.003136 / 0.000200 (0.002936) | 0.000096 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022804 / 0.037411 (-0.014608) | 0.097591 / 0.014526 (0.083066) | 0.103189 / 0.176557 (-0.073368) | 0.138165 / 0.737135 (-0.598970) | 0.107464 / 0.296338 (-0.188874) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428956 / 0.215209 (0.213747) | 4.269656 / 2.077655 (2.192001) | 
2.154418 / 1.504120 (0.650298) | 1.914176 / 1.541195 (0.372982) | 1.818452 / 1.468490 (0.349962) | 0.701381 / 4.584777 (-3.883396) | 3.425190 / 3.745712 (-0.320522) | 1.862545 / 5.269862 (-3.407316) | 1.166271 / 4.565676 (-3.399405) | 0.083678 / 0.424275 (-0.340597) | 0.012254 / 0.007607 (0.004647) | 0.535710 / 0.226044 (0.309665) | 5.342528 / 2.268929 (3.073600) | 2.627135 / 55.444624 (-52.817489) | 2.308313 / 6.876477 (-4.568164) | 2.325568 / 2.142072 (0.183496) | 0.818318 / 4.805227 (-3.986909) | 0.149812 / 6.500664 (-6.350853) | 0.064559 / 0.075469 (-0.010910) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253611 / 1.841788 (-0.588176) | 13.646763 / 8.074308 (5.572455) | 14.387630 / 10.191392 (4.196238) | 0.159937 / 0.680424 (-0.520487) | 0.029123 / 0.534201 (-0.505078) | 0.400909 / 0.579283 (-0.178374) | 0.422830 / 0.434364 (-0.011534) | 0.488205 / 0.540337 (-0.052133) | 0.577982 / 1.386936 (-0.808954) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006430 / 0.011353 (-0.004923) | 0.004433 / 0.011008 (-0.006576) | 0.077459 / 0.038508 (0.038951) | 0.026949 / 0.023109 (0.003840) | 0.350276 / 0.275898 (0.074378) | 0.376189 / 0.323480 (0.052709) | 0.004945 / 0.007986 (-0.003041) | 0.003280 / 0.004328 (-0.001048) | 0.076465 / 0.004250 (0.072215) | 0.037510 / 0.037052 (0.000457) | 0.350410 / 0.258489 (0.091921) | 0.386778 / 0.293841 (0.092937) | 0.031933 / 0.128546 (-0.096613) | 0.011691 / 0.075646 (-0.063956) | 0.086519 / 0.419271 (-0.332753) | 0.042490 / 0.043533 (-0.001043) | 0.355930 / 0.255139 (0.100791) | 0.366500 / 0.283200 (0.083301) | 0.089542 / 0.141683 (-0.052141) | 1.492859 / 1.452155 (0.040704) | 1.548626 / 1.492716 (0.055910) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220123 / 0.018006 (0.202117) | 0.396970 / 0.000490 (0.396480) | 0.000398 / 0.000200 (0.000198) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024831 / 0.037411 (-0.012580) | 0.099681 / 0.014526 (0.085156) | 0.108922 / 0.176557 (-0.067635) | 0.143004 / 0.737135 (-0.594131) | 0.109671 / 0.296338 (-0.186667) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444237 / 0.215209 (0.229028) | 4.430330 / 2.077655 (2.352675) | 2.235003 / 1.504120 (0.730883) | 2.010499 / 1.541195 (0.469305) | 2.030585 / 1.468490 (0.562095) | 0.701938 / 4.584777 (-3.882839) | 3.334569 / 3.745712 (-0.411144) | 1.861680 / 5.269862 (-3.408181) | 1.166072 / 4.565676 (-3.399604) | 0.083870 / 0.424275 (-0.340405) | 0.012615 / 0.007607 (0.005008) | 0.548789 / 0.226044 (0.322744) | 5.488064 / 2.268929 (3.219136) | 2.614926 / 55.444624 (-52.829698) | 2.246455 / 6.876477 (-4.630022) | 2.277439 / 2.142072 (0.135367) | 0.808449 / 4.805227 (-3.996778) | 0.152434 / 6.500664 (-6.348230) | 0.066709 / 0.075469 (-0.008760) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.316880 / 1.841788 (-0.524908) | 13.965269 / 8.074308 (5.890961) | 13.660187 / 10.191392 (3.468795) | 0.157801 / 0.680424 (-0.522623) | 0.016580 / 0.534201 (-0.517621) | 0.382834 / 0.579283 (-0.196449) | 0.394717 / 0.434364 (-0.039647) | 0.465138 / 0.540337 (-0.075200) | 0.552399 / 1.386936 (-0.834537) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fa06927a62e2983e2f0e8b7ba8262070c1543d78 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009341 / 0.011353 (-0.002012) | 0.005303 / 0.011008 (-0.005705) | 0.099287 / 0.038508 (0.060779) | 0.035587 / 0.023109 (0.012478) | 0.295146 / 0.275898 (0.019248) | 0.370470 / 0.323480 (0.046990) | 0.008910 / 0.007986 (0.000925) | 0.004358 / 0.004328 (0.000029) | 0.076298 / 0.004250 (0.072047) | 0.047187 / 0.037052 (0.010135) | 0.309025 / 0.258489 (0.050536) | 0.346659 / 0.293841 (0.052818) | 0.038378 / 0.128546 (-0.090168) | 0.012475 / 0.075646 (-0.063172) | 0.334370 / 0.419271 (-0.084901) | 0.048391 / 0.043533 (0.004858) | 0.298613 / 0.255139 (0.043474) | 0.317329 / 0.283200 (0.034130) | 0.108748 / 0.141683 (-0.032934) | 1.450454 / 1.452155 (-0.001701) | 1.519883 / 1.492716 (0.027167) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011513 / 0.018006 (-0.006494) | 0.498941 / 0.000490 (0.498451) | 0.005098 / 0.000200 (0.004898) | 0.000096 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030523 / 0.037411 (-0.006888) | 0.105478 / 0.014526 (0.090952) | 0.121101 / 0.176557 (-0.055456) | 0.159951 / 0.737135 (-0.577184) | 0.126766 / 0.296338 (-0.169572) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399101 / 0.215209 (0.183892) | 3.997069 / 2.077655 (1.919414) | 1.851592 / 1.504120 (0.347472) | 1.695708 / 1.541195 (0.154513) | 1.759504 / 1.468490 (0.291014) | 0.708241 / 4.584777 (-3.876536) | 3.786724 / 3.745712 (0.041012) | 3.523731 / 5.269862 (-1.746131) | 1.899474 / 4.565676 (-2.666203) | 0.086680 / 0.424275 (-0.337595) | 0.012232 / 0.007607 (0.004625) | 0.508507 / 0.226044 (0.282462) | 5.086320 / 2.268929 (2.817391) | 2.234906 / 55.444624 (-53.209718) | 1.911090 / 6.876477 (-4.965386) | 1.989232 / 2.142072 (-0.152841) | 0.863660 / 4.805227 (-3.941567) | 0.169334 / 6.500664 (-6.331330) | 0.063273 / 0.075469 (-0.012196) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.237590 / 1.841788 (-0.604198) | 15.417631 / 8.074308 (7.343323) | 15.235308 / 10.191392 (5.043916) | 0.209431 / 0.680424 (-0.470993) | 0.029214 / 0.534201 (-0.504987) | 0.444767 / 0.579283 (-0.134516) | 0.447776 / 0.434364 (0.013413) | 0.538440 / 0.540337 
(-0.001897) | 0.635760 / 1.386936 (-0.751176) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007758 / 0.011353 (-0.003594) | 0.005539 / 0.011008 (-0.005469) | 0.077011 / 0.038508 (0.038503) | 0.034305 / 0.023109 (0.011196) | 0.363352 / 0.275898 (0.087454) | 0.411882 / 0.323480 (0.088403) | 0.006286 / 0.007986 (-0.001700) | 0.004378 / 0.004328 (0.000050) | 0.075504 / 0.004250 (0.071253) | 0.052728 / 0.037052 (0.015675) | 0.370122 / 0.258489 (0.111633) | 0.421910 / 0.293841 (0.128069) | 0.038444 / 0.128546 (-0.090102) | 0.012602 / 0.075646 (-0.063045) | 0.088540 / 0.419271 (-0.330731) | 0.060321 / 0.043533 (0.016788) | 0.350502 / 0.255139 (0.095363) | 0.393211 / 0.283200 (0.110011) | 0.113057 / 0.141683 (-0.028626) | 1.453275 / 1.452155 (0.001120) | 1.541033 / 1.492716 (0.048317) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.333603 / 0.018006 (0.315597) | 0.510548 / 0.000490 (0.510058) | 0.003573 / 0.000200 (0.003373) | 0.000116 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032783 / 0.037411 (-0.004628) | 0.111943 / 0.014526 (0.097418) | 0.127154 / 0.176557 (-0.049403) | 0.171716 / 0.737135 (-0.565420) | 0.132441 / 0.296338 (-0.163898) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439110 / 0.215209 (0.223901) | 4.440874 / 2.077655 (2.363220) | 2.145850 / 1.504120 (0.641730) | 1.909566 / 1.541195 (0.368371) | 2.032199 / 
1.468490 (0.563709) | 0.711295 / 4.584777 (-3.873482) | 3.845729 / 3.745712 (0.100017) | 3.583555 / 5.269862 (-1.686307) | 1.836856 / 4.565676 (-2.728820) | 0.085966 / 0.424275 (-0.338309) | 0.012479 / 0.007607 (0.004872) | 0.545379 / 0.226044 (0.319334) | 5.425724 / 2.268929 (3.156796) | 2.648304 / 55.444624 (-52.796321) | 2.286369 / 6.876477 (-4.590108) | 2.367714 / 2.142072 (0.225642) | 0.831035 / 4.805227 (-3.974192) | 0.167603 / 6.500664 (-6.333061) | 0.064721 / 0.075469 (-0.010748) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244495 / 1.841788 (-0.597292) | 15.304267 / 8.074308 (7.229958) | 13.912185 / 10.191392 (3.720793) | 0.156459 / 0.680424 (-0.523965) | 0.019181 / 0.534201 (-0.515019) | 0.425940 / 0.579283 (-0.153343) | 0.427956 / 0.434364 (-0.006408) | 0.529126 / 0.540337 (-0.011212) | 0.628360 / 1.386936 (-0.758576) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#da31f6ee02af29d92ee5541e4a3fc388c3d9abfc \"CML watermark\")\n" ]
null
[]
Add filter desc
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5557/timeline
Otherwise `filter` would show a `Map` progress bar, since it uses `map` under the hood
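A minimal usage sketch of the `desc` keyword this PR adds to `filter` (the dataset and predicate below are hypothetical):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["short", "a much longer example text"]})

# Without `desc`, the progress bar is labeled "Map" because `filter`
# delegates to `map` internally; `desc` gives it a filter-specific label.
ds = ds.filter(lambda x: len(x["text"]) < 10, desc="Filtering long texts")
```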
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5557.diff", "html_url": "https://github.com/huggingface/datasets/pull/5557", "merged_at": "2023-02-21T14:12:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/5557.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5557" }
1,593,545,324
https://api.github.com/repos/huggingface/datasets/issues/5557/comments
PR_kwDODunzps5Kbube
null
5,557
https://api.github.com/repos/huggingface/datasets/issues/5557/events
true
closed
2023-02-21T10:45:50Z
null
https://api.github.com/repos/huggingface/datasets/issues/5556
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5556/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5556/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/5556
[]
false
2023-02-21T12:49:50Z
2023-02-21T12:42:52Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008730 / 0.011353 (-0.002623) | 0.004551 / 0.011008 (-0.006457) | 0.100206 / 0.038508 (0.061698) | 0.030264 / 0.023109 (0.007154) | 0.303310 / 0.275898 (0.027412) | 0.339040 / 0.323480 (0.015560) | 0.006923 / 0.007986 (-0.001063) | 0.004707 / 0.004328 (0.000379) | 0.077822 / 0.004250 (0.073571) | 0.034368 / 0.037052 (-0.002684) | 0.303125 / 0.258489 (0.044636) | 0.348322 / 0.293841 (0.054481) | 0.033831 / 0.128546 (-0.094715) | 0.011459 / 0.075646 (-0.064187) | 0.322092 / 0.419271 (-0.097180) | 0.047720 / 0.043533 (0.004187) | 0.304849 / 0.255139 (0.049710) | 0.330767 / 0.283200 (0.047567) | 0.087362 / 0.141683 (-0.054321) | 1.536095 / 1.452155 (0.083941) | 1.599979 / 1.492716 (0.107263) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.188985 / 0.018006 (0.170979) | 0.410775 / 0.000490 (0.410286) | 0.004215 / 0.000200 (0.004015) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023124 / 0.037411 (-0.014287) | 0.096962 / 0.014526 (0.082436) | 0.104070 / 0.176557 (-0.072486) | 0.141248 / 0.737135 (-0.595887) | 0.108534 / 0.296338 (-0.187804) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417118 / 0.215209 (0.201909) | 4.167808 / 2.077655 (2.090154) | 
2.016540 / 1.504120 (0.512420) | 1.847812 / 1.541195 (0.306617) | 1.967023 / 1.468490 (0.498532) | 0.689262 / 4.584777 (-3.895515) | 3.378747 / 3.745712 (-0.366965) | 1.854126 / 5.269862 (-3.415735) | 1.152102 / 4.565676 (-3.413575) | 0.081839 / 0.424275 (-0.342437) | 0.012426 / 0.007607 (0.004819) | 0.521334 / 0.226044 (0.295289) | 5.230593 / 2.268929 (2.961664) | 2.269386 / 55.444624 (-53.175238) | 1.965631 / 6.876477 (-4.910846) | 2.028994 / 2.142072 (-0.113079) | 0.802142 / 4.805227 (-4.003085) | 0.147954 / 6.500664 (-6.352710) | 0.065031 / 0.075469 (-0.010438) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.235289 / 1.841788 (-0.606499) | 13.723507 / 8.074308 (5.649199) | 14.197923 / 10.191392 (4.006531) | 0.147950 / 0.680424 (-0.532473) | 0.028332 / 0.534201 (-0.505869) | 0.400180 / 0.579283 (-0.179103) | 0.418970 / 0.434364 (-0.015393) | 0.478381 / 0.540337 (-0.061957) | 0.576138 / 1.386936 (-0.810798) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006548 / 0.011353 (-0.004805) | 0.004567 / 0.011008 (-0.006441) | 0.075658 / 0.038508 (0.037150) | 0.027190 / 0.023109 (0.004080) | 0.363417 / 0.275898 (0.087518) | 0.399575 / 0.323480 (0.076095) | 0.004982 / 0.007986 (-0.003004) | 0.003364 / 0.004328 (-0.000964) | 0.074392 / 0.004250 (0.070142) | 0.038839 / 0.037052 (0.001787) | 0.361133 / 0.258489 (0.102644) | 0.408557 / 0.293841 (0.114717) | 0.031468 / 0.128546 (-0.097078) | 0.011645 / 0.075646 (-0.064001) | 0.085145 / 0.419271 (-0.334126) | 0.041775 / 0.043533 (-0.001758) | 0.348624 / 0.255139 (0.093485) | 0.389610 / 0.283200 (0.106410) | 0.088576 / 0.141683 (-0.053107) | 1.511208 / 1.452155 (0.059054) | 1.560568 / 1.492716 (0.067852) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185017 / 0.018006 (0.167011) | 0.407543 / 0.000490 (0.407053) | 0.002486 / 0.000200 (0.002286) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025181 / 0.037411 (-0.012231) | 0.099056 / 0.014526 (0.084530) | 0.108597 / 0.176557 (-0.067959) | 0.144664 / 0.737135 (-0.592471) | 0.110417 / 0.296338 (-0.185922) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434302 / 0.215209 (0.219093) | 4.327840 / 2.077655 (2.250185) | 2.059939 / 1.504120 (0.555819) | 1.853267 / 1.541195 (0.312072) | 1.906616 / 1.468490 (0.438126) | 0.700165 / 4.584777 (-3.884611) | 3.439216 / 3.745712 (-0.306496) | 2.792034 / 5.269862 (-2.477827) | 1.424852 / 4.565676 (-3.140824) | 0.083926 / 0.424275 (-0.340349) | 0.013943 / 0.007607 (0.006336) | 0.535964 / 0.226044 (0.309920) | 5.368671 / 2.268929 (3.099743) | 2.497027 / 55.444624 (-52.947597) | 2.166222 / 6.876477 (-4.710254) | 2.183766 / 2.142072 (0.041693) | 0.805886 / 4.805227 (-3.999341) | 0.152474 / 6.500664 (-6.348190) | 0.067354 / 0.075469 (-0.008115) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284052 / 1.841788 (-0.557736) | 13.714066 / 8.074308 (5.639758) | 14.195212 / 10.191392 (4.003820) | 0.151815 / 0.680424 (-0.528609) | 0.016847 / 0.534201 (-0.517354) | 0.391174 / 0.579283 (-0.188109) | 0.409784 / 0.434364 (-0.024580) | 0.473880 / 0.540337 (-0.066458) | 0.561016 / 1.386936 (-0.825920) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#47ab08d9f06abd5bc23bddaa4875b93e926dd3b1 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010284 / 0.011353 (-0.001068) | 0.005654 / 0.011008 (-0.005355) | 0.100522 / 0.038508 (0.062014) | 0.039201 / 0.023109 (0.016092) | 0.320831 / 0.275898 (0.044933) | 0.365351 / 0.323480 (0.041871) | 0.009066 / 0.007986 (0.001080) | 0.005805 / 0.004328 (0.001476) | 0.076969 / 0.004250 (0.072719) | 0.045813 / 0.037052 (0.008760) | 0.327115 / 0.258489 (0.068626) | 0.362823 / 0.293841 (0.068982) | 0.040521 / 0.128546 (-0.088025) | 0.013166 / 0.075646 (-0.062481) | 0.358579 / 0.419271 (-0.060692) | 0.051753 / 0.043533 (0.008220) | 0.323741 / 0.255139 (0.068602) | 0.360211 / 0.283200 (0.077011) | 0.111534 / 0.141683 (-0.030149) | 1.594887 / 1.452155 (0.142732) | 1.651516 / 1.492716 (0.158799) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012051 / 0.018006 (-0.005956) | 0.475316 / 0.000490 (0.474826) | 0.004804 / 0.000200 (0.004604) | 0.000100 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027480 / 0.037411 (-0.009931) | 0.112022 / 0.014526 (0.097496) | 0.121539 / 0.176557 (-0.055017) | 0.166327 / 0.737135 (-0.570809) | 0.132575 / 0.296338 (-0.163763) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418322 / 0.215209 (0.203113) | 4.149463 / 2.077655 (2.071808) | 1.890901 / 1.504120 (0.386781) | 1.682521 / 1.541195 (0.141327) | 1.716331 / 1.468490 (0.247841) | 0.729176 / 4.584777 (-3.855601) | 4.173303 / 3.745712 (0.427591) | 2.166249 / 5.269862 (-3.103612) | 1.384623 / 4.565676 (-3.181053) | 0.095486 / 0.424275 (-0.328789) | 0.013800 / 0.007607 (0.006193) | 0.573917 / 0.226044 (0.347872) | 5.348843 / 2.268929 (3.079914) | 2.421716 / 55.444624 (-53.022909) | 2.002048 / 6.876477 (-4.874428) | 2.079493 / 2.142072 (-0.062579) | 0.882818 / 4.805227 (-3.922409) | 0.172936 / 6.500664 (-6.327728) | 0.068384 / 0.075469 (-0.007085) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285704 / 1.841788 (-0.556084) | 16.036346 / 8.074308 (7.962038) | 15.181557 / 10.191392 (4.990165) | 0.194044 / 0.680424 (-0.486380) | 0.033128 / 0.534201 (-0.501073) | 0.480290 / 0.579283 (-0.098993) | 0.497525 / 0.434364 (0.063161) | 0.602304 / 0.540337 
(0.061966) | 0.754273 / 1.386936 (-0.632663) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007263 / 0.011353 (-0.004090) | 0.005164 / 0.011008 (-0.005845) | 0.079833 / 0.038508 (0.041325) | 0.033974 / 0.023109 (0.010865) | 0.382057 / 0.275898 (0.106159) | 0.402924 / 0.323480 (0.079444) | 0.007273 / 0.007986 (-0.000712) | 0.004378 / 0.004328 (0.000049) | 0.080556 / 0.004250 (0.076305) | 0.047376 / 0.037052 (0.010324) | 0.379044 / 0.258489 (0.120555) | 0.422135 / 0.293841 (0.128294) | 0.038294 / 0.128546 (-0.090252) | 0.013974 / 0.075646 (-0.061672) | 0.094936 / 0.419271 (-0.324335) | 0.051033 / 0.043533 (0.007501) | 0.368197 / 0.255139 (0.113058) | 0.409627 / 0.283200 (0.126427) | 0.107365 / 0.141683 (-0.034318) | 1.537501 / 1.452155 (0.085346) | 1.618021 / 1.492716 (0.125305) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227187 / 0.018006 (0.209181) | 0.473226 / 0.000490 (0.472736) | 0.006532 / 0.000200 (0.006332) | 0.000121 / 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029814 / 0.037411 (-0.007597) | 0.121113 / 0.014526 (0.106587) | 0.125107 / 0.176557 (-0.051450) | 0.167008 / 0.737135 (-0.570127) | 0.128720 / 0.296338 (-0.167619) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452158 / 0.215209 (0.236949) | 4.507087 / 2.077655 (2.429433) | 2.193910 / 1.504120 (0.689790) | 1.991234 / 1.541195 (0.450039) | 2.120256 / 
1.468490 (0.651766) | 0.726664 / 4.584777 (-3.858113) | 4.213148 / 3.745712 (0.467436) | 4.082857 / 5.269862 (-1.187005) | 1.741018 / 4.565676 (-2.824658) | 0.090176 / 0.424275 (-0.334099) | 0.013221 / 0.007607 (0.005614) | 0.558868 / 0.226044 (0.332824) | 5.617242 / 2.268929 (3.348313) | 2.985430 / 55.444624 (-52.459194) | 2.623136 / 6.876477 (-4.253341) | 2.383177 / 2.142072 (0.241105) | 0.917237 / 4.805227 (-3.887990) | 0.178774 / 6.500664 (-6.321890) | 0.064707 / 0.075469 (-0.010762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.365402 / 1.841788 (-0.476385) | 16.035773 / 8.074308 (7.961465) | 13.917612 / 10.191392 (3.726220) | 0.152191 / 0.680424 (-0.528233) | 0.020734 / 0.534201 (-0.513467) | 0.442055 / 0.579283 (-0.137228) | 0.470588 / 0.434364 (0.036224) | 0.563433 / 0.540337 (0.023096) | 0.651161 / 1.386936 (-0.735775) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6ab909a44b723fe0a8a586beafc8c5cbf9c91c21 \"CML watermark\")\n", "If it's good for you @polinaeterna I'd like to merge it and then run the `transformers` CI to see if it changes anything", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008829 / 0.011353 (-0.002524) | 0.004652 / 0.011008 (-0.006356) | 0.102505 / 0.038508 (0.063997) | 0.030164 / 0.023109 (0.007054) | 0.306551 / 0.275898 (0.030653) | 0.368920 / 0.323480 (0.045440) | 0.007084 / 0.007986 (-0.000902) | 0.003545 / 0.004328 (-0.000783) | 0.079402 / 0.004250 (0.075152) | 0.035296 / 0.037052 (-0.001756) | 0.312010 / 0.258489 (0.053520) | 0.348773 / 0.293841 (0.054932) | 0.034622 / 0.128546 (-0.093924) | 0.011727 / 0.075646 (-0.063920) | 0.326911 / 0.419271 (-0.092361) | 0.043832 / 0.043533 (0.000300) | 0.306357 / 0.255139 (0.051218) | 0.328744 / 0.283200 (0.045544) | 0.091954 / 0.141683 (-0.049729) | 1.563989 / 1.452155 (0.111834) | 1.591901 / 1.492716 (0.099185) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row 
|\n|--------|---|---|---|---|\n| new / old (diff) | 0.194955 / 0.018006 (0.176948) | 0.412864 / 0.000490 (0.412374) | 0.003710 / 0.000200 (0.003510) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023132 / 0.037411 (-0.014279) | 0.099586 / 0.014526 (0.085060) | 0.105031 / 0.176557 (-0.071525) | 0.141206 / 0.737135 (-0.595929) | 0.111978 / 0.296338 (-0.184360) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413729 / 0.215209 (0.198520) | 4.161713 / 2.077655 (2.084058) | 1.887442 / 1.504120 (0.383322) | 1.711847 / 1.541195 (0.170653) | 1.756833 / 1.468490 (0.288343) | 0.699239 / 4.584777 (-3.885538) | 3.346213 / 3.745712 (-0.399499) | 2.822289 / 5.269862 (-2.447573) | 1.475650 / 4.565676 (-3.090027) | 0.082800 / 0.424275 (-0.341475) | 0.012302 / 0.007607 (0.004695) | 0.523068 / 0.226044 (0.297024) | 5.242833 / 2.268929 (2.973904) | 2.310768 / 55.444624 (-53.133856) | 1.954629 / 6.876477 (-4.921847) | 2.015563 / 2.142072 (-0.126510) | 0.812613 / 4.805227 (-3.992614) | 0.149512 / 6.500664 (-6.351152) | 0.065162 / 0.075469 (-0.010307) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.270177 / 1.841788 (-0.571610) | 13.664765 / 8.074308 (5.590457) | 14.317968 / 10.191392 (4.126576) | 0.138135 / 0.680424 (-0.542289) | 0.028503 / 0.534201 (-0.505698) | 0.402921 / 0.579283 (-0.176362) | 0.400999 / 0.434364 (-0.033365) | 0.470983 / 0.540337 (-0.069355) | 0.544319 / 1.386936 (-0.842617) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006841 / 0.011353 (-0.004512) | 0.004570 / 0.011008 (-0.006439) | 0.076398 / 0.038508 (0.037890) | 0.028136 / 0.023109 (0.005027) | 0.339864 / 0.275898 (0.063966) | 0.375496 / 0.323480 (0.052016) | 0.004967 / 0.007986 (-0.003019) | 0.003411 / 0.004328 (-0.000917) | 0.075727 / 0.004250 (0.071476) | 0.040025 / 0.037052 (0.002973) | 0.340473 / 0.258489 (0.081984) | 0.384396 / 0.293841 (0.090555) | 0.031683 / 0.128546 (-0.096863) | 0.011752 / 0.075646 (-0.063894) | 0.085635 / 0.419271 (-0.333636) | 0.042764 / 0.043533 (-0.000769) | 0.339417 / 0.255139 (0.084278) | 0.364190 / 0.283200 (0.080991) | 0.093842 / 0.141683 (-0.047841) | 1.480999 / 1.452155 (0.028844) | 1.549752 / 1.492716 (0.057036) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.174146 / 0.018006 (0.156140) | 0.415459 / 0.000490 (0.414970) | 0.002854 / 0.000200 (0.002654) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024671 / 0.037411 (-0.012740) | 0.101229 / 0.014526 (0.086703) | 0.108841 / 0.176557 (-0.067716) | 0.144530 / 0.737135 (-0.592606) | 0.112509 / 0.296338 (-0.183829) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.460561 / 0.215209 (0.245352) | 4.591139 / 2.077655 (2.513484) | 2.275535 / 1.504120 (0.771415) | 2.070976 / 1.541195 (0.529781) | 2.028766 / 1.468490 (0.560276) | 0.706166 / 4.584777 (-3.878611) | 3.408498 / 3.745712 (-0.337215) | 3.034665 / 5.269862 (-2.235197) | 1.586805 / 4.565676 (-2.978872) | 0.083355 / 0.424275 (-0.340920) | 0.012460 / 0.007607 (0.004853) | 0.565256 / 0.226044 (0.339212) | 5.662643 / 2.268929 (3.393715) | 2.697019 / 55.444624 (-52.747605) | 2.302044 / 6.876477 (-4.574433) | 2.373081 / 2.142072 (0.231009) | 0.809804 / 4.805227 (-3.995423) | 0.151481 / 6.500664 (-6.349183) | 0.066870 / 0.075469 (-0.008599) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257293 / 1.841788 (-0.584495) | 14.059454 / 8.074308 (5.985146) | 13.783251 / 10.191392 (3.591859) | 0.140007 / 0.680424 (-0.540417) | 0.016624 / 0.534201 (-0.517577) | 0.381703 / 0.579283 (-0.197580) | 0.389032 / 
0.434364 (-0.045332) | 0.466127 / 0.540337 (-0.074211) | 0.551052 / 1.386936 (-0.835884) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4a767f7a3dffdf45886690b81c6e624146ae14da \"CML watermark\")\n" ]
null
[]
Use default audio resampling type
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5556/timeline
...instead of relying on `resampy`, an optional `librosa` dependency. It was only used in `_decode_non_mp3_file_like` anyway, and not in the other decoding methods, so removing it restores consistency between them (except for torchaudio decoding). Therefore I think this is a better solution than adding `resampy` as a dependency in https://github.com/huggingface/datasets/pull/5554. cc @polinaeterna
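A rough sketch of what this amounts to for non-mp3 decoding (assuming `librosa` is installed; the file path and target rate are hypothetical, and the actual call site in `datasets` differs):

```python
import librosa

# Load the audio at its native sampling rate.
array, sampling_rate = librosa.load("audio.wav", sr=None)

# Resample with librosa's default `res_type` instead of passing a
# resampy-backed type explicitly, so decoding stays consistent without
# the optional `resampy` dependency.
array = librosa.resample(array, orig_sr=sampling_rate, target_sr=16_000)
```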
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5556.diff", "html_url": "https://github.com/huggingface/datasets/pull/5556", "merged_at": "2023-02-21T12:42:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/5556.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5556" }
1,593,246,936
https://api.github.com/repos/huggingface/datasets/issues/5556/comments
PR_kwDODunzps5KauVL
null
5,556
https://api.github.com/repos/huggingface/datasets/issues/5556/events
true
open
2023-02-20T21:33:45Z
null
https://api.github.com/repos/huggingface/datasets/issues/5555
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5555/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5555/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/10768588?v=4", "events_url": "https://api.github.com/users/prabhakar267/events{/privacy}", "followers_url": "https://api.github.com/users/prabhakar267/followers", "following_url": "https://api.github.com/users/prabhakar267/following{/other_user}", "gists_url": "https://api.github.com/users/prabhakar267/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/prabhakar267", "id": 10768588, "login": "prabhakar267", "node_id": "MDQ6VXNlcjEwNzY4NTg4", "organizations_url": "https://api.github.com/users/prabhakar267/orgs", "received_events_url": "https://api.github.com/users/prabhakar267/received_events", "repos_url": "https://api.github.com/users/prabhakar267/repos", "site_admin": false, "starred_url": "https://api.github.com/users/prabhakar267/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prabhakar267/subscriptions", "type": "User", "url": "https://api.github.com/users/prabhakar267" }
https://github.com/huggingface/datasets/issues/5555
[]
false
2023-02-27T09:23:34Z
null
null
[ "Hi ! The indices mapping is written in the same cachedirectory as your dataset.\r\n\r\nCan you run this to show your current cache directory ?\r\n```python\r\nprint(train_dataset.cache_files)\r\n```", "```\r\n[{'filename': '.../train/dataset.arrow'}, {'filename': '.../train/dataset.arrow'}]\r\n```\r\n\r\nThese are the actual paths where `.hf` files are stored. ", "I'm not aware of any `.hf` file ? What are you referring to ?\r\n\r\nAlso the error says \"Protocol unknown: parent\". Is there a chance you may have ended up with a path that contains this string `parent://` ?", "I figured out why the issue was occuring but don't know the long-term fix.\r\nThe dataset I was trying to shuffle was loaded from a saved file which had `::` delimiter in filename. When I try with the exact same file without `::` in filename, it works as expected.\r\nQuick fix is to not use colons in filename. But if this is expected behaviour, this should be clearly stated in the documentation.\r\nThanks for help @lhoestq " ]
null
[]
`.shuffle` throwing error `ValueError: Protocol not known: parent`
NONE
https://api.github.com/repos/huggingface/datasets/issues/5555/timeline
### Describe the bug ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In [16], line 1 ----> 1 train_dataset = train_dataset.shuffle() File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs) 544 self_format = { 545 "type": self._format_type, 546 "format_kwargs": self._format_kwargs, 547 "columns": self._format_columns, 548 "output_all_columns": self._output_all_columns, 549 } 550 # apply actual function --> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 553 # re-apply format to the output File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 476 validate_fingerprint(kwargs[fingerprint_name]) 478 # Call actual function --> 480 out = func(self, *args, **kwargs) 482 # Update fingerprint of in-place transforms + update in-place history of transforms 484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3616, in Dataset.shuffle(self, seed, generator, keep_in_memory, load_from_cache_file, indices_cache_file_name, writer_batch_size, new_fingerprint) 3610 return self._new_dataset_with_indices( 3611 fingerprint=new_fingerprint, indices_cache_file_name=indices_cache_file_name 3612 ) 3614 permutation = generator.permutation(len(self)) -> 3616 return self.select( 3617 indices=permutation, 3618 keep_in_memory=keep_in_memory, 3619 indices_cache_file_name=indices_cache_file_name if not keep_in_memory else None, 3620 writer_batch_size=writer_batch_size, 3621 new_fingerprint=new_fingerprint, 3622 ) File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs) 544 self_format = { 545 "type": self._format_type, 546 "format_kwargs": self._format_kwargs, 547 "columns": self._format_columns, 548 "output_all_columns": self._output_all_columns, 549 } 550 # apply actual function --> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 553 # re-apply format to the output File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 476 validate_fingerprint(kwargs[fingerprint_name]) 478 # Call actual function --> 480 out = func(self, *args, **kwargs) 482 # Update fingerprint of in-place transforms + update in-place history of transforms 484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3266, in Dataset.select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint) 3263 return self._select_contiguous(start, length, new_fingerprint=new_fingerprint) 3265 # If not contiguous, we need to create a new indices mapping -> 3266 return self._select_with_indices_mapping( 3267 indices, 3268 keep_in_memory=keep_in_memory, 3269 indices_cache_file_name=indices_cache_file_name, 3270 writer_batch_size=writer_batch_size, 3271 new_fingerprint=new_fingerprint, 3272 ) 
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs) 544 self_format = { 545 "type": self._format_type, 546 "format_kwargs": self._format_kwargs, 547 "columns": self._format_columns, 548 "output_all_columns": self._output_all_columns, 549 } 550 # apply actual function --> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 553 # re-apply format to the output File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 476 validate_fingerprint(kwargs[fingerprint_name]) 478 # Call actual function --> 480 out = func(self, *args, **kwargs) 482 # Update fingerprint of in-place transforms + update in-place history of transforms 484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3389, in Dataset._select_with_indices_mapping(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint) 3387 logger.info(f"Caching indices mapping at {indices_cache_file_name}") 3388 tmp_file = tempfile.NamedTemporaryFile("wb", dir=os.path.dirname(indices_cache_file_name), delete=False) -> 3389 writer = ArrowWriter( 3390 path=tmp_file.name, writer_batch_size=writer_batch_size, fingerprint=new_fingerprint, unit="indices" 3391 ) 3393 indices = indices if isinstance(indices, list) else list(indices) 3395 size = len(self) File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_writer.py:315, in ArrowWriter.__init__(self, schema, features, path, stream, fingerprint, writer_batch_size, hash_salt, check_duplicates, disable_nullable, update_features, with_metadata, unit, embed_local_files, storage_options) 312 self._disable_nullable = disable_nullable 314 if stream is None: --> 315 fs_token_paths = fsspec.get_fs_token_paths(path, storage_options=storage_options) 316 self._fs: fsspec.AbstractFileSystem = fs_token_paths[0] 317 self._path = ( 318 fs_token_paths[2][0] 319 if not is_remote_filesystem(self._fs) 320 else self._fs.unstrip_protocol(fs_token_paths[2][0]) 321 ) File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/core.py:593, in get_fs_token_paths(urlpath, mode, num, name_function, storage_options, protocol, expand) 591 else: 592 urlpath = stringify_path(urlpath) --> 593 chain = _un_chain(urlpath, storage_options or {}) 594 if len(chain) > 1: 595 inkwargs = {} File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/core.py:330, in _un_chain(path, kwargs) 328 for bit in reversed(bits): 329 protocol = split_protocol(bit)[0] or "file" --> 330 cls = get_filesystem_class(protocol) 331 extra_kwargs = cls._get_kwargs_from_urls(bit) 332 kws = kwargs.get(protocol, {}) File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/registry.py:240, in get_filesystem_class(protocol) 238 if protocol not in registry: 239 if protocol not in known_implementations: --> 240 raise ValueError("Protocol not known: %s" % protocol) 241 bit = known_implementations[protocol] 242 try: ValueError: Protocol not known: parent ``` This is what the `train_dataset` object looks like ``` Dataset({ features: ['label', 'input_ids', 'attention_mask'], num_rows: 364166 }) ``` ### Steps to reproduce the bug The `train_dataset` obj is created by concatenating two 
datasets, and then `shuffle` is called, but it throws the error shown above. ### Expected behavior The dataset should be shuffled properly. ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.15.0-1022-aws-x86_64-with-glibc2.31 - Python version: 3.9.13 - PyArrow version: 10.0.0 - Pandas version: 1.4.4
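The root cause worked out in the comments is that `fsspec` treats `::` in a path as its chained-URL separator, so the segment before it is parsed as a protocol name. A minimal reproduction sketch (hypothetical path; the exact behaviour depends on the `fsspec` version):

```python
import fsspec

# "::" is fsspec's separator for chained URLs, e.g.
# "zip://file.txt::s3://bucket/archive.zip", so a plain local path
# containing "::" is parsed as a chain and "parent" below is treated
# as a protocol name rather than part of the filename.
fsspec.get_fs_token_paths("parent::dataset.arrow")
# ValueError: Protocol not known: parent
```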
https://api.github.com/repos/huggingface/datasets
null
1,592,469,938
https://api.github.com/repos/huggingface/datasets/issues/5555/comments
I_kwDODunzps5e6ymy
null
5,555
https://api.github.com/repos/huggingface/datasets/issues/5555/events
false
closed
2023-02-20T18:15:43Z
null
https://api.github.com/repos/huggingface/datasets/issues/5554
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5554/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5554/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/5554
[]
false
2023-09-24T10:07:29Z
2023-02-21T12:43:38Z
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008735 / 0.011353 (-0.002618) | 0.004514 / 0.011008 (-0.006494) | 0.099348 / 0.038508 (0.060840) | 0.030060 / 0.023109 (0.006951) | 0.302189 / 0.275898 (0.026291) | 0.339535 / 0.323480 (0.016055) | 0.007053 / 0.007986 (-0.000933) | 0.003420 / 0.004328 (-0.000909) | 0.076967 / 0.004250 (0.072717) | 0.034484 / 0.037052 (-0.002568) | 0.304349 / 0.258489 (0.045860) | 0.354032 / 0.293841 (0.060191) | 0.033552 / 0.128546 (-0.094995) | 0.011405 / 0.075646 (-0.064241) | 0.324773 / 0.419271 (-0.094498) | 0.041103 / 0.043533 (-0.002429) | 0.313559 / 0.255139 (0.058420) | 0.333251 / 0.283200 (0.050052) | 0.087580 / 0.141683 (-0.054103) | 1.460324 / 1.452155 (0.008169) | 1.552239 / 1.492716 (0.059523) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183759 / 0.018006 (0.165753) | 0.413274 / 0.000490 (0.412784) | 0.001684 / 0.000200 (0.001484) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023341 / 0.037411 (-0.014071) | 0.098368 / 0.014526 (0.083842) | 0.105522 / 0.176557 (-0.071034) | 0.151581 / 0.737135 (-0.585554) | 0.108980 / 0.296338 (-0.187358) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417856 / 0.215209 (0.202647) | 4.167570 / 2.077655 (2.089915) | 1.843669 / 1.504120 (0.339549) | 1.643130 / 1.541195 (0.101936) | 1.717587 / 1.468490 
(0.249097) | 0.696392 / 4.584777 (-3.888384) | 3.427617 / 3.745712 (-0.318096) | 2.816486 / 5.269862 (-2.453376) | 1.539519 / 4.565676 (-3.026157) | 0.082112 / 0.424275 (-0.342163) | 0.012425 / 0.007607 (0.004818) | 0.525325 / 0.226044 (0.299281) | 5.251710 / 2.268929 (2.982781) | 2.273641 / 55.444624 (-53.170983) | 1.931002 / 6.876477 (-4.945474) | 1.977253 / 2.142072 (-0.164819) | 0.804794 / 4.805227 (-4.000434) | 0.147324 / 6.500664 (-6.353340) | 0.064966 / 0.075469 (-0.010503) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.193173 / 1.841788 (-0.648615) | 13.705127 / 8.074308 (5.630819) | 14.348408 / 10.191392 (4.157016) | 0.165374 / 0.680424 (-0.515050) | 0.028288 / 0.534201 (-0.505913) | 0.402546 / 0.579283 (-0.176737) | 0.413503 / 0.434364 (-0.020861) | 0.473298 / 0.540337 (-0.067039) | 0.567571 / 1.386936 (-0.819365) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006735 / 0.011353 (-0.004618) | 0.004601 / 0.011008 (-0.006407) | 0.077414 / 0.038508 (0.038906) | 0.027402 / 0.023109 (0.004293) | 0.353469 / 0.275898 (0.077571) | 0.381697 / 0.323480 (0.058218) | 0.005076 / 0.007986 (-0.002910) | 0.004665 / 0.004328 (0.000336) | 0.076210 / 0.004250 (0.071960) | 0.039114 / 0.037052 (0.002061) | 0.354980 / 0.258489 (0.096491) | 0.389648 / 0.293841 (0.095807) | 0.031674 / 0.128546 (-0.096872) | 0.011752 / 0.075646 (-0.063894) | 0.086330 / 0.419271 (-0.332942) | 0.041530 / 0.043533 (-0.002003) | 0.343002 / 0.255139 (0.087863) | 0.365959 / 0.283200 (0.082760) | 0.091848 / 0.141683 (-0.049835) | 1.519427 / 1.452155 (0.067272) | 1.591529 / 1.492716 (0.098813) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216458 / 0.018006 (0.198452) | 0.403326 / 0.000490 (0.402836) | 0.000432 / 0.000200 (0.000232) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025106 / 0.037411 (-0.012305) | 0.101113 / 0.014526 (0.086588) | 0.108104 / 0.176557 (-0.068453) | 0.142342 / 0.737135 (-0.594794) | 0.112012 / 0.296338 (-0.184326) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443128 / 0.215209 (0.227919) | 4.434707 / 2.077655 (2.357052) | 2.115434 / 1.504120 (0.611315) | 1.902865 / 1.541195 (0.361670) | 1.996981 / 1.468490 (0.528491) | 0.702485 / 4.584777 (-3.882292) | 3.419151 / 3.745712 (-0.326561) | 1.911977 / 5.269862 (-3.357884) | 1.178195 / 4.565676 (-3.387481) | 0.082985 / 0.424275 (-0.341290) | 0.012415 / 0.007607 (0.004808) | 0.546188 / 0.226044 (0.320144) | 5.463592 / 2.268929 (3.194664) | 2.574911 / 55.444624 (-52.869713) | 2.232883 / 6.876477 (-4.643594) | 2.284391 / 2.142072 (0.142319) | 0.807389 / 4.805227 (-3.997839) | 0.151461 / 6.500664 (-6.349203) | 0.067831 / 0.075469 (-0.007638) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.286605 / 1.841788 (-0.555183) | 14.230328 / 8.074308 (6.156020) | 13.944645 / 10.191392 (3.753253) | 0.153725 / 0.680424 (-0.526699) | 0.016876 / 0.534201 (-0.517325) | 0.386109 / 0.579283 (-0.193174) | 0.401798 / 0.434364 (-0.032566) | 0.467883 / 0.540337 (-0.072454) | 0.557788 / 1.386936 (-0.829148) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c07f5c9268ce55d0e2022b018d5f44cfcedf1e43 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009305 / 0.011353 (-0.002048) | 0.004978 / 0.011008 (-0.006031) | 0.101687 / 0.038508 (0.063179) | 0.035339 / 0.023109 (0.012230) | 0.294770 / 0.275898 (0.018872) | 0.355491 / 0.323480 (0.032011) | 0.008183 / 0.007986 (0.000197) | 0.004076 / 0.004328 (-0.000253) | 0.077552 / 0.004250 (0.073302) | 0.042891 / 0.037052 (0.005838) | 0.305727 / 0.258489 (0.047238) | 0.336508 / 0.293841 (0.042667) | 0.038525 / 0.128546 (-0.090022) | 0.011878 / 0.075646 (-0.063768) | 0.334136 / 0.419271 (-0.085136) | 0.047548 / 0.043533 (0.004015) | 0.301749 / 0.255139 (0.046610) | 0.318221 / 0.283200 (0.035022) | 0.099172 / 0.141683 (-0.042511) | 1.440638 / 1.452155 (-0.011516) | 1.503505 / 1.492716 (0.010789) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202748 / 0.018006 (0.184742) | 0.433670 / 0.000490 (0.433181) | 0.003139 / 0.000200 (0.002939) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025555 / 0.037411 (-0.011856) | 0.107156 / 0.014526 (0.092631) | 0.116706 / 0.176557 (-0.059851) | 0.153165 / 0.737135 (-0.583970) | 0.122614 / 0.296338 (-0.173724) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398912 / 0.215209 (0.183703) | 3.965048 / 2.077655 (1.887394) | 1.894678 / 1.504120 (0.390558) | 1.706925 / 1.541195 (0.165730) | 1.745264 / 1.468490 (0.276774) | 0.691174 / 4.584777 (-3.893603) | 3.824583 / 3.745712 (0.078871) | 3.876806 / 5.269862 (-1.393055) | 1.898991 / 4.565676 (-2.666685) | 0.083687 / 0.424275 (-0.340588) | 0.012122 / 0.007607 (0.004514) | 0.510870 / 0.226044 (0.284825) | 5.094523 / 2.268929 (2.825594) | 2.265557 / 55.444624 (-53.179067) | 1.930882 / 6.876477 (-4.945594) | 2.016090 / 2.142072 (-0.125983) | 0.833108 / 4.805227 (-3.972119) | 0.164804 / 6.500664 (-6.335860) | 0.062864 / 0.075469 (-0.012605) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.192673 / 1.841788 (-0.649115) | 14.730393 / 8.074308 (6.656085) | 14.550736 / 10.191392 (4.359344) | 0.154451 / 0.680424 (-0.525973) | 0.029222 / 0.534201 (-0.504979) | 0.440939 / 0.579283 (-0.138345) | 0.442772 / 0.434364 (0.008409) | 0.543948 / 0.540337 (0.003610) | 0.638113 / 
1.386936 (-0.748824) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007589 / 0.011353 (-0.003764) | 0.005208 / 0.011008 (-0.005800) | 0.073797 / 0.038508 (0.035289) | 0.034021 / 0.023109 (0.010912) | 0.366120 / 0.275898 (0.090222) | 0.397105 / 0.323480 (0.073625) | 0.005837 / 0.007986 (-0.002148) | 0.004028 / 0.004328 (-0.000301) | 0.073502 / 0.004250 (0.069252) | 0.051233 / 0.037052 (0.014181) | 0.359849 / 0.258489 (0.101360) | 0.397476 / 0.293841 (0.103635) | 0.036727 / 0.128546 (-0.091819) | 0.012249 / 0.075646 (-0.063397) | 0.086600 / 0.419271 (-0.332671) | 0.051156 / 0.043533 (0.007623) | 0.343441 / 0.255139 (0.088302) | 0.389672 / 0.283200 (0.106472) | 0.105180 / 0.141683 (-0.036503) | 1.439719 / 1.452155 (-0.012435) | 1.537779 / 1.492716 (0.045062) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199429 / 0.018006 (0.181422) | 0.440837 / 0.000490 (0.440347) | 0.005333 / 0.000200 (0.005133) | 0.000099 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029581 / 0.037411 (-0.007830) | 0.113789 / 0.014526 (0.099263) | 0.123799 / 0.176557 (-0.052758) | 0.163772 / 0.737135 (-0.573363) | 0.127156 / 0.296338 (-0.169183) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422803 / 0.215209 (0.207594) | 4.192400 / 2.077655 (2.114745) | 1.994561 / 1.504120 (0.490441) | 1.807085 / 1.541195 (0.265890) | 1.927539 / 1.468490 (0.459049) | 
0.708804 / 4.584777 (-3.875973) | 3.790662 / 3.745712 (0.044950) | 3.667207 / 5.269862 (-1.602655) | 1.985107 / 4.565676 (-2.580570) | 0.086609 / 0.424275 (-0.337666) | 0.012613 / 0.007607 (0.005006) | 0.520167 / 0.226044 (0.294122) | 5.208657 / 2.268929 (2.939729) | 2.500383 / 55.444624 (-52.944241) | 2.129817 / 6.876477 (-4.746660) | 2.181205 / 2.142072 (0.039133) | 0.847925 / 4.805227 (-3.957303) | 0.168293 / 6.500664 (-6.332372) | 0.065066 / 0.075469 (-0.010403) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.261053 / 1.841788 (-0.580735) | 15.091644 / 8.074308 (7.017336) | 14.126139 / 10.191392 (3.934747) | 0.184956 / 0.680424 (-0.495468) | 0.017909 / 0.534201 (-0.516292) | 0.428918 / 0.579283 (-0.150365) | 0.429637 / 0.434364 (-0.004727) | 0.530900 / 0.540337 (-0.009437) | 0.627966 / 1.386936 (-0.758970) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a72fd153d3499a5c5eda783673073c9f557f11e0 \"CML watermark\")\n", "I think we should also suggest installing `resampy` in the error message thrown by the Audio feature when `librosa` is not installed.", "exploring a better solution at https://github.com/huggingface/datasets/pull/5556" ]
null
[]
Add resampy dep
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5554/timeline
In librosa 0.10 the `resampy` dependency was removed and made optional. However, it is still necessary for resampling, so I added it to the "audio" extra dependencies.
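For context, here is a minimal sketch of what such a change typically looks like in a setuptools-based `setup.py`. The package name, version, and the exact pin are illustrative, not copied from the actual `datasets` setup script:

```python
# Hypothetical excerpt of a setup.py showing the shape of the change:
# declaring resampy explicitly under the "audio" extra.
from setuptools import setup

setup(
    name="example-package",
    version="0.1.0",
    extras_require={
        "audio": [
            "librosa",
            # librosa>=0.10 no longer pulls in resampy, but resampling
            # still needs it, so it is listed explicitly here.
            "resampy",
        ],
    },
)
```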
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5554.diff", "html_url": "https://github.com/huggingface/datasets/pull/5554", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5554.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5554" }
1,592,285,062
https://api.github.com/repos/huggingface/datasets/issues/5554/comments
PR_kwDODunzps5KXhZh
null
5,554
https://api.github.com/repos/huggingface/datasets/issues/5554/events
true
closed
2023-02-20T17:29:14Z
null
https://api.github.com/repos/huggingface/datasets/issues/5553
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5553/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5553/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/26489385?v=4", "events_url": "https://api.github.com/users/Plutone11011/events{/privacy}", "followers_url": "https://api.github.com/users/Plutone11011/followers", "following_url": "https://api.github.com/users/Plutone11011/following{/other_user}", "gists_url": "https://api.github.com/users/Plutone11011/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Plutone11011", "id": 26489385, "login": "Plutone11011", "node_id": "MDQ6VXNlcjI2NDg5Mzg1", "organizations_url": "https://api.github.com/users/Plutone11011/orgs", "received_events_url": "https://api.github.com/users/Plutone11011/received_events", "repos_url": "https://api.github.com/users/Plutone11011/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Plutone11011/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Plutone11011/subscriptions", "type": "User", "url": "https://api.github.com/users/Plutone11011" }
https://github.com/huggingface/datasets/pull/5553
[]
false
2023-02-21T13:08:25Z
2023-02-21T12:58:12Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.014953 / 0.011353 (0.003600) | 0.006936 / 0.011008 (-0.004072) | 0.144039 / 0.038508 (0.105531) | 0.046719 / 0.023109 (0.023610) | 0.408832 / 0.275898 (0.132934) | 0.501419 / 0.323480 (0.177939) | 0.010190 / 0.007986 (0.002204) | 0.007618 / 0.004328 (0.003290) | 0.108553 / 0.004250 (0.104303) | 0.048484 / 0.037052 (0.011432) | 0.451586 / 0.258489 (0.193097) | 0.469864 / 0.293841 (0.176023) | 0.062159 / 0.128546 (-0.066387) | 0.019937 / 0.075646 (-0.055710) | 0.473718 / 0.419271 (0.054446) | 0.064777 / 0.043533 (0.021244) | 0.428675 / 0.255139 (0.173536) | 0.467665 / 0.283200 (0.184465) | 0.133528 / 0.141683 (-0.008155) | 1.978084 / 1.452155 (0.525930) | 1.965878 / 1.492716 (0.473162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.290112 / 0.018006 (0.272106) | 0.629481 / 0.000490 (0.628992) | 0.003600 / 0.000200 (0.003400) | 0.000144 / 0.000054 (0.000089) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030806 / 0.037411 (-0.006605) | 0.142376 / 0.014526 (0.127850) | 0.150020 / 0.176557 (-0.026537) | 0.193679 / 0.737135 (-0.543457) | 0.151329 / 0.296338 (-0.145009) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.629725 / 0.215209 (0.414516) | 6.656313 / 2.077655 (4.578659) | 2.712160 / 
1.504120 (1.208041) | 2.328461 / 1.541195 (0.787266) | 2.452502 / 1.468490 (0.984012) | 1.353183 / 4.584777 (-3.231594) | 5.981521 / 3.745712 (2.235809) | 3.707186 / 5.269862 (-1.562676) | 2.460583 / 4.565676 (-2.105094) | 0.178300 / 0.424275 (-0.245975) | 0.020357 / 0.007607 (0.012750) | 0.813564 / 0.226044 (0.587520) | 8.465600 / 2.268929 (6.196671) | 3.491507 / 55.444624 (-51.953117) | 2.810781 / 6.876477 (-4.065695) | 3.100182 / 2.142072 (0.958110) | 1.539321 / 4.805227 (-3.265906) | 0.257735 / 6.500664 (-6.242929) | 0.082785 / 0.075469 (0.007316) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.766586 / 1.841788 (-0.075201) | 20.534638 / 8.074308 (12.460330) | 24.066176 / 10.191392 (13.874784) | 0.272419 / 0.680424 (-0.408005) | 0.048940 / 0.534201 (-0.485261) | 0.606004 / 0.579283 (0.026721) | 0.669684 / 0.434364 (0.235320) | 0.716858 / 0.540337 (0.176521) | 0.949394 / 1.386936 (-0.437542) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010865 / 0.011353 (-0.000488) | 0.009855 / 0.011008 (-0.001153) | 0.105973 / 0.038508 (0.067465) | 0.039818 / 0.023109 (0.016709) | 0.544505 / 0.275898 (0.268607) | 0.511253 / 0.323480 (0.187773) | 0.007350 / 0.007986 (-0.000635) | 0.006950 / 0.004328 (0.002622) | 0.106548 / 0.004250 (0.102298) | 0.062740 / 0.037052 (0.025688) | 0.465881 / 0.258489 (0.207392) | 0.524426 / 0.293841 (0.230585) | 0.056052 / 0.128546 (-0.072495) | 0.020906 / 0.075646 (-0.054741) | 0.125337 / 0.419271 (-0.293935) | 0.064689 / 0.043533 (0.021156) | 0.483055 / 0.255139 (0.227916) | 0.518878 / 0.283200 (0.235678) | 0.127288 / 0.141683 (-0.014394) | 1.936246 / 1.452155 (0.484092) | 2.162532 / 1.492716 (0.669816) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253691 / 0.018006 (0.235685) | 0.606244 / 0.000490 (0.605754) | 0.004251 / 0.000200 (0.004051) | 0.000126 / 0.000054 (0.000071) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038356 / 0.037411 (0.000944) | 0.146690 / 0.014526 (0.132164) | 0.146545 / 0.176557 (-0.030012) | 0.218452 / 0.737135 (-0.518684) | 0.165314 / 0.296338 (-0.131025) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.645768 / 0.215209 (0.430559) | 7.229186 / 2.077655 (5.151531) | 3.484778 / 1.504120 (1.980658) | 2.585310 / 1.541195 (1.044116) | 2.727670 / 1.468490 (1.259180) | 1.393416 / 4.584777 (-3.191361) | 6.448707 / 3.745712 (2.702995) | 3.433652 / 5.269862 (-1.836209) | 2.106450 / 4.565676 (-2.459226) | 0.143899 / 0.424275 (-0.280376) | 0.015097 / 0.007607 (0.007490) | 0.860960 / 0.226044 (0.634916) | 9.509725 / 2.268929 (7.240797) | 3.881601 / 55.444624 (-51.563024) | 3.156018 / 6.876477 (-3.720459) | 3.556330 / 2.142072 (1.414257) | 1.525940 / 4.805227 (-3.279287) | 0.264588 / 6.500664 (-6.236076) | 0.090327 / 0.075469 (0.014858) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.829761 / 1.841788 (-0.012027) | 21.037774 / 8.074308 (12.963466) | 24.464737 / 10.191392 (14.273345) | 0.394165 / 0.680424 (-0.286259) | 0.039286 / 0.534201 (-0.494915) | 0.546412 / 0.579283 (-0.032871) | 0.741760 / 0.434364 (0.307396) | 0.683969 / 0.540337 (0.143632) | 0.831392 / 1.386936 (-0.555544) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e453eeac5239d0ff3e98adcba59a6724ee68b46b \"CML watermark\")\n" ]
null
[]
Improved error message row formatting
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5553/timeline
Solves #5539
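To illustrate the general technique the title points at (this is a hedged sketch, not the actual implementation from the PR), formatting a row for an error message usually means pretty-printing it and truncating long values so the traceback stays readable. The helper name and the truncation limit below are assumptions:

```python
# Hypothetical helper: render a row readably inside an error message,
# truncating long string values so the exception text stays compact.
import json

def format_row_for_error(row: dict, max_value_len: int = 80) -> str:
    """Pretty-print a row, shortening long string values."""
    shortened = {
        key: (value[:max_value_len] + "...")
        if isinstance(value, str) and len(value) > max_value_len
        else value
        for key, value in row.items()
    }
    # default=str keeps non-JSON-serializable values (arrays, etc.) printable
    return json.dumps(shortened, indent=2, default=str)

# Usage sketch:
# raise TypeError(f"Couldn't cast row:\n{format_row_for_error(row)}")
```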
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5553.diff", "html_url": "https://github.com/huggingface/datasets/pull/5553", "merged_at": "2023-02-21T12:58:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/5553.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5553" }
1,592,236,998
https://api.github.com/repos/huggingface/datasets/issues/5553/comments
PR_kwDODunzps5KXXUq
null
5,553
https://api.github.com/repos/huggingface/datasets/issues/5553/events
true
closed
2023-02-20T16:50:09Z
null
https://api.github.com/repos/huggingface/datasets/issues/5552
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5552/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5552/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/5552
[]
false
2023-02-21T13:20:42Z
2023-02-21T13:13:05Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011635 / 0.011353 (0.000282) | 0.005446 / 0.011008 (-0.005562) | 0.111044 / 0.038508 (0.072536) | 0.034243 / 0.023109 (0.011134) | 0.357560 / 0.275898 (0.081662) | 0.403940 / 0.323480 (0.080460) | 0.008532 / 0.007986 (0.000546) | 0.004327 / 0.004328 (-0.000002) | 0.084659 / 0.004250 (0.080408) | 0.040914 / 0.037052 (0.003861) | 0.367142 / 0.258489 (0.108653) | 0.381651 / 0.293841 (0.087810) | 0.053865 / 0.128546 (-0.074681) | 0.019060 / 0.075646 (-0.056587) | 0.371994 / 0.419271 (-0.047277) | 0.058417 / 0.043533 (0.014884) | 0.357740 / 0.255139 (0.102601) | 0.367423 / 0.283200 (0.084224) | 0.104336 / 0.141683 (-0.037347) | 1.632128 / 1.452155 (0.179974) | 1.676216 / 1.492716 (0.183499) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199649 / 0.018006 (0.181642) | 0.490945 / 0.000490 (0.490455) | 0.001598 / 0.000200 (0.001398) | 0.000094 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024541 / 0.037411 (-0.012871) | 0.104713 / 0.014526 (0.090187) | 0.119438 / 0.176557 (-0.057118) | 0.160854 / 0.737135 (-0.576281) | 0.127323 / 0.296338 (-0.169016) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.586483 / 0.215209 (0.371274) | 5.771689 / 2.077655 (3.694034) | 2.378962 
/ 1.504120 (0.874842) | 1.998787 / 1.541195 (0.457592) | 1.993016 / 1.468490 (0.524526) | 1.199169 / 4.584777 (-3.385608) | 5.281648 / 3.745712 (1.535936) | 5.589235 / 5.269862 (0.319373) | 2.715162 / 4.565676 (-1.850514) | 0.153312 / 0.424275 (-0.270963) | 0.014302 / 0.007607 (0.006695) | 0.761185 / 0.226044 (0.535140) | 7.602517 / 2.268929 (5.333589) | 3.095271 / 55.444624 (-52.349354) | 2.407394 / 6.876477 (-4.469083) | 2.519074 / 2.142072 (0.377002) | 1.459270 / 4.805227 (-3.345957) | 0.259578 / 6.500664 (-6.241086) | 0.077356 / 0.075469 (0.001887) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.502123 / 1.841788 (-0.339665) | 16.254010 / 8.074308 (8.179702) | 19.971713 / 10.191392 (9.780321) | 0.221491 / 0.680424 (-0.458933) | 0.043959 / 0.534201 (-0.490242) | 0.512566 / 0.579283 (-0.066717) | 0.594724 / 0.434364 (0.160360) | 0.573855 / 0.540337 (0.033518) | 0.680503 / 1.386936 (-0.706433) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008543 / 0.011353 (-0.002810) | 0.005828 / 0.011008 (-0.005180) | 0.083696 / 0.038508 (0.045188) | 0.036186 / 0.023109 (0.013077) | 0.379777 / 0.275898 (0.103879) | 0.437361 / 0.323480 (0.113881) | 0.006788 / 0.007986 (-0.001197) | 0.005110 / 0.004328 (0.000782) | 0.106075 / 0.004250 (0.101824) | 0.048770 / 0.037052 (0.011718) | 0.390770 / 0.258489 (0.132281) | 0.420813 / 0.293841 (0.126972) | 0.050622 / 0.128546 (-0.077924) | 0.019939 / 0.075646 (-0.055707) | 0.106890 / 0.419271 (-0.312382) | 0.070800 / 0.043533 (0.027267) | 0.406094 / 0.255139 (0.150955) | 0.419796 / 0.283200 (0.136597) | 0.107237 / 0.141683 (-0.034446) | 1.687894 / 1.452155 (0.235739) | 1.735680 / 1.492716 (0.242963) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216403 / 0.018006 (0.198397) | 0.495002 / 0.000490 (0.494512) | 0.004841 / 0.000200 (0.004641) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.043774 / 0.037411 (0.006363) | 0.119144 / 0.014526 (0.104618) | 0.143694 / 0.176557 (-0.032862) | 0.195548 / 0.737135 (-0.541587) | 0.151426 / 0.296338 (-0.144912) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.617694 / 0.215209 (0.402485) | 6.216237 / 2.077655 (4.138582) | 2.578341 / 1.504120 (1.074221) | 2.184868 / 1.541195 (0.643673) | 2.244954 / 1.468490 (0.776464) | 1.236072 / 4.584777 (-3.348705) | 5.257919 / 3.745712 (1.512207) | 4.634682 / 5.269862 (-0.635180) | 2.722579 / 4.565676 (-1.843097) | 0.131433 / 0.424275 (-0.292843) | 0.012928 / 0.007607 (0.005321) | 0.768315 / 0.226044 (0.542270) | 7.625277 / 2.268929 (5.356349) | 3.146364 / 55.444624 (-52.298260) | 2.577886 / 6.876477 (-4.298590) | 2.572626 / 2.142072 (0.430554) | 1.468160 / 4.805227 (-3.337067) | 0.252524 / 6.500664 (-6.248140) | 0.083264 / 0.075469 (0.007794) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.452614 / 1.841788 (-0.389174) | 15.906162 / 8.074308 (7.831853) | 17.803630 / 10.191392 (7.612238) | 0.210769 / 0.680424 (-0.469655) | 0.024672 / 0.534201 (-0.509529) | 0.486486 / 0.579283 (-0.092797) | 0.545256 / 0.434364 (0.110892) | 0.598736 / 0.540337 (0.058399) | 0.689083 / 1.386936 (-0.697853) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#189a870b4f0964d77b43c2f4e79c4ca7b799f690 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008806 / 0.011353 (-0.002547) | 0.004947 / 0.011008 (-0.006061) | 0.098559 / 0.038508 (0.060051) | 0.034293 / 0.023109 (0.011183) | 0.311924 / 0.275898 (0.036026) | 0.377501 / 0.323480 (0.054021) | 0.007916 / 0.007986 (-0.000069) | 0.004131 / 0.004328 (-0.000197) | 0.074934 / 0.004250 (0.070684) | 0.043396 / 0.037052 (0.006344) | 0.344788 / 0.258489 (0.086299) | 0.369943 / 0.293841 (0.076102) | 0.036846 / 0.128546 (-0.091700) | 0.011803 / 0.075646 (-0.063843) | 0.331306 / 0.419271 (-0.087965) | 0.047015 / 0.043533 (0.003483) | 0.305890 / 0.255139 (0.050751) | 0.332658 / 0.283200 (0.049459) | 0.101134 / 0.141683 (-0.040549) | 1.485615 / 1.452155 (0.033461) | 1.510230 / 1.492716 (0.017514) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274272 / 0.018006 (0.256266) | 0.514739 / 0.000490 (0.514250) | 0.003433 / 0.000200 (0.003234) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027054 / 0.037411 (-0.010357) | 0.106416 / 0.014526 (0.091890) | 0.118761 / 0.176557 (-0.057796) | 0.156115 / 0.737135 (-0.581021) | 0.123801 / 0.296338 (-0.172537) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403121 / 0.215209 (0.187912) | 4.008806 / 2.077655 (1.931151) | 1.891253 / 1.504120 (0.387133) | 1.698523 / 1.541195 (0.157328) | 1.778533 / 1.468490 (0.310043) | 0.688207 / 4.584777 (-3.896570) | 3.674350 / 3.745712 (-0.071362) | 1.848438 / 5.269862 (-3.421423) | 1.202380 / 4.565676 (-3.363297) | 0.073490 / 0.424275 (-0.350785) | 0.010655 / 0.007607 (0.003048) | 0.446939 / 0.226044 (0.220894) | 4.478489 / 2.268929 (2.209560) | 1.992281 / 55.444624 (-53.452343) | 1.684077 / 6.876477 (-5.192400) | 1.715435 / 2.142072 (-0.426638) | 0.731454 / 4.805227 (-4.073773) | 0.143679 / 6.500664 (-6.356985) | 0.053415 / 0.075469 (-0.022054) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.060583 / 1.841788 (-0.781205) | 13.730462 / 8.074308 (5.656153) | 13.038976 / 10.191392 (2.847583) | 0.144168 / 0.680424 (-0.536256) | 0.025788 / 0.534201 (-0.508413) | 0.393332 / 0.579283 (-0.185951) | 0.409495 / 0.434364 (-0.024869) | 0.523745 / 0.540337 
(-0.016592) | 0.601595 / 1.386936 (-0.785341) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006369 / 0.011353 (-0.004983) | 0.005019 / 0.011008 (-0.005990) | 0.065226 / 0.038508 (0.026718) | 0.029634 / 0.023109 (0.006524) | 0.302871 / 0.275898 (0.026972) | 0.331055 / 0.323480 (0.007575) | 0.005470 / 0.007986 (-0.002516) | 0.005372 / 0.004328 (0.001043) | 0.064930 / 0.004250 (0.060680) | 0.046979 / 0.037052 (0.009927) | 0.305633 / 0.258489 (0.047144) | 0.345305 / 0.293841 (0.051464) | 0.032951 / 0.128546 (-0.095596) | 0.011447 / 0.075646 (-0.064199) | 0.077054 / 0.419271 (-0.342218) | 0.045744 / 0.043533 (0.002211) | 0.303446 / 0.255139 (0.048307) | 0.319837 / 0.283200 (0.036637) | 0.098631 / 0.141683 (-0.043052) | 1.266593 / 1.452155 (-0.185562) | 1.355388 / 1.492716 (-0.137328) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.291301 / 0.018006 (0.273295) | 0.537848 / 0.000490 (0.537359) | 0.006697 / 0.000200 (0.006497) | 0.000110 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027677 / 0.037411 (-0.009734) | 0.099633 / 0.014526 (0.085107) | 0.110626 / 0.176557 (-0.065931) | 0.144724 / 0.737135 (-0.592412) | 0.114955 / 0.296338 (-0.181383) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.375344 / 0.215209 (0.160135) | 3.717490 / 2.077655 (1.639835) | 1.845886 / 1.504120 (0.341766) | 1.713274 / 1.541195 (0.172079) | 1.761286 
/ 1.468490 (0.292796) | 0.627924 / 4.584777 (-3.956853) | 3.628154 / 3.745712 (-0.117558) | 3.261851 / 5.269862 (-2.008011) | 1.701008 / 4.565676 (-2.864669) | 0.076703 / 0.424275 (-0.347572) | 0.010839 / 0.007607 (0.003231) | 0.459193 / 0.226044 (0.233148) | 4.589066 / 2.268929 (2.320137) | 2.193972 / 55.444624 (-53.250653) | 1.892115 / 6.876477 (-4.984362) | 1.892453 / 2.142072 (-0.249619) | 0.745727 / 4.805227 (-4.059500) | 0.150232 / 6.500664 (-6.350432) | 0.057245 / 0.075469 (-0.018224) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.114657 / 1.841788 (-0.727131) | 13.595215 / 8.074308 (5.520907) | 12.267177 / 10.191392 (2.075785) | 0.151362 / 0.680424 (-0.529061) | 0.015609 / 0.534201 (-0.518591) | 0.379151 / 0.579283 (-0.200132) | 0.386125 / 0.434364 (-0.048238) | 0.470037 / 0.540337 (-0.070301) | 0.562340 / 1.386936 (-0.824596) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#526578cd473a266fa86643d15905181bf346ecac \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009847 / 0.011353 (-0.001505) | 0.005609 / 0.011008 (-0.005399) | 0.101951 / 0.038508 (0.063443) | 0.038082 / 0.023109 (0.014972) | 0.299933 / 0.275898 (0.024035) | 0.377081 / 0.323480 (0.053601) | 0.008900 / 0.007986 (0.000915) | 0.004608 / 0.004328 (0.000279) | 0.077723 / 0.004250 (0.073473) | 0.048592 / 0.037052 (0.011540) | 0.310789 / 0.258489 (0.052300) | 0.345627 / 0.293841 (0.051787) | 0.038716 / 0.128546 (-0.089830) | 0.012653 / 0.075646 (-0.062993) | 0.336885 / 0.419271 (-0.082387) | 0.048715 / 0.043533 (0.005182) | 0.295336 / 0.255139 (0.040197) | 0.316735 / 0.283200 (0.033536) | 0.115142 / 0.141683 (-0.026541) | 1.480332 / 1.452155 (0.028177) | 1.604972 / 1.492716 (0.112256) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.299516 / 0.018006 (0.281510) | 0.525892 / 0.000490 (0.525402) | 0.002246 / 
0.000200 (0.002046) | 0.000095 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031547 / 0.037411 (-0.005864) | 0.120611 / 0.014526 (0.106085) | 0.124516 / 0.176557 (-0.052041) | 0.166036 / 0.737135 (-0.571100) | 0.131689 / 0.296338 (-0.164650) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400728 / 0.215209 (0.185519) | 4.007027 / 2.077655 (1.929372) | 1.793922 / 1.504120 (0.289803) | 1.596709 / 1.541195 (0.055514) | 1.752130 / 1.468490 (0.283640) | 0.717464 / 4.584777 (-3.867313) | 3.798844 / 3.745712 (0.053132) | 3.685088 / 5.269862 (-1.584774) | 1.914041 / 4.565676 (-2.651636) | 0.086181 / 0.424275 (-0.338094) | 0.012753 / 0.007607 (0.005146) | 0.507984 / 0.226044 (0.281940) | 5.086255 / 2.268929 (2.817326) | 2.280650 / 55.444624 (-53.163974) | 1.929294 / 6.876477 (-4.947183) | 2.057884 / 2.142072 (-0.084188) | 0.852863 / 4.805227 (-3.952364) | 0.165497 / 6.500664 (-6.335168) | 0.063356 / 0.075469 (-0.012113) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212593 / 1.841788 (-0.629194) | 16.270507 / 8.074308 (8.196199) | 15.708406 / 10.191392 (5.517014) | 0.162346 / 0.680424 (-0.518078) | 0.029702 / 0.534201 (-0.504499) | 0.447685 / 0.579283 (-0.131598) | 0.449361 / 0.434364 (0.014997) | 0.530536 / 0.540337 (-0.009801) | 0.613439 / 1.386936 (-0.773497) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007741 / 0.011353 (-0.003612) | 0.005752 / 0.011008 (-0.005256) | 0.076600 / 0.038508 (0.038092) | 0.034841 / 0.023109 (0.011732) | 0.345106 / 0.275898 (0.069208) | 0.385685 / 0.323480 (0.062205) | 0.006466 / 0.007986 (-0.001519) | 0.005806 / 0.004328 (0.001478) | 0.075110 / 0.004250 (0.070860) | 0.052936 / 0.037052 (0.015883) | 0.343576 / 0.258489 (0.085087) | 0.408749 / 0.293841 (0.114908) | 0.037345 / 0.128546 (-0.091201) | 0.012807 / 0.075646 (-0.062839) | 0.087732 / 0.419271 (-0.331540) | 0.050218 / 0.043533 (0.006685) | 0.338963 / 0.255139 (0.083824) | 0.361629 / 0.283200 (0.078429) | 0.107488 / 0.141683 (-0.034195) | 1.465284 / 1.452155 (0.013130) | 1.562218 / 1.492716 (0.069502) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.322496 / 0.018006 (0.304489) | 0.522782 / 0.000490 (0.522292) | 0.006680 / 0.000200 (0.006480) | 0.000144 / 0.000054 (0.000090) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031801 / 0.037411 (-0.005611) | 0.116839 / 0.014526 (0.102313) | 0.127552 / 0.176557 (-0.049005) | 0.167670 / 0.737135 (-0.569465) | 0.134170 / 0.296338 (-0.162168) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425449 / 0.215209 (0.210240) | 4.229367 / 2.077655 (2.151713) | 2.014663 / 1.504120 (0.510543) | 1.812981 / 1.541195 (0.271787) | 1.964039 / 1.468490 (0.495549) | 0.703454 / 4.584777 (-3.881323) | 3.786985 / 3.745712 (0.041273) | 2.262377 / 5.269862 (-3.007485) | 1.404868 / 4.565676 (-3.160808) | 0.086234 / 0.424275 (-0.338041) | 0.012616 / 0.007607 (0.005009) | 0.525784 / 0.226044 (0.299739) | 5.268295 / 2.268929 (2.999366) | 2.496674 / 55.444624 (-52.947950) | 2.177773 / 6.876477 (-4.698704) | 2.313677 / 2.142072 (0.171605) | 0.846202 / 4.805227 (-3.959026) | 0.170152 / 6.500664 (-6.330513) | 0.066772 / 0.075469 (-0.008698) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254719 / 1.841788 (-0.587069) | 16.017627 / 8.074308 (7.943319) | 14.560583 / 10.191392 (4.369191) | 0.168275 / 0.680424 (-0.512149) | 0.017935 / 0.534201 (-0.516266) | 0.430806 / 0.579283 (-0.148477) | 0.428737 / 0.434364 (-0.005626) | 0.532001 / 0.540337 (-0.008336) | 0.633680 / 1.386936 (-0.753256) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c2c75dff81c3f060cc4731be3416fd962cc6383e \"CML watermark\")\n" ]
null
[]
Make tiktoken tokenizers hashable
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5552/timeline
Fix for https://discord.com/channels/879548962464493619/1075729627546406912/1075729627546406912
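The underlying problem is that `datasets` fingerprints `map` functions for caching, and objects that cannot be serialized (like tiktoken encoders holding native state) break that hashing. As a hedged sketch of the general technique only (the `Encoding` class below is a stand-in, and the real PR works through the library's own hashing internals rather than plain `copyreg`), one can register a custom reduction so the object becomes picklable, and therefore hashable:

```python
# Hypothetical sketch: teach the pickler to serialize an object that
# carries unpicklable state, so it can be hashed deterministically.
import copyreg
import pickle

class Encoding:
    """Stand-in for an object with unpicklable native state."""
    def __init__(self, name: str):
        self.name = name
        self._core = lambda s: s  # lambdas are not picklable

def _reconstruct_encoding(name: str) -> "Encoding":
    # Rebuild the object from its minimal serializable state.
    return Encoding(name)

def _reduce_encoding(enc: "Encoding"):
    # Serialize only the name; the native state is recreated on load.
    return _reconstruct_encoding, (enc.name,)

copyreg.pickle(Encoding, _reduce_encoding)

payload = pickle.dumps(Encoding("cl100k_base"))  # works despite the lambda
restored = pickle.loads(payload)
assert restored.name == "cl100k_base"
```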
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5552.diff", "html_url": "https://github.com/huggingface/datasets/pull/5552", "merged_at": "2023-02-21T13:13:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/5552.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5552" }
1,592,186,703
https://api.github.com/repos/huggingface/datasets/issues/5552/comments
PR_kwDODunzps5KXMjA
null
5,552
https://api.github.com/repos/huggingface/datasets/issues/5552/events
true
closed
2023-02-20T16:16:57Z
null
https://api.github.com/repos/huggingface/datasets/issues/5551
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5551/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5551/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/74963545?v=4", "events_url": "https://api.github.com/users/osbm/events{/privacy}", "followers_url": "https://api.github.com/users/osbm/followers", "following_url": "https://api.github.com/users/osbm/following{/other_user}", "gists_url": "https://api.github.com/users/osbm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osbm", "id": 74963545, "login": "osbm", "node_id": "MDQ6VXNlcjc0OTYzNTQ1", "organizations_url": "https://api.github.com/users/osbm/orgs", "received_events_url": "https://api.github.com/users/osbm/received_events", "repos_url": "https://api.github.com/users/osbm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osbm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osbm/subscriptions", "type": "User", "url": "https://api.github.com/users/osbm" }
https://github.com/huggingface/datasets/pull/5551
[]
false
2023-02-21T13:27:57Z
2023-02-21T13:21:07Z
null
[ "good catch!", "_The documentation is not available anymore as the PR was closed or merged._", "The test fail is unrelated to this PR and fixed on `main` - merging :)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008942 / 0.011353 (-0.002411) | 0.004617 / 0.011008 (-0.006391) | 0.101310 / 0.038508 (0.062802) | 0.030997 / 0.023109 (0.007888) | 0.306292 / 0.275898 (0.030394) | 0.370533 / 0.323480 (0.047053) | 0.007318 / 0.007986 (-0.000667) | 0.003473 / 0.004328 (-0.000856) | 0.078557 / 0.004250 (0.074307) | 0.036312 / 0.037052 (-0.000740) | 0.308993 / 0.258489 (0.050504) | 0.344411 / 0.293841 (0.050570) | 0.034384 / 0.128546 (-0.094162) | 0.011631 / 0.075646 (-0.064016) | 0.323948 / 0.419271 (-0.095324) | 0.041176 / 0.043533 (-0.002357) | 0.302512 / 0.255139 (0.047373) | 0.322439 / 0.283200 (0.039239) | 0.088955 / 0.141683 (-0.052728) | 1.534918 / 1.452155 (0.082763) | 1.555803 / 1.492716 (0.063087) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195639 / 0.018006 (0.177633) | 0.423068 / 0.000490 (0.422579) | 0.004101 / 0.000200 (0.003901) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023691 / 0.037411 (-0.013721) | 0.100536 / 0.014526 (0.086011) | 0.108399 / 0.176557 (-0.068157) | 0.143515 / 0.737135 (-0.593620) | 0.111886 / 0.296338 (-0.184452) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| 
new / old (diff) | 0.417519 / 0.215209 (0.202310) | 4.180463 / 2.077655 (2.102808) | 1.862511 / 1.504120 (0.358391) | 1.658724 / 1.541195 (0.117529) | 1.735847 / 1.468490 (0.267357) | 0.688257 / 4.584777 (-3.896520) | 3.447976 / 3.745712 (-0.297737) | 1.877939 / 5.269862 (-3.391922) | 1.157385 / 4.565676 (-3.408292) | 0.081418 / 0.424275 (-0.342857) | 0.012395 / 0.007607 (0.004788) | 0.518935 / 0.226044 (0.292891) | 5.220355 / 2.268929 (2.951427) | 2.308355 / 55.444624 (-53.136269) | 1.960026 / 6.876477 (-4.916450) | 2.013179 / 2.142072 (-0.128893) | 0.802850 / 4.805227 (-4.002377) | 0.146941 / 6.500664 (-6.353723) | 0.064080 / 0.075469 (-0.011389) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284443 / 1.841788 (-0.557344) | 13.903755 / 8.074308 (5.829447) | 14.467101 / 10.191392 (4.275709) | 0.156813 / 0.680424 (-0.523611) | 0.028583 / 0.534201 (-0.505618) | 0.406349 / 0.579283 (-0.172934) | 0.413178 / 0.434364 (-0.021186) | 0.491283 / 0.540337 (-0.049055) | 0.571171 / 1.386936 (-0.815765) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006868 / 0.011353 (-0.004484) | 0.004593 / 0.011008 (-0.006416) | 0.077574 / 0.038508 (0.039066) | 0.027703 / 0.023109 (0.004593) | 0.342096 / 0.275898 (0.066198) | 0.378500 / 0.323480 (0.055020) | 0.005785 / 0.007986 (-0.002201) | 0.003342 / 0.004328 (-0.000986) | 0.076105 / 0.004250 (0.071855) | 0.040369 / 0.037052 (0.003317) | 0.343611 / 0.258489 (0.085122) | 0.391859 / 0.293841 (0.098018) | 0.032675 / 0.128546 (-0.095871) | 0.011623 / 0.075646 (-0.064023) | 0.086623 / 0.419271 (-0.332648) | 0.051955 / 0.043533 (0.008423) | 0.343425 / 0.255139 (0.088286) | 0.368887 / 0.283200 (0.085688) | 0.097117 / 0.141683 (-0.044566) | 1.499546 / 1.452155 (0.047391) | 1.593100 / 1.492716 (0.100383) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193568 / 0.018006 (0.175562) | 0.409211 / 0.000490 (0.408722) | 0.003797 / 
0.000200 (0.003597) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024982 / 0.037411 (-0.012430) | 0.101367 / 0.014526 (0.086841) | 0.108546 / 0.176557 (-0.068010) | 0.144402 / 0.737135 (-0.592733) | 0.112233 / 0.296338 (-0.184105) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432820 / 0.215209 (0.217611) | 4.341045 / 2.077655 (2.263391) | 2.058326 / 1.504120 (0.554207) | 1.853913 / 1.541195 (0.312718) | 1.942436 / 1.468490 (0.473946) | 0.699130 / 4.584777 (-3.885647) | 3.392879 / 3.745712 (-0.352833) | 1.908277 / 5.269862 (-3.361585) | 1.177998 / 4.565676 (-3.387678) | 0.082700 / 0.424275 (-0.341576) | 0.012505 / 0.007607 (0.004898) | 0.526286 / 0.226044 (0.300242) | 5.279599 / 2.268929 (3.010670) | 2.505771 / 55.444624 (-52.938854) | 2.158460 / 6.876477 (-4.718016) | 2.211437 / 2.142072 (0.069365) | 0.802065 / 4.805227 (-4.003163) | 0.150766 / 6.500664 (-6.349898) | 0.067639 / 0.075469 (-0.007830) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.286595 / 1.841788 (-0.555192) | 13.961894 / 8.074308 (5.887586) | 14.021865 / 10.191392 (3.830473) | 0.164590 / 0.680424 (-0.515834) | 0.016909 / 0.534201 (-0.517292) | 0.392215 / 0.579283 (-0.187069) | 0.408080 / 0.434364 (-0.026284) | 0.488247 / 0.540337 (-0.052090) | 0.575524 / 1.386936 (-0.811412) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#699b0293876015457bfce40f7245d346c34c7717 \"CML watermark\")\n" ]
null
[]
Suggest scikit-learn instead of sklearn
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5551/timeline
This is a minor fix, but the suggested `pip install sklearn` does not work. The current error message if sklearn is not installed: ``` ImportError: To be able to use [dataset name], you need to install the following dependency: sklearn. Please install it using 'pip install sklearn' for instance. ```
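For context, the confusion comes from the distribution name (`scikit-learn`) differing from the import name (`sklearn`). A minimal sketch of a guard with the corrected hint — illustrative only, not the actual implementation in `datasets`:

```python
import importlib.util

# The PyPI distribution is "scikit-learn"; "pip install sklearn" only installs
# a deprecated shim package. The import name, however, remains "sklearn".
if importlib.util.find_spec("sklearn") is None:
    raise ImportError(
        "To be able to use this dataset, you need to install the following "
        "dependency: sklearn.\nPlease install it using 'pip install scikit-learn' for instance."
    )
```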
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5551.diff", "html_url": "https://github.com/huggingface/datasets/pull/5551", "merged_at": "2023-02-21T13:21:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/5551.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5551" }
1,592,140,836
https://api.github.com/repos/huggingface/datasets/issues/5551/comments
PR_kwDODunzps5KXCof
null
5,551
https://api.github.com/repos/huggingface/datasets/issues/5551/events
true
closed
2023-02-20T08:52:11Z
null
https://api.github.com/repos/huggingface/datasets/issues/5550
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5550/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5550/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tomaarsen", "id": 37621491, "login": "tomaarsen", "node_id": "MDQ6VXNlcjM3NjIxNDkx", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "repos_url": "https://api.github.com/users/tomaarsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "type": "User", "url": "https://api.github.com/users/tomaarsen" }
https://github.com/huggingface/datasets/pull/5550
[]
false
2023-02-20T15:16:13Z
2023-02-20T15:09:13Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "See the resolved changes [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5550/en/package_reference/main_classes#datasets.Dataset.class_encode_column), [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5550/en/package_reference/main_classes#datasets.Dataset.unique) and [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5550/en/package_reference/main_classes#datasets.DatasetDict.class_encode_column), respectively", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008256 / 0.011353 (-0.003097) | 0.004400 / 0.011008 (-0.006608) | 0.098676 / 0.038508 (0.060168) | 0.028937 / 0.023109 (0.005828) | 0.302578 / 0.275898 (0.026680) | 0.334170 / 0.323480 (0.010690) | 0.006657 / 0.007986 (-0.001329) | 0.004581 / 0.004328 (0.000253) | 0.076874 / 0.004250 (0.072624) | 0.034401 / 0.037052 (-0.002652) | 0.303928 / 0.258489 (0.045439) | 0.348421 / 0.293841 (0.054580) | 0.033303 / 0.128546 (-0.095243) | 0.011445 / 0.075646 (-0.064202) | 0.322137 / 0.419271 (-0.097135) | 0.041072 / 0.043533 (-0.002461) | 0.306007 / 0.255139 (0.050868) | 0.325945 / 0.283200 (0.042745) | 0.086685 / 0.141683 (-0.054998) | 1.454956 / 1.452155 (0.002801) | 1.545525 / 1.492716 (0.052809) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.175536 / 0.018006 (0.157530) | 0.400203 / 0.000490 (0.399713) | 0.002103 / 0.000200 (0.001903) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022750 / 0.037411 (-0.014661) | 0.095163 / 0.014526 (0.080637) | 0.103995 / 0.176557 (-0.072561) | 0.138806 / 0.737135 (-0.598330) | 0.105711 / 0.296338 (-0.190628) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | 
shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427860 / 0.215209 (0.212651) | 4.259594 / 2.077655 (2.181940) | 2.157986 / 1.504120 (0.653866) | 1.913814 / 1.541195 (0.372619) | 1.793455 / 1.468490 (0.324965) | 0.702341 / 4.584777 (-3.882436) | 3.353086 / 3.745712 (-0.392626) | 1.856952 / 5.269862 (-3.412909) | 1.149963 / 4.565676 (-3.415713) | 0.082926 / 0.424275 (-0.341349) | 0.012307 / 0.007607 (0.004700) | 0.524531 / 0.226044 (0.298487) | 5.254766 / 2.268929 (2.985838) | 2.590157 / 55.444624 (-52.854468) | 2.272613 / 6.876477 (-4.603864) | 2.304367 / 2.142072 (0.162294) | 0.819298 / 4.805227 (-3.985929) | 0.152170 / 6.500664 (-6.348494) | 0.066563 / 0.075469 (-0.008906) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.205054 / 1.841788 (-0.636733) | 13.729073 / 8.074308 (5.654765) | 14.061037 / 10.191392 (3.869645) | 0.138020 / 0.680424 (-0.542404) | 0.028042 / 0.534201 (-0.506159) | 0.392260 / 0.579283 (-0.187024) | 0.405632 / 0.434364 (-0.028732) | 0.469583 / 0.540337 (-0.070755) | 0.563110 / 1.386936 (-0.823826) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006513 / 0.011353 (-0.004839) | 0.004402 / 0.011008 (-0.006606) | 0.076339 / 0.038508 (0.037831) | 0.027222 / 0.023109 (0.004112) | 0.338968 / 0.275898 (0.063070) | 0.378475 / 0.323480 (0.054995) | 0.005443 / 0.007986 (-0.002542) | 0.003312 / 0.004328 (-0.001016) | 0.075352 / 0.004250 (0.071102) | 0.034951 / 0.037052 (-0.002102) | 0.342268 / 0.258489 (0.083779) | 0.381024 / 0.293841 (0.087183) | 0.031568 / 0.128546 (-0.096979) | 0.011558 / 0.075646 (-0.064088) | 0.085267 / 0.419271 (-0.334005) | 0.041248 / 0.043533 (-0.002284) | 0.340422 / 0.255139 (0.085283) | 0.365497 / 0.283200 (0.082297) | 0.088278 / 0.141683 
(-0.053405) | 1.479838 / 1.452155 (0.027683) | 1.554440 / 1.492716 (0.061724) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223240 / 0.018006 (0.205234) | 0.394771 / 0.000490 (0.394282) | 0.003022 / 0.000200 (0.002822) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024842 / 0.037411 (-0.012570) | 0.099167 / 0.014526 (0.084641) | 0.106376 / 0.176557 (-0.070180) | 0.141397 / 0.737135 (-0.595738) | 0.110355 / 0.296338 (-0.185983) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437598 / 0.215209 (0.222389) | 4.394964 / 2.077655 (2.317310) | 2.082660 / 1.504120 (0.578540) | 1.868690 / 1.541195 (0.327496) | 1.915190 / 1.468490 (0.446700) | 0.701035 / 4.584777 (-3.883742) | 3.306594 / 3.745712 (-0.439118) | 1.842681 / 5.269862 (-3.427181) | 1.155022 / 4.565676 (-3.410654) | 0.083310 / 0.424275 (-0.340965) | 0.012413 / 0.007607 (0.004806) | 0.543179 / 0.226044 (0.317135) | 5.445605 / 2.268929 (3.176676) | 2.545080 / 55.444624 (-52.899544) | 2.188741 / 6.876477 (-4.687736) | 2.205561 / 2.142072 (0.063489) | 0.804967 / 4.805227 (-4.000261) | 0.151024 / 6.500664 (-6.349640) | 0.066448 / 0.075469 (-0.009021) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.304671 / 1.841788 (-0.537117) | 13.996631 / 8.074308 (5.922323) | 13.617626 / 10.191392 (3.426234) | 0.141512 / 0.680424 (-0.538912) | 0.016527 / 0.534201 (-0.517674) | 0.384981 / 0.579283 (-0.194302) | 0.385198 / 0.434364 (-0.049166) | 0.469033 / 0.540337 (-0.071305) | 0.554738 / 1.386936 (-0.832198) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d09dc897e153fed7c7f459a122fb03faa46688ed \"CML watermark\")\n" ]
null
[]
Resolve four broken refs in the docs
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5550/timeline
Hello! ## Pull Request overview * Resolve 4 broken references in the docs ## The problems Two broken references [here](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.class_encode_column): ![image](https://user-images.githubusercontent.com/37621491/220056232-366b64dc-33c9-461b-8f82-1ac4aa570280.png) --- One broken reference [here](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.unique): ![image](https://user-images.githubusercontent.com/37621491/220057135-2f249d60-c01d-48b5-82bb-5085a7635198.png) --- One missing reference [here](https://huggingface.co/docs/datasets/v2.9.0/en/package_reference/main_classes#datasets.DatasetDict.class_encode_column): ![image](https://user-images.githubusercontent.com/37621491/220057025-4a8e5556-5041-4ec7-b8d8-ed4fdc266495.png) - Tom Aarsen
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5550.diff", "html_url": "https://github.com/huggingface/datasets/pull/5550", "merged_at": "2023-02-20T15:09:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/5550.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5550" }
1,591,409,475
https://api.github.com/repos/huggingface/datasets/issues/5550/comments
PR_kwDODunzps5KUl5i
null
5,550
https://api.github.com/repos/huggingface/datasets/issues/5550/events
true
closed
2023-02-19T20:09:28Z
null
https://api.github.com/repos/huggingface/datasets/issues/5549
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5549/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5549/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/2053727?v=4", "events_url": "https://api.github.com/users/Skylion007/events{/privacy}", "followers_url": "https://api.github.com/users/Skylion007/followers", "following_url": "https://api.github.com/users/Skylion007/following{/other_user}", "gists_url": "https://api.github.com/users/Skylion007/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Skylion007", "id": 2053727, "login": "Skylion007", "node_id": "MDQ6VXNlcjIwNTM3Mjc=", "organizations_url": "https://api.github.com/users/Skylion007/orgs", "received_events_url": "https://api.github.com/users/Skylion007/received_events", "repos_url": "https://api.github.com/users/Skylion007/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Skylion007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Skylion007/subscriptions", "type": "User", "url": "https://api.github.com/users/Skylion007" }
https://github.com/huggingface/datasets/pull/5549
[]
false
2023-02-23T14:06:39Z
2023-02-23T13:59:39Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009598 / 0.011353 (-0.001755) | 0.005115 / 0.011008 (-0.005893) | 0.100100 / 0.038508 (0.061592) | 0.036193 / 0.023109 (0.013083) | 0.296478 / 0.275898 (0.020580) | 0.355997 / 0.323480 (0.032517) | 0.007846 / 0.007986 (-0.000140) | 0.004082 / 0.004328 (-0.000247) | 0.076949 / 0.004250 (0.072699) | 0.044304 / 0.037052 (0.007252) | 0.310775 / 0.258489 (0.052286) | 0.333914 / 0.293841 (0.040073) | 0.037783 / 0.128546 (-0.090763) | 0.012023 / 0.075646 (-0.063623) | 0.333311 / 0.419271 (-0.085961) | 0.047568 / 0.043533 (0.004035) | 0.295567 / 0.255139 (0.040428) | 0.315707 / 0.283200 (0.032507) | 0.102675 / 0.141683 (-0.039008) | 1.471546 / 1.452155 (0.019391) | 1.507991 / 1.492716 (0.015274) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208658 / 0.018006 (0.190651) | 0.445026 / 0.000490 (0.444536) | 0.002593 / 0.000200 (0.002393) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026968 / 0.037411 (-0.010444) | 0.108188 / 0.014526 (0.093662) | 0.117965 / 0.176557 (-0.058592) | 0.182769 / 0.737135 (-0.554366) | 0.121671 / 0.296338 (-0.174667) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400677 / 0.215209 (0.185468) | 4.012577 / 2.077655 (1.934922) | 
1.821324 / 1.504120 (0.317204) | 1.624438 / 1.541195 (0.083244) | 1.731886 / 1.468490 (0.263396) | 0.698089 / 4.584777 (-3.886688) | 3.786165 / 3.745712 (0.040453) | 2.079742 / 5.269862 (-3.190119) | 1.325032 / 4.565676 (-3.240644) | 0.085229 / 0.424275 (-0.339046) | 0.012017 / 0.007607 (0.004410) | 0.511779 / 0.226044 (0.285734) | 5.114358 / 2.268929 (2.845430) | 2.324763 / 55.444624 (-53.119861) | 2.011864 / 6.876477 (-4.864612) | 2.075875 / 2.142072 (-0.066198) | 0.853475 / 4.805227 (-3.951752) | 0.166949 / 6.500664 (-6.333715) | 0.064669 / 0.075469 (-0.010800) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.230212 / 1.841788 (-0.611576) | 14.942371 / 8.074308 (6.868063) | 14.075795 / 10.191392 (3.884403) | 0.156920 / 0.680424 (-0.523504) | 0.029002 / 0.534201 (-0.505199) | 0.442213 / 0.579283 (-0.137070) | 0.436888 / 0.434364 (0.002524) | 0.519725 / 0.540337 (-0.020613) | 0.604634 / 1.386936 (-0.782303) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007649 / 0.011353 (-0.003704) | 0.005298 / 0.011008 (-0.005710) | 0.076559 / 0.038508 (0.038050) | 0.033723 / 0.023109 (0.010614) | 0.334946 / 0.275898 (0.059048) | 0.372785 / 0.323480 (0.049305) | 0.006032 / 0.007986 (-0.001953) | 0.004125 / 0.004328 (-0.000204) | 0.075366 / 0.004250 (0.071116) | 0.049061 / 0.037052 (0.012009) | 0.338188 / 0.258489 (0.079699) | 0.389693 / 0.293841 (0.095852) | 0.037246 / 0.128546 (-0.091301) | 0.012530 / 0.075646 (-0.063116) | 0.088053 / 0.419271 (-0.331219) | 0.049844 / 0.043533 (0.006311) | 0.338476 / 0.255139 (0.083337) | 0.361672 / 0.283200 (0.078473) | 0.101982 / 0.141683 (-0.039701) | 1.479550 / 1.452155 (0.027396) | 1.541031 / 1.492716 (0.048315) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226162 / 0.018006 (0.208156) | 0.439108 / 0.000490 (0.438618) | 0.001102 / 0.000200 (0.000902) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030240 / 0.037411 (-0.007171) | 0.113754 / 0.014526 (0.099229) | 0.122839 / 0.176557 (-0.053717) | 0.192531 / 0.737135 (-0.544604) | 0.129455 / 0.296338 (-0.166884) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424701 / 0.215209 (0.209492) | 4.208161 / 2.077655 (2.130507) | 2.045733 / 1.504120 (0.541613) | 1.892369 / 1.541195 (0.351174) | 1.997024 / 1.468490 (0.528534) | 0.739883 / 4.584777 (-3.844894) | 3.760939 / 3.745712 (0.015227) | 3.195748 / 5.269862 (-2.074113) | 1.731480 / 4.565676 (-2.834197) | 0.087013 / 0.424275 (-0.337262) | 0.012550 / 0.007607 (0.004943) | 0.540829 / 0.226044 (0.314785) | 5.329933 / 2.268929 (3.061005) | 2.507572 / 55.444624 (-52.937052) | 2.167761 / 6.876477 (-4.708716) | 2.250298 / 2.142072 (0.108226) | 0.868718 / 4.805227 (-3.936510) | 0.181643 / 6.500664 (-6.319021) | 0.064817 / 0.075469 (-0.010653) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.295001 / 1.841788 (-0.546787) | 15.236413 / 8.074308 (7.162105) | 13.692212 / 10.191392 (3.500820) | 0.186330 / 0.680424 (-0.494094) | 0.017492 / 0.534201 (-0.516709) | 0.427365 / 0.579283 (-0.151919) | 0.427781 / 0.434364 (-0.006583) | 0.533763 / 0.540337 (-0.006575) | 0.636011 / 1.386936 (-0.750925) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#94b16b674111ca5e1a03ddcb71dc0b53acc2f934 \"CML watermark\")\n" ]
null
[]
Apply ruff flake8-comprehension checks
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5549/timeline
Fix #5548 Apply ruff's flake8-comprehensions checks for better performance and more readable code.
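As an illustration of the kind of rewrite these checks produce (example code, not taken from the diff itself):

```python
pairs = [("a", 1), ("b", 2)]

# Before: unnecessary generator/list passed to list()/dict()
# (flagged by ruff's flake8-comprehensions rules C400/C404)
squares = list(x * x for x in range(10))
lookup = dict([(k, v) for k, v in pairs])

# After: direct comprehensions avoid the extra call and intermediate object
squares = [x * x for x in range(10)]
lookup = {k: v for k, v in pairs}
```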
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5549.diff", "html_url": "https://github.com/huggingface/datasets/pull/5549", "merged_at": "2023-02-23T13:59:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/5549.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5549" }
1,590,836,848
https://api.github.com/repos/huggingface/datasets/issues/5549/comments
PR_kwDODunzps5KSsi3
null
5,549
https://api.github.com/repos/huggingface/datasets/issues/5549/events
true
closed
2023-02-19T20:05:38Z
null
https://api.github.com/repos/huggingface/datasets/issues/5548
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5548/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5548/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/2053727?v=4", "events_url": "https://api.github.com/users/Skylion007/events{/privacy}", "followers_url": "https://api.github.com/users/Skylion007/followers", "following_url": "https://api.github.com/users/Skylion007/following{/other_user}", "gists_url": "https://api.github.com/users/Skylion007/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Skylion007", "id": 2053727, "login": "Skylion007", "node_id": "MDQ6VXNlcjIwNTM3Mjc=", "organizations_url": "https://api.github.com/users/Skylion007/orgs", "received_events_url": "https://api.github.com/users/Skylion007/received_events", "repos_url": "https://api.github.com/users/Skylion007/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Skylion007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Skylion007/subscriptions", "type": "User", "url": "https://api.github.com/users/Skylion007" }
https://github.com/huggingface/datasets/issues/5548
[]
false
2023-02-23T13:59:41Z
2023-02-23T13:59:41Z
null
[]
completed
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Apply flake8-comprehensions to codebase
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5548/timeline
### Feature request Apply ruff's flake8-comprehensions checks to the codebase. ### Motivation This should strictly improve the performance / readability of the codebase by removing unnecessary iteration, function calls, etc. It should also generate better Python bytecode, which should strictly improve performance. I already applied these fixes to PyTorch and SymPy with little issue, and have opened PRs to diffusers and transformers to do this as well. ### Your contribution Making a PR.
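A quick micro-benchmark of the performance claim — illustrative only; absolute numbers depend on the machine and Python version:

```python
import timeit

# A generator expression wrapped in list() pays for an extra function call and
# a generator frame; the list comprehension compiles to a tighter loop.
slow = timeit.timeit("list(i * i for i in range(1000))", number=10_000)
fast = timeit.timeit("[i * i for i in range(1000)]", number=10_000)
print(f"list(genexpr): {slow:.3f}s  listcomp: {fast:.3f}s")
```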
https://api.github.com/repos/huggingface/datasets
null
1,590,835,479
https://api.github.com/repos/huggingface/datasets/issues/5548/comments
I_kwDODunzps5e0jkX
null
5,548
https://api.github.com/repos/huggingface/datasets/issues/5548/events
false
closed
2023-02-18T20:57:40Z
null
https://api.github.com/repos/huggingface/datasets/issues/5547
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5547/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5547/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
https://github.com/huggingface/datasets/pull/5547
[]
false
2023-02-21T16:10:55Z
2023-02-21T16:04:03Z
null
[ "The code below was throwing a warning:\r\n\r\n```python\r\nclass JaxFormatter(Formatter[Mapping, \"jax.Array\", Mapping]):\r\n def __init__(self, features=None, device=None, **jnp_array_kwargs):\r\n super().__init__(features=features)\r\n import jax\r\n from jaxlib.xla_extension import Device\r\n \r\n self.device = (\r\n device if isinstance(device, Device) else jax.devices()[0]\r\n )\r\n self.jnp_array_kwargs = jnp_array_kwargs\r\n\r\n ...\r\n\r\n def _tensorize(self, value):\r\n ...\r\n\r\n with jax.default_device(self.device):\r\n # calling jnp.array on a np.ndarray does copy the data\r\n # see https://github.com/google/jax/issues/4486\r\n return jnp.array(value, **{**default_dtype, **self.jnp_array_kwargs})\r\n```\r\n\r\nWhen providing `device` via param:\r\n\r\n```python\r\nfrom datasets import Dataset\r\nimport jax\r\n\r\nds = Dataset.from_dict({\"a\": [1, 2, 3], \"b\": [4, 5, 6]})\r\nds = ds.with_format(\"jax\", device=jax.devices()[0])\r\nprint(ds[0])\r\n```\r\n\r\nProducing the following warning:\r\n\r\n```\r\nWARNING:datasets.fingerprint:Parameter 'device'=TFRT_CPU_0 of the transform datasets.arrow_dataset.Dataset.set_format couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\r\n```\r\n\r\nThat's why I decided to map all the available devices, and assign their string representation e.g. `TFRT_CPU_0` to `self.device` instead of `jaxlib.xla_extension.Device`, so that the value of the param `device` is washable. 
So on, the code that remains at the end is:\r\n\r\n```python\r\nclass JaxFormatter(Formatter[Mapping, \"jax.Array\", Mapping]):\r\n def __init__(self, features=None, device=None, **jnp_array_kwargs):\r\n super().__init__(features=features)\r\n import jax\r\n from jaxlib.xla_client import Device\r\n\r\n self.device_mapping = self._map_devices_to_str()\r\n self.device = (\r\n device if isinstance(device, str) else str(device) if isinstance(device, Device) else str(jax.devices()[0])\r\n )\r\n self.jnp_array_kwargs = jnp_array_kwargs\r\n\r\n def _map_devices_to_str(self) -> Mapping[str, \"jaxlib.xla_extension.Device\"]:\r\n import jax\r\n\r\n return {str(device): device for device in jax.devices()}\r\n\r\n ...\r\n\r\n def _tensorize(self, value):\r\n ...\r\n\r\n with jax.default_device(self.device_mapping[self.device]):\r\n # calling jnp.array on a np.ndarray does copy the data\r\n # see https://github.com/google/jax/issues/4486\r\n return jnp.array(value, **{**default_dtype, **self.jnp_array_kwargs})\r\n```\r\n\r\nBut note that the latter also throws a warning if the provided `device` is not a string but a `jaxlib.xla_extension.Device`, so that's why it needs to be converted to string.", "_The documentation is not available anymore as the PR was closed or merged._", "After some investigation, it seems that when using `device=jaxlib.xla_extension.Device` instead of `device=string` it shows the warning so that later formats fail as that cannot be unpickled.\r\n\r\nSo I think we can either add that specifically in `use_with_jax.mdx` documentation entry I'm creating at #5535 so that the users know that they need to surroung the `jaxlib.xla_extension.Device` with `str()`, or find a workaround to override default `deepcopy` behavior with `def __deepcopy__(self)` so that the device param is converted to string if provided as a `jaxlib.xla_extension.Device`, but not sure if the latter works 😕 \r\n\r\nDo you think there's any other possible solution to this issue? Thanks, @lhoestq ", "Cool ! Specifying the device is indeed super important.\r\n\r\n\r\nI think we can just require `device` to always be a string for now, and add an example in the doc on how to get the string that corresponds to a `jaxlib.xla_extension.Device` ? This way we never deal with objects that are not picklable", "> Cool ! Specifying the device is indeed super important.\r\n> \r\n> I think we can just require `device` to always be a string for now, and add an example in the doc on how to get the string that corresponds to a `jaxlib.xla_extension.Device` ? This way we never deal with objects that are not picklable\r\n\r\nSure, then I'll restrict it to string for now! 
Also regarding the documentation update, should we wait until #5535 is merged so that I add this on top of that?", "CI is failing due to missing `resampy` in `librosa` already being fixed by @lhoestq in https://github.com/huggingface/datasets/pull/5554", "@lhoestq already moved to a global variable, I can confirm that the following now works:\r\n\r\n```python\r\nimport copy\r\nimport pickle\r\n\r\nimport jax\r\nimport pyarrow as pa\r\n\r\nfrom datasets.formatting import JaxFormatter\r\n\r\n\r\n_COL_A = [0, 1, 2]\r\n_COL_B = [\"foo\", \"bar\", \"foobar\"]\r\n_COL_C = [[[1.0, 0.0, 0.0]] * 2, [[0.0, 1.0, 0.0]] * 2, [[0.0, 0.0, 1.0]] * 2]\r\npa_table = pa.Table.from_pydict({\"a\": _COL_A, \"b\": _COL_B, \"c\": _COL_C})\r\n\r\ndevice = jax.devices()[0]\r\nformatter = JaxFormatter(device=str(device))\r\n\r\npickle.dumps(formatter)\r\ncopy.deepcopy(formatter)\r\n```", "> Looks all good now thank you !\r\n> \r\n> Is there anything else you wanted to add ? Otherwise I think it's ready for merge\r\n\r\nNothing else to add, I've already applied your suggestions, so ready to merge! Thanks for your input/feedback @lhoestq :hugs:", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009815 / 0.011353 (-0.001538) | 0.005443 / 0.011008 (-0.005565) | 0.101244 / 0.038508 (0.062736) | 0.036573 / 0.023109 (0.013464) | 0.304761 / 0.275898 (0.028863) | 0.365527 / 0.323480 (0.042047) | 0.008244 / 0.007986 (0.000258) | 0.004200 / 0.004328 (-0.000128) | 0.077471 / 0.004250 (0.073221) | 0.045266 / 0.037052 (0.008214) | 0.310213 / 0.258489 (0.051724) | 0.344247 / 0.293841 (0.050406) | 0.039530 / 0.128546 (-0.089016) | 0.012254 / 0.075646 (-0.063393) | 0.335039 / 0.419271 (-0.084233) | 0.049525 / 0.043533 (0.005992) | 0.298350 / 0.255139 (0.043211) | 0.312031 / 0.283200 (0.028832) | 0.108581 / 0.141683 (-0.033102) | 1.481178 / 1.452155 (0.029023) | 1.497662 / 1.492716 (0.004946) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.014762 / 0.018006 (-0.003244) | 0.447099 / 0.000490 (0.446609) | 0.009074 / 0.000200 (0.008874) | 0.000688 / 0.000054 (0.000633) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | 
shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027466 / 0.037411 (-0.009945) | 0.109715 / 0.014526 (0.095189) | 0.119062 / 0.176557 (-0.057495) | 0.188964 / 0.737135 (-0.548171) | 0.127057 / 0.296338 (-0.169282) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395092 / 0.215209 (0.179883) | 3.948091 / 2.077655 (1.870436) | 1.795160 / 1.504120 (0.291040) | 1.603704 / 1.541195 (0.062509) | 1.714491 / 1.468490 (0.246001) | 0.700489 / 4.584777 (-3.884288) | 3.767493 / 3.745712 (0.021781) | 3.288374 / 5.269862 (-1.981488) | 1.783711 / 4.565676 (-2.781965) | 0.085119 / 0.424275 (-0.339156) | 0.012349 / 0.007607 (0.004742) | 0.502135 / 0.226044 (0.276091) | 5.019321 / 2.268929 (2.750392) | 2.236469 / 55.444624 (-53.208155) | 1.914376 / 6.876477 (-4.962101) | 1.998579 / 2.142072 (-0.143494) | 0.847841 / 4.805227 (-3.957386) | 0.166035 / 6.500664 (-6.334629) | 0.062469 / 0.075469 (-0.013000) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.245380 / 1.841788 (-0.596408) | 14.757872 / 8.074308 (6.683564) | 14.460373 / 10.191392 (4.268981) | 0.152981 / 0.680424 (-0.527443) | 0.029001 / 0.534201 (-0.505200) | 0.439597 / 0.579283 (-0.139686) | 0.437232 / 0.434364 (0.002868) | 0.532464 / 0.540337 (-0.007873) | 0.629225 / 1.386936 (-0.757711) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007165 / 0.011353 
(-0.004188) | 0.005220 / 0.011008 (-0.005789) | 0.075849 / 0.038508 (0.037341) | 0.032717 / 0.023109 (0.009608) | 0.331205 / 0.275898 (0.055307) | 0.364955 / 0.323480 (0.041475) | 0.005518 / 0.007986 (-0.002468) | 0.004069 / 0.004328 (-0.000259) | 0.073900 / 0.004250 (0.069650) | 0.046346 / 0.037052 (0.009294) | 0.337473 / 0.258489 (0.078984) | 0.393062 / 0.293841 (0.099222) | 0.037533 / 0.128546 (-0.091013) | 0.012577 / 0.075646 (-0.063070) | 0.087975 / 0.419271 (-0.331297) | 0.049508 / 0.043533 (0.005975) | 0.333423 / 0.255139 (0.078284) | 0.354345 / 0.283200 (0.071145) | 0.099879 / 0.141683 (-0.041804) | 1.413304 / 1.452155 (-0.038851) | 1.494222 / 1.492716 (0.001506) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206835 / 0.018006 (0.188828) | 0.438246 / 0.000490 (0.437757) | 0.000410 / 0.000200 (0.000210) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028186 / 0.037411 (-0.009225) | 0.109322 / 0.014526 (0.094797) | 0.119581 / 0.176557 (-0.056975) | 0.191784 / 0.737135 (-0.545351) | 0.125100 / 0.296338 (-0.171238) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419418 / 0.215209 (0.204209) | 4.167374 / 2.077655 (2.089720) | 1.995812 / 1.504120 (0.491693) | 1.804602 / 1.541195 (0.263407) | 1.869131 / 1.468490 (0.400641) | 0.709486 / 4.584777 (-3.875291) | 3.838019 / 3.745712 (0.092307) | 2.086206 / 5.269862 (-3.183656) | 1.323970 / 4.565676 (-3.241707) | 0.089477 / 0.424275 (-0.334798) | 0.012402 / 0.007607 (0.004795) | 0.519291 / 0.226044 (0.293246) | 5.194091 / 2.268929 (2.925162) | 2.487055 / 55.444624 (-52.957570) | 2.122495 / 6.876477 (-4.753982) | 2.194910 / 2.142072 (0.052837) | 0.842837 / 4.805227 (-3.962390) | 0.167229 / 6.500664 (-6.333435) | 0.064690 / 0.075469 (-0.010779) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.275931 / 1.841788 (-0.565857) | 14.577000 / 8.074308 (6.502692) | 13.633235 / 10.191392 (3.441843) | 0.184511 / 0.680424 (-0.495913) | 0.017439 / 0.534201 (-0.516762) | 0.424374 / 0.579283 (-0.154909) | 0.427803 / 0.434364 (-0.006561) | 0.527790 / 0.540337 (-0.012548) | 0.627301 / 1.386936 (-0.759635) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#21c86d570faad32c3abbed4305bfd3698daa7fd0 \"CML watermark\")\n" ]
null
[]
Add JAX device selection when formatting
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5547/timeline
## What's in this PR? After exploring the JAX integration in 🤗`datasets` for a while, I found out that, even though JAX prioritizes the TPU and GPU as the default device when available, the `JaxFormatter` doesn't let you specify the device where you want to place the `jax.Array`s in case you don't want to rely on JAX's default array placement. So I've included the `device` param in `JaxFormatter`, but there are some things to take into consideration: * A formatted `Dataset` is copied with `copy.deepcopy`, which means that if one passes the param `device` to `JaxFormatter` as a `jaxlib.xla_extension.Device`, it "fails" because that object cannot be serialized (instead of serializing the param, a random hash is added). That's the reason why I added a function `_map_devices_to_str` to create a mapping of strings to `jaxlib.xla_extension.Device`s, so that `self.device` is a string and not a `jaxlib.xla_extension.Device`. * To create a `jax.Array` on a given device you need to either create it on the default device and then move it to the desired device with `jax.device_put`, or directly create it on the device you want with the `jax.default_device()` context manager. * JAX creates an array in `jax.devices()[0]` by default. More information on JAX device management is available at https://jax.readthedocs.io/en/latest/faq.html#controlling-data-and-computation-placement-on-devices ## What's missing in this PR? I've tested it both locally on CPU (Mac M2 and Mac M1, as there is no GPU support for Mac yet) and on GPU and TPU in Google Colab; let me know if you want me to provide the notebook for the latter. But I did not implement any integration test, as I wanted to get your feedback first.
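A minimal sketch of the device-mapping idea described above, assuming a CPU-only JAX install (device names such as "TFRT_CPU_0" vary by backend):

```python
import jax
import jax.numpy as jnp

# Keep only the picklable string form, e.g. "TFRT_CPU_0", in the formatter
# state, and look the real Device object up on demand.
device_mapping = {str(d): d for d in jax.devices()}
device = str(jax.devices()[0])

with jax.default_device(device_mapping[device]):
    # Arrays created inside this context are placed on the chosen device.
    arr = jnp.array([1.0, 2.0, 3.0])
print(arr)
```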
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5547.diff", "html_url": "https://github.com/huggingface/datasets/pull/5547", "merged_at": "2023-02-21T16:04:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/5547.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5547" }
1,590,468,200
https://api.github.com/repos/huggingface/datasets/issues/5547/comments
PR_kwDODunzps5KRmcf
null
5,547
https://api.github.com/repos/huggingface/datasets/issues/5547/events
true
closed
2023-02-18T13:30:35Z
null
https://api.github.com/repos/huggingface/datasets/issues/5546
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5546/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5546/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/79091831?v=4", "events_url": "https://api.github.com/users/ErfanMoosaviMonazzah/events{/privacy}", "followers_url": "https://api.github.com/users/ErfanMoosaviMonazzah/followers", "following_url": "https://api.github.com/users/ErfanMoosaviMonazzah/following{/other_user}", "gists_url": "https://api.github.com/users/ErfanMoosaviMonazzah/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ErfanMoosaviMonazzah", "id": 79091831, "login": "ErfanMoosaviMonazzah", "node_id": "MDQ6VXNlcjc5MDkxODMx", "organizations_url": "https://api.github.com/users/ErfanMoosaviMonazzah/orgs", "received_events_url": "https://api.github.com/users/ErfanMoosaviMonazzah/received_events", "repos_url": "https://api.github.com/users/ErfanMoosaviMonazzah/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ErfanMoosaviMonazzah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ErfanMoosaviMonazzah/subscriptions", "type": "User", "url": "https://api.github.com/users/ErfanMoosaviMonazzah" }
https://github.com/huggingface/datasets/issues/5546
[]
false
2023-07-24T14:22:43Z
2023-07-24T14:22:43Z
null
[ "Hi ! Can you make sure you set `HF_HOME` before importing `datasets` ?\r\n\r\nThen you can print\r\n```python\r\nprint(datasets.config.HF_CACHE_HOME)\r\nprint(datasets.config.HF_DATASETS_CACHE)\r\n```" ]
completed
[]
Downloaded datasets do not cache at $HF_HOME
NONE
https://api.github.com/repos/huggingface/datasets/issues/5546/timeline
### Describe the bug The Hugging Face course (https://huggingface.co/course/chapter3/2?fw=pt) says that if we set HF_HOME, downloaded datasets will be cached at the specified location, but they are not. Models downloaded from checkpoint names are downloaded and cached at HF_HOME, but this is not the case for datasets: they are still cached at ~/.cache/huggingface/datasets. ### Steps to reproduce the bug Run the following code ``` from datasets import load_dataset raw_datasets = load_dataset("glue", "mrpc") raw_datasets ``` It downloads and stores the dataset at ~/.cache/huggingface/datasets. ### Expected behavior To cache the dataset at HF_HOME. ### Environment info python 3.10.6 Kubuntu 22.04 HF_HOME located on a separate partition
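For reference, a minimal sketch of checking the cache location — the key point, also noted in the comments, is that `HF_HOME` must be set before `datasets` is imported (the path below is a placeholder):

```python
import os

# Must happen before importing datasets, otherwise the default cache is used.
os.environ["HF_HOME"] = "/mnt/data/hf_home"  # placeholder path

import datasets

print(datasets.config.HF_CACHE_HOME)
print(datasets.config.HF_DATASETS_CACHE)
```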
https://api.github.com/repos/huggingface/datasets
null
1,590,346,349
https://api.github.com/repos/huggingface/datasets/issues/5546/comments
I_kwDODunzps5eysJt
null
5,546
https://api.github.com/repos/huggingface/datasets/issues/5546/events
false
open
2023-02-18T11:26:25Z
null
https://api.github.com/repos/huggingface/datasets/issues/5545
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5545/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5545/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/25269220?v=4", "events_url": "https://api.github.com/users/davidberenstein1957/events{/privacy}", "followers_url": "https://api.github.com/users/davidberenstein1957/followers", "following_url": "https://api.github.com/users/davidberenstein1957/following{/other_user}", "gists_url": "https://api.github.com/users/davidberenstein1957/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davidberenstein1957", "id": 25269220, "login": "davidberenstein1957", "node_id": "MDQ6VXNlcjI1MjY5MjIw", "organizations_url": "https://api.github.com/users/davidberenstein1957/orgs", "received_events_url": "https://api.github.com/users/davidberenstein1957/received_events", "repos_url": "https://api.github.com/users/davidberenstein1957/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davidberenstein1957/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidberenstein1957/subscriptions", "type": "User", "url": "https://api.github.com/users/davidberenstein1957" }
https://github.com/huggingface/datasets/pull/5545
[]
false
2023-12-18T16:57:56Z
null
null
[ "Hi ! Maybe we'd need to align with `transformers` and other libraries that implement `push_to_hub` to agree on what it should return.\r\n\r\ne.g. in `transformers` the typing says it returns a string, but in practice it returns a `CommitInfo`.\r\n\r\nTherefore I'd not add an output to `push_to_hub` here unless we had a chance to discuss more broadly.\r\n\r\nAnyway in my opinion it should no just return the URL of the repository, but ideally the URL at the revision where the data were pushed", "Perhaps a mixin or something similar could be defined on the `hfh` side to ensure the `push_to_hub` API is aligned across our projects. \r\n\r\nPS: this would also mean that the PRs such as https://github.com/huggingface/datasets/pull/5528 would no longer be our responsibility\r\n\r\ncc @Wauplin ", "I agree, with universability and the idea is more about returning at least something that references where to find the uploaded file/model or otherwise. \r\n\r\nIdeally, the referenced PR would work.", "imo this would be a good use case to just use `huggingface_hub` and align to what we do there :)", "@mariosasko, can you give me some pointers to where I might help implementing this for the `huggingface-hub`?", "> @mariosasko: Perhaps a mixin or something similar could be defined on the hfh side to ensure the push_to_hub API is aligned across our projects.\r\n\r\n> @julien-c: imo this would be a good use case to just use huggingface_hub and align to what we do there :)\r\n\r\nI (finally) opened a PR to harmonize return types: https://github.com/huggingface/huggingface_hub/pull/1921. It should hopefully be shipped in next release later this week (:crossed_fingers:). " ]
null
[]
Added return methods for URL-references to the pushed dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/5545/timeline
Hi, I was missing the ability to easily open the pushed dataset, and this seemed like a quick fix. Maybe we also want to log this info somewhere; let me know if I need to add that too. Cheers, David
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5545.diff", "html_url": "https://github.com/huggingface/datasets/pull/5545", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5545.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5545" }
1,590,315,972
https://api.github.com/repos/huggingface/datasets/issues/5545/comments
PR_kwDODunzps5KRKct
null
5,545
https://api.github.com/repos/huggingface/datasets/issues/5545/events
true
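Pending the return-type harmonization discussed in the PR #5545 thread above, a caller who wants a URL-reference to the pushed dataset can build one from the repo id. A rough sketch under the assumption of a public dataset repo; the repo id is hypothetical, and ideally the reference would also pin the pushed revision:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})

repo_id = "my-user/my-dataset"  # hypothetical repo id
ds.push_to_hub(repo_id)

# push_to_hub does not return a reference here, so construct it manually.
url = f"https://huggingface.co/datasets/{repo_id}"
print(url)
```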
closed
2023-02-17T08:40:11Z
null
https://api.github.com/repos/huggingface/datasets/issues/5543
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5543/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5543/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/5126316?v=4", "events_url": "https://api.github.com/users/wjfwzzc/events{/privacy}", "followers_url": "https://api.github.com/users/wjfwzzc/followers", "following_url": "https://api.github.com/users/wjfwzzc/following{/other_user}", "gists_url": "https://api.github.com/users/wjfwzzc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wjfwzzc", "id": 5126316, "login": "wjfwzzc", "node_id": "MDQ6VXNlcjUxMjYzMTY=", "organizations_url": "https://api.github.com/users/wjfwzzc/orgs", "received_events_url": "https://api.github.com/users/wjfwzzc/received_events", "repos_url": "https://api.github.com/users/wjfwzzc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wjfwzzc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wjfwzzc/subscriptions", "type": "User", "url": "https://api.github.com/users/wjfwzzc" }
https://github.com/huggingface/datasets/issues/5543
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2023-02-21T06:37:00Z
2023-02-20T08:41:33Z
null
[ "Thanks for reporting, @wjfwzzc.\r\n\r\nI am transferring this issue to the corresponding dataset on the Hub: https://huggingface.co/datasets/bookcorpusopen/discussions/1", "Thank you. All fixes are done:\r\n- [x] https://huggingface.co/datasets/bookcorpusopen/discussions/2\r\n- [x] https://huggingface.co/datasets/the_pile/discussions/1\r\n- [x] https://huggingface.co/datasets/the_pile_books3/discussions/1\r\n- [x] https://huggingface.co/datasets/the_pile_openwebtext2/discussions/2\r\n- [x] https://huggingface.co/datasets/the_pile_stack_exchange/discussions/2" ]
completed
[]
The Pile dataset's URL seems to have changed back
NONE
https://api.github.com/repos/huggingface/datasets/issues/5543/timeline
### Describe the bug In #3627, the host URL of the Pile dataset became `https://mystic.the-eye.eu`. Now the new URL is broken, but `https://the-eye.eu` seems to work again. ### Steps to reproduce the bug ```python3 from datasets import load_dataset dataset = load_dataset("bookcorpusopen") ``` shows ```python3 ConnectionError: Couldn't reach https://mystic.the-eye.eu/public/AI/pile_preliminary_components/books1.tar.gz (ProxyError(MaxRetryError("HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/books1.tar.gz (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 504 Gateway Timeout')))"))) ``` ### Expected behavior Downloading works as normal. ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31 - Python version: 3.9.2 - PyArrow version: 6.0.1 - Pandas version: 1.5.3
https://api.github.com/repos/huggingface/datasets
null
1,588,951,379
https://api.github.com/repos/huggingface/datasets/issues/5543/comments
I_kwDODunzps5etXlT
null
5,543
https://api.github.com/repos/huggingface/datasets/issues/5543/events
false
closed
2023-02-17T01:52:38Z
null
https://api.github.com/repos/huggingface/datasets/issues/5542
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5542/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5542/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/6591505?v=4", "events_url": "https://api.github.com/users/marioga/events{/privacy}", "followers_url": "https://api.github.com/users/marioga/followers", "following_url": "https://api.github.com/users/marioga/following{/other_user}", "gists_url": "https://api.github.com/users/marioga/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marioga", "id": 6591505, "login": "marioga", "node_id": "MDQ6VXNlcjY1OTE1MDU=", "organizations_url": "https://api.github.com/users/marioga/orgs", "received_events_url": "https://api.github.com/users/marioga/received_events", "repos_url": "https://api.github.com/users/marioga/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marioga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marioga/subscriptions", "type": "User", "url": "https://api.github.com/users/marioga" }
https://github.com/huggingface/datasets/pull/5542
[]
false
2023-02-17T19:20:49Z
2023-02-17T11:12:32Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008452 / 0.011353 (-0.002901) | 0.004500 / 0.011008 (-0.006508) | 0.100103 / 0.038508 (0.061595) | 0.029395 / 0.023109 (0.006286) | 0.297740 / 0.275898 (0.021842) | 0.359132 / 0.323480 (0.035652) | 0.007045 / 0.007986 (-0.000941) | 0.003415 / 0.004328 (-0.000913) | 0.076389 / 0.004250 (0.072138) | 0.036612 / 0.037052 (-0.000440) | 0.308773 / 0.258489 (0.050284) | 0.345701 / 0.293841 (0.051860) | 0.033230 / 0.128546 (-0.095317) | 0.011463 / 0.075646 (-0.064183) | 0.322382 / 0.419271 (-0.096890) | 0.041194 / 0.043533 (-0.002339) | 0.300685 / 0.255139 (0.045546) | 0.323076 / 0.283200 (0.039876) | 0.087330 / 0.141683 (-0.054353) | 1.508661 / 1.452155 (0.056506) | 1.531776 / 1.492716 (0.039059) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.188391 / 0.018006 (0.170385) | 0.400102 / 0.000490 (0.399612) | 0.002006 / 0.000200 (0.001806) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023232 / 0.037411 (-0.014179) | 0.097313 / 0.014526 (0.082787) | 0.106244 / 0.176557 (-0.070313) | 0.141180 / 0.737135 (-0.595955) | 0.107871 / 0.296338 (-0.188468) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418610 / 0.215209 (0.203400) | 4.162243 / 2.077655 (2.084588) | 
1.884300 / 1.504120 (0.380180) | 1.694197 / 1.541195 (0.153002) | 1.727740 / 1.468490 (0.259250) | 0.692129 / 4.584777 (-3.892648) | 3.364230 / 3.745712 (-0.381482) | 1.871507 / 5.269862 (-3.398355) | 1.261520 / 4.565676 (-3.304156) | 0.083258 / 0.424275 (-0.341017) | 0.012479 / 0.007607 (0.004872) | 0.528802 / 0.226044 (0.302757) | 5.281029 / 2.268929 (3.012100) | 2.402222 / 55.444624 (-53.042403) | 2.064954 / 6.876477 (-4.811522) | 2.027044 / 2.142072 (-0.115029) | 0.813124 / 4.805227 (-3.992103) | 0.149397 / 6.500664 (-6.351267) | 0.065032 / 0.075469 (-0.010437) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.239192 / 1.841788 (-0.602595) | 13.529913 / 8.074308 (5.455605) | 14.253251 / 10.191392 (4.061859) | 0.165145 / 0.680424 (-0.515278) | 0.028367 / 0.534201 (-0.505834) | 0.395121 / 0.579283 (-0.184162) | 0.405372 / 0.434364 (-0.028992) | 0.472201 / 0.540337 (-0.068137) | 0.560620 / 1.386936 (-0.826316) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006368 / 0.011353 (-0.004985) | 0.004542 / 0.011008 (-0.006466) | 0.076361 / 0.038508 (0.037853) | 0.026893 / 0.023109 (0.003784) | 0.341210 / 0.275898 (0.065312) | 0.378377 / 0.323480 (0.054898) | 0.004833 / 0.007986 (-0.003153) | 0.003358 / 0.004328 (-0.000970) | 0.075516 / 0.004250 (0.071265) | 0.038841 / 0.037052 (0.001788) | 0.342230 / 0.258489 (0.083741) | 0.384317 / 0.293841 (0.090476) | 0.031874 / 0.128546 (-0.096672) | 0.011651 / 0.075646 (-0.063995) | 0.085816 / 0.419271 (-0.333455) | 0.042389 / 0.043533 (-0.001144) | 0.340678 / 0.255139 (0.085539) | 0.367441 / 0.283200 (0.084241) | 0.089748 / 0.141683 (-0.051935) | 1.487358 / 1.452155 (0.035203) | 1.615049 / 1.492716 (0.122333) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220933 / 0.018006 (0.202926) | 0.397162 / 0.000490 (0.396673) | 0.002336 / 0.000200 (0.002136) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025004 / 0.037411 (-0.012407) | 0.100877 / 0.014526 (0.086351) | 0.110624 / 0.176557 (-0.065932) | 0.152042 / 0.737135 (-0.585094) | 0.112951 / 0.296338 (-0.183388) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441071 / 0.215209 (0.225862) | 4.419471 / 2.077655 (2.341817) | 2.082976 / 1.504120 (0.578856) | 1.884023 / 1.541195 (0.342828) | 1.950590 / 1.468490 (0.482100) | 0.706104 / 4.584777 (-3.878673) | 3.329825 / 3.745712 (-0.415887) | 1.868850 / 5.269862 (-3.401011) | 1.178785 / 4.565676 (-3.386892) | 0.083910 / 0.424275 (-0.340365) | 0.012296 / 0.007607 (0.004689) | 0.542998 / 0.226044 (0.316953) | 5.429944 / 2.268929 (3.161015) | 2.502285 / 55.444624 (-52.942339) | 2.150507 / 6.876477 (-4.725970) | 2.170492 / 2.142072 (0.028420) | 0.813410 / 4.805227 (-3.991817) | 0.152310 / 6.500664 (-6.348354) | 0.066999 / 0.075469 (-0.008470) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.290839 / 1.841788 (-0.550949) | 14.089491 / 8.074308 (6.015183) | 13.704922 / 10.191392 (3.513530) | 0.130089 / 0.680424 (-0.550335) | 0.017000 / 0.534201 (-0.517201) | 0.381173 / 0.579283 (-0.198110) | 0.389271 / 0.434364 (-0.045093) | 0.461700 / 0.540337 (-0.078637) | 0.556428 / 1.386936 (-0.830508) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2cfa9be08f17519ff3deeae63cb998f4be7616e0 \"CML watermark\")\n" ]
null
[]
Avoid saving sparse ChunkedArrays in pyarrow tables
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5542/timeline
Fixes https://github.com/huggingface/datasets/issues/5541
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5542.diff", "html_url": "https://github.com/huggingface/datasets/pull/5542", "merged_at": "2023-02-17T11:12:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/5542.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5542" }
1,588,633,724
https://api.github.com/repos/huggingface/datasets/issues/5542/comments
PR_kwDODunzps5KLjMl
null
5,542
https://api.github.com/repos/huggingface/datasets/issues/5542/events
true
closed
2023-02-17T01:52:24Z
null
https://api.github.com/repos/huggingface/datasets/issues/5541
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5541/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5541/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/6591505?v=4", "events_url": "https://api.github.com/users/marioga/events{/privacy}", "followers_url": "https://api.github.com/users/marioga/followers", "following_url": "https://api.github.com/users/marioga/following{/other_user}", "gists_url": "https://api.github.com/users/marioga/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marioga", "id": 6591505, "login": "marioga", "node_id": "MDQ6VXNlcjY1OTE1MDU=", "organizations_url": "https://api.github.com/users/marioga/orgs", "received_events_url": "https://api.github.com/users/marioga/received_events", "repos_url": "https://api.github.com/users/marioga/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marioga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marioga/subscriptions", "type": "User", "url": "https://api.github.com/users/marioga" }
https://github.com/huggingface/datasets/issues/5541
[]
false
2023-02-22T13:15:20Z
2023-02-17T11:12:33Z
null
[ "Running the script above on the branch https://github.com/huggingface/datasets/pull/5542 results in the expected behaviour:\r\n```\r\nNum chunks for original ds: 1\r\nOriginal ds save/load\r\nsave_to_disk -- RAM memory used: 0.671875 MB -- Total time: 0.255265 s\r\nload_from_disk -- RAM memory used: 42.796875 MB -- Total time: 0.014899 s\r\nNum chunks for original ds after reloading: 5000\r\n\r\nNum chunks for selected ds: 1\r\nflatten_indices -- RAM memory used: 42.546875 MB -- Total time: 23.735089 s\r\nNum chunks for selected ds after flattening: 5000\r\n\r\nSelected ds save/load\r\nsave_to_disk -- RAM memory used: 0.0 MB -- Total time: 0.287112 s\r\nload_from_disk -- RAM memory used: 38.84375 MB -- Total time: 0.014772 s\r\nNum chunks for selected ds after reloading: 5000\r\n```", "Wouahouh super cool @marioga thanks a lot!", "We just released `datasets==2.10.0` with this big improvement, thanks again @marioga " ]
completed
[]
Flattening indices in selected datasets is extremely inefficient
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5541/timeline
### Describe the bug If we perform a `select` (or `shuffle`, `train_test_split`, etc.) operation on a dataset, we end up with a dataset with an `indices_table`. Currently, flattening such a dataset consumes a lot of memory and the resulting flat dataset contains ChunkedArrays with as many chunks as there are rows. This is extremely inefficient and slows down the operations on the flat dataset, e.g., saving/loading the dataset to disk becomes really slow. Perhaps more importantly, loading the dataset back from disk basically loads the whole table into RAM, as it cannot take advantage of memory mapping. ### Steps to reproduce the bug The following script reproduces the issue: ```python import gc import os import psutil import tempfile import time from datasets import Dataset DATASET_SIZE = 5000000 def profile(func): def wrapper(*args, **kwargs): mem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024) start = time.time() # Run function here out = func(*args, **kwargs) end = time.time() mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024) print(f"{func.__name__} -- RAM memory used: {mem_after - mem_before} MB -- Total time: {end - start:.6f} s") return out return wrapper def main(): ds = Dataset.from_list([{'col': i} for i in range(DATASET_SIZE)]) print(f"Num chunks for original ds: {ds.data['col'].num_chunks}") with tempfile.TemporaryDirectory() as tmpdir: path1 = os.path.join(tmpdir, 'ds1') print("Original ds save/load") profile(ds.save_to_disk)(path1) ds_loaded = profile(Dataset.load_from_disk)(path1) print(f"Num chunks for original ds after reloading: {ds_loaded.data['col'].num_chunks}") print("") ds_select = ds.select(reversed(range(len(ds)))) print(f"Num chunks for selected ds: {ds_select.data['col'].num_chunks}") del ds del ds_loaded gc.collect() # This would happen anyway when we call save_to_disk ds_select = profile(ds_select.flatten_indices)() print(f"Num chunks for selected ds after flattening: {ds_select.data['col'].num_chunks}") print("") path2 = os.path.join(tmpdir, 'ds2') print("Selected ds save/load") profile(ds_select.save_to_disk)(path2) del ds_select gc.collect() ds_select_loaded = profile(Dataset.load_from_disk)(path2) print(f"Num chunks for selected ds after reloading: {ds_select_loaded.data['col'].num_chunks}") if __name__ == '__main__': main() ``` Sample result: ``` Num chunks for original ds: 1 Original ds save/load save_to_disk -- RAM memory used: 0.515625 MB -- Total time: 0.253888 s load_from_disk -- RAM memory used: 42.765625 MB -- Total time: 0.015176 s Num chunks for original ds after reloading: 5000 Num chunks for selected ds: 1 flatten_indices -- RAM memory used: 4852.609375 MB -- Total time: 46.116774 s Num chunks for selected ds after flattening: 5000000 Selected ds save/load save_to_disk -- RAM memory used: 1326.65625 MB -- Total time: 42.309825 s load_from_disk -- RAM memory used: 2085.953125 MB -- Total time: 11.659137 s Num chunks for selected ds after reloading: 5000000 ``` ### Expected behavior Saving/loading the dataset should be much faster and consume almost no extra memory thanks to pyarrow memory mapping. ### Environment info - `datasets` version: 2.9.1.dev0 - Platform: macOS-13.1-arm64-arm-64bit - Python version: 3.10.8 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
https://api.github.com/repos/huggingface/datasets
null
1,588,633,555
https://api.github.com/repos/huggingface/datasets/issues/5541/comments
I_kwDODunzps5esJ_T
null
5,541
https://api.github.com/repos/huggingface/datasets/issues/5541/events
false
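To see why the one-chunk-per-row tables described in issue #5541 are pathological, here is a small illustration in plain pyarrow (not the `datasets` internals): `combine_chunks` rewrites a column into contiguous chunks, which is essentially what the fix in #5542 achieves upstream.

```python
import pyarrow as pa

# A pathological column: one chunk per row, as produced by the naive
# index flattening described in the issue.
chunks = [pa.array([i]) for i in range(10_000)]
table = pa.table({"col": pa.chunked_array(chunks)})
print(table["col"].num_chunks)  # 10000

# Rewrite each column into as few chunks as possible; downstream reads
# and save/load get dramatically faster.
combined = table.combine_chunks()
print(combined["col"].num_chunks)  # 1
```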
closed
2023-02-16T22:09:35Z
null
https://api.github.com/repos/huggingface/datasets/issues/5540
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5540/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5540/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
https://github.com/huggingface/datasets/pull/5540
[]
false
2023-02-17T18:50:46Z
2023-02-17T18:41:28Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012018 / 0.011353 (0.000665) | 0.006204 / 0.011008 (-0.004804) | 0.134119 / 0.038508 (0.095611) | 0.038436 / 0.023109 (0.015327) | 0.381397 / 0.275898 (0.105499) | 0.456362 / 0.323480 (0.132882) | 0.009826 / 0.007986 (0.001840) | 0.004746 / 0.004328 (0.000417) | 0.103755 / 0.004250 (0.099505) | 0.043867 / 0.037052 (0.006815) | 0.395322 / 0.258489 (0.136833) | 0.475812 / 0.293841 (0.181971) | 0.057865 / 0.128546 (-0.070682) | 0.019919 / 0.075646 (-0.055727) | 0.465343 / 0.419271 (0.046072) | 0.061574 / 0.043533 (0.018041) | 0.371668 / 0.255139 (0.116529) | 0.400375 / 0.283200 (0.117176) | 0.106539 / 0.141683 (-0.035144) | 1.822931 / 1.452155 (0.370776) | 1.875535 / 1.492716 (0.382819) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.013583 / 0.018006 (-0.004423) | 0.535515 / 0.000490 (0.535025) | 0.007920 / 0.000200 (0.007720) | 0.000305 / 0.000054 (0.000250) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030204 / 0.037411 (-0.007207) | 0.131671 / 0.014526 (0.117145) | 0.143977 / 0.176557 (-0.032579) | 0.175498 / 0.737135 (-0.561637) | 0.166134 / 0.296338 (-0.130204) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.630995 / 0.215209 (0.415786) | 6.152275 / 2.077655 (4.074620) | 2.519887 / 
1.504120 (1.015767) | 2.110926 / 1.541195 (0.569732) | 2.207555 / 1.468490 (0.739064) | 1.296197 / 4.584777 (-3.288580) | 5.510619 / 3.745712 (1.764906) | 3.167468 / 5.269862 (-2.102394) | 2.043924 / 4.565676 (-2.521753) | 0.144772 / 0.424275 (-0.279503) | 0.014456 / 0.007607 (0.006848) | 0.783629 / 0.226044 (0.557585) | 7.836962 / 2.268929 (5.568033) | 3.248593 / 55.444624 (-52.196032) | 2.577092 / 6.876477 (-4.299385) | 2.671918 / 2.142072 (0.529846) | 1.471586 / 4.805227 (-3.333641) | 0.251391 / 6.500664 (-6.249273) | 0.091947 / 0.075469 (0.016478) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.594839 / 1.841788 (-0.246949) | 18.250630 / 8.074308 (10.176322) | 23.948781 / 10.191392 (13.757389) | 0.275505 / 0.680424 (-0.404919) | 0.045202 / 0.534201 (-0.488999) | 0.545552 / 0.579283 (-0.033731) | 0.639352 / 0.434364 (0.204989) | 0.666345 / 0.540337 (0.126008) | 0.795614 / 1.386936 (-0.591322) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011234 / 0.011353 (-0.000119) | 0.005983 / 0.011008 (-0.005025) | 0.109144 / 0.038508 (0.070636) | 0.036070 / 0.023109 (0.012961) | 0.429313 / 0.275898 (0.153415) | 0.490615 / 0.323480 (0.167135) | 0.007448 / 0.007986 (-0.000538) | 0.004424 / 0.004328 (0.000095) | 0.097100 / 0.004250 (0.092850) | 0.049719 / 0.037052 (0.012667) | 0.412719 / 0.258489 (0.154230) | 0.485717 / 0.293841 (0.191876) | 0.061168 / 0.128546 (-0.067378) | 0.021510 / 0.075646 (-0.054136) | 0.116598 / 0.419271 (-0.302673) | 0.066116 / 0.043533 (0.022583) | 0.426212 / 0.255139 (0.171073) | 0.448368 / 0.283200 (0.165168) | 0.116003 / 0.141683 (-0.025680) | 1.799329 / 1.452155 (0.347175) | 1.967256 / 1.492716 (0.474540) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214893 / 0.018006 (0.196887) | 0.497843 / 0.000490 (0.497354) | 0.000464 / 0.000200 (0.000264) | 0.000094 / 0.000054 (0.000039) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031758 / 0.037411 (-0.005653) | 0.131182 / 0.014526 (0.116656) | 0.141251 / 0.176557 (-0.035305) | 0.186526 / 0.737135 (-0.550609) | 0.142975 / 0.296338 (-0.153363) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.662094 / 0.215209 (0.446885) | 6.664841 / 2.077655 (4.587186) | 2.690613 / 1.504120 (1.186493) | 2.305399 / 1.541195 (0.764205) | 2.383697 / 1.468490 (0.915207) | 1.280692 / 4.584777 (-3.304085) | 5.629215 / 3.745712 (1.883503) | 5.007083 / 5.269862 (-0.262778) | 2.482163 / 4.565676 (-2.083513) | 0.147662 / 0.424275 (-0.276613) | 0.017770 / 0.007607 (0.010163) | 0.818380 / 0.226044 (0.592335) | 8.006521 / 2.268929 (5.737592) | 3.472262 / 55.444624 (-51.972363) | 2.709550 / 6.876477 (-4.166926) | 2.775138 / 2.142072 (0.633066) | 1.570545 / 4.805227 (-3.234683) | 0.266323 / 6.500664 (-6.234341) | 0.090591 / 0.075469 (0.015122) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.657927 / 1.841788 (-0.183861) | 18.448981 / 8.074308 (10.374673) | 20.336909 / 10.191392 (10.145517) | 0.230322 / 0.680424 (-0.450102) | 0.025972 / 0.534201 (-0.508229) | 0.561361 / 0.579283 (-0.017922) | 0.623758 / 0.434364 (0.189394) | 0.664120 / 0.540337 (0.123783) | 0.763144 / 1.386936 (-0.623792) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#29de6179766418c937fb33b0cc8803ec24a39e9e \"CML watermark\")\n" ]
null
[]
Tutorial for creating a dataset
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5540/timeline
A tutorial for creating datasets based on the folder-based builders and `from_dict` and `from_generator` methods. I've also mentioned loading scripts as a next step, but I think we should keep the tutorial focused on the low-code methods. Let me know what you think! 🙂
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5540.diff", "html_url": "https://github.com/huggingface/datasets/pull/5540", "merged_at": "2023-02-17T18:41:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/5540.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5540" }
1,588,438,344
https://api.github.com/repos/huggingface/datasets/issues/5540/comments
PR_kwDODunzps5KK5qz
null
5,540
https://api.github.com/repos/huggingface/datasets/issues/5540/events
true
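For reference, the two low-code methods the tutorial in #5540 covers, `from_dict` and `from_generator`, look roughly like this; the example contents are invented for illustration:

```python
from datasets import Dataset

# from_dict: build a dataset from in-memory columns.
ds = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})

# from_generator: build a dataset lazily from a generator of examples.
def gen():
    for i in range(3):
        yield {"text": f"example {i}", "label": i % 2}

ds_gen = Dataset.from_generator(gen)
print(ds_gen[0])
```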
closed
2023-02-16T16:08:51Z
null
https://api.github.com/repos/huggingface/datasets/issues/5539
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5539/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5539/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/41912135?v=4", "events_url": "https://api.github.com/users/aalbersk/events{/privacy}", "followers_url": "https://api.github.com/users/aalbersk/followers", "following_url": "https://api.github.com/users/aalbersk/following{/other_user}", "gists_url": "https://api.github.com/users/aalbersk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aalbersk", "id": 41912135, "login": "aalbersk", "node_id": "MDQ6VXNlcjQxOTEyMTM1", "organizations_url": "https://api.github.com/users/aalbersk/orgs", "received_events_url": "https://api.github.com/users/aalbersk/received_events", "repos_url": "https://api.github.com/users/aalbersk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aalbersk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aalbersk/subscriptions", "type": "User", "url": "https://api.github.com/users/aalbersk" }
https://github.com/huggingface/datasets/issues/5539
[]
false
2023-02-22T10:30:30Z
2023-02-21T13:03:57Z
null
[ "Hi! The `set_transform` does not apply a custom formatting transform on a single example but the entire batch, so the fixed version of your transform would look as follows:\r\n```python\r\nfrom datasets import load_dataset\r\nimport torch\r\n\r\ndataset = load_dataset(\"lambdalabs/pokemon-blip-captions\", split='train')\r\ndef t(batch):\r\n return {\"test\": torch.tensor([1] * len(batch[next(iter(batch))]))}\r\n \r\ndataset.set_transform(t)\r\nd_0 = dataset[0]\r\n```\r\n\r\nStill, the formatter's error message should mention that a dict of **sequences** is expected as the returned value (not just a dict) to make debugging easier.", "I can take this", "Fixed in #5553 ", "> Hi! The `set_transform` does not apply a custom formatting transform on a single example but the entire batch, so the fixed version of your transform would look as follows:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> import torch\r\n> \r\n> dataset = load_dataset(\"lambdalabs/pokemon-blip-captions\", split='train')\r\n> def t(batch):\r\n> return {\"test\": torch.tensor([1] * len(batch[next(iter(batch))]))}\r\n> \r\n> dataset.set_transform(t)\r\n> d_0 = dataset[0]\r\n> ```\r\n> \r\n> Still, the formatter's error message should mention that a dict of **sequences** is expected as the returned value (not just a dict) to make debugging easier.\r\n\r\nok, will change it according to suggestion. Thanks for the reply!" ]
completed
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number
NONE
https://api.github.com/repos/huggingface/datasets/issues/5539/timeline
### Describe the bug When a dataset contains a 0-dim tensor, formatting.py raises the following error and fails. ```bash Traceback (most recent call last): File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 501, in format_row return _unnest(formatted_batch) File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 137, in _unnest return {key: array[0] for key, array in py_dict.items()} File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 137, in <dictcomp> return {key: array[0] for key, array in py_dict.items()} IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number ``` ### Steps to reproduce the bug Load any dataset and add a transform method that adds a 0-dim tensor, or create/find a dataset containing a 0-dim tensor. E.g. ```python from datasets import load_dataset import torch dataset = load_dataset("lambdalabs/pokemon-blip-captions", split='train') def t(batch): return {"test": torch.tensor(1)} dataset.set_transform(t) d_0 = dataset[0] ``` ### Expected behavior The extractor should correctly get a row from the dataset, even if it contains a 0-dim tensor. ### Environment info `datasets==2.8.0`, but it looks like it is also applicable to the main branch version (as of 16th February)
https://api.github.com/repos/huggingface/datasets
null
1,587,970,083
https://api.github.com/repos/huggingface/datasets/issues/5539/comments
I_kwDODunzps5epoAj
null
5,539
https://api.github.com/repos/huggingface/datasets/issues/5539/events
false
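The root cause in issue #5539 is the batch contract of `set_transform`: per the maintainer's reply above, the transform receives a dict of lists (one entry per example), and `dataset[0]` unnests a batch of size one, so every returned value must be a sequence. A minimal pure-Python sketch of the contract:

```python
from datasets import Dataset

ds = Dataset.from_dict({"n": [str(i) for i in range(5)]})

def transform(batch):
    # batch is a dict of lists, e.g. {"n": ["0"]} when accessing ds[0].
    # Each returned value must have one entry per example in the batch,
    # otherwise unnesting fails (as with the 0-dim tensor in the issue).
    size = len(batch["n"])
    return {"test": [1] * size}

ds.set_transform(transform)
print(ds[0])  # {'test': 1}
```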
closed
2023-02-16T14:01:58Z
null
https://api.github.com/repos/huggingface/datasets/issues/5538
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5538/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5538/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/125575109?v=4", "events_url": "https://api.github.com/users/reemaranibarik/events{/privacy}", "followers_url": "https://api.github.com/users/reemaranibarik/followers", "following_url": "https://api.github.com/users/reemaranibarik/following{/other_user}", "gists_url": "https://api.github.com/users/reemaranibarik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/reemaranibarik", "id": 125575109, "login": "reemaranibarik", "node_id": "U_kgDOB3wfxQ", "organizations_url": "https://api.github.com/users/reemaranibarik/orgs", "received_events_url": "https://api.github.com/users/reemaranibarik/received_events", "repos_url": "https://api.github.com/users/reemaranibarik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/reemaranibarik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/reemaranibarik/subscriptions", "type": "User", "url": "https://api.github.com/users/reemaranibarik" }
https://github.com/huggingface/datasets/issues/5538
[]
false
2023-02-16T14:44:36Z
2023-02-16T14:44:36Z
null
[ "Hi! `seaborn`'s `load_dataset` pulls datasets from [here](https://github.com/mwaskom/seaborn-data) and not from our Hub, so this issue is not related to our library in any way and should be reported in their repo instead." ]
completed
[]
load_dataset in seaborn is not working for me; I am getting this error.
NONE
https://api.github.com/repos/huggingface/datasets/issues/5538/timeline
TimeoutError Traceback (most recent call last) ~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args) 1345 try: -> 1346 h.request(req.get_method(), req.selector, req.data, headers, 1347 encode_chunked=req.has_header('Transfer-encoding')) ~\anaconda3\lib\http\client.py in request(self, method, url, body, headers, encode_chunked) 1278 """Send a complete request to the server.""" -> 1279 self._send_request(method, url, body, headers, encode_chunked) 1280 ~\anaconda3\lib\http\client.py in _send_request(self, method, url, body, headers, encode_chunked) 1324 body = _encode(body, 'body') -> 1325 self.endheaders(body, encode_chunked=encode_chunked) 1326 ~\anaconda3\lib\http\client.py in endheaders(self, message_body, encode_chunked) 1273 raise CannotSendHeader() -> 1274 self._send_output(message_body, encode_chunked=encode_chunked) 1275 ~\anaconda3\lib\http\client.py in _send_output(self, message_body, encode_chunked) 1033 del self._buffer[:] -> 1034 self.send(msg) 1035 ~\anaconda3\lib\http\client.py in send(self, data) 973 if self.auto_open: --> 974 self.connect() 975 else: ~\anaconda3\lib\http\client.py in connect(self) 1440 -> 1441 super().connect() 1442 ~\anaconda3\lib\http\client.py in connect(self) 944 """Connect to the host and port specified in __init__.""" --> 945 self.sock = self._create_connection( 946 (self.host,self.port), self.timeout, self.source_address) ~\anaconda3\lib\socket.py in create_connection(address, timeout, source_address) 843 try: --> 844 raise err 845 finally: ~\anaconda3\lib\socket.py in create_connection(address, timeout, source_address) 831 sock.bind(source_address) --> 832 sock.connect(sa) 833 # Break explicitly a reference cycle TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond During handling of the above exception, another exception occurred: URLError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_12220/2927704185.py in <module> 1 import seaborn as sn ----> 2 iris = sn.load_dataset('iris') ~\anaconda3\lib\site-packages\seaborn\utils.py in load_dataset(name, cache, data_home, **kws) 594 if name not in get_dataset_names(): 595 raise ValueError(f"'{name}' is not one of the example datasets.") --> 596 urlretrieve(url, cache_path) 597 full_path = cache_path 598 else: ~\anaconda3\lib\urllib\request.py in urlretrieve(url, filename, reporthook, data) 237 url_type, path = _splittype(url) 238 --> 239 with contextlib.closing(urlopen(url, data)) as fp: 240 headers = fp.info() 241 ~\anaconda3\lib\urllib\request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context) 212 else: 213 opener = _opener --> 214 return opener.open(url, data, timeout) 215 216 def install_opener(opener): ~\anaconda3\lib\urllib\request.py in open(self, fullurl, data, timeout) 515 516 sys.audit('urllib.Request', req.full_url, req.data, req.headers, req.get_method()) --> 517 response = self._open(req, data) 518 519 # post-process response ~\anaconda3\lib\urllib\request.py in _open(self, req, data) 532 533 protocol = req.type --> 534 result = self._call_chain(self.handle_open, protocol, protocol + 535 '_open', req) 536 if result: ~\anaconda3\lib\urllib\request.py in _call_chain(self, chain, kind, meth_name, *args) 492 for handler in handlers: 493 func = getattr(handler, meth_name) --> 494 result = func(*args) 495 if result is not None: 496 return result ~\anaconda3\lib\urllib\request.py in https_open(self, req) 1387 1388 def https_open(self, req): -> 1389 return self.do_open(http.client.HTTPSConnection, req, 1390 context=self._context, check_hostname=self._check_hostname) 1391 ~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args) 1347 encode_chunked=req.has_header('Transfer-encoding')) 1348 except OSError as err: # timeout error -> 1349 raise URLError(err) 1350 r = h.getresponse() 1351 except: URLError: <urlopen error [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>
https://api.github.com/repos/huggingface/datasets
null
1,587,732,596
https://api.github.com/repos/huggingface/datasets/issues/5538/comments
I_kwDODunzps5eouB0
null
5,538
https://api.github.com/repos/huggingface/datasets/issues/5538/events
false
closed
2023-02-16T12:11:45Z
null
https://api.github.com/repos/huggingface/datasets/issues/5537
{ "avatar_url": "https://avatars.githubusercontent.com/u/35013374?v=4", "events_url": "https://api.github.com/users/semajyllek/events{/privacy}", "followers_url": "https://api.github.com/users/semajyllek/followers", "following_url": "https://api.github.com/users/semajyllek/following{/other_user}", "gists_url": "https://api.github.com/users/semajyllek/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/semajyllek", "id": 35013374, "login": "semajyllek", "node_id": "MDQ6VXNlcjM1MDEzMzc0", "organizations_url": "https://api.github.com/users/semajyllek/orgs", "received_events_url": "https://api.github.com/users/semajyllek/received_events", "repos_url": "https://api.github.com/users/semajyllek/repos", "site_admin": false, "starred_url": "https://api.github.com/users/semajyllek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/semajyllek/subscriptions", "type": "User", "url": "https://api.github.com/users/semajyllek" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5537/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5537/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/issues/5537
[ { "avatar_url": "https://avatars.githubusercontent.com/u/35013374?v=4", "events_url": "https://api.github.com/users/semajyllek/events{/privacy}", "followers_url": "https://api.github.com/users/semajyllek/followers", "following_url": "https://api.github.com/users/semajyllek/following{/other_user}", "gists_url": "https://api.github.com/users/semajyllek/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/semajyllek", "id": 35013374, "login": "semajyllek", "node_id": "MDQ6VXNlcjM1MDEzMzc0", "organizations_url": "https://api.github.com/users/semajyllek/orgs", "received_events_url": "https://api.github.com/users/semajyllek/received_events", "repos_url": "https://api.github.com/users/semajyllek/repos", "site_admin": false, "starred_url": "https://api.github.com/users/semajyllek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/semajyllek/subscriptions", "type": "User", "url": "https://api.github.com/users/semajyllek" } ]
false
2023-12-15T13:12:31Z
2023-12-15T13:12:31Z
null
[ "#self-assign", "You were right, if `self.dir_cache` is not None in glob, it is exactly the same as what is returned by find, at least for all the tests we have, and some extended evaluation I did across a random sample of about 1000 datasets. \r\n\r\nThanks for the nice hints, and let me know if this is not exactly what we want here!\r\n\r\nsee PR: https://github.com/huggingface/datasets/pull/5704\r\n\r\n", "I think we can make the data files resolution (significantly) faster in 2 steps:\r\n\r\n1. `glob` calls `find` (which in turn calls `ls`), so we need `find` to be fast, and this can be achieved by fetching all the entries in a single API call and avoiding calls to `ls`. Implementing this for `HfFileSystem.find` (the one in `huggingface_hub`) is on my TO-DO list.\r\n2. caching the repeated `find` calls in `_get_data_files_patterns` when the `data_files` patterns are not provided in `load_dataset`. To address this, we can introduce a `_resolve_single_pattern` function that would accept a filesystem object and a list of regex patterns to resolve. Then we can wrap this filesystem object in `_get_data_files_patterns` with an object that would cache the find calls before resolving the patterns with `_resolve_single_pattern`. (Feel free to suggest a cleaner implementation)\r\n\r\nWDYT?", "Good idea :) \r\n\r\nFor 2:\r\n\r\nThat would work ! It's also possible to have a FileSystem with a cache on `.find` and use it inside the resolver passed to `_get_data_files_patterns`. Right now they're pretty simple:\r\n\r\n```python\r\n# for remote repositories\r\nresolver = partial(_resolve_single_pattern_in_dataset_repository, dataset_info, base_path=base_path)\r\n# for local\r\nresolver = partial(_resolve_single_pattern_locally, base_path)\r\n```", "something like this maybe (with Quentin's reimplementation of `HfFilesystem.find`)?\r\n\r\n ```\r\n @lru_cache(max_size=None)\r\n def _find(self, path, maxdepth=None, withdirs=False, detail=False, **kwargs):\r\n```\r\n\r\nIn any case please let me know if I can help in any way!" ]
completed
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues", "id": 3761482852, "name": "good second issue", "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue" } ]
Increase speed of data files resolution
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5537/timeline
Certain datasets like `bigcode/the-stack-dedup` have so many files that loading them takes forever right from the data files resolution step. `datasets` uses file patterns to check the structure of the repository, but iterating repeatedly over all the data files takes too much time. This comes from `resolve_patterns_in_dataset_repository`, which calls `_resolve_single_pattern_in_dataset_repository`, which iterates over all the files at ```python glob_iter = [PurePath(filepath) for filepath in fs.glob(PurePath(pattern).as_posix()) if fs.isfile(filepath)] ``` but calling `glob` on such a dataset is too expensive. Indeed, it calls `ls()` in `hffilesystem.py` too many times. Maybe `glob` can be further optimized in `hffilesystem.py`, or the data files resolution could be implemented directly in the filesystem by checking its `dir_cache`?
https://api.github.com/repos/huggingface/datasets
null
1,587,567,464
https://api.github.com/repos/huggingface/datasets/issues/5537/comments
I_kwDODunzps5eoFto
null
5,537
https://api.github.com/repos/huggingface/datasets/issues/5537/events
false
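One way to picture the caching idea from the #5537 thread: memoize the filesystem's `find` so that resolving many patterns does not hit the Hub repeatedly. A rough sketch assuming an fsspec-style filesystem; this illustrates the idea only and is not the implementation that shipped:

```python
from functools import lru_cache

class CachedFindFS:
    """Wraps an fsspec-style filesystem and memoizes `find` calls."""

    def __init__(self, fs):
        self._fs = fs
        # One expensive listing per distinct path, shared across all
        # pattern resolutions instead of one listing per pattern.
        self._cached_find = lru_cache(maxsize=None)(self._do_find)

    def _do_find(self, path):
        return tuple(self._fs.find(path))

    def find(self, path, **kwargs):
        return list(self._cached_find(path))

    def __getattr__(self, name):
        # Delegate everything else (glob, isfile, ...) to the wrapped fs.
        return getattr(self._fs, name)

# Usage sketch: fs = CachedFindFS(some_hf_filesystem)  # hypothetical
```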
closed
2023-02-16T03:12:07Z
null
https://api.github.com/repos/huggingface/datasets/issues/5536
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5536/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5536/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/6916056?v=4", "events_url": "https://api.github.com/users/venzen/events{/privacy}", "followers_url": "https://api.github.com/users/venzen/followers", "following_url": "https://api.github.com/users/venzen/following{/other_user}", "gists_url": "https://api.github.com/users/venzen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/venzen", "id": 6916056, "login": "venzen", "node_id": "MDQ6VXNlcjY5MTYwNTY=", "organizations_url": "https://api.github.com/users/venzen/orgs", "received_events_url": "https://api.github.com/users/venzen/received_events", "repos_url": "https://api.github.com/users/venzen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/venzen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/venzen/subscriptions", "type": "User", "url": "https://api.github.com/users/venzen" }
https://github.com/huggingface/datasets/issues/5536
[]
false
2023-09-08T21:06:01Z
2023-02-16T14:56:41Z
null
[ "Hi ! `enc` is not hashable:\r\n```python\r\nimport tiktoken\r\nfrom datasets.fingerprint import Hasher\r\n\r\nenc = tiktoken.get_encoding(\"gpt2\")\r\nHasher.hash(enc)\r\n# raises TypeError: cannot pickle 'builtins.CoreBPE' object\r\n```\r\nIt happens because it's not picklable, and because of that it's not possible to cache the result of `map`, hence the warning message.\r\n\r\nYou can find more details about caching here: https://huggingface.co/docs/datasets/about_cache\r\n\r\nYou can also provide your own unique hash in `map` if you want, with the `new_fingerprint` argument.\r\nOr disable caching using\r\n```python\r\nimport datasets\r\ndatasets.disable_caching()\r\n```", "@lhoestq Thank you for the explanation and advice. Will relay all of this to the repo where this (non)issue arose. \r\n\r\nGreat job with huggingface! ", "We made tiktoken tokenizers hashable in #5552, which is included in today's release `datasets==2.10.0`", "Just a heads up that when I'm trying to use TikToken along with the a given Dataset `.map()` method, I am still met with the following error :\r\n\r\n```\r\n File \"/opt/conda/lib/python3.8/site-packages/dill/_dill.py\", line 388, in save\r\n StockPickler.save(self, obj, save_persistent_id)\r\n File \"/opt/conda/lib/python3.8/pickle.py\", line 578, in save\r\n rv = reduce(self.proto)\r\nTypeError: cannot pickle 'builtins.CoreBPE' object\r\n```\r\n\r\nMy current environment is running datasets v2.10.0.", "cc @mariosasko ", "@lhoestq @edhenry I am also seeing this, do you have any suggested solution?", "With which `datasets` version ? Can you try to udpate ?", "@lhoestq @edhenry I am on datasets version `'2.12.0'. I see the same `TypeError: cannot pickle 'builtins.CoreBPE' object` that others are seeing.", "I am able to reproduce this on datasets 2.14.2. The `datasets.disable_caching()` doesn't work around it.\r\n\r\n@lhoestq - you might want to reopen this issue. Because of this issue folks won't be able run Karpathy's NanoGPT :(.", "update: temporarily solved the problem by setting\r\n```\r\n--preprocess_num_workers 1\r\n```\r\n\r\n-------------\r\nI have met the same problem, here is my env:\r\n```\r\ndatasets 2.14.4\r\ntransformers 4.31.0\r\ntiktoken 0.4.0\r\ntorch 1.13.1\r\n```", "@mengban I cannot reproduce the issue even with these versions installed. It would help if you could provide info about your system and the `pip list` output.", "@mariosasko Please take a look at this\r\n```python\r\nfrom typing import Any\r\nfrom datasets import Dataset\r\nimport tiktoken\r\n\r\ndataset = Dataset.from_list([{\"n\": str(i)} for i in range(20)])\r\nenc = tiktoken.get_encoding(\"gpt2\")\r\n\r\n\r\nclass A:\r\n tokenizer = enc #tiktoken.get_encoding(\"gpt2\")\r\n\r\n def __call__(self, example) -> Any:\r\n ids = self.tokenizer.encode(example[\"n\"])\r\n example[\"len\"] = len(ids)\r\n return example\r\n\r\na = A()\r\n\r\ndef process(example):\r\n ids = a.tokenizer.encode(example[\"n\"])\r\n example[\"len\"] = len(ids)\r\n return example\r\n\r\n# success\r\ntokenized = dataset.map(process, desc=\"tiktoken\", num_proc=2)\r\n\r\n# raise TypeError: cannot pickle 'builtins.CoreBPE' object\r\ntokenized = dataset.map(a, desc=\"tiktoken\", num_proc=2)\r\n```\r\n\r\npip list\r\n```\r\ndatasets 2.14.4\r\ntiktoken 0.4.0\r\n```", "Thanks @maxwellzh! 
Our `Hasher` works with this snippet, but the problem is running multiprocessing with a non-serializable `tiktoken.Encoding` object.\r\n\r\nInserting the following code before the `map` should fix this:\r\n```python\r\nimport copyreg\r\n\r\ndef pickle_Encoding(enc):\r\n return (functools.partial(tiktoken.core.Encoding, enc.name, pat_str=enc._pat_str, mergeable_ranks=enc._mergeable_ranks, special_tokens=enc._special_tokens), ())\r\n\r\ncopyreg.pickle(tiktoken.core.Encoding, pickle_Encoding)\r\n```\r\n\r\nBut the best fix would be implementing `__reduce__` for `tiktoken.Encoding` or `tiktoken.CoreBPE`. If I find time, I'll try to fix this in the `tiktoken` repo.", "I think the right way to fix this would be to have new tokenizer instance for each process. This applies to many other tokenizers that don't support multi-process or have bugs. To do this, first define tokenizer factory class like this:\r\n\r\n```\r\n class TikTokenFactory:\r\n def __init__(self):\r\n self._enc = None\r\n self.eot_token = None\r\n\r\n def encode_ordinary(self, text):\r\n if self._enc is None:\r\n self._enc = tiktoken.get_encoding(\"gpt2\")\r\n self.eot_token = self._enc.eot_token\r\n return self._enc.encode_ordinary(text)\r\n```\r\n\r\nNow use this in `.map()` like this:\r\n\r\n```\r\n # tokenize the dataset\r\n tokenized = dataset.map(\r\n partial(process, TikTokenFactory()),\r\n remove_columns=['text'],\r\n desc=\"tokenizing the splits\",\r\n num_proc=max(1, cpu_count()//2),\r\n )\r\n```\r\n\r\nA full working example is here: https://github.com/sytelus/nanoGPT/blob/refactor/nanogpt_common/hf_data_prepare.py" ]
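For reference, the `copyreg` workaround quoted above needs a couple of imports to run as-is. Here is a self-contained version, assuming `tiktoken.core.Encoding` still exposes `name`, `_pat_str`, `_mergeable_ranks`, and `_special_tokens` as in the snippet:

```python
# Self-contained version of the copyreg workaround from the thread above.
# Assumption: Encoding's private attributes (_pat_str, etc.) remain stable.
import copyreg
import functools

import tiktoken
import tiktoken.core


def pickle_encoding(enc):
    # Rebuild the Encoding from its constructor arguments instead of trying
    # to pickle the Rust-backed CoreBPE object it wraps.
    return (
        functools.partial(
            tiktoken.core.Encoding,
            enc.name,
            pat_str=enc._pat_str,
            mergeable_ranks=enc._mergeable_ranks,
            special_tokens=enc._special_tokens,
        ),
        (),
    )


copyreg.pickle(tiktoken.core.Encoding, pickle_encoding)
```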
completed
[]
Failure to hash function when using .map()
NONE
https://api.github.com/repos/huggingface/datasets/issues/5536/timeline
### Describe the bug _Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed._ This issue with `.map()` happens for me consistently, as also described in closed issue #4506. Dataset indices can be individually serialized using dill and pickle without any errors. I'm using tiktoken to encode in the function passed to map(). Similarly, indices can be individually encoded without error. ### Steps to reproduce the bug ```py from datasets import load_dataset import tiktoken dataset = load_dataset("stas/openwebtext-10k") enc = tiktoken.get_encoding("gpt2") def process(example): ids = enc.encode(example['text']) ids.append(enc.eot_token) out = {'ids': ids, 'len': len(ids)} return out tokenized = dataset.map( process, remove_columns=['text'], desc="tokenizing the OWT splits", ) ``` ### Expected behavior Should encode simple text objects. ### Environment info Python versions tried: both 3.8 and 3.10.10 `PYTHONUTF8=1` as env variable Datasets tried: - stas/openwebtext-10k - rotten_tomatoes - local text file OS: Ubuntu Linux 20.04 Package versions: - torch 1.13.1 - dill 0.3.4 (if using 0.3.6 - same issue) - datasets 2.9.0 - tiktoken 0.2.0
https://api.github.com/repos/huggingface/datasets
null
1,586,930,643
https://api.github.com/repos/huggingface/datasets/issues/5536/comments
I_kwDODunzps5elqPT
null
5,536
https://api.github.com/repos/huggingface/datasets/issues/5536/events
false
closed
2023-02-15T20:35:11Z
null
https://api.github.com/repos/huggingface/datasets/issues/5535
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5535/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5535/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
https://github.com/huggingface/datasets/pull/5535
[]
false
2023-02-20T10:39:42Z
2023-02-20T10:32:39Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Awesome thank you !\r\n> \r\n> Could you also explain how to use certain types like ClassLabel, Image or Audio with jax ? You can get a lot of inspiration from the \"Other feature types\" section in the [PyTorch page](https://huggingface.co/docs/datasets/use_with_pytorch)\r\n> \r\n> I also think it's be nice if this page had the same structure as the pytorch or tf ones, with sections named\r\n> \r\n> * Dataset format\r\n> \r\n> * N-dimensional arrays\r\n> \r\n> * Other feature types\r\n> \r\n> * Data loading\r\n\r\nSure @lhoestq I'll do that later this afternoon whenever I'm done working! Thanks for the feedback as always 🤗", "Also, @lhoestq do you want me to elaborate more on the `## Data loading` section on how to use `datasets` to train a JAX model offering alternatives e.g. `Flax`, or do I keep it pure JAX? Thanks!", "If you have a good example with `flax` it can also be helpful for users", "For now, I think that probably it's not worth adding a `Flax` example, as train loops need to be done manually as in pure JAX, so probably the JAX example is enough. Anyway, let me know if you see something missing/incomplete/misleading/etc. and I'll update that ASAP 👍🏻 ", "P.S. I see that the `benchmark` action is being triggered on every PR, is it worth it? e.g. now I'm just editing the docs, so does it make any sense to trigger still the whole CI pipeline (including `benchmark`)? Just asking because in this PR for example it could be skipped.", "> P.S. I see that the benchmark action is being triggered on every PR, is it worth it? e.g. now I'm just editing the docs, so does it make any sense to trigger still the whole CI pipeline (including benchmark)? Just asking because in this PR for example it could be skipped.\r\n\r\nWe could restrict it to PRs modifying files in src/ indeed ^^'", "> LGTM :)\n\nCool thanks! 
My bad I didn't update those code blocks 🙃 Thanks for doing so before merge!", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009336 / 0.011353 (-0.002017) | 0.005037 / 0.011008 (-0.005971) | 0.102168 / 0.038508 (0.063659) | 0.035351 / 0.023109 (0.012242) | 0.299616 / 0.275898 (0.023718) | 0.333269 / 0.323480 (0.009789) | 0.008215 / 0.007986 (0.000229) | 0.005047 / 0.004328 (0.000718) | 0.074257 / 0.004250 (0.070007) | 0.045080 / 0.037052 (0.008028) | 0.300657 / 0.258489 (0.042168) | 0.357569 / 0.293841 (0.063728) | 0.038614 / 0.128546 (-0.089932) | 0.011995 / 0.075646 (-0.063651) | 0.369141 / 0.419271 (-0.050130) | 0.047603 / 0.043533 (0.004070) | 0.297694 / 0.255139 (0.042555) | 0.315380 / 0.283200 (0.032180) | 0.105009 / 0.141683 (-0.036674) | 1.421077 / 1.452155 (-0.031078) | 1.550024 / 1.492716 (0.057308) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239026 / 0.018006 (0.221020) | 0.550010 / 0.000490 (0.549520) | 0.003294 / 0.000200 (0.003094) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027180 / 0.037411 (-0.010231) | 0.107942 / 0.014526 (0.093416) | 0.121092 / 0.176557 (-0.055464) | 0.161028 / 0.737135 (-0.576108) | 0.124615 / 0.296338 (-0.171723) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399492 / 0.215209 (0.184283) | 3.984685 / 2.077655 (1.907030) | 1.794784 / 
1.504120 (0.290664) | 1.604849 / 1.541195 (0.063654) | 1.682994 / 1.468490 (0.214504) | 0.691197 / 4.584777 (-3.893580) | 3.741816 / 3.745712 (-0.003897) | 2.092151 / 5.269862 (-3.177711) | 1.319106 / 4.565676 (-3.246570) | 0.083875 / 0.424275 (-0.340400) | 0.012473 / 0.007607 (0.004866) | 0.514057 / 0.226044 (0.288012) | 5.110217 / 2.268929 (2.841288) | 2.259105 / 55.444624 (-53.185519) | 1.914021 / 6.876477 (-4.962455) | 1.958371 / 2.142072 (-0.183701) | 0.819800 / 4.805227 (-3.985428) | 0.161153 / 6.500664 (-6.339511) | 0.061967 / 0.075469 (-0.013502) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198553 / 1.841788 (-0.643234) | 14.793201 / 8.074308 (6.718893) | 14.646807 / 10.191392 (4.455415) | 0.152805 / 0.680424 (-0.527619) | 0.029206 / 0.534201 (-0.504995) | 0.440875 / 0.579283 (-0.138408) | 0.434925 / 0.434364 (0.000561) | 0.533495 / 0.540337 (-0.006842) | 0.624479 / 1.386936 (-0.762457) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007346 / 0.011353 (-0.004007) | 0.005422 / 0.011008 (-0.005586) | 0.073930 / 0.038508 (0.035422) | 0.032978 / 0.023109 (0.009869) | 0.335182 / 0.275898 (0.059284) | 0.371916 / 0.323480 (0.048436) | 0.005851 / 0.007986 (-0.002135) | 0.005582 / 0.004328 (0.001254) | 0.073090 / 0.004250 (0.068839) | 0.048395 / 0.037052 (0.011342) | 0.353921 / 0.258489 (0.095432) | 0.380678 / 0.293841 (0.086837) | 0.036628 / 0.128546 (-0.091919) | 0.012392 / 0.075646 (-0.063254) | 0.086265 / 0.419271 (-0.333006) | 0.049262 / 0.043533 (0.005729) | 0.334790 / 0.255139 (0.079651) | 0.355278 / 0.283200 (0.072078) | 0.102714 / 0.141683 (-0.038969) | 1.536366 / 1.452155 (0.084211) | 1.565984 / 1.492716 (0.073268) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216050 / 0.018006 (0.198043) | 0.554972 / 0.000490 (0.554482) | 0.002432 / 0.000200 (0.002232) | 0.000110 / 0.000054 (0.000055) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028602 / 0.037411 (-0.008809) | 0.123681 / 0.014526 (0.109155) | 0.136763 / 0.176557 (-0.039793) | 0.170083 / 0.737135 (-0.567052) | 0.138771 / 0.296338 (-0.157567) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420036 / 0.215209 (0.204827) | 4.188734 / 2.077655 (2.111079) | 2.014758 / 1.504120 (0.510638) | 1.818423 / 1.541195 (0.277228) | 1.940790 / 1.468490 (0.472300) | 0.691420 / 4.584777 (-3.893357) | 3.782996 / 3.745712 (0.037284) | 2.131278 / 5.269862 (-3.138583) | 1.363043 / 4.565676 (-3.202633) | 0.087182 / 0.424275 (-0.337093) | 0.012448 / 0.007607 (0.004841) | 0.519296 / 0.226044 (0.293252) | 5.220397 / 2.268929 (2.951469) | 2.474243 / 55.444624 (-52.970381) | 2.139726 / 6.876477 (-4.736751) | 2.200700 / 2.142072 (0.058627) | 0.841171 / 4.805227 (-3.964056) | 0.169234 / 6.500664 (-6.331430) | 0.063879 / 0.075469 (-0.011590) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260262 / 1.841788 (-0.581526) | 14.853209 / 8.074308 (6.778901) | 13.944085 / 10.191392 (3.752693) | 0.192014 / 0.680424 (-0.488410) | 0.017811 / 0.534201 (-0.516390) | 0.427166 / 0.579283 (-0.152117) | 0.438263 / 0.434364 (0.003899) | 0.538815 / 0.540337 (-0.001523) | 0.641398 / 1.386936 (-0.745538) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#139e9ae67a88cd79274bbf8315d861ee8bc7175f \"CML watermark\")\n" ]
null
[]
Add JAX-formatting documentation
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5535/timeline
## What's in this PR? As a follow-up to #5522, I've created this entry in the documentation to explain how to use `.with_format("jax")` and why it is useful. @lhoestq Feel free to drop any feedback and/or suggestions, as probably more useful features can be included there!
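A minimal usage sketch of the formatting this PR documents; the printed dtypes in the comments are assumptions based on JAX's defaults rather than quotes from the new docs page:

```python
# Minimal illustration of .with_format("jax"); dtype comments are assumptions.
from datasets import Dataset

ds = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0]], "y": [0, 1]})
ds = ds.with_format("jax")

print(ds[0])        # {'x': Array([1., 2.], dtype=float32), 'y': Array(0, dtype=int32)}
print(ds[:2]["x"])  # a single jax.numpy array of shape (2, 2)
```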
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5535.diff", "html_url": "https://github.com/huggingface/datasets/pull/5535", "merged_at": "2023-02-20T10:32:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/5535.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5535" }
1,586,520,369
https://api.github.com/repos/huggingface/datasets/issues/5535/comments
PR_kwDODunzps5KEb5L
null
5,535
https://api.github.com/repos/huggingface/datasets/issues/5535/events
true
open
2023-02-15T16:34:25Z
null
https://api.github.com/repos/huggingface/datasets/issues/5534
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5534/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5534/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4", "events_url": "https://api.github.com/users/ArneBinder/events{/privacy}", "followers_url": "https://api.github.com/users/ArneBinder/followers", "following_url": "https://api.github.com/users/ArneBinder/following{/other_user}", "gists_url": "https://api.github.com/users/ArneBinder/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArneBinder", "id": 3375489, "login": "ArneBinder", "node_id": "MDQ6VXNlcjMzNzU0ODk=", "organizations_url": "https://api.github.com/users/ArneBinder/orgs", "received_events_url": "https://api.github.com/users/ArneBinder/received_events", "repos_url": "https://api.github.com/users/ArneBinder/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArneBinder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArneBinder/subscriptions", "type": "User", "url": "https://api.github.com/users/ArneBinder" }
https://github.com/huggingface/datasets/issues/5534
[]
false
2023-03-03T16:31:33Z
null
null
[ "Hi! This code works for me locally or in Colab. What's the output of `python -c \"import pyarrow as pa; print(pa.__version__)\"` when you run it inside your environment?", "Thanks for looking into this!\r\nThe output of `python -c \"import pyarrow as pa; print(pa.__version__)\"` is:\r\n```\r\n11.0.0\r\n```\r\n\r\nI did the following to setup the environment:\r\n```\r\nconda create -n datasets_debug python=3.9\r\nconda activate datasets_debug\r\npip install datasets==2.9.0\r\n```\r\n\r\nI just tested this on another machine (Ubuntu 18.04.6 LTS) with the same result as mentioned in the issue description.\r\n" ]
null
[]
map() breaks at certain dataset size when using Array3D
NONE
https://api.github.com/repos/huggingface/datasets/issues/5534/timeline
### Describe the bug `map()` magically breaks when using a `Array3D` feature and mapping it. I created a very simple dummy dataset (see below). When filtering it down to 95 elements I can apply map, but it breaks when filtering it down to just 96 entries with the following exception: ``` Traceback (most recent call last): File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3255, in _map_single writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 581, in finalize self.write_examples_on_file() File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 440, in write_examples_on_file batch_examples[col] = array_concat(arrays) File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1931, in array_concat return _concat_arrays(arrays) File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1901, in _concat_arrays return array_type.wrap_array(_concat_arrays([array.storage for array in arrays])) File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays _concat_arrays([array.values for array in arrays]), File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays _concat_arrays([array.values for array in arrays]), File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1920, in _concat_arrays return pa.ListArray.from_arrays( File "pyarrow/array.pxi", line 1997, in pyarrow.lib.ListArray.from_arrays File "pyarrow/array.pxi", line 1527, in pyarrow.lib.Array.validate File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Negative offsets in list array During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2815, in map return self._map_single( File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 546, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 513, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/fingerprint.py", line 480, in wrapper out = func(self, *args, **kwargs) File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3259, in _map_single writer.finalize() File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 581, in finalize self.write_examples_on_file() File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 440, in write_examples_on_file batch_examples[col] = array_concat(arrays) File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1931, in array_concat return _concat_arrays(arrays) File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1901, in _concat_arrays return array_type.wrap_array(_concat_arrays([array.storage for array in 
arrays])) File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays _concat_arrays([array.values for array in arrays]), File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays _concat_arrays([array.values for array in arrays]), File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1920, in _concat_arrays return pa.ListArray.from_arrays( File "pyarrow/array.pxi", line 1997, in pyarrow.lib.ListArray.from_arrays File "pyarrow/array.pxi", line 1527, in pyarrow.lib.Array.validate File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Negative offsets in list array ``` ### Steps to reproduce the bug 1. put following dataset loading script into: debug/debug.py ```python import datasets import numpy as np class DEBUG(datasets.GeneratorBasedBuilder): """DEBUG dataset.""" def _info(self): return datasets.DatasetInfo( features=datasets.Features( { "id": datasets.Value("uint8"), "img_data": datasets.Array3D(shape=(3, 224, 224), dtype="uint8"), }, ), supervised_keys=None, ) def _split_generators(self, dl_manager): return [datasets.SplitGenerator(name=datasets.Split.TRAIN)] def _generate_examples(self): for i in range(149): image_np = np.zeros(shape=(3, 224, 224), dtype=np.int8).tolist() yield f"id_{i}", {"id": i, "img_data": image_np} ``` 2. try the following code: ```python import datasets def add_dummy_col(ex): ex["dummy"] = "test" return ex ds = datasets.load_dataset(path="debug", split="train") # works ds_filtered_works = ds.filter(lambda example: example["id"] < 95) print(f"filtered result size: {len(ds_filtered_works)}") # output: # filtered result size: 95 ds_mapped_works = ds_filtered_works.map(add_dummy_col) # fails ds_filtered_error = ds.filter(lambda example: example["id"] < 96) print(f"filtered result size: {len(ds_filtered_error)}") # output: # filtered result size: 96 ds_mapped_error = ds_filtered_error.map(add_dummy_col) ``` ### Expected behavior The example code does not fail. ### Environment info Python 3.9.16 (main, Jan 11 2023, 16:05:54); [GCC 11.2.0] :: Anaconda, Inc. on linux datasets 2.9.0
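Not a fix, but two things worth trying on the repro above, both hedged: `ds.data.validate()` and `Dataset.flatten_indices()` are real APIs (validation is used the same way in issue #5531 below), while the idea that materializing the filtered dataset sidesteps the failing slice concatenation is an untested assumption.

```python
# Diagnostic/mitigation sketch for the Array3D repro (assumptions flagged inline).
from datasets import load_dataset

ds = load_dataset(path="debug", split="train")
ds_filtered = ds.filter(lambda example: example["id"] < 96)

# 1) Check whether the filtered Arrow table is already internally inconsistent.
ds_filtered.data.validate()  # raises pyarrow.lib.ArrowInvalid on bad offsets

# 2) Assumption: rewriting the table (dropping the indices mapping) before map()
#    may avoid the slice-concatenation path that raises in the traceback above.
ds_flat = ds_filtered.flatten_indices()
ds_mapped = ds_flat.map(lambda example: {**example, "dummy": "test"})
```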
https://api.github.com/repos/huggingface/datasets
null
1,586,177,862
https://api.github.com/repos/huggingface/datasets/issues/5534/comments
I_kwDODunzps5eiydG
null
5,534
https://api.github.com/repos/huggingface/datasets/issues/5534/events
false
closed
2023-02-15T13:44:01Z
null
https://api.github.com/repos/huggingface/datasets/issues/5533
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5533/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5533/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/38854604?v=4", "events_url": "https://api.github.com/users/AJDERS/events{/privacy}", "followers_url": "https://api.github.com/users/AJDERS/followers", "following_url": "https://api.github.com/users/AJDERS/following{/other_user}", "gists_url": "https://api.github.com/users/AJDERS/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AJDERS", "id": 38854604, "login": "AJDERS", "node_id": "MDQ6VXNlcjM4ODU0NjA0", "organizations_url": "https://api.github.com/users/AJDERS/orgs", "received_events_url": "https://api.github.com/users/AJDERS/received_events", "repos_url": "https://api.github.com/users/AJDERS/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AJDERS/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AJDERS/subscriptions", "type": "User", "url": "https://api.github.com/users/AJDERS" }
https://github.com/huggingface/datasets/pull/5533
[]
false
2023-02-28T14:46:13Z
2023-02-28T14:46:12Z
null
[ "I agree that it would be a good idea to introduce a `combiner` argument in another PR.\r\n\r\nI did take quite a lot of inspiration from the implementation of `map`, but it did not seem obvious how to resuse `map` for the implementation. Do you have any suggestions, i could give a try?\r\n\r\nThose were exactly my thoughts, regarding the non-obvious initializer for batched and formatted datasets, so i agree! I'll introduce a `initializer` argument, and have it mandatory when `batched=True`.", "I added `initializer`. It is optional for `batched=False` and mandatory for `batched=True`. It has to be of the same length as `input_columns`, if `input_columns=None` it has to have the same length as `_data.column_names`. \r\n\r\nIf the initializer is not set for `batched=False` the first example is set as the `initializer`. \r\n\r\nThe initializer is used to initiliaze for each shard, so that means if that:\r\n```python\r\ndset = Dataset.from_dict({\"x\": [1, 2, 3]})\r\nsum_reduce = lambda x, y: x + y\r\nreduction = dset.reduce(sum_reduce, batched=True, initializer=1, input_columns='x', num_proc=2)\r\n# reduction is 8, i.e. reduction + num_proc * initializer\r\n```", "> I added initializer. It is optional for batched=False and mandatory for batched=True. It has to be of the same length as input_columns, if input_columns=None it has to have the same length as _data.column_names.\r\n> \r\n> If the initializer is not set for batched=False the first example is set as the initializer.\r\n\r\nSounds good to me !\r\n\r\n> The initializer is used to initiliaze for each shard, so that means if that:\r\n> \r\n> ```python\r\n> dset = Dataset.from_dict({\"x\": [1, 2, 3]})\r\n> sum_reduce = lambda x, y: x + y\r\n> reduction = dset.reduce(sum_reduce, batched=True, initializer=1, input_columns='x', num_proc=2)\r\n> # reduction is 8, i.e. reduction + num_proc * initializer\r\n> ```\r\n\r\nHmm this can be confusing for some users. Maybe we should consider making `combiner` mandatory for multiprocessing.\r\n\r\nIf we agree on this, maybe for this PR you can either:\r\n- remove multiprocessing (and we add combiner + multiprocessing in a subsequent PR)\r\n- OR add `combiner` directly\r\n\r\nMaybe we can get more feedback from @huggingface/datasets as well", "> > I added initializer. It is optional for batched=False and mandatory for batched=True. It has to be of the same length as input_columns, if input_columns=None it has to have the same length as _data.column_names.\r\n> > If the initializer is not set for batched=False the first example is set as the initializer.\r\n> \r\n> Sounds good to me !\r\n> \r\n> > The initializer is used to initiliaze for each shard, so that means if that:\r\n> > ```python\r\n> > dset = Dataset.from_dict({\"x\": [1, 2, 3]})\r\n> > sum_reduce = lambda x, y: x + y\r\n> > reduction = dset.reduce(sum_reduce, batched=True, initializer=1, input_columns='x', num_proc=2)\r\n> > # reduction is 8, i.e. reduction + num_proc * initializer\r\n> > ```\r\n> \r\n> Hmm this can be confusing for some users. Maybe we should consider making `combiner` mandatory for multiprocessing.\r\n> \r\n> If we agree on this, maybe for this PR you can either:\r\n> \r\n> * remove multiprocessing (and we add combiner + multiprocessing in a subsequent PR)\r\n> * OR add `combiner` directly\r\n> \r\n> Maybe we can get more feedback from @huggingface/datasets as well\r\n\r\nI think i prefer adding `combiner` in this PR. I think ill make `combiner` mandatory for `batched=True`, instead of assuming that `combiner=function`. 
Ill look at this one of the coming days. Also at some point i have to define `reduce` for `DatasetDict`, and not just `Dataset`.", "I added the `combiner` parameter as described. I added some examples in the docstring, as i felt it might still be a bit confusing what happens during multiprocessing / batching.\r\n\r\nStill need to look at `DatasetDict`.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5533). All of your documentation changes will be reflected on that endpoint.", "Feel free to merge `main` into your branch - we fixed some CI failures today", "The proposed API doesn't seem intuitive to me - one can already use `functools.reduce` or `Dataset.map` for this purpose ([Colab](https://colab.research.google.com/drive/1jCLv31Y4cDfqD0lhO0AnqEv3Or-LLvWe?usp=sharing) with examples), so perhaps we could have a section in the docs that uses these methods to perform reductions rather than introducing a new method (which needs to be maintained later)", "Thanks for sharing this google colab, it has nice examples !\r\n\r\nThough I still think `functools.reduce` with multiprocessing can be a pain - we offer something easier here:\r\n- no need to use a pool yourself\r\n- no need to use `map` just to iterate on the dataset (not its main purpose)\r\n- native support for lambdas (using dill)\r\n- the combiner is **mandatory** for multiprocessing to avoid ending up with an incorrect result as in your example\r\n\r\nHowever I agree that maintaining this can be challenging, especially if you think about how `map` already is, and if we also have to deal with dataset formatting.", "> native support for lambdas (using dill)\r\n\r\nReplacing `multiprocessing` with `multiprocess` in the example would allow that.\r\n\r\n> no need to use map just to iterate on the dataset (not its main purpose)\r\n\r\nNot the main purpose, but this was mentioned as a \"feature\" in the previous docs if I remember.\r\n\r\nAnd all this is related to the multi-processing case, which we can document.\r\n\r\nBesides the linked issue, I can't find requests for `Dataset.reduce`, which makes me think `functools.reduce` does the job for most users.", "> Besides the linked issue, I can't find requests for Dataset.reduce, which makes me think functools.reduce does the job for most users.\r\n\r\nI think @srush was looking for a way to do a word count but ended up using a single processed `map`. I also saw some users on the forum wanting to compute `max`\r\n\r\n> Not the main purpose, but this was mentioned as a \"feature\" in the previous docs if I remember.\r\n> \r\n> And all this is related to the multi-processing case, which we can document.\r\n\r\nYup indeed", "While counting is one example, I often find I want to compute different statistics over a dataset. This seems like a natural way to do it in a stateless manner.\n\n\nI guess you could use functools reduce, but that wouldn't allow batching, right?", "I've updated the [Colab](https://colab.research.google.com/drive/1jCLv31Y4cDfqD0lhO0AnqEv3Or-LLvWe?usp=sharing) with an example that reduces batches with `map` and then computes the final result. 
It would be nice to have a similar example (explained in detail) in the docs to show the full power of `map`.\r\n\r\nPlus, for simple reductions such as `max`, one can do `pc.max(ds.with_format(\"arrow\")[\"col\"])` to directly get the result (without loading the entire column in RAM).\r\n\r\n@srush \r\n\r\n> I guess you could use functools reduce, but that wouldn't allow batching, right?\r\n\r\nYou can use `.iter(batch_size)` to get batches\r\n ", "That `functools` tools example is clean. I didn't know about `iter`. That would handle my use case.\n\nThe stateful `map` with a global variable is pretty hairy. I don't think we should recommend people do that.\n\n", "Whenever I in the past wanted to calculate statistics for datasets I used `functools` similarly to how it's described in the colab, but I always felt it was a bit of a hassle to use it together with multiprocessing, which is why I picked up the issue, to do it \"once and for all\".", "Should i close this and open another PR, with descriptions of how to use `map` for reduction, or?", "Yes I think good documentation is the way to go here. @mariosasko 's examples are clear and efficient.\r\n\r\nMaybe we could have an `Aggregations` section in the `Process` page with some guides on how to:\r\n- use `.map()` to compute aggregates\r\n- use `.with_format(\"arrow\")` for max, min, etc. to save RAM and get max speed\r\n- use a multiprocessed `.map()` to get partial results in parallel and combine them (max text length example)\r\n- (advanced) use multiprocessing with an arbitrary accumulator (word count example)\r\n\r\nAnd also a new conceptual guide on `Multiprocessed mapping` to say that it helps speed up CPU intensive processing but why it may lead to incorrect results when computing aggregates.\r\n\r\ncc @stevhliu for visibility and if you have some comments", "I would create a `Reduce` - to be more exact - subsection under `Map` to demonstrate these examples since we're showing how they can be done with the `Dataset.map` function. It'd also be good to add a link to the new concept guide from this section to solidify user understanding :)", "Coolio. Ill close this PR and get going on another one adding what we've discussed during the next couple of days!" ]
null
[]
Add reduce function
NONE
https://api.github.com/repos/huggingface/datasets/issues/5533/timeline
This PR closes #5496. I tried to imitate the `reduce` method from `functools`, i.e. the function input must be a binary operation. I assume that the input type has an empty element, i.e. `input_type()` is defined, as the accumulator is instantiated as this object. I'm not sure whether this is a reasonable assumption? If `batched=True`, the reduction of each shard is _not_ returned, but the reduction of the entire dataset. I was unsure whether this was an intuitive API, or whether it would make more sense to return the reduction of each shard?
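As an alternative grounded in the discussion above, the same reductions can be written today with `functools.reduce` over `Dataset.iter(batch_size=...)`, as suggested in the comments. A sketch; the max-length reduction is an illustrative example, not code from this PR:

```python
# Map-free reduction sketch using functools.reduce + Dataset.iter.
import functools

from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "bbb", "cc"]})


def combine(acc, batch):
    # batch is a dict of columns, e.g. {"text": ["a", "bbb"]}
    return max(acc, max(len(t) for t in batch["text"]))


max_len = functools.reduce(combine, ds.iter(batch_size=2), 0)
print(max_len)  # 3
```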
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5533.diff", "html_url": "https://github.com/huggingface/datasets/pull/5533", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5533.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5533" }
1,585,885,871
https://api.github.com/repos/huggingface/datasets/issues/5533/comments
PR_kwDODunzps5KCR5I
null
5,533
https://api.github.com/repos/huggingface/datasets/issues/5533/events
true
closed
2023-02-14T16:52:29Z
null
https://api.github.com/repos/huggingface/datasets/issues/5532
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5532/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5532/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/37191008?v=4", "events_url": "https://api.github.com/users/Ulipenitz/events{/privacy}", "followers_url": "https://api.github.com/users/Ulipenitz/followers", "following_url": "https://api.github.com/users/Ulipenitz/following{/other_user}", "gists_url": "https://api.github.com/users/Ulipenitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Ulipenitz", "id": 37191008, "login": "Ulipenitz", "node_id": "MDQ6VXNlcjM3MTkxMDA4", "organizations_url": "https://api.github.com/users/Ulipenitz/orgs", "received_events_url": "https://api.github.com/users/Ulipenitz/received_events", "repos_url": "https://api.github.com/users/Ulipenitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Ulipenitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ulipenitz/subscriptions", "type": "User", "url": "https://api.github.com/users/Ulipenitz" }
https://github.com/huggingface/datasets/issues/5532
[]
false
2023-02-15T16:09:19Z
2023-02-15T16:09:19Z
null
[ "Hi! You can get this behavior by specifying `stratify_by_column=\"label\"` in `train_test_split`.\r\n\r\nThis is the full example:\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset, ClassLabel\r\n\r\ndata = [\r\n {'label': 0, 'text': \"example1\"},\r\n {'label': 1, 'text': \"example2\"},\r\n {'label': 1, 'text': \"example3\"},\r\n {'label': 1, 'text': \"example4\"},\r\n {'label': 0, 'text': \"example5\"},\r\n {'label': 1, 'text': \"example6\"},\r\n {'label': 2, 'text': \"example7\"},\r\n {'label': 2, 'text': \"example8\"}\r\n]\r\n\r\nfor _ in range(10):\r\n data_set = Dataset.from_list(data)\r\n data_set = data_set.cast_column(\"label\", ClassLabel(num_classes=3))\r\n data_set = data_set.train_test_split(test_size=0.5, stratify_by_column=\"label\")\r\n unique_labels_train = np.unique(data_set[\"train\"][:][\"label\"])\r\n unique_labels_test = np.unique(data_set[\"test\"][:][\"label\"])\r\n assert len(unique_labels_train) >= len(unique_labels_test) \r\n```\r\n" ]
completed
[]
train_test_split in arrow_dataset does not ensure to keep single classes in test set
NONE
https://api.github.com/repos/huggingface/datasets/issues/5532/timeline
### Describe the bug When I have a dataset with very few (e.g. 1) examples per class and I call the train_test_split function on it, sometimes the single class will end up in the test set and thus will never be considered for training. ### Steps to reproduce the bug ``` import numpy as np from datasets import Dataset data = [ {'label': 0, 'text': "example1"}, {'label': 1, 'text': "example2"}, {'label': 1, 'text': "example3"}, {'label': 1, 'text': "example4"}, {'label': 0, 'text': "example5"}, {'label': 1, 'text': "example6"}, {'label': 2, 'text': "example7"}, {'label': 2, 'text': "example8"} ] for _ in range(10): data_set = Dataset.from_list(data) data_set = data_set.train_test_split(test_size=0.5) unique_labels_train = np.unique(data_set["train"][:]["label"]) unique_labels_test = np.unique(data_set["test"][:]["label"]) assert len(unique_labels_train) >= len(unique_labels_test) ``` ### Expected behavior I expect to have every available class at least once in my training set. ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - PyArrow version: 11.0.0 - Pandas version: 1.3.5
https://api.github.com/repos/huggingface/datasets
null
1,584,505,128
https://api.github.com/repos/huggingface/datasets/issues/5532/comments
I_kwDODunzps5ecaEo
null
5,532
https://api.github.com/repos/huggingface/datasets/issues/5532/events
false
open
2023-02-14T15:39:49Z
null
https://api.github.com/repos/huggingface/datasets/issues/5531
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5531/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5531/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/issues/5531
[]
false
2023-02-14T15:46:09Z
null
null
[]
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
Invalid Arrow data from JSONL
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5531/timeline
This code fails: ```python from datasets import Dataset ds = Dataset.from_json(path_to_file) ds.data.validate() ``` raises ```python ArrowInvalid: Column 2: In chunk 1: Invalid: Struct child array #3 invalid: Invalid: Length spanned by list offsets (4064) larger than values array (length 4063) ``` This causes many issues for @TevenLeScao: - `map` fails because it fails to rewrite invalid arrow arrays ```python ~/Desktop/hf/datasets/src/datasets/arrow_writer.py in write_examples_on_file(self) 438 if all(isinstance(row[0][col], (pa.Array, pa.ChunkedArray)) for row in self.current_examples): 439 arrays = [row[0][col] for row in self.current_examples] --> 440 batch_examples[col] = array_concat(arrays) 441 else: 442 batch_examples[col] = [ ~/Desktop/hf/datasets/src/datasets/table.py in array_concat(arrays) 1885 1886 if not _is_extension_type(array_type): -> 1887 return pa.concat_arrays(arrays) 1888 1889 def _offsets_concat(offsets): ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.concat_arrays() ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowIndexError: array slice would exceed array length ``` - `to_dict()` **segfaults** ⚠️ ```python /Users/runner/work/crossbow/crossbow/arrow/cpp/src/arrow/array/data.cc:99: Check failed: (off) <= (length) Slice offset greater than array length ``` To reproduce: unzip the archive and run the above code using `sanity_oscar_en.jsonl` [sanity_oscar_en.jsonl.zip](https://github.com/huggingface/datasets/files/10734124/sanity_oscar_en.jsonl.zip) PS: reading using pandas and converting to Arrow works though (note that the dataset lives in RAM in this case): ```python ds = Dataset.from_pandas(pd.read_json(path_to_file, lines=True)) ds.data.validate() ```
https://api.github.com/repos/huggingface/datasets
null
1,584,387,276
https://api.github.com/repos/huggingface/datasets/issues/5531/comments
I_kwDODunzps5eb9TM
null
5,531
https://api.github.com/repos/huggingface/datasets/issues/5531/events
false
closed
2023-02-13T19:33:23Z
null
https://api.github.com/repos/huggingface/datasets/issues/5530
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5530/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5530/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
https://github.com/huggingface/datasets/pull/5530
[]
false
2023-02-14T14:40:41Z
2023-02-14T12:23:58Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008837 / 0.011353 (-0.002516) | 0.004608 / 0.011008 (-0.006400) | 0.101821 / 0.038508 (0.063312) | 0.030300 / 0.023109 (0.007191) | 0.301275 / 0.275898 (0.025377) | 0.365027 / 0.323480 (0.041547) | 0.007043 / 0.007986 (-0.000943) | 0.003493 / 0.004328 (-0.000835) | 0.078444 / 0.004250 (0.074194) | 0.036963 / 0.037052 (-0.000089) | 0.310510 / 0.258489 (0.052020) | 0.343769 / 0.293841 (0.049928) | 0.033560 / 0.128546 (-0.094986) | 0.011427 / 0.075646 (-0.064220) | 0.323542 / 0.419271 (-0.095730) | 0.043063 / 0.043533 (-0.000470) | 0.308869 / 0.255139 (0.053730) | 0.326436 / 0.283200 (0.043236) | 0.091775 / 0.141683 (-0.049908) | 1.471020 / 1.452155 (0.018865) | 1.494328 / 1.492716 (0.001612) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.009299 / 0.018006 (-0.008707) | 0.415705 / 0.000490 (0.415215) | 0.002406 / 0.000200 (0.002206) | 0.000066 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022959 / 0.037411 (-0.014452) | 0.097111 / 0.014526 (0.082585) | 0.103399 / 0.176557 (-0.073157) | 0.144385 / 0.737135 (-0.592750) | 0.109069 / 0.296338 (-0.187269) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417796 / 0.215209 (0.202587) | 4.158198 / 2.077655 (2.080543) | 
1.862036 / 1.504120 (0.357916) | 1.650130 / 1.541195 (0.108936) | 1.717150 / 1.468490 (0.248660) | 0.691704 / 4.584777 (-3.893073) | 3.328254 / 3.745712 (-0.417458) | 1.850070 / 5.269862 (-3.419792) | 1.154331 / 4.565676 (-3.411346) | 0.082199 / 0.424275 (-0.342076) | 0.012226 / 0.007607 (0.004619) | 0.522491 / 0.226044 (0.296446) | 5.244181 / 2.268929 (2.975253) | 2.286651 / 55.444624 (-53.157973) | 1.954439 / 6.876477 (-4.922038) | 1.992052 / 2.142072 (-0.150020) | 0.804779 / 4.805227 (-4.000449) | 0.147341 / 6.500664 (-6.353323) | 0.063863 / 0.075469 (-0.011606) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.270778 / 1.841788 (-0.571010) | 13.676378 / 8.074308 (5.602070) | 14.253498 / 10.191392 (4.062106) | 0.170748 / 0.680424 (-0.509676) | 0.028451 / 0.534201 (-0.505750) | 0.395034 / 0.579283 (-0.184249) | 0.407512 / 0.434364 (-0.026852) | 0.466740 / 0.540337 (-0.073598) | 0.564338 / 1.386936 (-0.822598) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006733 / 0.011353 (-0.004620) | 0.004635 / 0.011008 (-0.006373) | 0.075464 / 0.038508 (0.036956) | 0.027732 / 0.023109 (0.004623) | 0.343622 / 0.275898 (0.067724) | 0.380388 / 0.323480 (0.056908) | 0.005177 / 0.007986 (-0.002808) | 0.003435 / 0.004328 (-0.000893) | 0.074546 / 0.004250 (0.070296) | 0.039115 / 0.037052 (0.002063) | 0.342207 / 0.258489 (0.083718) | 0.390324 / 0.293841 (0.096483) | 0.031665 / 0.128546 (-0.096882) | 0.011695 / 0.075646 (-0.063951) | 0.085788 / 0.419271 (-0.333484) | 0.042423 / 0.043533 (-0.001110) | 0.340748 / 0.255139 (0.085609) | 0.372813 / 0.283200 (0.089614) | 0.092395 / 0.141683 (-0.049288) | 1.502158 / 1.452155 (0.050004) | 1.618233 / 1.492716 (0.125516) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224451 / 0.018006 (0.206444) | 0.398712 / 0.000490 (0.398222) | 0.002739 / 0.000200 (0.002539) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025393 / 0.037411 (-0.012018) | 0.100480 / 0.014526 (0.085954) | 0.106913 / 0.176557 (-0.069644) | 0.148639 / 0.737135 (-0.588496) | 0.110098 / 0.296338 (-0.186240) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439359 / 0.215209 (0.224150) | 4.396801 / 2.077655 (2.319146) | 2.069809 / 1.504120 (0.565689) | 1.851014 / 1.541195 (0.309820) | 1.885003 / 1.468490 (0.416513) | 0.701387 / 4.584777 (-3.883390) | 3.404943 / 3.745712 (-0.340769) | 1.874506 / 5.269862 (-3.395355) | 1.174925 / 4.565676 (-3.390752) | 0.083282 / 0.424275 (-0.340993) | 0.012352 / 0.007607 (0.004745) | 0.543058 / 0.226044 (0.317013) | 5.458186 / 2.268929 (3.189258) | 2.562159 / 55.444624 (-52.882466) | 2.198810 / 6.876477 (-4.677667) | 2.238976 / 2.142072 (0.096903) | 0.810958 / 4.805227 (-3.994269) | 0.153341 / 6.500664 (-6.347323) | 0.067773 / 0.075469 (-0.007696) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303938 / 1.841788 (-0.537850) | 14.170363 / 8.074308 (6.096055) | 13.727012 / 10.191392 (3.535620) | 0.129118 / 0.680424 (-0.551306) | 0.016746 / 0.534201 (-0.517455) | 0.382759 / 0.579283 (-0.196524) | 0.391070 / 0.434364 (-0.043294) | 0.461197 / 0.540337 (-0.079141) | 0.557641 / 1.386936 (-0.829295) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#004bba88db03fb87d57252e38a4d7abdb0a5f0a9 \"CML watermark\")\n" ]
null
[]
Add missing license in `NumpyFormatter`
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5530/timeline
## What's in this PR? As discussed with @lhoestq in https://github.com/huggingface/datasets/pull/5522, the license header for `NumpyFormatter` at `datasets/formatting/np_formatter.py` was missing, although it is present in the rest of the `formatting/*.py` files. This PR simply adds it there.
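For reference, the header being added is the standard Apache 2.0 notice used in the sibling `formatting/*.py` modules; a representative version is shown below (the exact copyright line may differ from what was committed):

```python
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```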
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5530.diff", "html_url": "https://github.com/huggingface/datasets/pull/5530", "merged_at": "2023-02-14T12:23:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/5530.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5530" }
1,582,938,241
https://api.github.com/repos/huggingface/datasets/issues/5530/comments
PR_kwDODunzps5J4W_4
null
5,530
https://api.github.com/repos/huggingface/datasets/issues/5530/events
true
closed
2023-02-13T14:54:55Z
null
https://api.github.com/repos/huggingface/datasets/issues/5529
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5529/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5529/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
https://github.com/huggingface/datasets/pull/5529
[]
false
2023-02-23T18:14:32Z
2023-02-23T18:05:26Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hmm, should this also be updated in `Dataset.load_from_disk` and `DatasetDict.load_from_disk`? https://github.com/huggingface/datasets/pull/5466 As there the paths are joined using `Path(..., ...)` and it won't work on Windows OS according to that PR, right?", "Hi, @lhoestq could you review this PR? Thank you in advance and sorry for the ping 🤗 ", "Besides that, I was also thinking of adding a `skip_validation` boolean arg in both `Dataset.load_from_disk` and `DatasetDict.load_from_disk` to avoid duplicating those calls too when those functions are called from `datasets.load_from_disk`.\r\n\r\nSo that `skip_validation` is set to `False` by default, but passed as `True` if called from `datasets.load_from_disk`, and that just affects the file checking part of the code on both functions, do you agree @lhoestq?", "I think we should always verify", "> I think we should always verify\r\n\r\nBut with the current way we're also verifying twice right? First on `datasets.load_from_disk` then on `Dataset.load_from_disk`, right?\r\n\r\nMaybe a warning before calling either `Dataset.load_from_disk` or `DatasetDict.load_from_disk` is enough?\r\n\r\ne.g. **\"Consider using `Dataset.load_from_disk` instead to avoid `fsspec` from verifying the presence of `dataset_info.json` and `state.json` in the remote filesystem twice.\"** to be showed just when `fs` is remote.", "I don't think it's worth adding a new argument just for that. Usually we keep the set of arguments to the strict minimum", "> I don't think it's worth adding a new argument just for that. Usually we keep the set of arguments to the strict minimum\r\n\r\nWhat about the warning?\r\n\r\nAnyway, if you don't think that's worth it feel free to merge 👍🏻 ", "> What about the warning?\r\n\r\nWe may show warnings for suggestions, but only if the user does a very unoptimized thing. Here we're not at that level ^^'", "Thanks for the explanation and feedback @lhoestq 🤗 ", "> Thank you :) Added my last suggestions:\r\n\r\nThanks for the feedback, I agree with everything besides one nit! 
👍🏻 ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011556 / 0.011353 (0.000203) | 0.006213 / 0.011008 (-0.004796) | 0.132390 / 0.038508 (0.093882) | 0.034609 / 0.023109 (0.011500) | 0.361156 / 0.275898 (0.085258) | 0.402524 / 0.323480 (0.079044) | 0.009138 / 0.007986 (0.001152) | 0.005728 / 0.004328 (0.001399) | 0.115406 / 0.004250 (0.111156) | 0.041440 / 0.037052 (0.004388) | 0.370232 / 0.258489 (0.111742) | 0.409944 / 0.293841 (0.116103) | 0.053803 / 0.128546 (-0.074744) | 0.022029 / 0.075646 (-0.053617) | 0.400325 / 0.419271 (-0.018946) | 0.055324 / 0.043533 (0.011791) | 0.368699 / 0.255139 (0.113560) | 0.391836 / 0.283200 (0.108636) | 0.099356 / 0.141683 (-0.042327) | 1.687881 / 1.452155 (0.235726) | 1.752202 / 1.492716 (0.259485) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012992 / 0.018006 (-0.005014) | 0.518756 / 0.000490 (0.518267) | 0.004702 / 0.000200 (0.004502) | 0.000105 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028371 / 0.037411 (-0.009041) | 0.127058 / 0.014526 (0.112532) | 0.136908 / 0.176557 (-0.039649) | 0.210168 / 0.737135 (-0.526968) | 0.139600 / 0.296338 (-0.156738) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.570901 / 0.215209 (0.355692) | 5.967213 / 2.077655 (3.889558) | 2.286745 / 1.504120 (0.782626) | 1.950682 / 1.541195 (0.409487) | 2.062536 / 1.468490 
(0.594046) | 1.255671 / 4.584777 (-3.329106) | 5.454951 / 3.745712 (1.709238) | 3.076429 / 5.269862 (-2.193433) | 2.082871 / 4.565676 (-2.482806) | 0.150069 / 0.424275 (-0.274206) | 0.014864 / 0.007607 (0.007257) | 0.774672 / 0.226044 (0.548627) | 7.873992 / 2.268929 (5.605064) | 3.196165 / 55.444624 (-52.248459) | 2.366854 / 6.876477 (-4.509623) | 2.407381 / 2.142072 (0.265309) | 1.419130 / 4.805227 (-3.386097) | 0.249210 / 6.500664 (-6.251454) | 0.088648 / 0.075469 (0.013179) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.528368 / 1.841788 (-0.313420) | 17.554000 / 8.074308 (9.479692) | 20.773300 / 10.191392 (10.581908) | 0.216903 / 0.680424 (-0.463521) | 0.046544 / 0.534201 (-0.487657) | 0.538238 / 0.579283 (-0.041045) | 0.673926 / 0.434364 (0.239562) | 0.656108 / 0.540337 (0.115770) | 0.774026 / 1.386936 (-0.612910) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010177 / 0.011353 (-0.001176) | 0.006334 / 0.011008 (-0.004675) | 0.100097 / 0.038508 (0.061589) | 0.039996 / 0.023109 (0.016887) | 0.420225 / 0.275898 (0.144327) | 0.437694 / 0.323480 (0.114214) | 0.007987 / 0.007986 (0.000002) | 0.005782 / 0.004328 (0.001454) | 0.106421 / 0.004250 (0.102171) | 0.046993 / 0.037052 (0.009941) | 0.397304 / 0.258489 (0.138815) | 0.441780 / 0.293841 (0.147939) | 0.064594 / 0.128546 (-0.063952) | 0.020823 / 0.075646 (-0.054823) | 0.108854 / 0.419271 (-0.310417) | 0.076457 / 0.043533 (0.032924) | 0.401712 / 0.255139 (0.146573) | 0.459292 / 0.283200 (0.176093) | 0.125044 / 0.141683 (-0.016639) | 1.765531 / 1.452155 (0.313377) | 1.845429 / 1.492716 (0.352713) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225549 / 0.018006 (0.207543) | 0.524402 / 0.000490 (0.523913) | 0.006994 / 0.000200 (0.006794) | 0.000120 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033787 / 0.037411 (-0.003624) | 0.144895 / 0.014526 (0.130369) | 0.147185 / 0.176557 (-0.029371) | 0.228227 / 0.737135 (-0.508908) | 0.164967 / 0.296338 (-0.131371) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.628242 / 0.215209 (0.413033) | 6.348176 / 2.077655 (4.270522) | 2.615832 / 1.504120 (1.111712) | 2.217481 / 1.541195 (0.676286) | 2.287058 / 1.468490 (0.818568) | 1.322854 / 4.584777 (-3.261923) | 5.547831 / 3.745712 (1.802119) | 3.199467 / 5.269862 (-2.070395) | 2.135297 / 4.565676 (-2.430380) | 0.165134 / 0.424275 (-0.259141) | 0.014753 / 0.007607 (0.007146) | 0.778579 / 0.226044 (0.552535) | 7.982329 / 2.268929 (5.713401) | 3.331712 / 55.444624 (-52.112913) | 2.642606 / 6.876477 (-4.233871) | 2.699362 / 2.142072 (0.557290) | 1.572268 / 4.805227 (-3.232959) | 0.273348 / 6.500664 (-6.227316) | 0.082975 / 0.075469 (0.007506) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.730421 / 1.841788 (-0.111367) | 18.154495 / 8.074308 (10.080187) | 20.969885 / 10.191392 (10.778493) | 0.233652 / 0.680424 (-0.446772) | 0.026609 / 0.534201 (-0.507592) | 0.546874 / 0.579283 (-0.032410) | 0.602891 / 0.434364 (0.168527) | 0.641073 / 0.540337 (0.100736) | 0.772138 / 1.386936 (-0.614798) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#20703458e3c42ee7bfc1a26e47805c0db4dda2d6 \"CML watermark\")\n" ]
null
[]
Fix `datasets.load_from_disk`, `DatasetDict.load_from_disk` and `Dataset.load_from_disk`
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5529/timeline
## What's in this PR? After playing around a little bit with 🤗`datasets` in Google Cloud Storage (GCS), I found some things that should IMO be fixed in the code: * `datasets.load_from_disk` does not check whether `state.json` is there too when trying to load a `Dataset`; only `dataset_info.json` is checked * `DatasetDict.load_from_disk` does not check whether `state.json` is there too when redirecting the user to load it with `datasets.load_from_disk`; only `dataset_info.json` is checked, which is misleading, as it won't be loadable that way either * `Dataset.load_from_disk` is missing the `extract_path_from_uri` call before checking in the `fs` whether `dataset_info.json` and `dataset_dict.json` exist, which when using `gcsfs` leads to a 400 error code (not blocking) due to `gcsfs.retry.HttpError: Invalid bucket name: 'gs:', 400` * And, finally, the exception messages are a little bit misleading/incomplete IMO, so I've tried to include all the relevant information in the messages to avoid confusion when interpreting the exceptions
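A minimal sketch of the kind of check described above, assuming `fsspec` is installed; `_looks_like_saved_dataset` is an illustrative helper, not the actual `datasets` internals (which use `extract_path_from_uri`):

```python
# Minimal sketch (illustrative helper, not the actual `datasets` code):
# strip the protocol from the URI before probing the filesystem, and require
# *both* metadata files before treating the directory as a saved `Dataset`.
import posixpath

import fsspec


def _looks_like_saved_dataset(dataset_path: str) -> bool:
    # url_to_fs maps e.g. "gs://bucket/ds" -> (GCSFileSystem, "bucket/ds"),
    # avoiding the "Invalid bucket name: 'gs:'" error mentioned above.
    fs, path = fsspec.core.url_to_fs(dataset_path)
    return fs.isfile(posixpath.join(path, "dataset_info.json")) and fs.isfile(
        posixpath.join(path, "state.json")
    )
```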
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5529.diff", "html_url": "https://github.com/huggingface/datasets/pull/5529", "merged_at": "2023-02-23T18:05:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/5529.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5529" }
1,582,501,233
https://api.github.com/repos/huggingface/datasets/issues/5529/comments
PR_kwDODunzps5J26Fq
null
5,529
https://api.github.com/repos/huggingface/datasets/issues/5529/events
true
open
2023-02-13T11:43:47Z
null
https://api.github.com/repos/huggingface/datasets/issues/5528
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5528/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5528/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/38854604?v=4", "events_url": "https://api.github.com/users/AJDERS/events{/privacy}", "followers_url": "https://api.github.com/users/AJDERS/followers", "following_url": "https://api.github.com/users/AJDERS/following{/other_user}", "gists_url": "https://api.github.com/users/AJDERS/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AJDERS", "id": 38854604, "login": "AJDERS", "node_id": "MDQ6VXNlcjM4ODU0NjA0", "organizations_url": "https://api.github.com/users/AJDERS/orgs", "received_events_url": "https://api.github.com/users/AJDERS/received_events", "repos_url": "https://api.github.com/users/AJDERS/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AJDERS/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AJDERS/subscriptions", "type": "User", "url": "https://api.github.com/users/AJDERS" }
https://github.com/huggingface/datasets/pull/5528
[]
false
2023-10-06T21:58:02Z
null
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5528). All of your documentation changes will be reflected on that endpoint.", "It seems that the parameter `create_pr` is available for [`0.8.0`](https://huggingface.co/docs/huggingface_hub/v0.8.1/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file) (its not here: [`0.7.0`](https://huggingface.co/docs/huggingface_hub/v0.7.0.rc0/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file)) and onwards. I included a warning, informing the user that no PR was created.", "@nateraw you are completely right! Actually, the dataset shards is never added to the created pr, only the metadata, as the code is now. Ill look into you suggestion asap. Thank!", "@nateraw Nothing more to add, that's a perfect usage of `huggingface_hub` as far as I can tell ! :fire: \r\n\r\nA very nit improvement would be to use the [for .. else ... python statement](https://book.pythontips.com/en/latest/for_-_else.html).\r\ni.e:\r\n\r\n```py\r\nif create_pr is True and revision is not None:\r\n for discussion in get_repo_discussions(repo_id, repo_type='dataset'):\r\n if discussion.is_pull_request and discussion.git_reference == revision:\r\n create_pr = False\r\n break\r\n else:\r\n raise ValueError(\"Provided revision not found\")\r\n```\r\nNo need for the `revision_found` temporary flag when do so. Yeah ok, it's niche :wink: ", "I added the suggestions from @nateraw and @Wauplin .", "> Thanks. Some comments/suggestions below...\r\n> \r\n> Why have you removed the test for create_pr? You could add it again and just add a pytest skipif when version of huggingface_hub is lower than 0.8.1.\r\n\r\nI have added the test again. I removed it because i kept getting errors when calling `create_pull_request` with `repo_id=ds_name` where `temporary_repo = ds_name`, and thought i might look more thoroughly at it later. I have added a test called `test_test` showing this, it gives:\r\n```\r\ntests/test_upstream_hub.py:360: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n.venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:124: in _inner_fn\r\n return fn(*args, **kwargs)\r\n.venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py:3451: in create_pull_request\r\n return self.create_discussion(\r\n.venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:124: in _inner_fn\r\n return fn(*args, **kwargs)\r\n.venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py:3393: in create_discussion\r\n hf_raise_for_status(resp)\r\n(...)\r\nE huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-63ecd2cb-2cf2557a332c86ad27f687b3)\r\nE \r\nE Repository Not Found for url: https://huggingface.co/api/models/__DUMMY_TRANSFORMERS_USER__/test-16764648321590/discussions.\r\nE Please make sure you specified the correct `repo_id` and `repo_type`.\r\nE If you are trying to access a private or gated repo, make sure you are authenticated.\r\nE Invalid username or password.\r\n```", "> > Thanks. Some comments/suggestions below...\r\n> > Why have you removed the test for create_pr? You could add it again and just add a pytest skipif when version of huggingface_hub is lower than 0.8.1.\r\n> \r\n> I have added the test again. 
I removed it because i kept getting errors when calling `create_pull_request` with `repo_id=ds_name` where `temporary_repo = ds_name`, and thought i might look more thoroughly at it later. I have added a test called `test_test` showing this, it gives:\r\n> \r\n> ```\r\n> tests/test_upstream_hub.py:360: \r\n> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n> .venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:124: in _inner_fn\r\n> return fn(*args, **kwargs)\r\n> .venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py:3451: in create_pull_request\r\n> return self.create_discussion(\r\n> .venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:124: in _inner_fn\r\n> return fn(*args, **kwargs)\r\n> .venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py:3393: in create_discussion\r\n> hf_raise_for_status(resp)\r\n> (...)\r\n> E huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-63ecd2cb-2cf2557a332c86ad27f687b3)\r\n> E \r\n> E Repository Not Found for url: https://huggingface.co/api/models/__DUMMY_TRANSFORMERS_USER__/test-16764648321590/discussions.\r\n> E Please make sure you specified the correct `repo_id` and `repo_type`.\r\n> E If you are trying to access a private or gated repo, make sure you are authenticated.\r\n> E Invalid username or password.\r\n> ```\r\n\r\n@albertvillanova, @lhoestq , FYI I have looked at this again, and i haven't figured it out, so the test`test_push_dataset_to_hub_with_pull_request` and the minimal example `test_test` are still failing locally, while the other tests succeed. Do you have any advice?", "I tried to move all of the \"create pr safely\"-logic to a seperate function in `_hf_hub_fixes`. I looked at how the exceptions were raised before `huggingface_hub.utils.RepositoryNotFoundError`existed, and make changes accordingly. ", "`create_pr` was set during `push_to_hub`, even though it was `None` from the outset, hence causing tests to fail for older versions of `huggingface_hub`. This is now fixed.\r\n\r\nWith the implementation of `_hf_hub_fixes.upload_file` the function call expected `commit_message`, `commit_description`. If these are not set we call the function without them, even though we are on a version of `huggingface_hub` where they are not available in `upload_file`.\r\n\r\nWhen `huggingface_hub < 0.5.0` we assume `repo_id` of them form `organisation/name`, so now that we are calling `create_repo` in the tests with `repo_id` not of this form, we need to handle this case, this is now done.\r\n\r\nMany tests failed for `dataset_dict` for the above reasons, so the fixes from `arrow_dataset.py` were also added to `dataset_dict.py`. \r\n\r\n**All tests are now passing locally for `huggingface_hub==0.2.0` and `huggingface_hub==0.12.1`…** Im sorry I should have downgraded and went through this a long time ago, but I didn’t realise the extend of these version fixes until recently…", "Hi ! FYI bumped the `huggingface-hub` dependency to 0.11 and removed the `_hf_hub_fixes.py` - which should make this PR much easier", "Just now finding this - seems like a cool issue to contribute to. If any more help is needed please ping me! @AJDERS " ]
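The version-gating approach described in the thread above (calling `upload_file` without the newer kwargs on old `huggingface_hub` releases) can be sketched roughly as follows; the helper name and the `0.5.0` bound are illustrative assumptions taken from the discussion, not the exact `_hf_hub_fixes` code:

```python
# Illustrative sketch of version-gated kwarg forwarding (not the actual
# `_hf_hub_fixes` implementation; the 0.5.0 bound is an assumption based
# on the discussion above).
from packaging import version

import huggingface_hub


def upload_file_compat(api: huggingface_hub.HfApi, **kwargs):
    if version.parse(huggingface_hub.__version__) < version.parse("0.5.0"):
        # Older releases of `upload_file` don't accept these kwargs yet,
        # so drop them instead of crashing.
        kwargs.pop("commit_message", None)
        kwargs.pop("commit_description", None)
    return api.upload_file(**kwargs)
```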
null
[]
Push to hub in a pull request
NONE
https://api.github.com/repos/huggingface/datasets/issues/5528/timeline
Fixes #5492. Introduces a new kwarg `create_pr` in `push_to_hub`, which is passed through to `HfApi.upload_file`.
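Hypothetical usage once this lands; the kwarg name comes from the PR description, and since the PR was never merged (`merged_at: null`), treat this as a sketch rather than the released API. The repo id is a placeholder:

```python
# Hypothetical usage of the `create_pr` kwarg proposed in this PR (the PR is
# not merged, so this is a sketch, not the released `datasets` API).
from datasets import load_dataset

ds = load_dataset("imdb", split="train[:100]")
# With create_pr=True the upload would open a pull request on the Hub repo
# instead of committing directly to `main`.
ds.push_to_hub("my-username/my-dataset", create_pr=True)
```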
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5528.diff", "html_url": "https://github.com/huggingface/datasets/pull/5528", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5528.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5528" }
1,582,195,085
https://api.github.com/repos/huggingface/datasets/issues/5528/comments
PR_kwDODunzps5J13wC
null
5,528
https://api.github.com/repos/huggingface/datasets/issues/5528/events
true
closed
2023-02-12T11:51:25Z
null
https://api.github.com/repos/huggingface/datasets/issues/5527
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5527/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5527/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/5527
[]
false
2023-02-13T10:29:03Z
2023-02-13T09:24:16Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011142 / 0.011353 (-0.000211) | 0.005885 / 0.011008 (-0.005123) | 0.115374 / 0.038508 (0.076866) | 0.041704 / 0.023109 (0.018594) | 0.356996 / 0.275898 (0.081098) | 0.395076 / 0.323480 (0.071596) | 0.008726 / 0.007986 (0.000740) | 0.005528 / 0.004328 (0.001199) | 0.087817 / 0.004250 (0.083566) | 0.049273 / 0.037052 (0.012221) | 0.363778 / 0.258489 (0.105289) | 0.408801 / 0.293841 (0.114960) | 0.045232 / 0.128546 (-0.083314) | 0.013788 / 0.075646 (-0.061859) | 0.395634 / 0.419271 (-0.023637) | 0.056583 / 0.043533 (0.013051) | 0.360779 / 0.255139 (0.105640) | 0.386843 / 0.283200 (0.103643) | 0.116632 / 0.141683 (-0.025051) | 1.830020 / 1.452155 (0.377865) | 1.808720 / 1.492716 (0.316003) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221029 / 0.018006 (0.203023) | 0.489463 / 0.000490 (0.488973) | 0.002104 / 0.000200 (0.001904) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032873 / 0.037411 (-0.004539) | 0.129526 / 0.014526 (0.115000) | 0.141446 / 0.176557 (-0.035111) | 0.189222 / 0.737135 (-0.547913) | 0.149329 / 0.296338 (-0.147010) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471389 / 0.215209 (0.256180) | 4.710174 / 2.077655 (2.632519) | 2.239122 
/ 1.504120 (0.735002) | 1.977789 / 1.541195 (0.436595) | 2.107336 / 1.468490 (0.638846) | 0.816852 / 4.584777 (-3.767925) | 4.944056 / 3.745712 (1.198344) | 4.637939 / 5.269862 (-0.631922) | 2.355546 / 4.565676 (-2.210131) | 0.099324 / 0.424275 (-0.324951) | 0.014529 / 0.007607 (0.006922) | 0.596322 / 0.226044 (0.370277) | 5.972216 / 2.268929 (3.703287) | 2.697281 / 55.444624 (-52.747344) | 2.293836 / 6.876477 (-4.582641) | 2.380271 / 2.142072 (0.238199) | 1.001307 / 4.805227 (-3.803920) | 0.196981 / 6.500664 (-6.303683) | 0.074390 / 0.075469 (-0.001079) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.482915 / 1.841788 (-0.358872) | 18.739511 / 8.074308 (10.665202) | 16.768191 / 10.191392 (6.576799) | 0.203163 / 0.680424 (-0.477261) | 0.037514 / 0.534201 (-0.496687) | 0.529017 / 0.579283 (-0.050266) | 0.577591 / 0.434364 (0.143227) | 0.634057 / 0.540337 (0.093720) | 0.759812 / 1.386936 (-0.627124) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008815 / 0.011353 (-0.002537) | 0.005956 / 0.011008 (-0.005052) | 0.087912 / 0.038508 (0.049404) | 0.040291 / 0.023109 (0.017182) | 0.404079 / 0.275898 (0.128181) | 0.447309 / 0.323480 (0.123829) | 0.006515 / 0.007986 (-0.001471) | 0.005917 / 0.004328 (0.001588) | 0.085560 / 0.004250 (0.081310) | 0.057077 / 0.037052 (0.020025) | 0.403349 / 0.258489 (0.144860) | 0.465644 / 0.293841 (0.171803) | 0.043530 / 0.128546 (-0.085016) | 0.014234 / 0.075646 (-0.061412) | 0.102203 / 0.419271 (-0.317068) | 0.058335 / 0.043533 (0.014802) | 0.398488 / 0.255139 (0.143349) | 0.424127 / 0.283200 (0.140927) | 0.119058 / 0.141683 (-0.022625) | 1.748748 / 1.452155 (0.296593) | 1.822190 / 1.492716 (0.329474) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255782 / 0.018006 (0.237776) | 0.496665 / 0.000490 (0.496176) | 0.000471 / 0.000200 (0.000271) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034111 / 0.037411 (-0.003301) | 0.131442 / 0.014526 (0.116917) | 0.144660 / 0.176557 (-0.031897) | 0.188156 / 0.737135 (-0.548979) | 0.149875 / 0.296338 (-0.146463) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.502218 / 0.215209 (0.287009) | 5.004486 / 2.077655 (2.926832) | 2.420379 / 1.504120 (0.916259) | 2.194671 / 1.541195 (0.653476) | 2.306376 / 1.468490 (0.837886) | 0.856623 / 4.584777 (-3.728154) | 4.963211 / 3.745712 (1.217499) | 2.517965 / 5.269862 (-2.751896) | 1.743880 / 4.565676 (-2.821797) | 0.105270 / 0.424275 (-0.319005) | 0.014725 / 0.007607 (0.007118) | 0.621934 / 0.226044 (0.395890) | 6.183827 / 2.268929 (3.914898) | 2.945868 / 55.444624 (-52.498757) | 2.557676 / 6.876477 (-4.318801) | 2.622282 / 2.142072 (0.480210) | 1.011647 / 4.805227 (-3.793580) | 0.199573 / 6.500664 (-6.301091) | 0.076283 / 0.075469 (0.000814) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.518813 / 1.841788 (-0.322975) | 18.833017 / 8.074308 (10.758709) | 16.095249 / 10.191392 (5.903857) | 0.196667 / 0.680424 (-0.483757) | 0.022060 / 0.534201 (-0.512141) | 0.537802 / 0.579283 (-0.041481) | 0.523676 / 0.434364 (0.089312) | 0.629387 / 0.540337 (0.089049) | 0.738042 / 1.386936 (-0.648894) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#813712c3cd133f72f496d279e02344d6ee743fdf \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008608 / 0.011353 (-0.002745) | 0.004553 / 0.011008 (-0.006455) | 0.100031 / 0.038508 (0.061523) | 0.029498 / 0.023109 (0.006389) | 0.306913 / 0.275898 (0.031015) | 0.367369 / 0.323480 (0.043889) | 0.006883 / 0.007986 (-0.001103) | 0.004768 / 0.004328 (0.000440) | 0.077424 / 0.004250 (0.073173) | 0.034005 / 0.037052 (-0.003047) | 0.317772 / 0.258489 (0.059283) | 0.356859 / 0.293841 (0.063018) | 0.033717 / 0.128546 (-0.094829) | 0.011386 / 0.075646 (-0.064260) | 0.322832 / 0.419271 (-0.096439) | 0.043930 / 0.043533 (0.000397) | 0.308087 / 0.255139 (0.052948) | 0.338349 / 0.283200 (0.055149) | 0.094780 / 0.141683 (-0.046903) | 1.463454 / 1.452155 (0.011300) | 1.495055 / 1.492716 (0.002338) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191039 / 0.018006 (0.173033) | 0.414650 / 0.000490 (0.414160) | 0.002435 / 0.000200 (0.002235) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023871 / 0.037411 (-0.013540) | 0.097140 / 0.014526 (0.082614) | 0.105914 / 0.176557 (-0.070643) | 0.147375 / 0.737135 (-0.589760) | 0.107985 / 0.296338 (-0.188354) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420174 / 0.215209 (0.204965) | 4.208354 / 2.077655 (2.130700) | 1.904568 / 1.504120 (0.400448) | 1.687406 / 1.541195 (0.146212) | 1.723901 / 1.468490 (0.255411) | 0.693554 / 4.584777 (-3.891223) | 3.445474 / 3.745712 (-0.300238) | 1.904919 / 5.269862 (-3.364943) | 1.284378 / 4.565676 (-3.281298) | 0.082539 / 0.424275 (-0.341736) | 0.012490 / 0.007607 (0.004883) | 0.527778 / 0.226044 (0.301733) | 5.300766 / 2.268929 (3.031838) | 2.324666 / 55.444624 (-53.119958) | 1.977166 / 6.876477 (-4.899311) | 2.054396 / 2.142072 (-0.087677) | 0.820966 / 4.805227 (-3.984261) | 0.148584 / 6.500664 (-6.352080) | 0.063618 / 0.075469 (-0.011851) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188075 / 1.841788 (-0.653712) | 13.706950 / 8.074308 (5.632642) | 13.725122 / 10.191392 (3.533730) | 0.167379 / 0.680424 (-0.513045) | 0.028729 / 0.534201 (-0.505472) | 0.395373 / 0.579283 (-0.183910) | 0.403604 / 0.434364 (-0.030760) | 0.464290 / 0.540337 
(-0.076047) | 0.553792 / 1.386936 (-0.833144) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006565 / 0.011353 (-0.004787) | 0.004588 / 0.011008 (-0.006420) | 0.077312 / 0.038508 (0.038804) | 0.027348 / 0.023109 (0.004239) | 0.367753 / 0.275898 (0.091855) | 0.403250 / 0.323480 (0.079770) | 0.005201 / 0.007986 (-0.002785) | 0.004695 / 0.004328 (0.000366) | 0.076203 / 0.004250 (0.071953) | 0.039388 / 0.037052 (0.002336) | 0.374418 / 0.258489 (0.115929) | 0.413623 / 0.293841 (0.119782) | 0.031731 / 0.128546 (-0.096815) | 0.011644 / 0.075646 (-0.064002) | 0.086339 / 0.419271 (-0.332932) | 0.048902 / 0.043533 (0.005369) | 0.352064 / 0.255139 (0.096925) | 0.386637 / 0.283200 (0.103437) | 0.093662 / 0.141683 (-0.048021) | 1.479863 / 1.452155 (0.027709) | 1.562475 / 1.492716 (0.069758) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231874 / 0.018006 (0.213867) | 0.402185 / 0.000490 (0.401695) | 0.005252 / 0.000200 (0.005052) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025402 / 0.037411 (-0.012010) | 0.099896 / 0.014526 (0.085370) | 0.106365 / 0.176557 (-0.070192) | 0.143309 / 0.737135 (-0.593827) | 0.112311 / 0.296338 (-0.184027) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.447637 / 0.215209 (0.232428) | 4.469337 / 2.077655 (2.391682) | 2.164332 / 1.504120 (0.660212) | 1.957826 / 1.541195 (0.416631) | 1.984580 / 
1.468490 (0.516090) | 0.702909 / 4.584777 (-3.881868) | 3.361725 / 3.745712 (-0.383987) | 2.818102 / 5.269862 (-2.451760) | 1.589815 / 4.565676 (-2.975862) | 0.083647 / 0.424275 (-0.340628) | 0.012502 / 0.007607 (0.004895) | 0.545578 / 0.226044 (0.319534) | 5.480894 / 2.268929 (3.211966) | 2.605599 / 55.444624 (-52.839026) | 2.253444 / 6.876477 (-4.623032) | 2.289818 / 2.142072 (0.147746) | 0.803680 / 4.805227 (-4.001547) | 0.151870 / 6.500664 (-6.348794) | 0.066610 / 0.075469 (-0.008859) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.327390 / 1.841788 (-0.514398) | 14.046936 / 8.074308 (5.972628) | 13.643169 / 10.191392 (3.451777) | 0.128223 / 0.680424 (-0.552201) | 0.016941 / 0.534201 (-0.517260) | 0.383887 / 0.579283 (-0.195396) | 0.383891 / 0.434364 (-0.050473) | 0.440191 / 0.540337 (-0.100146) | 0.525357 / 1.386936 (-0.861579) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1575be339bc14a12229d782e2788746f27aeeb2a \"CML watermark\")\n", "Yea there must have been an update in another package that unconstrained the protobuf dependency - idk which one though", "It is `tensorboard`. I have reported the issue to `tensorflow`:\r\n- https://github.com/tensorflow/tensorflow/issues/59665" ]
null
[]
Fix benchmarks CI - pin protobuf
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5527/timeline
Fixes https://github.com/huggingface/datasets/actions/runs/4156059127/jobs/7189576331
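For context, the pin referred to here is a one-line dependency constraint in the CI requirements; the exact file and upper bound below are illustrative assumptions, not necessarily what the PR committed:

```
# e.g. in the benchmarks' CI requirements (illustrative bound):
protobuf<4.0.0  # tensorboard (pulled in via tensorflow) left protobuf unconstrained; pin until fixed upstream
```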
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5527.diff", "html_url": "https://github.com/huggingface/datasets/pull/5527", "merged_at": "2023-02-13T09:24:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/5527.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5527" }
1,581,228,531
https://api.github.com/repos/huggingface/datasets/issues/5527/comments
PR_kwDODunzps5JysSM
null
5,527
https://api.github.com/repos/huggingface/datasets/issues/5527/events
true
closed
2023-02-10T23:37:14Z
null
https://api.github.com/repos/huggingface/datasets/issues/5526
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5526/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5526/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360" }
https://github.com/huggingface/datasets/pull/5526
[]
false
2023-03-27T15:26:46Z
2023-03-27T15:18:20Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the quick review! I updated the code with your suggestion", "Thanks for the quick review @albertvillanova! I updated the code with your suggestions", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008577 / 0.011353 (-0.002776) | 0.005714 / 0.011008 (-0.005294) | 0.114718 / 0.038508 (0.076210) | 0.039799 / 0.023109 (0.016690) | 0.387530 / 0.275898 (0.111632) | 0.395739 / 0.323480 (0.072259) | 0.006775 / 0.007986 (-0.001211) | 0.006280 / 0.004328 (0.001952) | 0.086470 / 0.004250 (0.082220) | 0.054424 / 0.037052 (0.017371) | 0.361989 / 0.258489 (0.103500) | 0.424678 / 0.293841 (0.130837) | 0.043081 / 0.128546 (-0.085465) | 0.013903 / 0.075646 (-0.061743) | 0.397625 / 0.419271 (-0.021647) | 0.059789 / 0.043533 (0.016256) | 0.375195 / 0.255139 (0.120056) | 0.403724 / 0.283200 (0.120524) | 0.121470 / 0.141683 (-0.020213) | 1.734496 / 1.452155 (0.282341) | 1.820479 / 1.492716 (0.327763) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239672 / 0.018006 (0.221665) | 0.499373 / 0.000490 (0.498883) | 0.005034 / 0.000200 (0.004834) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033000 / 0.037411 (-0.004411) | 0.130930 / 0.014526 (0.116404) | 0.151690 / 0.176557 (-0.024866) | 0.211839 / 0.737135 (-0.525296) | 0.148727 / 0.296338 (-0.147612) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.480592 / 0.215209 (0.265382) | 4.809700 / 2.077655 (2.732046) | 2.232414 / 1.504120 (0.728294) | 2.035432 / 1.541195 (0.494237) | 2.115991 / 1.468490 (0.647501) | 0.817841 / 4.584777 (-3.766936) | 4.718035 / 3.745712 (0.972323) | 4.107102 / 5.269862 (-1.162759) | 2.166838 / 4.565676 (-2.398839) | 0.102207 / 0.424275 (-0.322068) | 0.014686 / 0.007607 (0.007079) | 0.599922 / 0.226044 (0.373877) | 5.985840 / 2.268929 (3.716912) | 2.769199 / 55.444624 (-52.675425) | 2.427095 / 6.876477 (-4.449382) | 2.586666 / 2.142072 (0.444593) | 0.987650 / 4.805227 (-3.817578) | 0.199419 / 6.500664 (-6.301245) | 0.076710 / 0.075469 (0.001240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.454509 / 1.841788 (-0.387278) | 18.267849 / 8.074308 (10.193541) | 16.701880 / 10.191392 (6.510488) | 0.204225 / 0.680424 (-0.476199) | 0.020295 / 0.534201 (-0.513906) | 0.504254 / 0.579283 (-0.075029) | 0.535071 / 0.434364 (0.100707) | 0.611825 / 0.540337 (0.071488) | 0.697289 / 1.386936 (-0.689647) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009141 / 0.011353 (-0.002211) | 0.005987 / 0.011008 (-0.005021) | 0.092003 / 0.038508 (0.053495) | 0.043239 / 0.023109 (0.020130) | 0.400425 / 0.275898 (0.124527) | 0.464849 / 0.323480 (0.141369) | 0.008256 / 0.007986 (0.000270) | 0.006251 / 0.004328 (0.001923) | 0.095263 / 0.004250 (0.091013) | 0.057899 / 0.037052 (0.020847) | 0.402899 / 0.258489 (0.144410) | 0.477411 / 0.293841 (0.183570) | 0.044122 / 0.128546 (-0.084424) | 0.014158 / 0.075646 (-0.061489) | 0.116354 / 0.419271 (-0.302917) | 0.061045 / 0.043533 (0.017512) | 0.411635 / 0.255139 (0.156497) | 0.466281 / 0.283200 (0.183082) | 0.129423 / 0.141683 (-0.012260) | 1.799790 / 1.452155 (0.347635) | 2.004578 / 1.492716 (0.511862) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.224012 / 0.018006 (0.206006) | 0.502972 / 0.000490 (0.502482) | 0.003560 / 0.000200 (0.003360) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034794 / 0.037411 (-0.002618) | 0.139646 / 0.014526 (0.125120) | 0.144330 / 0.176557 (-0.032226) | 0.202528 / 0.737135 (-0.534607) | 0.151561 / 0.296338 (-0.144777) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.504343 / 0.215209 (0.289133) | 5.050690 / 2.077655 (2.973035) | 2.433107 / 1.504120 (0.928987) | 2.197443 / 1.541195 (0.656248) | 2.331225 / 1.468490 (0.862734) | 0.834066 / 4.584777 (-3.750711) | 4.837648 / 3.745712 (1.091936) | 4.105672 / 5.269862 (-1.164189) | 2.281557 / 4.565676 (-2.284120) | 0.102257 / 0.424275 (-0.322018) | 0.014425 / 0.007607 (0.006818) | 0.629290 / 0.226044 (0.403245) | 6.251513 / 2.268929 (3.982585) | 2.959012 / 55.444624 (-52.485613) | 2.570031 / 6.876477 (-4.306446) | 2.657525 / 2.142072 (0.515453) | 1.002861 / 4.805227 (-3.802367) | 0.199326 / 6.500664 (-6.301338) | 0.078428 / 0.075469 (0.002958) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.579587 / 1.841788 (-0.262201) | 18.567509 / 8.074308 (10.493201) | 17.162144 / 10.191392 (6.970752) | 0.193460 / 0.680424 (-0.486964) | 0.020819 / 0.534201 (-0.513382) | 0.501929 / 0.579283 (-0.077354) | 0.508039 / 0.434364 (0.073675) | 0.582656 / 0.540337 (0.042319) | 0.693624 / 1.386936 (-0.693312) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c410d321cd1289c6a630192b078f4892c2e13ff9 \"CML watermark\")\n" ]
null
[]
Allow loading/saving of FAISS index using fsspec
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5526/timeline
Fixes #5428. Allow loading/saving of a FAISS index using fsspec: 1. Simply use `BufferedIOWriter`/`BufferedIOReader` to read/write indices on an fsspec stream. 2. Needed `mockfs` in the test, so I took it out of the `TestCase`. Let me know if that makes sense. I can work on the documentation once the code changes are approved.
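A minimal sketch of the buffered-writer pattern described above, assuming `faiss`, `fsspec`, and `numpy` are installed (the `memory://` path is just for illustration and is not the PR's actual test setup):

```python
import faiss
import fsspec
import numpy as np

# Build a small index to serialize
index = faiss.IndexFlatL2(8)
index.add(np.random.rand(10, 8).astype(np.float32))

# Save: wrap the fsspec stream's write callback in FAISS's buffered IO writer
with fsspec.open("memory://index.faiss", "wb") as f:
    faiss.write_index(index, faiss.BufferedIOWriter(faiss.PyCallbackIOWriter(f.write)))

# Load: same idea with the stream's read callback
with fsspec.open("memory://index.faiss", "rb") as f:
    index2 = faiss.read_index(faiss.BufferedIOReader(faiss.PyCallbackIOReader(f.read)))
```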
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5526.diff", "html_url": "https://github.com/huggingface/datasets/pull/5526", "merged_at": "2023-03-27T15:18:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/5526.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5526" }
1,580,488,133
https://api.github.com/repos/huggingface/datasets/issues/5526/comments
PR_kwDODunzps5JwVol
null
5,526
https://api.github.com/repos/huggingface/datasets/issues/5526/events
true
closed
2023-02-10T21:12:36Z
null
https://api.github.com/repos/huggingface/datasets/issues/5525
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5525/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5525/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/74564958?v=4", "events_url": "https://api.github.com/users/TJ-Solergibert/events{/privacy}", "followers_url": "https://api.github.com/users/TJ-Solergibert/followers", "following_url": "https://api.github.com/users/TJ-Solergibert/following{/other_user}", "gists_url": "https://api.github.com/users/TJ-Solergibert/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TJ-Solergibert", "id": 74564958, "login": "TJ-Solergibert", "node_id": "MDQ6VXNlcjc0NTY0OTU4", "organizations_url": "https://api.github.com/users/TJ-Solergibert/orgs", "received_events_url": "https://api.github.com/users/TJ-Solergibert/received_events", "repos_url": "https://api.github.com/users/TJ-Solergibert/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TJ-Solergibert/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TJ-Solergibert/subscriptions", "type": "User", "url": "https://api.github.com/users/TJ-Solergibert" }
https://github.com/huggingface/datasets/issues/5525
[]
false
2023-02-14T17:41:08Z
2023-02-14T09:35:49Z
null
[ "Thanks for reporting, @TJ-Solergibert.\r\n\r\nWe cannot access your Colab notebook: `There was an error loading this notebook. Ensure that the file is accessible and try again.`\r\nCould you please make it publicly accessible?\r\n", "I swear it's public, I've checked the settings and I've been able to open it in incognito mode.\r\n\r\nNotebook: https://colab.research.google.com/drive/1JCrS7FlGfu_kFqChMrwKZ_bpabnIMqbP?usp=sharing\r\n\r\nAnyway, this is the code to reproduce the error:\r\n\r\n```python3\r\nfrom datasets import ClassLabel\r\nfrom datasets import load_dataset\r\n\r\neuroparl_ds = load_dataset(\"tj-solergibert/Europarl-ST\")\r\n\r\nsource_lang = \"nl\"\r\nlanguages = list(europarl_ds[\"train\"][0][\"transcriptions\"].keys())\r\nClassLabels = ClassLabel(num_classes = len(languages), names = languages)\r\n\r\ndef map_label2id(example):\r\n example['dest_lang'] = ClassLabels.str2int(example['dest_lang'])\r\n return example\r\n\r\ndef unfold_transcriptions(example):\r\n for lang in languages:\r\n example[lang] = example[\"transcriptions\"][lang]\r\n return example\r\n\r\ndef unroll(batch, src_lang, dest_langs):\r\n source_t, dest_t, dest_l = [], [], []\r\n for lang in dest_langs: \r\n source_t += batch[src_lang]\r\n dest_t += batch[lang]\r\n dest_l += [lang]\r\n return_dict = {\"source_text\": source_t, \"dest_text\": dest_t, \"dest_lang\": dest_l}\r\n return return_dict\r\n\r\ndef preprocess_split(ds_split, src_lang):\r\n dest_langs = [x for x in languages if x != src_lang]\r\n\r\n ds_split = ds_split.map(unroll, fn_kwargs= {\"src_lang\": src_lang, \"dest_langs\": dest_langs}, batched = True, batch_size = 1, remove_columns= list(languages))\r\n ds_split = ds_split.filter(lambda x: x[\"source_text\"] != None and x[\"dest_text\"] != None) # Remove incomplete translations\r\n ds_split = ds_split.filter(lambda x: x[\"source_text\"] != \"None\" and x[\"dest_text\"] != \"None\")\r\n ds_split = ds_split.map(map_label2id) \r\n ds_split = ds_split.cast_column(\"dest_lang\", ClassLabels)\r\n return ds_split\r\n\r\ndef reset_cortas(example):\r\n for lang in languages:\r\n if isinstance(example[lang], str):\r\n if example[lang].isnumeric () or len(example[lang]) <= 5:\r\n example[lang] = \"None\"\r\n return example\r\n\r\ndef clean_dataset(dataset):\r\n # Remove columns\r\n dataset = dataset.remove_columns([\"original_speech\", \"original_language\", \"audio_path\", \"segment_start\", \"segment_end\"])\r\n # Unfold\r\n dataset = dataset.map(unfold_transcriptions, remove_columns = [\"transcriptions\"])\r\n dataset = dataset.map(reset_cortas)\r\n return dataset\r\n\r\nprocessed_europarl = clean_dataset(europarl_ds[\"test\"])\r\nnew_train_ds = preprocess_split(processed_europarl, 'nl')\r\n```", "Thanks, @TJ-Solergibert. I can access your notebook now. Maybe it was just a temporary issue.\r\n\r\nAt first sight, it seems something related to your data: maybe some of the examples do not have all the transcriptions for all the languages. Then, some of them are null when unrolled. 
And when trying to concatenate with the other rows containing strings, the cast issue is raised (the arrays to be concatenated have different types).\r\n\r\nDo you think this could be the case?", "See, in this example, \"nl\" and \"ro\" transcripts are null:\r\n```python\r\n>>> europarl_ds[\"test\"][:1]\r\n{'original_speech': ['− Señor Presidente, en primer lugar, quisiera felicitar al señor Seeber por el trabajo realizado, porque en su informe se recogen muchas de las preocupaciones manifestadas en esta'],\r\n 'original_language': ['es'],\r\n 'audio_path': ['es/audios/en.20081008.24.3-238.m4a'],\r\n 'segment_start': [0.6200000047683716],\r\n 'segment_end': [11.319999694824219],\r\n 'transcriptions': [{'de': '− Herr Präsident! Zunächst möchte ich Richard Seeber zu der von ihm geleisteten Arbeit gratulieren, denn sein Bericht greift viele der in diesem Haus zum Ausdruck gebrachten Anliegen',\r\n 'en': '− Mr President, firstly I would like to congratulate Mr Seeber on the work he has done, because his report picks up many of the concerns expressed in this',\r\n 'es': '− Señor Presidente, en primer lugar, quisiera felicitar al señor Seeber por el trabajo realizado, porque en su informe se recogen muchas de las preocupaciones manifestadas en esta',\r\n 'fr': '− Monsieur le Président, je voudrais tout d ’ abord féliciter M. Seeber pour le travail qu ’ il a effectué, parce que son rapport reprend beaucoup des inquiétudes exprimées au sein de cette',\r\n 'it': \"− Signor Presidente, mi congratulo innanzi tutto con l'onorevole Seeber per il lavoro svolto, perché la sua relazione accoglie molti dei timori espressi da quest'Aula\",\r\n 'nl': None,\r\n 'pl': '− Panie przewodniczący! Po pierwsze chciałabym pogratulować panu posłowi Seeberowi wykonanej pracy, ponieważ jego sprawozdanie podejmuje szereg podnoszonych w tej Izbie',\r\n 'pt': '− Senhor Presidente, começo por felicitar o senhor deputado Seeber pelo trabalho que desenvolveu em torno deste relatório, que retoma muitas das preocupações expressas nesta',\r\n 'ro': None}]}\r\n```\r\n```python\r\n>>> processed_europarl[0]\r\n{'de': '− Herr Präsident! Zunächst möchte ich Richard Seeber zu der von ihm geleisteten Arbeit gratulieren, denn sein Bericht greift viele der in diesem Haus zum Ausdruck gebrachten Anliegen',\r\n 'en': '− Mr President, firstly I would like to congratulate Mr Seeber on the work he has done, because his report picks up many of the concerns expressed in this',\r\n 'es': '− Señor Presidente, en primer lugar, quisiera felicitar al señor Seeber por el trabajo realizado, porque en su informe se recogen muchas de las preocupaciones manifestadas en esta',\r\n 'fr': '− Monsieur le Président, je voudrais tout d ’ abord féliciter M. Seeber pour le travail qu ’ il a effectué, parce que son rapport reprend beaucoup des inquiétudes exprimées au sein de cette',\r\n 'it': \"− Signor Presidente, mi congratulo innanzi tutto con l'onorevole Seeber per il lavoro svolto, perché la sua relazione accoglie molti dei timori espressi da quest'Aula\",\r\n 'nl': None,\r\n 'pl': '− Panie przewodniczący! 
Po pierwsze chciałabym pogratulować panu posłowi Seeberowi wykonanej pracy, ponieważ jego sprawozdanie podejmuje szereg podnoszonych w tej Izbie',\r\n 'pt': '− Senhor Presidente, começo por felicitar o senhor deputado Seeber pelo trabalho que desenvolveu em torno deste relatório, que retoma muitas das preocupações expressas nesta',\r\n 'ro': None}\r\n```", "You can fix this issue by forcing the cast of None to str by hand:\r\n- If you replace this line:\r\n```python\r\nsource_t += batch[src_lang]\r\n```\r\n- With this line (because the batch size is 1):\r\n```python\r\nsource_t += [str(batch[src_lang][0])]\r\n```\r\n- Or with this line (if the batch size were larger than 1):\r\n```python\r\nsource_t += [str(text) for text in batch[src_lang]]\r\n```", "Problem solved! Thanks @albertvillanova, now I have even increased the batch size and it's crazy fast :rocket: !" ]
completed
[]
TypeError: Couldn't cast array of type string to null
NONE
https://api.github.com/repos/huggingface/datasets/issues/5525/timeline
### Describe the bug Processing a dataset I already uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error. I already tried resetting the shorter strings (reset_cortas function). It only happens with NL, PL, RO and PT. It does not make sense since when processing the other languages I also use the corpus of those that fail and it does not cause any errors. I suspect that the error may come from this comment in the source code: "We use cast_array_to_feature to support casting to custom types like Audio and Image. Also, when trying type "string", we don't want to convert integers or floats to "string". We only do it if trying_type is False - since this is what the user asks for." ### Steps to reproduce the bug Here I link a colab notebook to reproduce the error: https://colab.research.google.com/drive/1JCrS7FlGfu_kFqChMrwKZ_bpabnIMqbP?authuser=1#scrollTo=FBAvlhMxIzpA ### Expected behavior Data processing does not fail. A correct example can be seen here: https://huggingface.co/datasets/tj-solergibert/Europarl-ST-processed-mt-en ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 9.0.0 - Pandas version: 1.3.5
https://api.github.com/repos/huggingface/datasets
null
1,580,342,729
https://api.github.com/repos/huggingface/datasets/issues/5525/comments
I_kwDODunzps5eMh3J
null
5,525
https://api.github.com/repos/huggingface/datasets/issues/5525/events
false
closed
2023-02-10T19:35:50Z
null
https://api.github.com/repos/huggingface/datasets/issues/5524
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5524/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5524/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
https://github.com/huggingface/datasets/pull/5524
[]
false
2023-02-10T19:51:45Z
2023-02-10T19:49:12Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
null
[]
[INVALID PR]
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5524/timeline
Hi to whoever is reading this! 🤗 ## What's in this PR? ~~Basically, I've removed the 🤗`datasets` installation via `python -m pip install ".[quality]"` in the `check_code_quality` job in `.github/workflows/ci.yaml`, as we don't need to install the whole package to run the CI, unless that's done on purpose, e.g. to check that the Python package installation succeeds before running the tests over the OS matrix?~~ ~~So I just wanted to check whether the time was reduced by doing this (which I assumed it would be), plus whether this is something that can be improved, or just discarded in case you're also using that step to make sure that the package can be installed.~~ ## What's missing? ~~I was just wondering whether you'd consider replacing `isort` and `flake8` with `ruff` (if possible), since it's way faster; more information at [`ruff`](https://github.com/charliermarsh/ruff). Before creating this PR, the average time of the `check_code_quality` job was around 40s.~~ ## Edit Sorry for the inconvenience this may have caused; I didn't realise that the config is defined in `setup.cfg` and `pyproject.toml`, so running those tools without installing the Python package leads to failure, my bad 😞
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5524.diff", "html_url": "https://github.com/huggingface/datasets/pull/5524", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5524.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5524" }
1,580,219,454
https://api.github.com/repos/huggingface/datasets/issues/5524/comments
PR_kwDODunzps5JvbMw
null
5,524
https://api.github.com/repos/huggingface/datasets/issues/5524/events
true
open
2023-02-10T19:13:03Z
null
https://api.github.com/repos/huggingface/datasets/issues/5523
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5523/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5523/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
https://github.com/huggingface/datasets/issues/5523
[ { "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" } ]
false
2023-02-10T19:14:50Z
null
null
[]
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
Checking that split name is correct happens only after the data is downloaded
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5523/timeline
### Describe the bug Verification of split names (=indexing data by split) happens after downloading the data. So when the split name is incorrect, users learn about that only after the data is fully downloaded; for large datasets this might take a lot of time. ### Steps to reproduce the bug Load any dataset with a random split name, for example: ```python from datasets import load_dataset load_dataset("mozilla-foundation/common_voice_11_0", "en", split="blabla") ``` and the download will start smoothly, even though there is no split named "blabla". ### Expected behavior Raise an error when the split name is incorrect. ### Environment info `datasets==2.9.1.dev0`
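A hedged sketch of the kind of early check this issue asks for (`check_split_exists` is a hypothetical helper, not the library's actual fix): validate the requested split against the builder's metadata before any download starts.

```python
from datasets import load_dataset_builder

def check_split_exists(path: str, config_name: str, split: str) -> None:
    # Loading the builder reads only the dataset's metadata, so no data
    # files are downloaded yet (assuming split infos are available there)
    builder = load_dataset_builder(path, config_name)
    available = set(builder.info.splits or {})
    if available and split not in available:
        raise ValueError(f"Unknown split {split!r}; available splits: {sorted(available)}")

# check_split_exists("mozilla-foundation/common_voice_11_0", "en", "blabla")  # would raise before downloading
```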
https://api.github.com/repos/huggingface/datasets
null
1,580,193,015
https://api.github.com/repos/huggingface/datasets/issues/5523/comments
I_kwDODunzps5eL9T3
null
5,523
https://api.github.com/repos/huggingface/datasets/issues/5523/events
false
closed
2023-02-10T19:05:00Z
null
https://api.github.com/repos/huggingface/datasets/issues/5522
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5522/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5522/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
https://github.com/huggingface/datasets/pull/5522
[]
false
2023-02-15T14:48:27Z
2023-02-15T13:19:06Z
null
[ "P.S. For more context, I'm currently exploring the integration of 🤗`datasets` with JAX, so in case you need any help or want me to try something specific just let me know! (`jnp.asarray`/`jnp.array(..., copy=False)` still no zero-copy 😭)", "_The documentation is not available anymore as the PR was closed or merged._", "> Hi ! Thanks for improving this :)\r\n\r\nGlad to help, @lhoestq! Also, regarding the questions in the `## What's missing?` can I have your input? Thanks 🤗 ", "Whoops forgot to reply to these matters - sorry x)\r\n\r\nYea a JAX guide would be welcome in the documentation ! This can be done in a separate PR if you want :)\r\n\r\nPyarrow is always imported with `datasets`, so it doesn't really matter if it's under TYPE_CHECKING or not.\r\n\r\nRegarding the license : yes indeed it should be in every file, thanks for reporting.\r\n\r\nNo big preference between jnp.array and jnp.asarray, unless one offers better performance", "> Whoops forgot to reply to these matters - sorry x)\r\n> \r\n> Yea a JAX guide would be welcome in the documentation ! This can be done in a separate PR if you want :)\r\n> \r\n> Pyarrow is always imported with `datasets`, so it doesn't really matter if it's under TYPE_CHECKING or not.\r\n> \r\n> Regarding the license : yes indeed it should be in every file, thanks for reporting.\r\n> \r\n> No big preference between jnp.array and jnp.asarray, unless one offers better performance\r\n\r\nCool @lhoestq thanks for the input there!\r\n\r\n1. I can create a separate PR for JAX-format usage\r\n2. Regarding that, makes sense, we can just not put it there, unless it's more clear that in that file `pyarrow` is just required for typing?\r\n3. Do you want me to add the License? In this PR? In a separate one?\r\n4. Ideally `jnp.asarray` is similar to `np.asarray` which in the case of `numpy` tends to be more efficient as it does zero-copy when possible, while `np.array` has `copy=True` by default, anyway as I mentioned before (and as you already know) the copy from `numpy` to `jax` is not zero-copy, while the other way around (`jax` to `numpy`) it is", "Thanks, feel free to create separate PRs for the docs and the license.\r\n\r\nI guess you can move the `pyarrow` import back to where it was for consistency with the other files and we can merge this one ;)", "> Thanks, feel free to create separate PRs for the docs and the license.\r\n> \r\n> I guess you can move the `pyarrow` import back to where it was for consistency with the other files and we can merge this one ;)\r\n\r\nCool thanks I'll do that! 👍🏻 ", "Actually I just checked and there are still tens of thousands of users with jax 0.3.25 - so we need to support older versions as well. I guess it comes from `transformers` which doesn't support jax 0.4 (and doesn't want to until the jax team stops breaking the lib all the time).\r\n\r\nCould you make sure your changes work with older versions as well ? Sorry for not spotting this earlier.\r\nIf we have `\"jax>=0.2.8,!=0.3.2,<=0.4.3\"` that'b be nice, and we can update the latest supported release from time to time.\r\n\r\nIn the CI you can add `jax==0.2.8` for the `deps-minimum` job, and use `jax~=0.4.1` for the `deps-latest`.", "> Actually I just checked and there are still tens of thousands of users with jax 0.3.25 - so we need to support older versions as well. 
I guess it comes from `transformers` which doesn't support jax 0.4 (and doesn't want to until the jax team stops breaking the lib all the time).\r\n> \r\n> Could you make sure your changes work with older versions as well ? Sorry for not spotting this earlier. If we have `\"jax>=0.2.8,!=0.3.2,<=0.4.3\"` that'b be nice, and we can update the latest supported release from time to time.\r\n> \r\n> In the CI you can add `jax==0.2.8` for the `deps-minimum` job, and use `jax~=0.4.1` for the `deps-latest`.\r\n\r\nOk, didn't know that @lhoestq thanks for the detailed context! Sure, I'll update it and make sure it's also compatible with older versions.", "Oops forgot to add you as co-author of the last commit @lhoestq my bad 😞 ", "So it should be fixed right now @lhoestq! The thing is that `jax` doesn't provide support for Python 3.7 due to its EOL next June (more information at https://endoflife.date/python)...\r\n\r\nAnyway, I can confirm that `jax.Array` type works with 0.3.25 and that the following code works fine:\r\n\r\n```python\r\nimport jax\r\nimport jax.numpy as jnp\r\n\r\nx = jnp.ones((1, 10), dtype=jnp.float32) # Is a `jnp.DeviceArray`\r\nassert isinstance(x, jax.Array) # Is `True`\r\n```\r\n\r\nSo we can still use 0.3.25 as the maximum supported version, as well as 0.3.6 for `jaxlib` so as to be consistent with 🤗`transformers`.\r\n\r\nThanks for your comments @lhoestq those were really useful!", "Sorry for the spam, pinning versions leads to failure runs (not related to the type-hinting); I'll check that locally instead of here to avoid spam... Not pinning the dependencies work but I'll check the minimum required versions for both `jax` and `jaxlib` in Python 3.7", "> Cool ! Thanks for trying to make the CI support it, but it's maybe not worth spending more time on this for now ^^\r\n> \r\n> merging :)\r\n\r\nDo you want me to work on the CI in a separate branch? Thanks for merging and for your help as always :)", "> Do you want me to work on the CI in a separate branch? Thanks for merging and for your help as always :)\r\n\r\nIn the end I think we can keep it as is since we didn't modify the core code for jax. 
Maybe later if we do further changes and need to make sure we don't break anything ;) For example when we decide to add support for more recent versions", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010798 / 0.011353 (-0.000555) | 0.005690 / 0.011008 (-0.005318) | 0.116840 / 0.038508 (0.078332) | 0.041376 / 0.023109 (0.018266) | 0.345616 / 0.275898 (0.069718) | 0.413914 / 0.323480 (0.090434) | 0.009237 / 0.007986 (0.001252) | 0.004490 / 0.004328 (0.000162) | 0.085833 / 0.004250 (0.081582) | 0.050231 / 0.037052 (0.013179) | 0.367276 / 0.258489 (0.108787) | 0.393735 / 0.293841 (0.099894) | 0.043775 / 0.128546 (-0.084772) | 0.013215 / 0.075646 (-0.062432) | 0.391020 / 0.419271 (-0.028252) | 0.055102 / 0.043533 (0.011569) | 0.360333 / 0.255139 (0.105194) | 0.370531 / 0.283200 (0.087331) | 0.115484 / 0.141683 (-0.026199) | 1.694779 / 1.452155 (0.242625) | 1.756249 / 1.492716 (0.263532) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230508 / 0.018006 (0.212501) | 0.478681 / 0.000490 (0.478191) | 0.010305 / 0.000200 (0.010105) | 0.000147 / 0.000054 (0.000093) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030953 / 0.037411 (-0.006459) | 0.124320 / 0.014526 (0.109794) | 0.140417 / 0.176557 (-0.036140) | 0.189522 / 0.737135 (-0.547613) | 0.143635 / 0.296338 (-0.152704) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 
0.485995 / 0.215209 (0.270786) | 4.799668 / 2.077655 (2.722014) | 2.195655 / 1.504120 (0.691535) | 1.940073 / 1.541195 (0.398879) | 2.053853 / 1.468490 (0.585363) | 0.825399 / 4.584777 (-3.759378) | 4.522180 / 3.745712 (0.776468) | 2.484626 / 5.269862 (-2.785236) | 1.727617 / 4.565676 (-2.838059) | 0.098808 / 0.424275 (-0.325467) | 0.014753 / 0.007607 (0.007146) | 0.606798 / 0.226044 (0.380754) | 5.918090 / 2.268929 (3.649162) | 2.668124 / 55.444624 (-52.776500) | 2.300447 / 6.876477 (-4.576030) | 2.411203 / 2.142072 (0.269130) | 0.999826 / 4.805227 (-3.805401) | 0.193683 / 6.500664 (-6.306981) | 0.069341 / 0.075469 (-0.006129) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.455816 / 1.841788 (-0.385972) | 17.176476 / 8.074308 (9.102168) | 16.359100 / 10.191392 (6.167708) | 0.199669 / 0.680424 (-0.480755) | 0.033456 / 0.534201 (-0.500745) | 0.512478 / 0.579283 (-0.066805) | 0.526350 / 0.434364 (0.091986) | 0.637669 / 0.540337 (0.097332) | 0.753821 / 1.386936 (-0.633115) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008176 / 0.011353 (-0.003177) | 0.005862 / 0.011008 (-0.005147) | 0.086123 / 0.038508 (0.047615) | 0.037144 / 0.023109 (0.014035) | 0.398328 / 0.275898 (0.122430) | 0.439126 / 0.323480 (0.115647) | 0.006455 / 0.007986 (-0.001531) | 0.004575 / 0.004328 (0.000246) | 0.083396 / 0.004250 (0.079146) | 0.052827 / 0.037052 (0.015775) | 0.401039 / 0.258489 (0.142550) | 0.441374 / 0.293841 (0.147533) | 0.041671 / 0.128546 (-0.086875) | 0.014098 / 0.075646 (-0.061548) | 0.100873 / 0.419271 (-0.318398) | 0.058690 / 0.043533 (0.015157) | 0.395817 / 0.255139 (0.140678) | 0.409226 / 0.283200 (0.126026) | 0.119804 / 0.141683 (-0.021879) | 1.704583 / 1.452155 (0.252428) | 1.782527 / 1.492716 (0.289811) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255166 / 0.018006 (0.237160) | 0.485091 / 0.000490 (0.484601) | 0.007458 / 0.000200 (0.007258) | 0.000116 
/ 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034531 / 0.037411 (-0.002880) | 0.134332 / 0.014526 (0.119806) | 0.144944 / 0.176557 (-0.031613) | 0.199352 / 0.737135 (-0.537783) | 0.152243 / 0.296338 (-0.144095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.495361 / 0.215209 (0.280152) | 4.895144 / 2.077655 (2.817489) | 2.350419 / 1.504120 (0.846299) | 2.112131 / 1.541195 (0.570937) | 2.234469 / 1.468490 (0.765978) | 0.815862 / 4.584777 (-3.768915) | 4.531638 / 3.745712 (0.785926) | 2.405186 / 5.269862 (-2.864676) | 1.559020 / 4.565676 (-3.006656) | 0.100432 / 0.424275 (-0.323843) | 0.014217 / 0.007607 (0.006610) | 0.614622 / 0.226044 (0.388577) | 5.984541 / 2.268929 (3.715613) | 2.929897 / 55.444624 (-52.514727) | 2.484010 / 6.876477 (-4.392467) | 2.533538 / 2.142072 (0.391466) | 0.972119 / 4.805227 (-3.833108) | 0.193630 / 6.500664 (-6.307034) | 0.073694 / 0.075469 (-0.001775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.503725 / 1.841788 (-0.338063) | 17.421529 / 8.074308 (9.347221) | 15.686433 / 10.191392 (5.495041) | 0.216688 / 0.680424 (-0.463736) | 0.020929 / 0.534201 (-0.513272) | 0.512523 / 0.579283 (-0.066760) | 0.499878 / 0.434364 (0.065514) | 0.639238 / 0.540337 (0.098900) | 0.769598 / 1.386936 (-0.617338) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#99200127ade6d7b7d2cfb7b88365e5844b5c9c2e \"CML watermark\")\n", "> > Do you want me to work on the CI in a separate branch? Thanks for merging and for your help as always :)\r\n> \r\n> In the end I think we can keep it as is since we didn't modify the core code for jax. Maybe later if we do further changes and need to make sure we don't break anything ;) For example when we decide to add support for more recent versions\r\n\r\nMakes sense, thank you @lhoestq!" ]
null
[]
Minor changes in JAX-formatting docstrings & type-hints
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5522/timeline
Hi to whoever is reading this! 🤗 ## What's in this PR? I was exploring the code regarding the `JaxFormatter` implemented in 🤗`datasets`, and found some things that IMO could be changed. Those are mainly regarding the docstrings and the type-hints, based on `jax`'s 0.4.1 release where `jax.Array` was introduced as the default type for JAX arrays (instead of `jnp.DeviceArray`, `jnp.SharedDeviceArray`, and `jnp.GlobalDeviceArray`); note that `isinstance(..., jax.Array)` also works with lower versions such as `0.3.25`. More information about the latter at [`jax` v0.4.1 - Release Notes](https://github.com/google/jax/releases/tag/jax-v0.4.1) and [jax.Array migration - JAX documentation](https://jax.readthedocs.io/en/latest/jax_array_migration.html). ## What's missing? * Do you want me to write an entry in the documentation on how to use 🤗`datasets` with JAX, similar to https://huggingface.co/docs/datasets/use_with_pytorch for PyTorch? * Do we need to actually include `pyarrow` under `TYPE_CHECKING` when it is only needed for typing? I just did it for JAX, but if we are OK with that, I can do it for the rest of the formatters, just LMK. * Should the License header be included in `datasets.formatting.np_formatter`? If so, do I include the one from 2020, e.g. https://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/tf_formatter.py#L1-L13 * Is there any reason why `jnp.array` is being used instead of `jnp.asarray`? There's no difference between the two, just that `jnp.asarray` has `copy=False` as default, even though the `numpy` to `jax.numpy` conversion is not zero-copy, but just asking :)
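As a small illustration of the copy semantics mentioned in the last point (a sketch only; the exact zero-copy behavior depends on the `jax`/`jaxlib` version and the device holding the array):

```python
import jax.numpy as jnp
import numpy as np

x_np = np.ones((2, 3), dtype=np.float32)

# numpy -> jax.numpy involves a host-to-device transfer, so neither call is
# zero-copy; jnp.asarray simply defaults to copy=False while jnp.array
# defaults to copy=True
x_jax = jnp.asarray(x_np)

# jax.numpy -> numpy can be zero-copy for arrays already on the host
x_back = np.asarray(x_jax)
```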
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5522.diff", "html_url": "https://github.com/huggingface/datasets/pull/5522", "merged_at": "2023-02-15T13:19:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/5522.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5522" }
1,580,183,124
https://api.github.com/repos/huggingface/datasets/issues/5522/comments
PR_kwDODunzps5JvTVp
null
5,522
https://api.github.com/repos/huggingface/datasets/issues/5522/events
true
closed
2023-02-09T18:47:59Z
null
https://api.github.com/repos/huggingface/datasets/issues/5521
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5521/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5521/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/6591505?v=4", "events_url": "https://api.github.com/users/marioga/events{/privacy}", "followers_url": "https://api.github.com/users/marioga/followers", "following_url": "https://api.github.com/users/marioga/following{/other_user}", "gists_url": "https://api.github.com/users/marioga/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marioga", "id": 6591505, "login": "marioga", "node_id": "MDQ6VXNlcjY1OTE1MDU=", "organizations_url": "https://api.github.com/users/marioga/orgs", "received_events_url": "https://api.github.com/users/marioga/received_events", "repos_url": "https://api.github.com/users/marioga/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marioga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marioga/subscriptions", "type": "User", "url": "https://api.github.com/users/marioga" }
https://github.com/huggingface/datasets/pull/5521
[]
false
2023-02-13T20:40:48Z
2023-02-12T11:17:17Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
null
[]
Fix bug when casting empty array to class labels
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5521/timeline
Fix https://github.com/huggingface/datasets/issues/5520.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5521.diff", "html_url": "https://github.com/huggingface/datasets/pull/5521", "merged_at": "2023-02-12T11:17:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/5521.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5521" }
1,578,418,289
https://api.github.com/repos/huggingface/datasets/issues/5521/comments
PR_kwDODunzps5JpWnp
null
5,521
https://api.github.com/repos/huggingface/datasets/issues/5521/events
true
closed
2023-02-09T18:46:52Z
null
https://api.github.com/repos/huggingface/datasets/issues/5520
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5520/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5520/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/6591505?v=4", "events_url": "https://api.github.com/users/marioga/events{/privacy}", "followers_url": "https://api.github.com/users/marioga/followers", "following_url": "https://api.github.com/users/marioga/following{/other_user}", "gists_url": "https://api.github.com/users/marioga/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marioga", "id": 6591505, "login": "marioga", "node_id": "MDQ6VXNlcjY1OTE1MDU=", "organizations_url": "https://api.github.com/users/marioga/orgs", "received_events_url": "https://api.github.com/users/marioga/received_events", "repos_url": "https://api.github.com/users/marioga/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marioga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marioga/subscriptions", "type": "User", "url": "https://api.github.com/users/marioga" }
https://github.com/huggingface/datasets/issues/5520
[]
false
2023-02-12T11:17:18Z
2023-02-12T11:17:18Z
null
[]
completed
[]
ClassLabel.cast_storage raises TypeError when called on an empty IntegerArray
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5520/timeline
### Describe the bug `ClassLabel.cast_storage` raises `TypeError` when called on an empty `IntegerArray`. ### Steps to reproduce the bug Minimal steps: ```python import pyarrow as pa from datasets import ClassLabel ClassLabel(names=['foo', 'bar']).cast_storage(pa.array([], pa.int64())) ``` In practice, this bug arises in situations like the one below: ```python from datasets import ClassLabel, Dataset, Features, Sequence dataset = Dataset.from_dict({'labels': [[], []]}, features=Features({'labels': Sequence(ClassLabel(names=['foo', 'bar']))})) # this raises TypeError dataset.map(batched=True, batch_size=1) ``` ### Expected behavior `ClassLabel.cast_storage` should return an empty Int64Array. ### Environment info - `datasets` version: 2.9.1.dev0 - Platform: Linux-4.15.0-1032-aws-x86_64-with-glibc2.27 - Python version: 3.10.6 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
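A hedged sketch of the guard that would produce the expected behavior (`cast_empty_safe` is hypothetical code for illustration, not the actual patch in the linked fix): short-circuit on empty storage before inspecting element values.

```python
import pyarrow as pa

def cast_empty_safe(storage: pa.Array) -> pa.Array:
    # An empty IntegerArray has no values to validate against the class
    # label names, so return an empty int64 array right away
    if len(storage) == 0:
        return pa.array([], pa.int64())
    return storage.cast(pa.int64())

print(cast_empty_safe(pa.array([], pa.int64())))  # -> empty Int64Array, no TypeError
```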
https://api.github.com/repos/huggingface/datasets
null
1,578,417,074
https://api.github.com/repos/huggingface/datasets/issues/5520/comments
I_kwDODunzps5eFLuy
null
5,520
https://api.github.com/repos/huggingface/datasets/issues/5520/events
false
closed
2023-02-09T17:50:21Z
null
https://api.github.com/repos/huggingface/datasets/issues/5519
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5519/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5519/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/5519
[]
false
2023-02-14T16:28:27Z
2023-02-14T16:18:38Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009729 / 0.011353 (-0.001624) | 0.005342 / 0.011008 (-0.005666) | 0.100194 / 0.038508 (0.061686) | 0.036391 / 0.023109 (0.013282) | 0.294163 / 0.275898 (0.018264) | 0.364117 / 0.323480 (0.040637) | 0.008231 / 0.007986 (0.000246) | 0.005954 / 0.004328 (0.001626) | 0.076484 / 0.004250 (0.072234) | 0.045028 / 0.037052 (0.007976) | 0.308163 / 0.258489 (0.049674) | 0.339473 / 0.293841 (0.045632) | 0.039268 / 0.128546 (-0.089279) | 0.012357 / 0.075646 (-0.063289) | 0.334176 / 0.419271 (-0.085096) | 0.049502 / 0.043533 (0.005969) | 0.294134 / 0.255139 (0.038995) | 0.319370 / 0.283200 (0.036170) | 0.113040 / 0.141683 (-0.028643) | 1.450750 / 1.452155 (-0.001405) | 1.490265 / 1.492716 (-0.002452) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252860 / 0.018006 (0.234854) | 0.554299 / 0.000490 (0.553810) | 0.002105 / 0.000200 (0.001905) | 0.000091 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026557 / 0.037411 (-0.010854) | 0.104464 / 0.014526 (0.089938) | 0.116724 / 0.176557 (-0.059833) | 0.154736 / 0.737135 (-0.582399) | 0.122017 / 0.296338 (-0.174322) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398170 / 0.215209 (0.182961) | 3.979309 / 2.077655 (1.901654) | 
1.773051 / 1.504120 (0.268931) | 1.587247 / 1.541195 (0.046053) | 1.620446 / 1.468490 (0.151956) | 0.692152 / 4.584777 (-3.892625) | 3.724821 / 3.745712 (-0.020891) | 2.133122 / 5.269862 (-3.136739) | 1.455612 / 4.565676 (-3.110065) | 0.084721 / 0.424275 (-0.339554) | 0.012461 / 0.007607 (0.004854) | 0.498909 / 0.226044 (0.272865) | 4.983837 / 2.268929 (2.714908) | 2.258489 / 55.444624 (-53.186135) | 1.891690 / 6.876477 (-4.984786) | 1.976944 / 2.142072 (-0.165128) | 0.836950 / 4.805227 (-3.968277) | 0.165401 / 6.500664 (-6.335263) | 0.061623 / 0.075469 (-0.013846) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.205945 / 1.841788 (-0.635842) | 15.101603 / 8.074308 (7.027295) | 14.393739 / 10.191392 (4.202347) | 0.176313 / 0.680424 (-0.504110) | 0.029102 / 0.534201 (-0.505099) | 0.439785 / 0.579283 (-0.139498) | 0.437360 / 0.434364 (0.002996) | 0.539668 / 0.540337 (-0.000669) | 0.641452 / 1.386936 (-0.745484) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007184 / 0.011353 (-0.004169) | 0.005215 / 0.011008 (-0.005793) | 0.074617 / 0.038508 (0.036109) | 0.033209 / 0.023109 (0.010100) | 0.334304 / 0.275898 (0.058406) | 0.370270 / 0.323480 (0.046790) | 0.005851 / 0.007986 (-0.002135) | 0.004106 / 0.004328 (-0.000222) | 0.075487 / 0.004250 (0.071237) | 0.051133 / 0.037052 (0.014080) | 0.335401 / 0.258489 (0.076912) | 0.391457 / 0.293841 (0.097616) | 0.036525 / 0.128546 (-0.092021) | 0.012423 / 0.075646 (-0.063223) | 0.086446 / 0.419271 (-0.332825) | 0.050707 / 0.043533 (0.007174) | 0.336186 / 0.255139 (0.081047) | 0.353273 / 0.283200 (0.070074) | 0.105625 / 0.141683 (-0.036057) | 1.486118 / 1.452155 (0.033963) | 1.584931 / 1.492716 (0.092214) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237589 / 0.018006 (0.219583) | 0.552030 / 0.000490 (0.551540) | 0.002863 / 0.000200 (0.002663) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028078 / 0.037411 (-0.009333) | 0.112516 / 0.014526 (0.097990) | 0.121119 / 0.176557 (-0.055438) | 0.158874 / 0.737135 (-0.578262) | 0.129501 / 0.296338 (-0.166837) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419479 / 0.215209 (0.204270) | 4.192216 / 2.077655 (2.114561) | 1.990513 / 1.504120 (0.486393) | 1.792892 / 1.541195 (0.251697) | 1.853904 / 1.468490 (0.385413) | 0.712702 / 4.584777 (-3.872074) | 3.820682 / 3.745712 (0.074970) | 2.143695 / 5.269862 (-3.126166) | 1.369621 / 4.565676 (-3.196055) | 0.087451 / 0.424275 (-0.336824) | 0.012622 / 0.007607 (0.005014) | 0.521056 / 0.226044 (0.295011) | 5.204873 / 2.268929 (2.935944) | 2.481169 / 55.444624 (-52.963455) | 2.112134 / 6.876477 (-4.764342) | 2.200681 / 2.142072 (0.058609) | 0.860323 / 4.805227 (-3.944904) | 0.171452 / 6.500664 (-6.329212) | 0.065235 / 0.075469 (-0.010234) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.241047 / 1.841788 (-0.600741) | 14.977890 / 8.074308 (6.903582) | 13.584265 / 10.191392 (3.392873) | 0.180050 / 0.680424 (-0.500374) | 0.018247 / 0.534201 (-0.515954) | 0.429585 / 0.579283 (-0.149698) | 0.429448 / 0.434364 (-0.004916) | 0.542663 / 0.540337 (0.002326) | 0.649525 / 1.386936 (-0.737411) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#26cf1d2548eb313a06565d36bd400436e350bc86 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011289 / 0.011353 (-0.000064) | 0.005841 / 0.011008 (-0.005167) | 0.120994 / 0.038508 (0.082486) | 0.043627 / 0.023109 (0.020517) | 0.353254 / 0.275898 (0.077356) | 0.394685 / 0.323480 (0.071205) | 0.009520 / 0.007986 (0.001535) | 0.004770 / 0.004328 (0.000442) | 0.088857 / 0.004250 (0.084607) | 0.048426 / 0.037052 (0.011373) | 0.353815 / 0.258489 (0.095326) | 0.404109 / 0.293841 (0.110268) | 0.060079 / 0.128546 (-0.068467) | 0.013840 / 0.075646 (-0.061806) | 0.403133 / 0.419271 (-0.016139) | 0.072227 / 0.043533 (0.028694) | 0.354585 / 0.255139 (0.099446) | 0.377937 / 0.283200 (0.094737) | 0.139080 / 0.141683 (-0.002602) | 1.733266 / 1.452155 (0.281112) | 1.828402 / 1.492716 (0.335686) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215095 / 0.018006 (0.197088) | 0.486669 / 0.000490 (0.486179) | 0.001425 / 0.000200 (0.001225) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032832 / 0.037411 (-0.004579) | 0.136335 / 0.014526 (0.121809) | 0.141827 / 0.176557 (-0.034730) | 0.185917 / 0.737135 (-0.551218) | 0.149046 / 0.296338 (-0.147293) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474587 / 0.215209 (0.259378) | 4.753686 / 2.077655 (2.676031) | 2.152147 / 1.504120 (0.648027) | 1.941762 / 1.541195 (0.400567) | 2.077493 / 1.468490 (0.609003) | 0.822432 / 4.584777 (-3.762345) | 4.860151 / 3.745712 (1.114439) | 2.527292 / 5.269862 (-2.742569) | 1.580442 / 4.565676 (-2.985234) | 0.102104 / 0.424275 (-0.322171) | 0.015060 / 0.007607 (0.007453) | 0.598780 / 0.226044 (0.372736) | 5.998318 / 2.268929 (3.729390) | 2.754115 / 55.444624 (-52.690509) | 2.317509 / 6.876477 (-4.558967) | 2.409942 / 2.142072 (0.267870) | 1.008830 / 4.805227 (-3.796397) | 0.196203 / 6.500664 (-6.304461) | 0.075378 / 0.075469 (-0.000091) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.430676 / 1.841788 (-0.411112) | 19.597628 / 8.074308 (11.523320) | 17.364673 / 10.191392 (7.173281) | 0.216621 / 0.680424 (-0.463803) | 0.039505 / 0.534201 (-0.494696) | 0.529027 / 0.579283 (-0.050256) | 0.572014 / 0.434364 (0.137650) | 0.702898 / 0.540337 
(0.162560) | 0.785748 / 1.386936 (-0.601188) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009150 / 0.011353 (-0.002203) | 0.006088 / 0.011008 (-0.004920) | 0.090629 / 0.038508 (0.052121) | 0.044284 / 0.023109 (0.021174) | 0.411363 / 0.275898 (0.135465) | 0.445499 / 0.323480 (0.122020) | 0.007129 / 0.007986 (-0.000856) | 0.004843 / 0.004328 (0.000515) | 0.087919 / 0.004250 (0.083668) | 0.060329 / 0.037052 (0.023277) | 0.405802 / 0.258489 (0.147313) | 0.468301 / 0.293841 (0.174460) | 0.044271 / 0.128546 (-0.084275) | 0.014895 / 0.075646 (-0.060751) | 0.103728 / 0.419271 (-0.315544) | 0.084190 / 0.043533 (0.040657) | 0.407210 / 0.255139 (0.152071) | 0.432585 / 0.283200 (0.149386) | 0.137132 / 0.141683 (-0.004550) | 1.720261 / 1.452155 (0.268107) | 1.858575 / 1.492716 (0.365858) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.331395 / 0.018006 (0.313389) | 0.494757 / 0.000490 (0.494267) | 0.043426 / 0.000200 (0.043226) | 0.000470 / 0.000054 (0.000415) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035288 / 0.037411 (-0.002123) | 0.140856 / 0.014526 (0.126330) | 0.146597 / 0.176557 (-0.029959) | 0.192775 / 0.737135 (-0.544360) | 0.155307 / 0.296338 (-0.141032) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.504000 / 0.215209 (0.288791) | 5.011081 / 2.077655 (2.933427) | 2.380420 / 1.504120 (0.876300) | 2.154819 / 1.541195 (0.613624) | 2.293883 / 
1.468490 (0.825393) | 0.864429 / 4.584777 (-3.720348) | 5.134475 / 3.745712 (1.388763) | 4.984024 / 5.269862 (-0.285837) | 2.333754 / 4.565676 (-2.231923) | 0.105854 / 0.424275 (-0.318422) | 0.015833 / 0.007607 (0.008226) | 0.633614 / 0.226044 (0.407569) | 6.330974 / 2.268929 (4.062046) | 3.020498 / 55.444624 (-52.424126) | 2.578234 / 6.876477 (-4.298243) | 2.654429 / 2.142072 (0.512357) | 1.022041 / 4.805227 (-3.783186) | 0.205085 / 6.500664 (-6.295579) | 0.081122 / 0.075469 (0.005653) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.538929 / 1.841788 (-0.302859) | 19.907799 / 8.074308 (11.833490) | 17.174568 / 10.191392 (6.983176) | 0.228165 / 0.680424 (-0.452258) | 0.024688 / 0.534201 (-0.509513) | 0.508958 / 0.579283 (-0.070326) | 0.544469 / 0.434364 (0.110105) | 0.590805 / 0.540337 (0.050468) | 0.705947 / 1.386936 (-0.680989) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2573861afb170fd575dbe67270294a4e88ab4be6 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008377 / 0.011353 (-0.002975) | 0.004445 / 0.011008 (-0.006563) | 0.100671 / 0.038508 (0.062163) | 0.029216 / 0.023109 (0.006107) | 0.300311 / 0.275898 (0.024413) | 0.356907 / 0.323480 (0.033427) | 0.006921 / 0.007986 (-0.001065) | 0.003384 / 0.004328 (-0.000944) | 0.078529 / 0.004250 (0.074278) | 0.034689 / 0.037052 (-0.002364) | 0.304647 / 0.258489 (0.046158) | 0.343584 / 0.293841 (0.049743) | 0.032700 / 0.128546 (-0.095846) | 0.011403 / 0.075646 (-0.064244) | 0.321540 / 0.419271 (-0.097732) | 0.040770 / 0.043533 (-0.002762) | 0.306900 / 0.255139 (0.051761) | 0.322482 / 0.283200 (0.039282) | 0.085396 / 0.141683 (-0.056287) | 1.450735 / 1.452155 (-0.001419) | 1.491829 / 1.492716 (-0.000888) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.009439 / 0.018006 (-0.008567) | 0.406805 / 0.000490 (0.406315) | 0.002993 / 
0.000200 (0.002793) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025034 / 0.037411 (-0.012378) | 0.100567 / 0.014526 (0.086042) | 0.107267 / 0.176557 (-0.069290) | 0.149945 / 0.737135 (-0.587190) | 0.111150 / 0.296338 (-0.185189) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418387 / 0.215209 (0.203178) | 4.177979 / 2.077655 (2.100324) | 1.886650 / 1.504120 (0.382530) | 1.685692 / 1.541195 (0.144497) | 1.728270 / 1.468490 (0.259780) | 0.700904 / 4.584777 (-3.883873) | 3.379998 / 3.745712 (-0.365714) | 1.874779 / 5.269862 (-3.395083) | 1.170366 / 4.565676 (-3.395310) | 0.083190 / 0.424275 (-0.341085) | 0.012506 / 0.007607 (0.004899) | 0.528633 / 0.226044 (0.302589) | 5.301793 / 2.268929 (3.032865) | 2.334050 / 55.444624 (-53.110574) | 1.986988 / 6.876477 (-4.889488) | 2.020508 / 2.142072 (-0.121565) | 0.817227 / 4.805227 (-3.988000) | 0.150284 / 6.500664 (-6.350380) | 0.065489 / 0.075469 (-0.009980) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224216 / 1.841788 (-0.617572) | 13.729808 / 8.074308 (5.655500) | 14.283402 / 10.191392 (4.092010) | 0.159434 / 0.680424 (-0.520990) | 0.028471 / 0.534201 (-0.505730) | 0.395102 / 0.579283 (-0.184181) | 0.402733 / 0.434364 (-0.031631) | 0.470852 / 0.540337 (-0.069485) | 0.568530 / 1.386936 (-0.818406) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006750 / 0.011353 (-0.004603) | 0.004479 / 0.011008 (-0.006529) | 0.074926 / 0.038508 (0.036418) | 0.027619 / 0.023109 (0.004510) | 0.342070 / 0.275898 (0.066172) | 0.372452 / 0.323480 (0.048972) | 0.005094 / 0.007986 (-0.002892) | 0.003494 / 0.004328 (-0.000834) | 0.074963 / 0.004250 (0.070713) | 0.038457 / 0.037052 (0.001405) | 0.340587 / 0.258489 (0.082098) | 0.381212 / 0.293841 (0.087371) | 0.031597 / 0.128546 (-0.096950) | 0.011631 / 0.075646 (-0.064015) | 0.084646 / 0.419271 (-0.334626) | 0.042072 / 0.043533 (-0.001461) | 0.340977 / 0.255139 (0.085838) | 0.366502 / 0.283200 (0.083302) | 0.091181 / 0.141683 (-0.050502) | 1.435119 / 1.452155 (-0.017035) | 1.520426 / 1.492716 (0.027710) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211320 / 0.018006 (0.193313) | 0.466154 / 0.000490 (0.465664) | 0.002901 / 0.000200 (0.002701) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025122 / 0.037411 (-0.012289) | 0.098929 / 0.014526 (0.084403) | 0.106551 / 0.176557 (-0.070005) | 0.142820 / 0.737135 (-0.594316) | 0.110701 / 0.296338 (-0.185637) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445187 / 0.215209 (0.229978) | 4.457524 / 2.077655 (2.379870) | 2.088323 / 1.504120 (0.584203) | 1.888076 / 1.541195 (0.346881) | 1.923340 / 1.468490 (0.454850) | 0.723354 / 4.584777 (-3.861423) | 3.428479 / 3.745712 (-0.317233) | 1.914580 / 5.269862 (-3.355281) | 1.191810 / 4.565676 (-3.373866) | 0.087008 / 0.424275 (-0.337267) | 0.013431 / 0.007607 (0.005824) | 0.545089 / 0.226044 (0.319044) | 5.465887 / 2.268929 (3.196958) | 2.527431 / 55.444624 (-52.917194) | 2.240622 / 6.876477 (-4.635854) | 2.232472 / 2.142072 (0.090399) | 0.815968 / 4.805227 (-3.989259) | 0.152842 / 6.500664 (-6.347822) | 0.067152 / 0.075469 (-0.008317) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.328360 / 1.841788 (-0.513427) | 14.163349 / 8.074308 (6.089040) | 13.814255 / 10.191392 (3.622863) | 0.131684 / 0.680424 (-0.548740) | 0.016980 / 0.534201 (-0.517221) | 0.396045 / 0.579283 (-0.183238) | 0.395078 / 0.434364 (-0.039286) | 0.471728 / 0.540337 (-0.068609) | 0.567830 / 1.386936 (-0.819106) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#82331b032891671c334afe30c5f3cc21245b2d72 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012630 / 0.011353 (0.001277) | 0.007038 / 0.011008 (-0.003970) | 0.158816 / 0.038508 (0.120308) | 0.044142 / 0.023109 (0.021032) | 0.389393 / 0.275898 (0.113495) | 0.479745 / 0.323480 (0.156265) | 0.009335 / 0.007986 (0.001349) | 0.005434 / 0.004328 (0.001105) | 0.107747 / 0.004250 (0.103497) | 0.048382 / 0.037052 (0.011330) | 0.398144 / 0.258489 (0.139655) | 0.446373 / 0.293841 (0.152532) | 0.066285 / 0.128546 (-0.062261) | 0.021174 / 0.075646 (-0.054472) | 0.449176 / 0.419271 (0.029905) | 0.063044 / 0.043533 (0.019511) | 0.390523 / 0.255139 (0.135384) | 0.451435 / 0.283200 (0.168236) | 0.116369 / 0.141683 (-0.025314) | 1.881269 / 1.452155 (0.429114) | 1.944527 / 1.492716 (0.451811) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227989 / 0.018006 (0.209983) | 0.538514 / 0.000490 (0.538024) | 0.009404 / 0.000200 (0.009204) | 0.000510 / 0.000054 (0.000455) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029826 / 0.037411 (-0.007585) | 0.129623 / 0.014526 (0.115098) | 0.142067 / 0.176557 (-0.034489) | 0.218586 / 0.737135 (-0.518549) | 0.160524 / 0.296338 (-0.135814) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.667195 / 0.215209 (0.451986) 
| 6.694192 / 2.077655 (4.616537) | 2.542493 / 1.504120 (1.038373) | 2.124042 / 1.541195 (0.582847) | 2.024854 / 1.468490 (0.556364) | 1.306222 / 4.584777 (-3.278555) | 5.631557 / 3.745712 (1.885845) | 3.405978 / 5.269862 (-1.863884) | 2.471399 / 4.565676 (-2.094278) | 0.165187 / 0.424275 (-0.259088) | 0.014880 / 0.007607 (0.007273) | 0.842718 / 0.226044 (0.616673) | 8.584358 / 2.268929 (6.315430) | 3.377228 / 55.444624 (-52.067396) | 2.667265 / 6.876477 (-4.209212) | 2.699462 / 2.142072 (0.557389) | 1.623115 / 4.805227 (-3.182112) | 0.253929 / 6.500664 (-6.246735) | 0.077189 / 0.075469 (0.001720) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.778962 / 1.841788 (-0.062825) | 18.997636 / 8.074308 (10.923328) | 24.255222 / 10.191392 (14.063830) | 0.304754 / 0.680424 (-0.375670) | 0.049656 / 0.534201 (-0.484545) | 0.590871 / 0.579283 (0.011588) | 0.649292 / 0.434364 (0.214928) | 0.751281 / 0.540337 (0.210943) | 0.872193 / 1.386936 (-0.514743) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010660 / 0.011353 (-0.000693) | 0.006492 / 0.011008 (-0.004516) | 0.112190 / 0.038508 (0.073682) | 0.045391 / 0.023109 (0.022281) | 0.439852 / 0.275898 (0.163954) | 0.486489 / 0.323480 (0.163009) | 0.007155 / 0.007986 (-0.000830) | 0.006323 / 0.004328 (0.001995) | 0.099775 / 0.004250 (0.095525) | 0.055762 / 0.037052 (0.018709) | 0.439457 / 0.258489 (0.180968) | 0.505322 / 0.293841 (0.211481) | 0.057019 / 0.128546 (-0.071527) | 0.031382 / 0.075646 (-0.044264) | 0.121211 / 0.419271 (-0.298061) | 0.066091 / 0.043533 (0.022558) | 0.499760 / 0.255139 (0.244622) | 0.508312 / 0.283200 (0.225113) | 0.146975 / 0.141683 (0.005292) | 1.916347 / 1.452155 (0.464193) | 2.065860 / 1.492716 (0.573144) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.247176 / 0.018006 (0.229170) | 0.565141 / 0.000490 (0.564652) | 0.004841 / 0.000200 (0.004641) | 0.000141 / 0.000054 (0.000087) |\n\n### 
Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036378 / 0.037411 (-0.001033) | 0.143470 / 0.014526 (0.128944) | 0.148096 / 0.176557 (-0.028461) | 0.225877 / 0.737135 (-0.511258) | 0.147072 / 0.296338 (-0.149266) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.723119 / 0.215209 (0.507910) | 6.824981 / 2.077655 (4.747326) | 2.883840 / 1.504120 (1.379720) | 2.468707 / 1.541195 (0.927513) | 2.525549 / 1.468490 (1.057059) | 1.426640 / 4.584777 (-3.158137) | 5.816045 / 3.745712 (2.070333) | 5.727037 / 5.269862 (0.457175) | 2.650307 / 4.565676 (-1.915369) | 0.160306 / 0.424275 (-0.263970) | 0.015371 / 0.007607 (0.007764) | 0.835778 / 0.226044 (0.609733) | 8.622836 / 2.268929 (6.353907) | 3.616338 / 55.444624 (-51.828287) | 2.974243 / 6.876477 (-3.902234) | 2.884557 / 2.142072 (0.742485) | 1.734874 / 4.805227 (-3.070353) | 0.277474 / 6.500664 (-6.223190) | 0.094189 / 0.075469 (0.018720) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.785728 / 1.841788 (-0.056059) | 19.376490 / 8.074308 (11.302182) | 24.560403 / 10.191392 (14.369011) | 0.250686 / 0.680424 (-0.429738) | 0.034333 / 0.534201 (-0.499868) | 0.557331 / 0.579283 (-0.021952) | 0.641007 / 0.434364 (0.206643) | 0.657138 / 0.540337 (0.116800) | 0.759023 / 1.386936 (-0.627913) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#06ae3f678651bfbb3ca7dd3274ee2f38e0e0237e \"CML watermark\")\n" ]
null
[]
Format code with `ruff`
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5519/timeline
Use `ruff` for formatting instead of `isort` and `black` to be consistent with [`transformers`](https://github.com/huggingface/transformers/pull/21480) and [`hfh`](https://github.com/huggingface/huggingface_hub/pull/1323). TODO: - [x] ~Merge the community contributors' PR to avoid having to run `make style` on their PR branches~ (we have some new PRs, but fixing those shouldn't be too big of a problem)
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5519.diff", "html_url": "https://github.com/huggingface/datasets/pull/5519", "merged_at": "2023-02-14T16:18:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/5519.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5519" }
1,578,341,785
https://api.github.com/repos/huggingface/datasets/issues/5519/comments
PR_kwDODunzps5JpGPl
null
5,519
https://api.github.com/repos/huggingface/datasets/issues/5519/events
true
closed
2023-02-09T16:22:29Z
null
https://api.github.com/repos/huggingface/datasets/issues/5518
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5518/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5518/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/5518
[]
false
2023-02-13T13:55:49Z
2023-02-13T13:48:40Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008283 / 0.011353 (-0.003070) | 0.004450 / 0.011008 (-0.006558) | 0.099773 / 0.038508 (0.061265) | 0.029068 / 0.023109 (0.005959) | 0.296799 / 0.275898 (0.020901) | 0.350946 / 0.323480 (0.027466) | 0.007331 / 0.007986 (-0.000655) | 0.004550 / 0.004328 (0.000222) | 0.077603 / 0.004250 (0.073352) | 0.034307 / 0.037052 (-0.002746) | 0.313174 / 0.258489 (0.054685) | 0.342270 / 0.293841 (0.048429) | 0.033463 / 0.128546 (-0.095083) | 0.011421 / 0.075646 (-0.064225) | 0.317188 / 0.419271 (-0.102083) | 0.040985 / 0.043533 (-0.002548) | 0.300800 / 0.255139 (0.045661) | 0.360171 / 0.283200 (0.076972) | 0.086702 / 0.141683 (-0.054981) | 1.474679 / 1.452155 (0.022525) | 1.518319 / 1.492716 (0.025603) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198059 / 0.018006 (0.180052) | 0.403502 / 0.000490 (0.403012) | 0.002663 / 0.000200 (0.002463) | 0.000218 / 0.000054 (0.000164) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022946 / 0.037411 (-0.014465) | 0.096466 / 0.014526 (0.081940) | 0.104092 / 0.176557 (-0.072465) | 0.138499 / 0.737135 (-0.598636) | 0.106941 / 0.296338 (-0.189397) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416000 / 0.215209 (0.200791) | 4.153120 / 2.077655 (2.075465) | 
1.843957 / 1.504120 (0.339837) | 1.650391 / 1.541195 (0.109197) | 1.684765 / 1.468490 (0.216275) | 0.688917 / 4.584777 (-3.895860) | 3.442797 / 3.745712 (-0.302916) | 1.834685 / 5.269862 (-3.435176) | 1.148046 / 4.565676 (-3.417631) | 0.082299 / 0.424275 (-0.341976) | 0.012399 / 0.007607 (0.004792) | 0.521099 / 0.226044 (0.295054) | 5.223695 / 2.268929 (2.954767) | 2.270970 / 55.444624 (-53.173654) | 1.921321 / 6.876477 (-4.955156) | 1.954675 / 2.142072 (-0.187398) | 0.809383 / 4.805227 (-3.995845) | 0.148562 / 6.500664 (-6.352102) | 0.064764 / 0.075469 (-0.010705) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212687 / 1.841788 (-0.629101) | 13.491641 / 8.074308 (5.417333) | 12.972926 / 10.191392 (2.781534) | 0.137036 / 0.680424 (-0.543388) | 0.028591 / 0.534201 (-0.505610) | 0.391980 / 0.579283 (-0.187303) | 0.394474 / 0.434364 (-0.039889) | 0.456582 / 0.540337 (-0.083755) | 0.535984 / 1.386936 (-0.850952) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006419 / 0.011353 (-0.004934) | 0.004295 / 0.011008 (-0.006713) | 0.077702 / 0.038508 (0.039194) | 0.027368 / 0.023109 (0.004259) | 0.336713 / 0.275898 (0.060815) | 0.370074 / 0.323480 (0.046594) | 0.004657 / 0.007986 (-0.003328) | 0.003308 / 0.004328 (-0.001021) | 0.075747 / 0.004250 (0.071496) | 0.037323 / 0.037052 (0.000271) | 0.342382 / 0.258489 (0.083893) | 0.381109 / 0.293841 (0.087269) | 0.031804 / 0.128546 (-0.096742) | 0.011761 / 0.075646 (-0.063885) | 0.086818 / 0.419271 (-0.332454) | 0.042058 / 0.043533 (-0.001475) | 0.346295 / 0.255139 (0.091156) | 0.366857 / 0.283200 (0.083658) | 0.088666 / 0.141683 (-0.053016) | 1.533711 / 1.452155 (0.081556) | 1.537422 / 1.492716 (0.044705) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220416 / 0.018006 (0.202410) | 0.387393 / 0.000490 (0.386903) | 0.003739 / 0.000200 (0.003539) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024083 / 0.037411 (-0.013329) | 0.098036 / 0.014526 (0.083510) | 0.102908 / 0.176557 (-0.073648) | 0.139512 / 0.737135 (-0.597623) | 0.107703 / 0.296338 (-0.188635) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437615 / 0.215209 (0.222406) | 4.373140 / 2.077655 (2.295486) | 2.065063 / 1.504120 (0.560943) | 1.863938 / 1.541195 (0.322743) | 1.907955 / 1.468490 (0.439465) | 0.695830 / 4.584777 (-3.888947) | 3.394248 / 3.745712 (-0.351464) | 1.842794 / 5.269862 (-3.427068) | 1.156928 / 4.565676 (-3.408748) | 0.082505 / 0.424275 (-0.341771) | 0.012405 / 0.007607 (0.004798) | 0.538041 / 0.226044 (0.311997) | 5.363508 / 2.268929 (3.094579) | 2.509383 / 55.444624 (-52.935241) | 2.160416 / 6.876477 (-4.716061) | 2.162054 / 2.142072 (0.019982) | 0.802419 / 4.805227 (-4.002809) | 0.150529 / 6.500664 (-6.350135) | 0.066418 / 0.075469 (-0.009051) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257221 / 1.841788 (-0.584567) | 13.748839 / 8.074308 (5.674531) | 13.310555 / 10.191392 (3.119163) | 0.152997 / 0.680424 (-0.527427) | 0.016618 / 0.534201 (-0.517583) | 0.375443 / 0.579283 (-0.203840) | 0.374942 / 0.434364 (-0.059422) | 0.466704 / 0.540337 (-0.073633) | 0.553563 / 1.386936 (-0.833373) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1ac8343af4e2dc6fe0771d0be70eaf8a6e5a8fbc \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009260 / 0.011353 (-0.002092) | 0.005213 / 0.011008 (-0.005795) | 0.102151 / 0.038508 (0.063643) | 0.035619 / 0.023109 (0.012510) | 0.296266 / 0.275898 (0.020368) | 0.359884 / 0.323480 (0.036404) | 0.008176 / 0.007986 (0.000190) | 0.005031 / 0.004328 (0.000703) | 0.077178 / 0.004250 (0.072927) | 0.041898 / 0.037052 (0.004846) | 0.305640 / 0.258489 (0.047151) | 0.346275 / 0.293841 (0.052434) | 0.037684 / 0.128546 (-0.090863) | 0.011816 / 0.075646 (-0.063831) | 0.334853 / 0.419271 (-0.084419) | 0.046535 / 0.043533 (0.003002) | 0.291544 / 0.255139 (0.036405) | 0.317194 / 0.283200 (0.033994) | 0.103212 / 0.141683 (-0.038471) | 1.424994 / 1.452155 (-0.027161) | 1.486216 / 1.492716 (-0.006501) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011816 / 0.018006 (-0.006190) | 0.442092 / 0.000490 (0.441602) | 0.001297 / 0.000200 (0.001097) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028277 / 0.037411 (-0.009134) | 0.110431 / 0.014526 (0.095905) | 0.118456 / 0.176557 (-0.058100) | 0.156778 / 0.737135 (-0.580357) | 0.123036 / 0.296338 (-0.173302) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399006 / 0.215209 (0.183797) | 3.990367 / 2.077655 (1.912712) | 1.798739 / 1.504120 (0.294620) | 1.607133 / 1.541195 (0.065938) | 1.748897 / 1.468490 (0.280407) | 0.690666 / 4.584777 (-3.894111) | 3.795892 / 3.745712 (0.050180) | 3.479317 / 5.269862 (-1.790545) | 1.861268 / 4.565676 (-2.704409) | 0.085235 / 0.424275 (-0.339040) | 0.012997 / 0.007607 (0.005390) | 0.512489 / 0.226044 (0.286445) | 5.039515 / 2.268929 (2.770587) | 2.258079 / 55.444624 (-53.186545) | 1.907178 / 6.876477 (-4.969299) | 1.985953 / 2.142072 (-0.156119) | 0.843595 / 4.805227 (-3.961633) | 0.165286 / 6.500664 (-6.335378) | 0.063026 / 0.075469 (-0.012443) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.186680 / 1.841788 (-0.655108) | 14.976016 / 8.074308 (6.901708) | 14.436941 / 10.191392 (4.245549) | 0.172620 / 0.680424 (-0.507804) | 0.028760 / 0.534201 (-0.505441) | 0.443505 / 0.579283 (-0.135778) | 0.435665 / 0.434364 (0.001301) | 0.520164 / 0.540337 
(-0.020174) | 0.608348 / 1.386936 (-0.778588) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007510 / 0.011353 (-0.003842) | 0.005012 / 0.011008 (-0.005996) | 0.077865 / 0.038508 (0.039357) | 0.033610 / 0.023109 (0.010500) | 0.365996 / 0.275898 (0.090098) | 0.416393 / 0.323480 (0.092913) | 0.005672 / 0.007986 (-0.002314) | 0.005334 / 0.004328 (0.001006) | 0.074948 / 0.004250 (0.070698) | 0.045962 / 0.037052 (0.008909) | 0.362209 / 0.258489 (0.103719) | 0.410522 / 0.293841 (0.116681) | 0.036247 / 0.128546 (-0.092299) | 0.012432 / 0.075646 (-0.063214) | 0.088754 / 0.419271 (-0.330517) | 0.048848 / 0.043533 (0.005315) | 0.370994 / 0.255139 (0.115855) | 0.382476 / 0.283200 (0.099277) | 0.103443 / 0.141683 (-0.038240) | 1.483127 / 1.452155 (0.030972) | 1.573366 / 1.492716 (0.080650) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224163 / 0.018006 (0.206157) | 0.475136 / 0.000490 (0.474646) | 0.000394 / 0.000200 (0.000194) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030612 / 0.037411 (-0.006799) | 0.113983 / 0.014526 (0.099457) | 0.121835 / 0.176557 (-0.054722) | 0.160092 / 0.737135 (-0.577043) | 0.127431 / 0.296338 (-0.168908) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421389 / 0.215209 (0.206179) | 4.207638 / 2.077655 (2.129984) | 2.040265 / 1.504120 (0.536145) | 1.868617 / 1.541195 (0.327422) | 1.979016 / 
1.468490 (0.510526) | 0.712499 / 4.584777 (-3.872278) | 3.783091 / 3.745712 (0.037379) | 2.124293 / 5.269862 (-3.145569) | 1.382028 / 4.565676 (-3.183649) | 0.087133 / 0.424275 (-0.337142) | 0.012634 / 0.007607 (0.005027) | 0.518965 / 0.226044 (0.292920) | 5.188330 / 2.268929 (2.919401) | 2.556593 / 55.444624 (-52.888031) | 2.243081 / 6.876477 (-4.633396) | 2.340420 / 2.142072 (0.198347) | 0.858010 / 4.805227 (-3.947218) | 0.169165 / 6.500664 (-6.331499) | 0.065177 / 0.075469 (-0.010292) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.297350 / 1.841788 (-0.544438) | 15.404241 / 8.074308 (7.329933) | 13.806039 / 10.191392 (3.614647) | 0.182055 / 0.680424 (-0.498369) | 0.017789 / 0.534201 (-0.516412) | 0.422828 / 0.579283 (-0.156455) | 0.418269 / 0.434364 (-0.016095) | 0.521561 / 0.540337 (-0.018777) | 0.642526 / 1.386936 (-0.744410) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0009eea6819c32a888f65b0fdb5889b6d311c436 \"CML watermark\")\n" ]
null
[]
Remove py.typed
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5518/timeline
Fix https://github.com/huggingface/datasets/issues/3841
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5518.diff", "html_url": "https://github.com/huggingface/datasets/pull/5518", "merged_at": "2023-02-13T13:48:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/5518.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5518" }
1,578,203,962
https://api.github.com/repos/huggingface/datasets/issues/5518/comments
PR_kwDODunzps5Joom3
null
5,518
https://api.github.com/repos/huggingface/datasets/issues/5518/events
true
open
2023-02-09T14:18:00Z
null
https://api.github.com/repos/huggingface/datasets/issues/5517
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5517/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5517/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/1250234?v=4", "events_url": "https://api.github.com/users/ernestum/events{/privacy}", "followers_url": "https://api.github.com/users/ernestum/followers", "following_url": "https://api.github.com/users/ernestum/following{/other_user}", "gists_url": "https://api.github.com/users/ernestum/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ernestum", "id": 1250234, "login": "ernestum", "node_id": "MDQ6VXNlcjEyNTAyMzQ=", "organizations_url": "https://api.github.com/users/ernestum/orgs", "received_events_url": "https://api.github.com/users/ernestum/received_events", "repos_url": "https://api.github.com/users/ernestum/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ernestum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ernestum/subscriptions", "type": "User", "url": "https://api.github.com/users/ernestum" }
https://github.com/huggingface/datasets/issues/5517
[]
false
2024-01-18T08:42:17Z
null
{ "closed_at": null, "closed_issues": 0, "created_at": "2023-02-13T16:22:42Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }, "description": "Next major release", "due_on": null, "html_url": "https://github.com/huggingface/datasets/milestone/10", "id": 9038583, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels", "node_id": "MI_kwDODunzps4Aier3", "number": 10, "open_issues": 4, "state": "open", "title": "3.0", "updated_at": "2023-09-22T14:07:52Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/10" }
[ "Hi! This behavior stems from these lines:\r\n\r\nhttps://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/np_formatter.py#L45-L46\r\n\r\nI agree we should preserve the original type whenever possible and downcast explicitly with a warning.\r\n\r\n@lhoestq Do you remember why we need this \"default dtype\" logic in our formatters?", "I was also wondering why the default type logic is needed. Me just deleting it is probably too naive of a solution.", "Hmm I think the idea was to end up with the usual default precision for deep learning models - no matter how the data was stored or where it comes from.\r\n\r\nFor example in NLP we store tokens using an optimized low precision to save disk space, but when we set the format to `torch` we actually need to get `int64`. Although the need for a default for integers also comes from numpy not returning the same integer precision depending on your machine. Finally I guess we added a default for floats as well for consistency.\r\n\r\nI'm a bit embarrassed by this though, as a user I'd have expected to get the same precision indeed as well and get a zero copy view.", "Will you fix this or should I open a PR?", "Unfortunately removing it for integers is a breaking change for most `transformers` + `datasets` users for NLP (which is a common case). Removing it for floats is a breaking change for `transformers` + `datasets` for ASR as well. And it also is a breaking change for the other users relying on this behavior.\r\n\r\nTherefore I think that the only short term solution is for the user to provide `dtype=` manually and document better this behavior. We could also extend `dtype` to accept a value that means \"return the same dtype as the underlying storage\" and make it easier to do zero copy.", "@lhoestq It should be fine to remove this conversion in Datasets 3.0, no? For now, we can warn the user (with a log message) about the future change when the default type is changed.", "Let's see with the transformers team if it sounds reasonable ? We'd have to fix multiple example scripts though.\r\n\r\nIf it's not ok we can also explore keeping this behavior only for tokens and audio data.", "IMO being coupled with Transformers can lead to unexpected behavior when one tries to use our lib without pairing it with Transformers, so I think it's still important to \"fix\" this, even if it means we will need to update Transformers' example scripts afterward.\r\n", "Ideally let's update the `transformers` example scripts before the change :P", "For others that run into the same issue: A temporary workaround for me is this:\r\n```python\r\ndef numpy_transform(batch):\r\n return {key: np.asarray(val) for key, val in batch.items()}\r\n\r\ndataset = dataset.with_transform(numpy_transform)\r\n```", "This behavior (silent upcast from `int32` to `int64`) is also unexpected for the user in https://discuss.huggingface.co/t/standard-getitem-returns-wrong-data-type-for-arrays/62470/2", "Hi, I stumbled on a variation that upcasts uint8 to int64. 
I would expect the dtype to be the same as it was when I generated the dataset.\r\n\r\n```\r\nimport numpy as np\r\nimport datasets as ds\r\n\r\nfoo = np.random.randint(0, 256, size=(5, 10, 10), dtype=np.uint8)\r\n\r\nfeatures = ds.Features({\"foo\": ds.Array2D((10, 10), \"uint8\")})\r\ndataset = ds.Dataset.from_dict({\"foo\": foo}, features=features)\r\ndataset.set_format(\"torch\")\r\nprint(\"feature dtype:\", dataset.features[\"foo\"].dtype)\r\nprint(\"array dtype:\", dataset[\"foo\"].dtype)\r\n\r\n# feature dtype: uint8\r\n# array dtype: torch.int64\r\n```\r\n", "workaround to remove torch upcasting\r\n\r\n```\r\nimport datasets as ds\r\nimport torch\r\n\r\nclass FixedTorchFormatter(ds.formatting.TorchFormatter):\r\n def _tensorize(self, value):\r\n return torch.from_numpy(value)\r\n\r\n\r\nds.formatting._register_formatter(FixedTorchFormatter, \"torch\")\r\n```" ]
null
[]
`with_format("numpy")` silently downcasts float64 to float32 features
NONE
https://api.github.com/repos/huggingface/datasets/issues/5517/timeline
### Describe the bug When I create a dataset with a `float64` feature and then apply numpy formatting, the returned numpy arrays are silently downcast to `float32`. ### Steps to reproduce the bug ```python import datasets dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy") print("feature dtype:", dataset.features['a'].dtype) print("array dtype:", dataset['a'].dtype) ``` output: ``` feature dtype: float64 array dtype: float32 ``` ### Expected behavior ``` feature dtype: float64 array dtype: float64 ``` ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 10.0.1 - Pandas version: 1.4.4 ### Suggested Fix Changing [the `_tensorize` function of the numpy formatter](https://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/np_formatter.py#L32) to ```python def _tensorize(self, value): if isinstance(value, (str, bytes, type(None))): return value elif isinstance(value, (np.character, np.ndarray)) and np.issubdtype(value.dtype, np.character): return value elif isinstance(value, np.number): return value return np.asarray(value, **self.np_array_kwargs) ``` fixes this particular issue for me. Not sure if this would break other tests. This should also avoid unnecessary copying of the array.
https://api.github.com/repos/huggingface/datasets
null
1,577,976,608
https://api.github.com/repos/huggingface/datasets/issues/5517/comments
I_kwDODunzps5eDgMg
null
5,517
https://api.github.com/repos/huggingface/datasets/issues/5517/events
false
closed
2023-02-09T10:52:15Z
null
https://api.github.com/repos/huggingface/datasets/issues/5516
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5516/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5516/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4", "events_url": "https://api.github.com/users/MFreidank/events{/privacy}", "followers_url": "https://api.github.com/users/MFreidank/followers", "following_url": "https://api.github.com/users/MFreidank/following{/other_user}", "gists_url": "https://api.github.com/users/MFreidank/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MFreidank", "id": 6368040, "login": "MFreidank", "node_id": "MDQ6VXNlcjYzNjgwNDA=", "organizations_url": "https://api.github.com/users/MFreidank/orgs", "received_events_url": "https://api.github.com/users/MFreidank/received_events", "repos_url": "https://api.github.com/users/MFreidank/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MFreidank/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MFreidank/subscriptions", "type": "User", "url": "https://api.github.com/users/MFreidank" }
https://github.com/huggingface/datasets/pull/5516
[]
false
2023-02-12T16:00:00Z
2023-02-12T15:57:01Z
null
[ "Thanks a lot for your help @lhoestq. I've simplified what turned out to be a simple fix and added the unit test.\r\n\r\nDoes this look ready to be merged or is there anything I'm still missing?", "Cool ! I think you just need to remove the unused import in `io/parquet.py`\r\n```\r\nsrc/datasets/io/parquet.py:4:1: F401 'pyarrow as pa' imported but unused\r\n```\r\nand we're good to merge :)", "_The documentation is not available anymore as the PR was closed or merged._", "> Cool ! I think you just need to remove the unused import in `io/parquet.py`\r\n> \r\n> ```\r\n> src/datasets/io/parquet.py:4:1: F401 'pyarrow as pa' imported but unused\r\n> ```\r\n> \r\n> and we're good to merge :)\r\n\r\nDone! Thanks a lot, this was fun :)" ]
null
[]
Reload features from Parquet metadata
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5516/timeline
Resolves #5482.

Attaches feature metadata to parquet files serialised using `Dataset.to_parquet`. This allows retrieving data with "rich" feature types (e.g., `datasets.features.image.Image` or `datasets.features.audio.Audio`) from parquet files without cumbersome casting (for an example, see #5482).

@lhoestq It seems that it is sufficient to attach metadata to the schema prior to serialising, and features are loaded back with correct types afterwards automatically. I used the following script to test the implementation:

```python
from pathlib import Path

import datasets

dataset_name = "Maysee/tiny-imagenet"
ds = datasets.load_dataset(dataset_name, split=datasets.Split.TRAIN)

output_directory_path = Path(__file__).parent.joinpath("example_test_outputs", dataset_name.replace("/", "_"))
output_directory_path.mkdir(exist_ok=True, parents=True)
output_filepath = output_directory_path.joinpath("ds.parquet")

ds.to_parquet(str(output_filepath))

reloaded_ds = datasets.load_dataset(str(output_directory_path), split=datasets.Split.TRAIN)
assert ds.features == reloaded_ds.features
```

Prior to the change in this PR, this script raises an `AssertionError` and the `Image` features lose their type after serialisation. After the change, the assertion does not raise an error and manual inspection of the features shows type `Image` for the respective columns of `reloaded_ds`.

Some open questions:

* How/where can I best add new unit tests for this implementation?
* What dataset would I best use in the tests? I chose `Maysee/tiny-imagenet` mainly because it is small and contains an `Image` feature that can be used to test, but I'd be happy for suggestions on a suitable data source to use.
* Currently I'm calling `datasets.arrow_writer.ArrowWriter._build_metadata` as I need the same logic. However, I'm not happy with the coupling between `datasets.io.parquet` and `datasets.arrow_writer` it leaves me with. I suggest factoring this common logic out into a helper function and reusing it from both places. Do you agree, and if yes, could you please guide me where I would best place this function?

Many thanks in advance and kind regards,
MFreidank
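To make the mechanism concrete, here is a hedged sketch of what "reloading features from Parquet metadata" looks like at the pyarrow level. It assumes the features are serialised into the schema metadata under a `huggingface` key as JSON of the form `{"info": {"features": ...}}`, which is the layout `ArrowWriter._build_metadata` produces; if the actual key or layout differs, adjust accordingly:

```python
import json

import pyarrow.parquet as pq
import datasets

# Read only the schema; no table data is loaded.
schema = pq.read_schema("ds.parquet")

# Assumed metadata layout, based on ArrowWriter._build_metadata.
metadata = json.loads(schema.metadata[b"huggingface"].decode("utf-8"))
features = datasets.Features.from_dict(metadata["info"]["features"])
print(features)  # rich types such as Image are preserved
```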
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5516.diff", "html_url": "https://github.com/huggingface/datasets/pull/5516", "merged_at": "2023-02-12T15:57:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/5516.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5516" }
1,577,661,640
https://api.github.com/repos/huggingface/datasets/issues/5516/comments
PR_kwDODunzps5JmzPQ
null
5,516
https://api.github.com/repos/huggingface/datasets/issues/5516/events
true
closed
2023-02-09T10:04:46Z
null
https://api.github.com/repos/huggingface/datasets/issues/5515
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5515/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5515/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/22773355?v=4", "events_url": "https://api.github.com/users/HallerPatrick/events{/privacy}", "followers_url": "https://api.github.com/users/HallerPatrick/followers", "following_url": "https://api.github.com/users/HallerPatrick/following{/other_user}", "gists_url": "https://api.github.com/users/HallerPatrick/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HallerPatrick", "id": 22773355, "login": "HallerPatrick", "node_id": "MDQ6VXNlcjIyNzczMzU1", "organizations_url": "https://api.github.com/users/HallerPatrick/orgs", "received_events_url": "https://api.github.com/users/HallerPatrick/received_events", "repos_url": "https://api.github.com/users/HallerPatrick/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HallerPatrick/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HallerPatrick/subscriptions", "type": "User", "url": "https://api.github.com/users/HallerPatrick" }
https://github.com/huggingface/datasets/pull/5515
[]
false
2023-02-14T15:38:13Z
2023-02-14T14:26:42Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The commit also includes the changes to the `DatasetDict` methods or am I missing something?", "Oh, indeed. Feel free to mark the PR as \"Ready for review\" then.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010149 / 0.011353 (-0.001204) | 0.005606 / 0.011008 (-0.005402) | 0.103455 / 0.038508 (0.064947) | 0.042934 / 0.023109 (0.019825) | 0.308365 / 0.275898 (0.032467) | 0.394188 / 0.323480 (0.070708) | 0.008760 / 0.007986 (0.000774) | 0.004567 / 0.004328 (0.000239) | 0.077959 / 0.004250 (0.073708) | 0.050115 / 0.037052 (0.013063) | 0.318009 / 0.258489 (0.059520) | 0.358578 / 0.293841 (0.064737) | 0.039231 / 0.128546 (-0.089315) | 0.012381 / 0.075646 (-0.063265) | 0.340046 / 0.419271 (-0.079226) | 0.048366 / 0.043533 (0.004834) | 0.307643 / 0.255139 (0.052504) | 0.342886 / 0.283200 (0.059687) | 0.109628 / 0.141683 (-0.032055) | 1.457297 / 1.452155 (0.005142) | 1.518067 / 1.492716 (0.025351) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.295590 / 0.018006 (0.277584) | 0.531515 / 0.000490 (0.531026) | 0.005677 / 0.000200 (0.005477) | 0.000095 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030901 / 0.037411 (-0.006511) | 0.118312 / 0.014526 (0.103786) | 0.123146 / 0.176557 (-0.053410) | 0.163608 / 0.737135 (-0.573527) | 0.128604 / 0.296338 (-0.167734) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404143 / 0.215209 (0.188934) | 4.000118 / 2.077655 (1.922464) | 1.804502 / 1.504120 (0.300382) | 1.597287 / 1.541195 (0.056093) | 1.738512 / 1.468490 (0.270022) | 0.704658 / 4.584777 (-3.880119) | 3.830101 / 3.745712 (0.084389) | 2.186598 / 5.269862 (-3.083263) | 1.367873 / 4.565676 (-3.197804) | 0.085550 / 0.424275 (-0.338725) | 0.012226 / 0.007607 (0.004619) | 0.505760 / 0.226044 (0.279716) | 5.054583 / 2.268929 (2.785655) | 2.284942 / 55.444624 (-53.159682) | 1.961413 / 6.876477 (-4.915064) | 2.059449 / 2.142072 (-0.082623) | 0.845009 / 4.805227 (-3.960218) | 0.167204 / 6.500664 (-6.333460) | 0.065998 / 0.075469 (-0.009471) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.221861 / 1.841788 (-0.619927) | 15.925213 / 8.074308 (7.850905) | 15.359308 / 10.191392 (5.167916) | 0.171776 / 0.680424 (-0.508648) | 0.029234 / 0.534201 (-0.504967) | 0.446349 / 0.579283 (-0.132934) | 0.447873 / 0.434364 (0.013509) | 0.527400 / 0.540337 (-0.012937) | 0.610208 / 1.386936 (-0.776728) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008030 / 0.011353 (-0.003323) | 0.005686 / 0.011008 (-0.005322) | 0.076204 / 0.038508 (0.037696) | 0.037131 / 0.023109 (0.014022) | 0.341461 / 0.275898 (0.065563) | 0.378734 / 0.323480 (0.055255) | 0.006580 / 0.007986 (-0.001406) | 0.004379 / 0.004328 (0.000050) | 0.073983 / 0.004250 (0.069732) | 0.055895 / 0.037052 (0.018842) | 0.342667 / 0.258489 (0.084178) | 0.401464 / 0.293841 (0.107623) | 0.037710 / 0.128546 (-0.090837) | 0.012604 / 0.075646 (-0.063042) | 0.087563 / 0.419271 (-0.331709) | 0.050887 / 0.043533 (0.007354) | 0.333491 / 0.255139 (0.078352) | 0.357437 / 0.283200 (0.074237) | 0.109566 / 0.141683 (-0.032117) | 1.423372 / 1.452155 (-0.028783) | 1.569423 / 1.492716 (0.076706) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.340986 / 0.018006 (0.322980) | 0.530885 / 0.000490 (0.530395) | 0.004172 / 0.000200 (0.003972) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030424 / 0.037411 (-0.006987) | 0.121191 / 0.014526 (0.106666) | 0.129066 / 0.176557 (-0.047491) | 0.166938 / 0.737135 (-0.570198) | 0.132000 / 0.296338 (-0.164338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418718 / 0.215209 (0.203509) | 4.163973 / 2.077655 (2.086318) | 1.982665 / 1.504120 (0.478545) | 1.798866 / 1.541195 (0.257671) | 1.918867 / 1.468490 (0.450377) | 0.724634 / 4.584777 (-3.860143) | 3.864549 / 3.745712 (0.118837) | 3.697768 / 5.269862 (-1.572093) | 1.983942 / 4.565676 (-2.581735) | 0.086818 / 0.424275 (-0.337457) | 0.012336 / 0.007607 (0.004728) | 0.522314 / 0.226044 (0.296269) | 5.216813 / 2.268929 (2.947884) | 2.516187 / 55.444624 (-52.928437) | 2.172057 / 6.876477 (-4.704420) | 2.342773 / 2.142072 (0.200701) | 0.851805 / 4.805227 (-3.953422) | 0.170139 / 6.500664 (-6.330525) | 0.068494 / 0.075469 (-0.006975) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.307370 / 1.841788 (-0.534418) | 16.737937 / 8.074308 (8.663629) | 14.483384 / 10.191392 (4.291992) | 0.172418 / 0.680424 (-0.508006) | 0.018241 / 0.534201 (-0.515960) | 0.432049 / 0.579283 (-0.147234) | 0.447590 / 0.434364 (0.013227) | 0.550332 / 0.540337 (0.009994) | 0.646756 / 1.386936 (-0.740180) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#819bc6e9f88459f363e6fb6948e9cbe5c231500d \"CML watermark\")\n" ]
null
[]
Unify `load_from_cache_file` type and logic
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5515/timeline
* Updated the type annotations for `load_from_cache_file`
* Added logic for cache checking where needed
* Updated the documentation, following the wording of `Dataset.map`
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5515.diff", "html_url": "https://github.com/huggingface/datasets/pull/5515", "merged_at": "2023-02-14T14:26:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/5515.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5515" }
1,577,590,611
https://api.github.com/repos/huggingface/datasets/issues/5515/comments
PR_kwDODunzps5Jmj5X
null
5,515
https://api.github.com/repos/huggingface/datasets/issues/5515/events
true
closed
2023-02-08T16:40:44Z
null
https://api.github.com/repos/huggingface/datasets/issues/5514
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5514/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5514/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/22773355?v=4", "events_url": "https://api.github.com/users/HallerPatrick/events{/privacy}", "followers_url": "https://api.github.com/users/HallerPatrick/followers", "following_url": "https://api.github.com/users/HallerPatrick/following{/other_user}", "gists_url": "https://api.github.com/users/HallerPatrick/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HallerPatrick", "id": 22773355, "login": "HallerPatrick", "node_id": "MDQ6VXNlcjIyNzczMzU1", "organizations_url": "https://api.github.com/users/HallerPatrick/orgs", "received_events_url": "https://api.github.com/users/HallerPatrick/received_events", "repos_url": "https://api.github.com/users/HallerPatrick/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HallerPatrick/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HallerPatrick/subscriptions", "type": "User", "url": "https://api.github.com/users/HallerPatrick" }
https://github.com/huggingface/datasets/issues/5514
[]
false
2023-02-14T14:26:44Z
2023-02-14T14:26:44Z
null
[ "Hi, thanks for noticing this! We can't just remove the cache control as this allows us to control where the arrow files generated by the ops are written (cached on disk if enabled or a temporary directory if disabled). The right way to address this inconsistency would be by having `load_from_cache_file=None` by default everywhere.", "Hi! Yes, this seems more plausible. I can implement that. One last thing is the type annotation `load_from_cache_file: bool = None`. Which I then would change to `load_from_cache_file: Optional[bool] = None`.", "PR #5515 ", "Yes, `Optional[bool]` is the correct type annotation and thanks for the PR." ]
completed
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Resolve inconsistency in the `Dataset.map` interface for `load_from_cache_file`
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5514/timeline
### Feature request

1. Replace the `load_from_cache_file` default value with `True`.
2. Remove or alter checks from the `is_caching_enabled` logic.

### Motivation

I stumbled over an inconsistency in the `Dataset.map` interface. The documentation (and source) states for the parameter `load_from_cache_file`:

```
load_from_cache_file (`bool`, defaults to `True` if caching is enabled):
    If a cache file storing the current computation from `function` can be identified, use it instead of recomputing.
```

1. The `load_from_cache_file` default value is `None`, while it is annotated as `bool`.
2. It is inconsistent with other method signatures like `filter`, which have the default value `True`.
3. The logic is inconsistent, as the `map` method checks whether caching is enabled through `is_caching_enabled`. This logic is not used for other similar methods.

### Your contribution

I am not fully aware of the logic behind the caching checks. If this is just an inconsistency that grew historically, I would suggest removing the `is_caching_enabled` logic as the "default" logic. Maybe someone can give insights into whether environment variables have a higher priority than local variables or vice versa. If this is clarified, I could adjust the source according to the "Feature request" section of this issue.
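The resolution the maintainers converge on in the comments above is `load_from_cache_file: Optional[bool] = None`, where `None` defers to the global caching setting. A minimal sketch of that pattern; the helper name `resolve_load_from_cache_file` is hypothetical, while `is_caching_enabled` is the real accessor mentioned in the discussion:

```python
from typing import Optional

from datasets import is_caching_enabled


def resolve_load_from_cache_file(load_from_cache_file: Optional[bool] = None) -> bool:
    # None means "defer to the global caching setting";
    # an explicit True/False from the caller always wins.
    if load_from_cache_file is None:
        return is_caching_enabled()
    return load_from_cache_file
```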
https://api.github.com/repos/huggingface/datasets
null
1,576,453,837
https://api.github.com/repos/huggingface/datasets/issues/5514/comments
I_kwDODunzps5d9sbN
null
5,514
https://api.github.com/repos/huggingface/datasets/issues/5514/events
false
closed
2023-02-08T15:13:46Z
null
https://api.github.com/repos/huggingface/datasets/issues/5513
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5513/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5513/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
https://github.com/huggingface/datasets/issues/5513
[]
false
2023-07-24T16:02:18Z
2023-07-24T14:27:59Z
null
[ "Hi! Let's not do this - renaming it would be a breaking change, and going through the deprecation cycle is only worth it if it improves user experience.", "Hi @mariosasko, ok it makes sense. Anyway, don't you think it's worth it at some point to start a deprecation cycle e.g. `fs` in `load_from_disk`? It doesn't affect user experience but it's for sure a bad practice IMO, but's up to you 😄 Feel free to close this issue otherwise!", "I don't think deprecating a param name in this particular instance is worth the hassle, so I'm closing the issue 🙂.", "Sure, makes sense @mariosasko thanks!" ]
completed
[]
Some functions use a param named `type`; shouldn't that be avoided since it shadows a built-in Python name?
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5513/timeline
Hi @mariosasko, @lhoestq, or whoever reads this! :) After going through `ArrowDataset.set_format` I noticed that the format parameter is named `type`, which shadows the built-in Python name `type` as you may already know. Shouldn't that be renamed to `format_type` before 3.0.0 is released? Just wanted to get your input, and if applicable, tackle this issue myself! Thanks 🤗
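For context, `type` is a built-in rather than a reserved word, so using it as a parameter name is legal Python; the cost is that the built-in becomes unreachable by its usual name inside the function body. A small self-contained illustration (the function below is hypothetical, not the `datasets` API):

```python
import builtins


def set_format(type=None):  # legal Python, but shadows the built-in `type`
    # Inside this body, `type` names the parameter, so the built-in
    # has to be reached through the builtins module.
    print(builtins.type(type))


set_format()         # <class 'NoneType'>
set_format("numpy")  # <class 'str'>
```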
https://api.github.com/repos/huggingface/datasets
null
1,576,300,803
https://api.github.com/repos/huggingface/datasets/issues/5513/comments
I_kwDODunzps5d9HED
null
5,513
https://api.github.com/repos/huggingface/datasets/issues/5513/events
false
closed
2023-02-08T13:38:59Z
null
https://api.github.com/repos/huggingface/datasets/issues/5512
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5512/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5512/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/5512
[]
false
2023-02-19T18:35:09Z
2023-02-19T18:27:29Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008882 / 0.011353 (-0.002471) | 0.004562 / 0.011008 (-0.006446) | 0.100035 / 0.038508 (0.061527) | 0.030654 / 0.023109 (0.007545) | 0.298745 / 0.275898 (0.022847) | 0.356869 / 0.323480 (0.033389) | 0.007170 / 0.007986 (-0.000815) | 0.003471 / 0.004328 (-0.000858) | 0.077975 / 0.004250 (0.073725) | 0.037861 / 0.037052 (0.000809) | 0.311643 / 0.258489 (0.053154) | 0.343504 / 0.293841 (0.049663) | 0.033768 / 0.128546 (-0.094778) | 0.011342 / 0.075646 (-0.064304) | 0.323953 / 0.419271 (-0.095319) | 0.040818 / 0.043533 (-0.002715) | 0.298492 / 0.255139 (0.043353) | 0.327292 / 0.283200 (0.044092) | 0.088423 / 0.141683 (-0.053260) | 1.489520 / 1.452155 (0.037366) | 1.532962 / 1.492716 (0.040245) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223654 / 0.018006 (0.205647) | 0.415134 / 0.000490 (0.414644) | 0.007394 / 0.000200 (0.007194) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023616 / 0.037411 (-0.013795) | 0.096652 / 0.014526 (0.082126) | 0.105239 / 0.176557 (-0.071318) | 0.148637 / 0.737135 (-0.588498) | 0.107937 / 0.296338 (-0.188402) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426816 / 0.215209 (0.211607) | 4.241533 / 2.077655 (2.163878) | 
1.946493 / 1.504120 (0.442373) | 1.735765 / 1.541195 (0.194570) | 1.781424 / 1.468490 (0.312934) | 0.688082 / 4.584777 (-3.896694) | 3.396444 / 3.745712 (-0.349268) | 1.920333 / 5.269862 (-3.349528) | 1.293833 / 4.565676 (-3.271843) | 0.081967 / 0.424275 (-0.342308) | 0.012911 / 0.007607 (0.005304) | 0.536928 / 0.226044 (0.310884) | 5.452327 / 2.268929 (3.183399) | 2.505785 / 55.444624 (-52.938840) | 2.173627 / 6.876477 (-4.702850) | 2.119978 / 2.142072 (-0.022095) | 0.809012 / 4.805227 (-3.996215) | 0.149124 / 6.500664 (-6.351540) | 0.066008 / 0.075469 (-0.009461) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.215702 / 1.841788 (-0.626085) | 13.757525 / 8.074308 (5.683217) | 13.999208 / 10.191392 (3.807816) | 0.164875 / 0.680424 (-0.515549) | 0.028517 / 0.534201 (-0.505684) | 0.394829 / 0.579283 (-0.184454) | 0.404962 / 0.434364 (-0.029401) | 0.484455 / 0.540337 (-0.055882) | 0.575008 / 1.386936 (-0.811928) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006754 / 0.011353 (-0.004598) | 0.004579 / 0.011008 (-0.006430) | 0.076617 / 0.038508 (0.038109) | 0.027902 / 0.023109 (0.004793) | 0.346278 / 0.275898 (0.070380) | 0.398060 / 0.323480 (0.074580) | 0.004938 / 0.007986 (-0.003047) | 0.004681 / 0.004328 (0.000353) | 0.076336 / 0.004250 (0.072086) | 0.038018 / 0.037052 (0.000966) | 0.358701 / 0.258489 (0.100212) | 0.408413 / 0.293841 (0.114572) | 0.031772 / 0.128546 (-0.096774) | 0.011604 / 0.075646 (-0.064042) | 0.085964 / 0.419271 (-0.333308) | 0.042030 / 0.043533 (-0.001502) | 0.343568 / 0.255139 (0.088429) | 0.381805 / 0.283200 (0.098605) | 0.090759 / 0.141683 (-0.050924) | 1.504553 / 1.452155 (0.052398) | 1.594006 / 1.492716 (0.101289) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227395 / 0.018006 (0.209389) | 0.403097 / 0.000490 (0.402608) | 0.000413 / 0.000200 (0.000213) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024693 / 0.037411 (-0.012718) | 0.100470 / 0.014526 (0.085944) | 0.108481 / 0.176557 (-0.068076) | 0.142791 / 0.737135 (-0.594345) | 0.109949 / 0.296338 (-0.186389) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443674 / 0.215209 (0.228465) | 4.412207 / 2.077655 (2.334553) | 2.073752 / 1.504120 (0.569632) | 1.863153 / 1.541195 (0.321958) | 1.940063 / 1.468490 (0.471573) | 0.696456 / 4.584777 (-3.888321) | 3.422120 / 3.745712 (-0.323592) | 1.902579 / 5.269862 (-3.367282) | 1.184948 / 4.565676 (-3.380729) | 0.083079 / 0.424275 (-0.341196) | 0.012649 / 0.007607 (0.005042) | 0.542035 / 0.226044 (0.315991) | 5.421826 / 2.268929 (3.152897) | 2.525092 / 55.444624 (-52.919532) | 2.177144 / 6.876477 (-4.699332) | 2.225224 / 2.142072 (0.083151) | 0.804739 / 4.805227 (-4.000488) | 0.151000 / 6.500664 (-6.349664) | 0.066987 / 0.075469 (-0.008482) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.277199 / 1.841788 (-0.564589) | 14.184146 / 8.074308 (6.109838) | 13.413348 / 10.191392 (3.221956) | 0.128551 / 0.680424 (-0.551872) | 0.016461 / 0.534201 (-0.517740) | 0.379963 / 0.579283 (-0.199320) | 0.381350 / 0.434364 (-0.053014) | 0.439044 / 0.540337 (-0.101293) | 0.521559 / 1.386936 (-0.865377) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4f3c152c1c35df250d2fbeb25d5823a65714f2d8 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008876 / 0.011353 (-0.002477) | 0.004629 / 0.011008 (-0.006379) | 0.101697 / 0.038508 (0.063189) | 0.030373 / 0.023109 (0.007264) | 0.302206 / 0.275898 (0.026308) | 0.365835 / 0.323480 (0.042355) | 0.007877 / 0.007986 (-0.000109) | 0.004473 / 0.004328 (0.000144) | 0.077334 / 0.004250 (0.073084) | 0.038066 / 0.037052 (0.001014) | 0.308064 / 0.258489 (0.049575) | 0.347329 / 0.293841 (0.053488) | 0.034478 / 0.128546 (-0.094068) | 0.011651 / 0.075646 (-0.063995) | 0.323481 / 0.419271 (-0.095791) | 0.043515 / 0.043533 (-0.000018) | 0.299885 / 0.255139 (0.044746) | 0.328959 / 0.283200 (0.045760) | 0.095308 / 0.141683 (-0.046375) | 1.474058 / 1.452155 (0.021903) | 1.535335 / 1.492716 (0.042619) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197416 / 0.018006 (0.179410) | 0.421935 / 0.000490 (0.421446) | 0.003490 / 0.000200 (0.003290) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024519 / 0.037411 (-0.012892) | 0.100710 / 0.014526 (0.086185) | 0.104520 / 0.176557 (-0.072036) | 0.142048 / 0.737135 (-0.595087) | 0.109274 / 0.296338 (-0.187064) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408766 / 0.215209 (0.193557) | 4.101720 / 2.077655 (2.024065) | 1.812375 / 1.504120 (0.308256) | 1.605819 / 1.541195 (0.064624) | 1.688923 / 1.468490 (0.220433) | 0.691198 / 4.584777 (-3.893579) | 3.422137 / 3.745712 (-0.323575) | 1.921318 / 5.269862 (-3.348544) | 1.168770 / 4.565676 (-3.396906) | 0.082840 / 0.424275 (-0.341435) | 0.012740 / 0.007607 (0.005133) | 0.524333 / 0.226044 (0.298289) | 5.258077 / 2.268929 (2.989149) | 2.273177 / 55.444624 (-53.171447) | 1.931919 / 6.876477 (-4.944558) | 1.988415 / 2.142072 (-0.153658) | 0.812227 / 4.805227 (-3.993000) | 0.150043 / 6.500664 (-6.350622) | 0.066422 / 0.075469 (-0.009047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188069 / 1.841788 (-0.653718) | 13.942681 / 8.074308 (5.868373) | 14.104658 / 10.191392 (3.913266) | 0.151966 / 0.680424 (-0.528458) | 0.028833 / 0.534201 (-0.505368) | 0.395125 / 0.579283 (-0.184158) | 0.408512 / 0.434364 (-0.025852) | 0.487587 / 0.540337 
(-0.052751) | 0.570023 / 1.386936 (-0.816913) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006860 / 0.011353 (-0.004493) | 0.004582 / 0.011008 (-0.006426) | 0.079902 / 0.038508 (0.041394) | 0.027565 / 0.023109 (0.004456) | 0.341393 / 0.275898 (0.065495) | 0.378911 / 0.323480 (0.055431) | 0.005847 / 0.007986 (-0.002138) | 0.004681 / 0.004328 (0.000353) | 0.079422 / 0.004250 (0.075171) | 0.039135 / 0.037052 (0.002083) | 0.342026 / 0.258489 (0.083537) | 0.387510 / 0.293841 (0.093669) | 0.031999 / 0.128546 (-0.096547) | 0.011782 / 0.075646 (-0.063865) | 0.088563 / 0.419271 (-0.330709) | 0.042435 / 0.043533 (-0.001098) | 0.343055 / 0.255139 (0.087916) | 0.367437 / 0.283200 (0.084237) | 0.091578 / 0.141683 (-0.050104) | 1.506828 / 1.452155 (0.054673) | 1.599590 / 1.492716 (0.106874) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217939 / 0.018006 (0.199932) | 0.408352 / 0.000490 (0.407863) | 0.000394 / 0.000200 (0.000194) | 0.000063 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026344 / 0.037411 (-0.011067) | 0.102968 / 0.014526 (0.088442) | 0.110340 / 0.176557 (-0.066217) | 0.145696 / 0.737135 (-0.591439) | 0.111632 / 0.296338 (-0.184707) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440764 / 0.215209 (0.225555) | 4.423179 / 2.077655 (2.345524) | 2.057016 / 1.504120 (0.552896) | 1.848741 / 1.541195 (0.307546) | 1.939827 
/ 1.468490 (0.471337) | 0.699370 / 4.584777 (-3.885407) | 3.472521 / 3.745712 (-0.273191) | 3.232557 / 5.269862 (-2.037305) | 1.755534 / 4.565676 (-2.810143) | 0.083469 / 0.424275 (-0.340807) | 0.012980 / 0.007607 (0.005373) | 0.557662 / 0.226044 (0.331618) | 5.435657 / 2.268929 (3.166729) | 2.545106 / 55.444624 (-52.899519) | 2.168047 / 6.876477 (-4.708430) | 2.234070 / 2.142072 (0.091997) | 0.804662 / 4.805227 (-4.000565) | 0.152832 / 6.500664 (-6.347833) | 0.069372 / 0.075469 (-0.006097) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.299189 / 1.841788 (-0.542598) | 14.752880 / 8.074308 (6.678572) | 13.607676 / 10.191392 (3.416284) | 0.150773 / 0.680424 (-0.529650) | 0.016701 / 0.534201 (-0.517500) | 0.379507 / 0.579283 (-0.199776) | 0.389401 / 0.434364 (-0.044963) | 0.444199 / 0.540337 (-0.096139) | 0.524264 / 1.386936 (-0.862672) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#12be850b36c0b9d4841af86c75e08c0a726ffb5c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008694 / 0.011353 (-0.002659) | 0.004549 / 0.011008 (-0.006459) | 0.101164 / 0.038508 (0.062656) | 0.029644 / 0.023109 (0.006535) | 0.294849 / 0.275898 (0.018950) | 0.366755 / 0.323480 (0.043275) | 0.007205 / 0.007986 (-0.000780) | 0.004255 / 0.004328 (-0.000074) | 0.077433 / 0.004250 (0.073183) | 0.038024 / 0.037052 (0.000972) | 0.310380 / 0.258489 (0.051891) | 0.347093 / 0.293841 (0.053252) | 0.033232 / 0.128546 (-0.095314) | 0.011404 / 0.075646 (-0.064242) | 0.323341 / 0.419271 (-0.095930) | 0.040586 / 0.043533 (-0.002946) | 0.296083 / 0.255139 (0.040944) | 0.321870 / 0.283200 (0.038671) | 0.087377 / 0.141683 (-0.054306) | 1.466869 / 1.452155 (0.014715) | 1.514763 / 1.492716 (0.022046) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.010272 / 0.018006 (-0.007734) | 0.414645 / 0.000490 (0.414155) | 0.003730 / 
0.000200 (0.003530) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024093 / 0.037411 (-0.013318) | 0.098718 / 0.014526 (0.084192) | 0.105526 / 0.176557 (-0.071030) | 0.141578 / 0.737135 (-0.595557) | 0.109679 / 0.296338 (-0.186660) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412907 / 0.215209 (0.197698) | 4.134934 / 2.077655 (2.057280) | 1.881180 / 1.504120 (0.377060) | 1.693207 / 1.541195 (0.152012) | 1.753725 / 1.468490 (0.285235) | 0.693077 / 4.584777 (-3.891700) | 3.367409 / 3.745712 (-0.378303) | 2.749035 / 5.269862 (-2.520827) | 1.565015 / 4.565676 (-3.000662) | 0.082609 / 0.424275 (-0.341666) | 0.012500 / 0.007607 (0.004892) | 0.523619 / 0.226044 (0.297575) | 5.250188 / 2.268929 (2.981259) | 2.314255 / 55.444624 (-53.130369) | 1.962357 / 6.876477 (-4.914120) | 2.020632 / 2.142072 (-0.121441) | 0.812504 / 4.805227 (-3.992724) | 0.149921 / 6.500664 (-6.350743) | 0.065816 / 0.075469 (-0.009653) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.230811 / 1.841788 (-0.610977) | 14.008566 / 8.074308 (5.934258) | 14.371285 / 10.191392 (4.179893) | 0.166323 / 0.680424 (-0.514101) | 0.029702 / 0.534201 (-0.504499) | 0.408629 / 0.579283 (-0.170654) | 0.410529 / 0.434364 (-0.023835) | 0.484482 / 0.540337 (-0.055855) | 0.572360 / 1.386936 (-0.814576) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006873 / 0.011353 (-0.004480) | 0.004609 / 0.011008 (-0.006400) | 0.075492 / 0.038508 (0.036984) | 0.028560 / 0.023109 (0.005450) | 0.340321 / 0.275898 (0.064423) | 0.376758 / 0.323480 (0.053278) | 0.005271 / 0.007986 (-0.002715) | 0.004786 / 0.004328 (0.000457) | 0.074843 / 0.004250 (0.070592) | 0.041072 / 0.037052 (0.004019) | 0.339952 / 0.258489 (0.081463) | 0.384375 / 0.293841 (0.090534) | 0.031771 / 0.128546 (-0.096775) | 0.011607 / 0.075646 (-0.064039) | 0.084338 / 0.419271 (-0.334933) | 0.042251 / 0.043533 (-0.001282) | 0.338904 / 0.255139 (0.083765) | 0.365360 / 0.283200 (0.082160) | 0.093151 / 0.141683 (-0.048532) | 1.449833 / 1.452155 (-0.002322) | 1.601946 / 1.492716 (0.109229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225149 / 0.018006 (0.207142) | 0.409855 / 0.000490 (0.409365) | 0.000384 / 0.000200 (0.000184) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025914 / 0.037411 (-0.011497) | 0.100443 / 0.014526 (0.085917) | 0.108557 / 0.176557 (-0.067999) | 0.150338 / 0.737135 (-0.586798) | 0.111472 / 0.296338 (-0.184866) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440221 / 0.215209 (0.225012) | 4.409268 / 2.077655 (2.331613) | 2.096008 / 1.504120 (0.591888) | 1.849443 / 1.541195 (0.308248) | 1.934901 / 1.468490 (0.466410) | 0.704072 / 4.584777 (-3.880705) | 3.371370 / 3.745712 (-0.374343) | 3.185478 / 5.269862 (-2.084384) | 1.514541 / 4.565676 (-3.051135) | 0.083724 / 0.424275 (-0.340551) | 0.012674 / 0.007607 (0.005067) | 0.542155 / 0.226044 (0.316111) | 5.413456 / 2.268929 (3.144528) | 2.508567 / 55.444624 (-52.936057) | 2.163235 / 6.876477 (-4.713242) | 2.193914 / 2.142072 (0.051842) | 0.810955 / 4.805227 (-3.994272) | 0.152769 / 6.500664 (-6.347895) | 0.068009 / 0.075469 (-0.007460) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272511 / 1.841788 (-0.569276) | 14.334861 / 8.074308 (6.260553) | 13.555445 / 10.191392 (3.364053) | 0.160520 / 0.680424 (-0.519904) | 0.018363 / 0.534201 (-0.515838) | 0.384937 / 0.579283 (-0.194346) | 0.409138 / 0.434364 (-0.025225) | 0.484037 / 0.540337 (-0.056300) | 0.565595 / 1.386936 (-0.821341) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#23f076ef0187a4009d3c62b14a02e146baf0e35f \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010077 / 0.011353 (-0.001276) | 0.005650 / 0.011008 (-0.005359) | 0.101285 / 0.038508 (0.062777) | 0.039571 / 0.023109 (0.016462) | 0.291855 / 0.275898 (0.015957) | 0.363582 / 0.323480 (0.040102) | 0.008513 / 0.007986 (0.000527) | 0.004472 / 0.004328 (0.000144) | 0.077314 / 0.004250 (0.073064) | 0.050707 / 0.037052 (0.013654) | 0.317282 / 0.258489 (0.058792) | 0.342348 / 0.293841 (0.048507) | 0.042951 / 0.128546 (-0.085595) | 0.012295 / 0.075646 (-0.063351) | 0.337269 / 0.419271 (-0.082003) | 0.048953 / 0.043533 (0.005420) | 0.292547 / 0.255139 (0.037408) | 0.325436 / 0.283200 (0.042236) | 0.111859 / 0.141683 (-0.029824) | 1.501958 / 1.452155 (0.049804) | 1.522281 / 1.492716 (0.029565) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011775 / 0.018006 (-0.006231) | 0.513283 / 0.000490 (0.512793) | 0.002941 / 0.000200 (0.002741) | 0.000099 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028702 / 0.037411 (-0.008710) | 0.108465 / 0.014526 (0.093940) | 0.121806 / 0.176557 (-0.054750) | 0.158424 / 0.737135 (-0.578712) | 0.128077 / 0.296338 (-0.168262) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395392 / 0.215209 
(0.180183) | 3.944138 / 2.077655 (1.866483) | 1.773698 / 1.504120 (0.269578) | 1.588907 / 1.541195 (0.047712) | 1.697794 / 1.468490 (0.229304) | 0.690281 / 4.584777 (-3.894496) | 3.819661 / 3.745712 (0.073948) | 3.228006 / 5.269862 (-2.041856) | 1.755625 / 4.565676 (-2.810052) | 0.083169 / 0.424275 (-0.341106) | 0.012337 / 0.007607 (0.004730) | 0.504730 / 0.226044 (0.278686) | 5.016916 / 2.268929 (2.747988) | 2.245484 / 55.444624 (-53.199141) | 1.911682 / 6.876477 (-4.964795) | 1.957659 / 2.142072 (-0.184413) | 0.818361 / 4.805227 (-3.986866) | 0.162386 / 6.500664 (-6.338279) | 0.062461 / 0.075469 (-0.013008) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197654 / 1.841788 (-0.644134) | 15.465611 / 8.074308 (7.391303) | 14.409126 / 10.191392 (4.217734) | 0.171776 / 0.680424 (-0.508647) | 0.028749 / 0.534201 (-0.505452) | 0.439666 / 0.579283 (-0.139618) | 0.445159 / 0.434364 (0.010795) | 0.543992 / 0.540337 (0.003655) | 0.643911 / 1.386936 (-0.743025) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007036 / 0.011353 (-0.004317) | 0.005273 / 0.011008 (-0.005735) | 0.075314 / 0.038508 (0.036806) | 0.033075 / 0.023109 (0.009966) | 0.350133 / 0.275898 (0.074235) | 0.399366 / 0.323480 (0.075886) | 0.005945 / 0.007986 (-0.002041) | 0.004276 / 0.004328 (-0.000052) | 0.074975 / 0.004250 (0.070725) | 0.051758 / 0.037052 (0.014706) | 0.355077 / 0.258489 (0.096588) | 0.430296 / 0.293841 (0.136455) | 0.036257 / 0.128546 (-0.092290) | 0.012376 / 0.075646 (-0.063270) | 0.087441 / 0.419271 (-0.331830) | 0.049066 / 0.043533 (0.005534) | 0.339867 / 0.255139 (0.084728) | 0.384379 / 0.283200 (0.101179) | 0.104843 / 0.141683 (-0.036840) | 1.498897 / 1.452155 (0.046742) | 1.551400 / 1.492716 (0.058684) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.334504 / 0.018006 (0.316498) | 0.516551 / 0.000490 (0.516061) | 0.000450 / 0.000200 (0.000250) | 0.000057 / 0.000054 
(0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029313 / 0.037411 (-0.008099) | 0.110667 / 0.014526 (0.096141) | 0.124001 / 0.176557 (-0.052556) | 0.159154 / 0.737135 (-0.577981) | 0.129503 / 0.296338 (-0.166836) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416749 / 0.215209 (0.201540) | 4.171163 / 2.077655 (2.093508) | 1.981071 / 1.504120 (0.476951) | 1.788303 / 1.541195 (0.247108) | 1.912118 / 1.468490 (0.443628) | 0.708764 / 4.584777 (-3.876013) | 3.815222 / 3.745712 (0.069510) | 2.121633 / 5.269862 (-3.148229) | 1.347866 / 4.565676 (-3.217811) | 0.086340 / 0.424275 (-0.337935) | 0.012646 / 0.007607 (0.005039) | 0.525286 / 0.226044 (0.299241) | 5.254922 / 2.268929 (2.985994) | 2.488743 / 55.444624 (-52.955881) | 2.128069 / 6.876477 (-4.748408) | 2.180358 / 2.142072 (0.038286) | 0.841011 / 4.805227 (-3.964216) | 0.168732 / 6.500664 (-6.331932) | 0.065559 / 0.075469 (-0.009910) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.270518 / 1.841788 (-0.571270) | 15.557563 / 8.074308 (7.483255) | 13.660757 / 10.191392 (3.469365) | 0.185636 / 0.680424 (-0.494788) | 0.018152 / 0.534201 (-0.516049) | 0.423553 / 0.579283 (-0.155730) | 0.412718 / 0.434364 (-0.021646) | 0.528455 / 0.540337 (-0.011882) | 0.635274 / 1.386936 (-0.751662) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d40f05ef827c52344a2c6e83f7c8d13bb6b660d3 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011194 / 0.011353 (-0.000159) | 0.006344 / 0.011008 (-0.004664) | 0.122013 / 0.038508 (0.083505) | 0.044323 / 0.023109 (0.021214) | 0.356665 / 0.275898 (0.080767) | 0.439871 / 0.323480 (0.116391) | 0.010694 / 0.007986 (0.002709) | 0.004648 / 0.004328 (0.000320) | 0.091140 / 0.004250 (0.086890) | 0.052457 / 0.037052 (0.015404) | 0.369282 / 0.258489 (0.110793) | 0.403279 / 0.293841 (0.109438) | 0.054075 / 0.128546 (-0.074472) | 0.014484 / 0.075646 (-0.061162) | 0.407932 / 0.419271 (-0.011340) | 0.060681 / 0.043533 (0.017148) | 0.350889 / 0.255139 (0.095750) | 0.392041 / 0.283200 (0.108841) | 0.121252 / 0.141683 (-0.020431) | 1.809527 / 1.452155 (0.357373) | 1.835141 / 1.492716 (0.342425) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227372 / 0.018006 (0.209366) | 0.481908 / 0.000490 (0.481418) | 0.007262 / 0.000200 (0.007062) | 0.000148 / 0.000054 (0.000093) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031039 / 0.037411 (-0.006372) | 0.133947 / 0.014526 (0.119421) | 0.141935 / 0.176557 (-0.034622) | 0.197854 / 0.737135 (-0.539281) | 0.152393 / 0.296338 (-0.143945) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.517400 / 0.215209 (0.302191) | 4.899972 / 2.077655 (2.822317) | 2.171023 / 1.504120 (0.666903) | 2.008706 / 1.541195 (0.467511) | 1.988777 / 1.468490 (0.520287) | 0.859872 / 4.584777 (-3.724905) | 4.673923 / 3.745712 (0.928211) | 2.703189 / 5.269862 (-2.566672) | 1.891680 / 4.565676 (-2.673997) | 0.109601 / 0.424275 (-0.314674) | 0.014622 / 0.007607 (0.007015) | 0.618990 / 0.226044 (0.392946) | 6.255608 / 2.268929 (3.986679) | 2.822199 / 55.444624 (-52.622425) | 2.457684 / 6.876477 (-4.418793) | 2.500041 / 2.142072 (0.357968) | 1.054529 / 4.805227 (-3.750698) | 0.209501 / 6.500664 (-6.291163) | 0.074929 / 0.075469 (-0.000540) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.532780 / 1.841788 (-0.309008) | 19.159455 / 8.074308 (11.085147) | 17.817063 / 10.191392 (7.625671) | 0.194078 / 0.680424 (-0.486346) | 0.038211 / 0.534201 (-0.495990) | 0.537366 / 0.579283 (-0.041917) | 0.538995 / 0.434364 (0.104631) | 
0.679431 / 0.540337 (0.139094) | 0.801960 / 1.386936 (-0.584976) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008729 / 0.011353 (-0.002624) | 0.005711 / 0.011008 (-0.005297) | 0.091570 / 0.038508 (0.053062) | 0.039805 / 0.023109 (0.016696) | 0.413507 / 0.275898 (0.137609) | 0.456342 / 0.323480 (0.132862) | 0.006201 / 0.007986 (-0.001785) | 0.009700 / 0.004328 (0.005372) | 0.089146 / 0.004250 (0.084896) | 0.057543 / 0.037052 (0.020490) | 0.420806 / 0.258489 (0.162317) | 0.471962 / 0.293841 (0.178121) | 0.043940 / 0.128546 (-0.084606) | 0.014457 / 0.075646 (-0.061190) | 0.106674 / 0.419271 (-0.312598) | 0.058930 / 0.043533 (0.015397) | 0.419111 / 0.255139 (0.163972) | 0.452974 / 0.283200 (0.169774) | 0.124573 / 0.141683 (-0.017110) | 1.864753 / 1.452155 (0.412599) | 1.935387 / 1.492716 (0.442670) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275657 / 0.018006 (0.257651) | 0.498096 / 0.000490 (0.497606) | 0.000480 / 0.000200 (0.000280) | 0.000066 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034377 / 0.037411 (-0.003035) | 0.138050 / 0.014526 (0.123524) | 0.153718 / 0.176557 (-0.022838) | 0.201445 / 0.737135 (-0.535690) | 0.160346 / 0.296338 (-0.135992) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.540670 / 0.215209 (0.325461) | 5.376291 / 2.077655 (3.298636) | 2.581799 / 1.504120 (1.077679) | 2.328858 / 1.541195 
(0.787663) | 2.446458 / 1.468490 (0.977968) | 0.923005 / 4.584777 (-3.661772) | 4.815977 / 3.745712 (1.070265) | 4.205725 / 5.269862 (-1.064137) | 2.400466 / 4.565676 (-2.165211) | 0.107207 / 0.424275 (-0.317068) | 0.015427 / 0.007607 (0.007819) | 0.657267 / 0.226044 (0.431222) | 6.491256 / 2.268929 (4.222327) | 3.179099 / 55.444624 (-52.265525) | 2.722434 / 6.876477 (-4.154042) | 2.788202 / 2.142072 (0.646129) | 1.060016 / 4.805227 (-3.745211) | 0.206899 / 6.500664 (-6.293766) | 0.077868 / 0.075469 (0.002399) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.567894 / 1.841788 (-0.273893) | 19.314330 / 8.074308 (11.240022) | 17.597614 / 10.191392 (7.406222) | 0.195777 / 0.680424 (-0.484647) | 0.022160 / 0.534201 (-0.512041) | 0.530592 / 0.579283 (-0.048691) | 0.508591 / 0.434364 (0.074227) | 0.619794 / 0.540337 (0.079457) | 0.749773 / 1.386936 (-0.637163) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8637141a67639c510294620306c9bb25d31d34ef \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012431 / 0.011353 (0.001078) | 0.006526 / 0.011008 (-0.004482) | 0.132266 / 0.038508 (0.093757) | 0.043199 / 0.023109 (0.020089) | 0.405230 / 0.275898 (0.129332) | 0.494643 / 0.323480 (0.171163) | 0.009927 / 0.007986 (0.001941) | 0.005227 / 0.004328 (0.000899) | 0.110914 / 0.004250 (0.106664) | 0.047815 / 0.037052 (0.010763) | 0.419099 / 0.258489 (0.160610) | 0.463405 / 0.293841 (0.169564) | 0.057858 / 0.128546 (-0.070688) | 0.018918 / 0.075646 (-0.056728) | 0.450584 / 0.419271 (0.031313) | 0.060457 / 0.043533 (0.016924) | 0.408234 / 0.255139 (0.153095) | 0.433722 / 0.283200 (0.150523) | 0.119403 / 0.141683 (-0.022280) | 1.966742 / 1.452155 (0.514587) | 1.980685 / 1.492716 (0.487969) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292853 / 0.018006 (0.274847) | 0.619697 / 0.000490 (0.619207) | 
0.002135 / 0.000200 (0.001935) | 0.000117 / 0.000054 (0.000062) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031283 / 0.037411 (-0.006129) | 0.128649 / 0.014526 (0.114123) | 0.150116 / 0.176557 (-0.026441) | 0.187605 / 0.737135 (-0.549530) | 0.153334 / 0.296338 (-0.143005) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.659660 / 0.215209 (0.444451) | 6.459749 / 2.077655 (4.382094) | 2.764566 / 1.504120 (1.260446) | 2.362630 / 1.541195 (0.821435) | 2.426421 / 1.468490 (0.957931) | 1.282407 / 4.584777 (-3.302370) | 5.668865 / 3.745712 (1.923153) | 3.236255 / 5.269862 (-2.033606) | 2.248836 / 4.565676 (-2.316841) | 0.145861 / 0.424275 (-0.278414) | 0.015707 / 0.007607 (0.008100) | 0.805218 / 0.226044 (0.579174) | 8.146831 / 2.268929 (5.877903) | 3.506283 / 55.444624 (-51.938341) | 2.736682 / 6.876477 (-4.139795) | 2.959039 / 2.142072 (0.816967) | 1.528428 / 4.805227 (-3.276799) | 0.270980 / 6.500664 (-6.229684) | 0.086824 / 0.075469 (0.011355) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.682506 / 1.841788 (-0.159282) | 18.844103 / 8.074308 (10.769795) | 21.008471 / 10.191392 (10.817079) | 0.258372 / 0.680424 (-0.422052) | 0.046505 / 0.534201 (-0.487696) | 0.574760 / 0.579283 (-0.004523) | 0.663745 / 0.434364 (0.229381) | 0.702411 / 0.540337 (0.162074) | 0.824024 / 1.386936 (-0.562912) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010016 / 0.011353 (-0.001337) | 0.007459 / 0.011008 (-0.003549) | 0.103954 / 0.038508 (0.065446) | 0.036363 / 0.023109 (0.013254) | 0.464079 / 0.275898 (0.188181) | 0.504730 / 0.323480 (0.181250) | 0.007865 / 0.007986 (-0.000121) | 0.005210 / 0.004328 (0.000882) | 0.105018 / 0.004250 (0.100767) | 0.062191 / 0.037052 (0.025139) | 0.483304 / 0.258489 (0.224815) | 0.547030 / 0.293841 (0.253189) | 0.055436 / 0.128546 (-0.073110) | 0.021073 / 0.075646 (-0.054573) | 0.120952 / 0.419271 (-0.298319) | 0.075593 / 0.043533 (0.032060) | 0.459930 / 0.255139 (0.204791) | 0.486924 / 0.283200 (0.203724) | 0.129465 / 0.141683 (-0.012218) | 1.902322 / 1.452155 (0.450167) | 1.980809 / 1.492716 (0.488092) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.259263 / 0.018006 (0.241257) | 0.596703 / 0.000490 (0.596213) | 0.004520 / 0.000200 (0.004320) | 0.000124 / 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032802 / 0.037411 (-0.004609) | 0.138751 / 0.014526 (0.124225) | 0.147106 / 0.176557 (-0.029451) | 0.194791 / 0.737135 (-0.542345) | 0.152643 / 0.296338 (-0.143696) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.678455 / 0.215209 (0.463246) | 6.673643 / 2.077655 (4.595989) | 2.943368 / 1.504120 (1.439248) | 2.591223 / 1.541195 (1.050029) | 2.741097 / 1.468490 (1.272607) | 1.261178 / 4.584777 (-3.323599) | 5.773853 / 3.745712 (2.028141) | 3.171559 / 5.269862 (-2.098303) | 2.124898 / 4.565676 (-2.440779) | 0.161849 / 0.424275 (-0.262426) | 0.015498 / 0.007607 (0.007891) | 0.857984 / 0.226044 (0.631940) | 8.456946 / 2.268929 (6.188018) | 3.818787 / 55.444624 (-51.625837) | 3.009953 / 6.876477 (-3.866523) | 3.113006 / 2.142072 (0.970934) | 1.477299 / 4.805227 (-3.327929) | 0.267207 / 6.500664 (-6.233457) | 0.087590 / 0.075469 (0.012121) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.757389 / 1.841788 (-0.084398) | 19.287690 / 8.074308 (11.213381) | 21.601991 / 10.191392 (11.410599) | 0.260464 / 0.680424 (-0.419960) | 0.028552 / 0.534201 (-0.505649) | 0.558934 / 0.579283 (-0.020349) | 0.673651 / 0.434364 (0.239287) | 0.714448 / 0.540337 (0.174111) | 0.857608 / 1.386936 (-0.529328) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2d3bd0134de444ffd10c4a39873dbf9aa3732c08 \"CML watermark\")\n", "Ready for review @mariosasko, LMKWYT :)\r\n\r\nSorry it tooks me a few tries to fix the CI - I ended up not trying to use the latest `torch` version in the CI.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009474 / 0.011353 (-0.001878) | 0.005507 / 0.011008 (-0.005501) | 0.101219 / 0.038508 (0.062711) | 0.035591 / 0.023109 (0.012481) | 0.305841 / 0.275898 (0.029943) | 0.339135 / 0.323480 (0.015656) | 0.007920 / 0.007986 (-0.000066) | 0.004252 / 0.004328 (-0.000077) | 0.076912 / 0.004250 (0.072662) | 0.041923 / 0.037052 (0.004871) | 0.301405 / 0.258489 (0.042916) | 0.356488 / 0.293841 (0.062647) | 0.039342 / 0.128546 (-0.089204) | 0.012711 / 0.075646 (-0.062935) | 0.334193 / 0.419271 (-0.085079) | 0.049112 / 0.043533 (0.005579) | 0.301484 / 0.255139 (0.046345) | 0.315306 / 0.283200 (0.032106) | 0.102959 / 0.141683 (-0.038724) | 1.420677 / 1.452155 (-0.031478) | 1.549493 / 1.492716 (0.056777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284639 / 0.018006 (0.266633) | 0.501226 / 0.000490 (0.500736) | 0.004328 / 0.000200 (0.004128) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027034 / 0.037411 (-0.010377) | 0.108066 / 0.014526 (0.093540) | 0.122106 / 0.176557 (-0.054451) | 0.162908 / 0.737135 (-0.574227) | 0.127233 / 0.296338 (-0.169105) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled 
read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.394023 / 0.215209 (0.178813) | 3.932729 / 2.077655 (1.855075) | 1.771195 / 1.504120 (0.267075) | 1.582788 / 1.541195 (0.041594) | 1.703219 / 1.468490 (0.234728) | 0.702629 / 4.584777 (-3.882148) | 3.780187 / 3.745712 (0.034475) | 2.180433 / 5.269862 (-3.089428) | 1.504806 / 4.565676 (-3.060871) | 0.085289 / 0.424275 (-0.338986) | 0.012580 / 0.007607 (0.004973) | 0.515408 / 0.226044 (0.289363) | 5.010613 / 2.268929 (2.741685) | 2.256648 / 55.444624 (-53.187976) | 1.914971 / 6.876477 (-4.961505) | 2.038436 / 2.142072 (-0.103636) | 0.846240 / 4.805227 (-3.958987) | 0.164920 / 6.500664 (-6.335744) | 0.063899 / 0.075469 (-0.011570) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224160 / 1.841788 (-0.617627) | 15.089995 / 8.074308 (7.015687) | 14.777003 / 10.191392 (4.585611) | 0.169873 / 0.680424 (-0.510551) | 0.029233 / 0.534201 (-0.504968) | 0.445424 / 0.579283 (-0.133859) | 0.439194 / 0.434364 (0.004830) | 0.536370 / 0.540337 (-0.003968) | 0.636694 / 1.386936 (-0.750242) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008230 / 0.011353 (-0.003122) | 0.005499 / 0.011008 (-0.005509) | 0.076108 / 0.038508 (0.037600) | 0.037444 / 0.023109 (0.014335) | 0.364420 / 0.275898 (0.088522) | 0.412308 / 0.323480 (0.088828) | 0.006704 / 0.007986 (-0.001282) | 0.004359 / 0.004328 (0.000031) | 0.075080 / 0.004250 (0.070830) | 0.057698 / 0.037052 (0.020646) | 0.366088 / 0.258489 (0.107599) | 0.409583 / 0.293841 (0.115742) | 0.037882 / 0.128546 (-0.090664) | 0.012421 / 0.075646 (-0.063225) | 0.087701 / 0.419271 (-0.331571) | 0.050669 / 0.043533 (0.007136) | 0.351139 / 0.255139 (0.096000) | 0.384340 / 0.283200 (0.101140) | 0.108097 / 0.141683 (-0.033586) | 1.445010 / 1.452155 (-0.007145) | 1.559570 / 1.492716 (0.066853) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row 
|\n|--------|---|---|---|---|\n| new / old (diff) | 0.324114 / 0.018006 (0.306108) | 0.549134 / 0.000490 (0.548644) | 0.003544 / 0.000200 (0.003344) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030646 / 0.037411 (-0.006765) | 0.108573 / 0.014526 (0.094047) | 0.125291 / 0.176557 (-0.051266) | 0.174798 / 0.737135 (-0.562338) | 0.128000 / 0.296338 (-0.168338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428881 / 0.215209 (0.213672) | 4.282320 / 2.077655 (2.204665) | 2.061462 / 1.504120 (0.557342) | 1.858477 / 1.541195 (0.317283) | 1.971646 / 1.468490 (0.503156) | 0.723631 / 4.584777 (-3.861146) | 3.822376 / 3.745712 (0.076664) | 2.174427 / 5.269862 (-3.095434) | 1.386066 / 4.565676 (-3.179611) | 0.088391 / 0.424275 (-0.335884) | 0.012948 / 0.007607 (0.005341) | 0.524423 / 0.226044 (0.298378) | 5.249389 / 2.268929 (2.980460) | 2.528662 / 55.444624 (-52.915962) | 2.245329 / 6.876477 (-4.631147) | 2.402733 / 2.142072 (0.260660) | 0.868864 / 4.805227 (-3.936364) | 0.174066 / 6.500664 (-6.326598) | 0.066165 / 0.075469 (-0.009304) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.296922 / 1.841788 (-0.544865) | 15.814109 / 8.074308 (7.739801) | 14.086059 / 10.191392 (3.894667) | 0.190952 / 0.680424 (-0.489472) | 0.017679 / 0.534201 (-0.516522) | 0.428872 / 0.579283 (-0.150411) | 0.435399 / 0.434364 (0.001035) | 0.540856 / 0.540337 (0.000519) | 0.648904 / 1.386936 (-0.738032) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f401758c5019ede4404994d5d59220125984874d \"CML watermark\")\n" ]
null
[]
Speed up batched PyTorch DataLoader
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5512/timeline
I implemented `__getitems__` to speed up batched data loading in PyTorch. Closes https://github.com/huggingface/datasets/issues/5505
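For context, a minimal sketch (not the code from this PR) of the protocol in play: recent PyTorch releases (2.0+) probe map-style datasets for a `__getitems__` method and, when present, hand it the whole list of batch indices in one call instead of invoking `__getitem__` once per item. The dataset class below is illustrative only.

```python
# A minimal sketch of the __getitems__ batch-fetch protocol, assuming a
# recent PyTorch (2.0+) whose DataLoader fetcher probes for this method.
from torch.utils.data import DataLoader, Dataset

class BatchedDataset(Dataset):
    def __init__(self, data):
        self.data = data

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        # Per-item fallback path, used by older PyTorch versions.
        return self.data[idx]

    def __getitems__(self, indices):
        # One call per batch: a single lookup over all indices is much
        # cheaper than len(indices) separate __getitem__ calls for
        # Arrow-backed data.
        return [self.data[i] for i in indices]

loader = DataLoader(BatchedDataset(list(range(100))), batch_size=8)
for batch in loader:
    pass  # on PyTorch 2.0+, each batch arrives via one __getitems__ call
```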
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5512.diff", "html_url": "https://github.com/huggingface/datasets/pull/5512", "merged_at": "2023-02-19T18:27:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/5512.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5512" }
1,576,142,432
https://api.github.com/repos/huggingface/datasets/issues/5512/comments
PR_kwDODunzps5JhtQy
null
5,512
https://api.github.com/repos/huggingface/datasets/issues/5512/events
true
closed
2023-02-08T10:18:41Z
null
https://api.github.com/repos/huggingface/datasets/issues/5511
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5511/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5511/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
https://github.com/huggingface/datasets/issues/5511
[]
false
2023-12-28T18:21:01Z
2023-02-08T10:35:48Z
null
[ "Update `datasets` or downgrade `huggingface-hub` ;)\r\n\r\nThe `huggingface-hub` lib did a breaking change a few months ago, and you're using an old version of `datasets` that does't support it", "Awesome thanks a lot! Everything works just fine with `datasets==2.9.0` :-) ", "Getting same error with latest versions.\r\n\r\n\r\n```shell\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nCell In[99], line 1\r\n----> 1 dataset.push_to_hub(\"mirfan899/kids_phoneme_asr\")\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:3538, in Dataset.push_to_hub(self, repo_id, split, private, token, branch, shard_size, embed_external_files)\r\n 3493 def push_to_hub(\r\n 3494 self,\r\n 3495 repo_id: str,\r\n (...)\r\n 3501 embed_external_files: bool = True,\r\n 3502 ):\r\n 3503 \"\"\"Pushes the dataset to the hub.\r\n 3504 The dataset is pushed using HTTP requests and does not need to have neither git or git-lfs installed.\r\n 3505 \r\n (...)\r\n 3536 ```\r\n 3537 \"\"\"\r\n-> 3538 repo_id, split, uploaded_size, dataset_nbytes = self._push_parquet_shards_to_hub(\r\n 3539 repo_id=repo_id,\r\n 3540 split=split,\r\n 3541 private=private,\r\n 3542 token=token,\r\n 3543 branch=branch,\r\n 3544 shard_size=shard_size,\r\n 3545 embed_external_files=embed_external_files,\r\n 3546 )\r\n 3547 organization, dataset_name = repo_id.split(\"/\")\r\n 3548 info_to_dump = self.info.copy()\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:3474, in Dataset._push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, shard_size, embed_external_files)\r\n 3472 shard.to_parquet(buffer)\r\n 3473 uploaded_size += buffer.tell()\r\n-> 3474 _retry(\r\n 3475 api.upload_file,\r\n 3476 func_kwargs=dict(\r\n 3477 path_or_fileobj=buffer.getvalue(),\r\n 3478 path_in_repo=path_in_repo(index),\r\n 3479 repo_id=repo_id,\r\n 3480 token=token,\r\n 3481 repo_type=\"dataset\",\r\n 3482 revision=branch,\r\n 3483 identical_ok=True,\r\n 3484 ),\r\n 3485 exceptions=HTTPError,\r\n 3486 status_codes=[504],\r\n 3487 base_wait_time=2.0,\r\n 3488 max_retries=5,\r\n 3489 max_wait_time=20.0,\r\n 3490 )\r\n 3491 return repo_id, split, uploaded_size, dataset_nbytes\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py:330, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)\r\n 328 while True:\r\n 329 try:\r\n--> 330 return func(*func_args, **func_kwargs)\r\n 331 except exceptions as err:\r\n 332 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:120, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)\r\n 117 if check_use_auth_token:\r\n 118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)\r\n--> 120 return fn(*args, **kwargs)\r\n\r\nTypeError: HfApi.upload_file() got an unexpected keyword argument 'identical_ok'\r\n```", "Feel free to update `datasets` and `huggingface-hub`, it should fix it :)", "I went ahead and upgraded both datasets and hub and still getting the same error\r\n", "Which version do you have ? It's been a while since it has been fixed", "huggingface 0.0.1\r\nhuggingface-hub 0.17.1\r\ndatasets 2.14.5\r\n\r\nstill has the issue!!", "I face the same issue even after upgrading :/" ]
completed
[]
Creating a dummy dataset from a bigger one
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5511/timeline
### Describe the bug

I often want to create a dummy dataset from a bigger dataset for fast iteration when training. However, I'm having a hard time doing this especially when trying to upload the dataset to the Hub.

### Steps to reproduce the bug

```python
from datasets import load_dataset

dataset = load_dataset("lambdalabs/pokemon-blip-captions")
dataset["train"] = dataset["train"].select(range(20))
dataset.push_to_hub("patrickvonplaten/dummy_image_data")
```

gives:

```
~/python_bin/datasets/arrow_dataset.py in _push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, max_shard_size, embed_external_files)
   4003                 base_wait_time=2.0,
   4004                 max_retries=5,
-> 4005                 max_wait_time=20.0,
   4006             )
   4007         return repo_id, split, uploaded_size, dataset_nbytes

~/python_bin/datasets/utils/file_utils.py in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)
    328     while True:
    329         try:
--> 330             return func(*func_args, **func_kwargs)
    331         except exceptions as err:
    332             if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):

~/hf/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py in _inner_fn(*args, **kwargs)
    122         )
    123
--> 124         return fn(*args, **kwargs)
    125
    126     return _inner_fn  # type: ignore

TypeError: upload_file() got an unexpected keyword argument 'identical_ok'
```

### Expected behavior

I would have expected this to work. It's for me the most intuitive way of creating a dummy dataset.

### Environment info

```
- `datasets` version: 2.1.1.dev0
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.3
- PyArrow version: 11.0.0
- Pandas version: 1.3.5
```
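For completeness, a hedged sketch of the same workflow once `datasets` and `huggingface-hub` are upgraded to mutually compatible versions (the thread above traces the `TypeError` to an old `datasets` calling a `huggingface-hub` API that changed). The shuffle step is an optional tweak not present in the report:

```python
# A sketch assuming recent, mutually compatible `datasets` and
# `huggingface-hub` releases. Shuffling before selecting yields a more
# representative dummy subset than taking the first 20 rows.
from datasets import load_dataset

dataset = load_dataset("lambdalabs/pokemon-blip-captions")
dataset["train"] = dataset["train"].shuffle(seed=42).select(range(20))
dataset.push_to_hub("patrickvonplaten/dummy_image_data")  # repo id from the report
```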
https://api.github.com/repos/huggingface/datasets
null
1,575,851,768
https://api.github.com/repos/huggingface/datasets/issues/5511/comments
I_kwDODunzps5d7Zb4
null
5,511
https://api.github.com/repos/huggingface/datasets/issues/5511/events
false
open
2023-02-07T23:30:26Z
null
https://api.github.com/repos/huggingface/datasets/issues/5510
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5510/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5510/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/81822489?v=4", "events_url": "https://api.github.com/users/filip-halt/events{/privacy}", "followers_url": "https://api.github.com/users/filip-halt/followers", "following_url": "https://api.github.com/users/filip-halt/following{/other_user}", "gists_url": "https://api.github.com/users/filip-halt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/filip-halt", "id": 81822489, "login": "filip-halt", "node_id": "MDQ6VXNlcjgxODIyNDg5", "organizations_url": "https://api.github.com/users/filip-halt/orgs", "received_events_url": "https://api.github.com/users/filip-halt/received_events", "repos_url": "https://api.github.com/users/filip-halt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/filip-halt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/filip-halt/subscriptions", "type": "User", "url": "https://api.github.com/users/filip-halt" }
https://github.com/huggingface/datasets/pull/5510
[]
false
2023-02-24T16:45:09Z
null
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5510). All of your documentation changes will be reflected on that endpoint.", "To the maintainer, sorry about the repeated run requests for formatting. Missed the `make style` outlined in contributing guidelines. ", "Anything I can do to get the workflow to run? @lhoestq ", "cc @mariosasko \r\n\r\n> Anything I can do to get the workflow to run?\r\n\r\nYou can merge `main` into your branch to fix code formatting (we switched from isort+flake8 to ruff this week), and then run `make style`", "I believe that should be good. @mariosasko" ]
null
[]
Milvus integration for search
NONE
https://api.github.com/repos/huggingface/datasets/issues/5510/timeline
Signed-off-by: Filip Haltmayer <filip.haltmayer@zilliz.com>
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5510.diff", "html_url": "https://github.com/huggingface/datasets/pull/5510", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5510.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5510" }
1,575,191,549
https://api.github.com/repos/huggingface/datasets/issues/5510/comments
PR_kwDODunzps5JehbR
null
5,510
https://api.github.com/repos/huggingface/datasets/issues/5510/events
true
open
2023-02-07T11:42:40Z
null
https://api.github.com/repos/huggingface/datasets/issues/5509
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5509/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5509/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/14248012?v=4", "events_url": "https://api.github.com/users/LoicGrobol/events{/privacy}", "followers_url": "https://api.github.com/users/LoicGrobol/followers", "following_url": "https://api.github.com/users/LoicGrobol/following{/other_user}", "gists_url": "https://api.github.com/users/LoicGrobol/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LoicGrobol", "id": 14248012, "login": "LoicGrobol", "node_id": "MDQ6VXNlcjE0MjQ4MDEy", "organizations_url": "https://api.github.com/users/LoicGrobol/orgs", "received_events_url": "https://api.github.com/users/LoicGrobol/received_events", "repos_url": "https://api.github.com/users/LoicGrobol/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LoicGrobol/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LoicGrobol/subscriptions", "type": "User", "url": "https://api.github.com/users/LoicGrobol" }
https://github.com/huggingface/datasets/pull/5509
[]
false
2023-02-08T17:48:24Z
null
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5509). All of your documentation changes will be reflected on that endpoint.", "Hi! I've commented on the original issue to provide some context. Feel free to share your opinion there." ]
null
[]
Add a static `__all__` to `__init__.py` for typecheckers
NONE
https://api.github.com/repos/huggingface/datasets/issues/5509/timeline
This adds a static `__all__` field to `__init__.py`, allowing typecheckers to know which symbols are accessible from `datasets` at runtime. In particular [Pyright](https://github.com/microsoft/pylance-release/issues/2328#issuecomment-1029381258) seems to rely on this. At this point I have added all (modulo oversight) the symbols mentioned in the Reference part of [the docs](https://huggingface.co/docs/datasets), but that could be adjusted. As a side effect, only these symbols will be imported by `from datasets import *`, which may or may not be a good thing (and if it isn't, that's easy to fix). Another option would be to add a pyi stub, but I think `__all__` should be the most pythonic solution. This should fix #3841.
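To make the pattern concrete, a toy sketch of what such an `__init__.py` looks like. The submodule paths are real `datasets` modules, but the list below is a small illustrative subset, not the PR's actual `__all__`:

```python
# __init__.py - a toy sketch of a statically declared public API.
# A literal list of string names (rather than one computed at runtime)
# is what lets type checkers such as Pyright resolve
# `from datasets import load_dataset` without executing the package.
from .arrow_dataset import Dataset
from .dataset_dict import DatasetDict
from .load import load_dataset

__all__ = [
    "Dataset",
    "DatasetDict",
    "load_dataset",
]
```

As the PR notes, a side effect is that `from datasets import *` then imports exactly these names.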
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5509.diff", "html_url": "https://github.com/huggingface/datasets/pull/5509", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5509.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5509" }
1,574,177,320
https://api.github.com/repos/huggingface/datasets/issues/5509/comments
PR_kwDODunzps5JbH-u
null
5,509
https://api.github.com/repos/huggingface/datasets/issues/5509/events
true
closed
2023-02-06T21:08:58Z
null
https://api.github.com/repos/huggingface/datasets/issues/5508
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5508/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5508/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/13984157?v=4", "events_url": "https://api.github.com/users/joebhakim/events{/privacy}", "followers_url": "https://api.github.com/users/joebhakim/followers", "following_url": "https://api.github.com/users/joebhakim/following{/other_user}", "gists_url": "https://api.github.com/users/joebhakim/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/joebhakim", "id": 13984157, "login": "joebhakim", "node_id": "MDQ6VXNlcjEzOTg0MTU3", "organizations_url": "https://api.github.com/users/joebhakim/orgs", "received_events_url": "https://api.github.com/users/joebhakim/received_events", "repos_url": "https://api.github.com/users/joebhakim/repos", "site_admin": false, "starred_url": "https://api.github.com/users/joebhakim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joebhakim/subscriptions", "type": "User", "url": "https://api.github.com/users/joebhakim" }
https://github.com/huggingface/datasets/issues/5508
[]
false
2023-02-09T14:55:26Z
2023-02-09T14:55:26Z
null
[ "Hey, I'm a research engineer working on language modelling wanting to contribute to open source. I was wondering if I could give it a shot?", "Hi! This issue was fixed in https://github.com/huggingface/datasets/pull/4972, so please install `datasets>=2.5.0` to avoid it." ]
completed
[]
Saving a dataset after setting format to torch doesn't work, but only if filtering
NONE
https://api.github.com/repos/huggingface/datasets/issues/5508/timeline
### Describe the bug

Saving a dataset after setting the format to torch doesn't work, but only if filtering.

### Steps to reproduce the bug

```python
a = Dataset.from_dict({"b": [1, 2]})
a.set_format('torch')
a.save_to_disk("test_save")  # saves successfully
a.filter(None).save_to_disk("test_save_filter")  # does not
>> [...] TypeError: Provided `function` which is applied to all elements of table returns a `dict` of types [<class 'torch.Tensor'>]. When using `batched=True`, make sure provided `function` returns a `dict` of types like `(<class 'list'>, <class 'numpy.ndarray'>)`.
# note: skipping the format change to torch lets this work.
```

### Expected behavior

Saving to work.

### Environment info

- `datasets` version: 2.4.0
- Platform: Linux-6.1.9-arch1-1-x86_64-with-glibc2.36
- Python version: 3.10.9
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
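For releases predating the fix (the comments above point to PR #4972, shipped in `datasets>=2.5.0`), a hedged workaround sketch is to strip the torch formatting for the filter-and-save step, then restore it:

```python
# A workaround sketch for datasets<2.5.0, where filtering a torch-formatted
# dataset breaks save_to_disk. Filtering an unformatted view sidesteps the
# failing type check; the format is reinstated afterwards.
from datasets import Dataset

a = Dataset.from_dict({"b": [1, 2]})
a.set_format("torch")

filtered = a.with_format(None).filter(None)  # same keep-all filter as above, on plain Python objects
filtered.save_to_disk("test_save_filter")
filtered.set_format("torch")  # back to returning torch tensors downstream
```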
https://api.github.com/repos/huggingface/datasets
null
1,573,290,359
https://api.github.com/repos/huggingface/datasets/issues/5508/comments
I_kwDODunzps5dxoF3
null
5,508
https://api.github.com/repos/huggingface/datasets/issues/5508/events
false
open
2023-02-06T14:25:55Z
null
https://api.github.com/repos/huggingface/datasets/issues/5507
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5507/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5507/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/issues/5507
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
false
2023-02-28T18:19:18Z
null
null
[]
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Optimise behaviour in respect to indices mapping
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5507/timeline
_Originally [posted](https://huggingface.slack.com/archives/C02V51Q3800/p1675443873878489?thread_ts=1675418893.373479&cid=C02V51Q3800) on Slack_

Considering all this, perhaps for Datasets 3.0, we can do the following:

* [ ] have `contiguous=True` by default in `.shard` (requested in the survey, and it makes more sense for us since it doesn't create an indices mapping)
* [x] allow calling `save_to_disk` on "unflattened" datasets
* [ ] remove "hidden" expensive calls in `save_to_disk`, `unique`, `concatenate_datasets`, etc. For instance, instead of silently calling `flatten_indices` where it's needed, it's probably better to be explicit (considering how expensive these ops can be) and raise an error instead (see the sketch after this list)
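A small sketch of the operations under discussion, to make the cost model concrete:

```python
# Sketch of the indices-mapping cost model behind these points: `select`
# with a step writes an indices mapping (cheap now, paid on every later
# access), a contiguous shard needs no mapping at all, and
# `flatten_indices()` is the explicit rewrite that the "hidden" calls
# above currently perform silently.
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(1000))})

subset = ds.select(range(0, 1000, 2))                      # indices mapping created
shard = ds.shard(num_shards=10, index=0, contiguous=True)  # no indices mapping
materialized = subset.flatten_indices()                    # pay the rewrite cost once, explicitly
```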
https://api.github.com/repos/huggingface/datasets
null
1,572,667,036
https://api.github.com/repos/huggingface/datasets/issues/5507/comments
I_kwDODunzps5dvP6c
null
5,507
https://api.github.com/repos/huggingface/datasets/issues/5507/events
false
closed
2023-02-06T03:26:03Z
null
https://api.github.com/repos/huggingface/datasets/issues/5506
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5506/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5506/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/38166299?v=4", "events_url": "https://api.github.com/users/kheyer/events{/privacy}", "followers_url": "https://api.github.com/users/kheyer/followers", "following_url": "https://api.github.com/users/kheyer/following{/other_user}", "gists_url": "https://api.github.com/users/kheyer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kheyer", "id": 38166299, "login": "kheyer", "node_id": "MDQ6VXNlcjM4MTY2Mjk5", "organizations_url": "https://api.github.com/users/kheyer/orgs", "received_events_url": "https://api.github.com/users/kheyer/received_events", "repos_url": "https://api.github.com/users/kheyer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kheyer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kheyer/subscriptions", "type": "User", "url": "https://api.github.com/users/kheyer" }
https://github.com/huggingface/datasets/issues/5506
[]
false
2023-02-08T18:30:08Z
2023-02-08T18:30:07Z
null
[ "Hi ! `datasets` doesn't do batching - the PyTorch DataLoader does and is created by the `Trainer`. Do you pass other arguments to training_args with respect to data loading ?\r\n\r\nAlso we recently released `.to_iterable_dataset` that does pretty much what you implemented, but using contiguous shards to get a better speed:\r\n```python\r\nif use_iterable_dataset:\r\n num_shards = 100\r\n dataset = dataset.to_iterable_dataset(num_shards=num_shards)\r\n```", "This is the full set of training args passed. No training args were changed when switching dataset types.\r\n\r\n```python\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./checkpoints\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=256,\r\n save_steps=2000,\r\n save_total_limit=4,\r\n prediction_loss_only=True,\r\n report_to='none',\r\n gradient_accumulation_steps=6,\r\n fp16=True,\r\n max_steps=60000,\r\n lr_scheduler_type='linear',\r\n warmup_ratio=0.1,\r\n logging_steps=100,\r\n weight_decay=0.01,\r\n adam_beta1=0.9,\r\n adam_beta2=0.98,\r\n adam_epsilon=1e-6,\r\n learning_rate=1e-4\r\n)\r\n```", "I think the issue comes from `transformers`: https://github.com/huggingface/transformers/issues/21444", "Makes sense. Given that it's a `transformers` issue and already being tracked, I'll close this out." ]
completed
[]
IterableDataset and Dataset return different batch sizes when using Trainer with multiple GPUs
NONE
https://api.github.com/repos/huggingface/datasets/issues/5506/timeline
### Describe the bug I am training a Roberta model using 2 GPUs and the `Trainer` API with a batch size of 256. Initially I used a standard `Dataset`, but had issues with slow data loading. After reading [this issue](https://github.com/huggingface/datasets/issues/2252), I swapped to loading my dataset as contiguous shards and passing those to an `IterableDataset`. I observed an unexpected drop in GPU memory utilization, and found the batch size returned from the model had been cut in half. When using `Trainer` with 2 GPUs and a batch size of 256, `Dataset` returns a batch of size 512 (256 per GPU), while `IterableDataset` returns a batch size of 256 (256 total). My guess is `IterableDataset` isn't accounting for multiple GPUs. ### Steps to reproduce the bug ```python import datasets from datasets import IterableDataset from transformers import RobertaConfig from transformers import RobertaTokenizerFast from transformers import RobertaForMaskedLM from transformers import DataCollatorForLanguageModeling from transformers import Trainer, TrainingArguments use_iterable_dataset = True def gen_from_shards(shards): for shard in shards: for example in shard: yield example dataset = datasets.load_from_disk('my_dataset.hf') if use_iterable_dataset: n_shards = 100 shards = [dataset.shard(num_shards=n_shards, index=i) for i in range(n_shards)] dataset = IterableDataset.from_generator(gen_from_shards, gen_kwargs={"shards": shards}) tokenizer = RobertaTokenizerFast.from_pretrained("./my_tokenizer", max_len=160, use_fast=True) config = RobertaConfig( vocab_size=8248, max_position_embeddings=256, num_attention_heads=8, num_hidden_layers=6, type_vocab_size=1) model = RobertaForMaskedLM(config=config) data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15) training_args = TrainingArguments( per_device_train_batch_size=256 # other args removed for brevity ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, ) trainer.train() ``` ### Expected behavior Expected `Dataset` and `IterableDataset` to have the same batch size behavior. If the current behavior is intentional, the batch size printout at the start of training should be updated. Currently, both dataset classes result in `Trainer` printing the same total batch size, even though the batch size sent to the GPUs is different. ### Environment info datasets 2.7.1 transformers 4.25.1
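For reference, one way to give each process its own shard of an iterable dataset is `split_dataset_by_node` from `datasets.distributed` (a sketch, not the fix for this issue; it assumes a recent `datasets` release, reuses the `dataset` built in the snippet above, and reads placeholder `RANK`/`WORLD_SIZE` values that a launcher such as `torchrun` would set):

```python
import os
from datasets.distributed import split_dataset_by_node

rank = int(os.environ.get("RANK", "0"))              # set by the launcher
world_size = int(os.environ.get("WORLD_SIZE", "1"))  # set by the launcher

iterable_ds = dataset.to_iterable_dataset(num_shards=100)
# Each process now iterates over a disjoint subset of the shards,
# so the GPUs train on different data instead of duplicated batches.
iterable_ds = split_dataset_by_node(iterable_ds, rank=rank, world_size=world_size)
```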
https://api.github.com/repos/huggingface/datasets
null
1,571,838,641
https://api.github.com/repos/huggingface/datasets/issues/5506/comments
I_kwDODunzps5dsFqx
null
5,506
https://api.github.com/repos/huggingface/datasets/issues/5506/events
false
closed
2023-02-06T01:14:55Z
null
https://api.github.com/repos/huggingface/datasets/issues/5505
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5505/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5505/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4", "events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}", "followers_url": "https://api.github.com/users/davidgilbertson/followers", "following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}", "gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davidgilbertson", "id": 4443482, "login": "davidgilbertson", "node_id": "MDQ6VXNlcjQ0NDM0ODI=", "organizations_url": "https://api.github.com/users/davidgilbertson/orgs", "received_events_url": "https://api.github.com/users/davidgilbertson/received_events", "repos_url": "https://api.github.com/users/davidgilbertson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions", "type": "User", "url": "https://api.github.com/users/davidgilbertson" }
https://github.com/huggingface/datasets/issues/5505
[]
false
2023-02-19T18:27:30Z
2023-02-19T18:27:30Z
null
[ "This change seems to come from a few months ago in the PyTorch side. That's good news and it means we may not need to pass a batch_sampler as soon as we add `Dataset.__getitems__` to get the optimal speed :)\r\n\r\nThanks for reporting ! Would you like to open a PR to add `__getitems__` and remove this outdated documentation ?", "Yeah I figured this was the sort of thing that probably once worked. I can confirm that you no longer need the batch sampler, just `batch_size=n` in the `DataLoader`.\r\n\r\nI'll pass on the PR, I'm flat out right now, sorry." ]
completed
[]
PyTorch BatchSampler still loads from Dataset one-by-one
NONE
https://api.github.com/repos/huggingface/datasets/issues/5505/timeline
### Describe the bug In [the docs here](https://huggingface.co/docs/datasets/use_with_pytorch#use-a-batchsampler), it mentions the issue of the Dataset being read one-by-one, then states that using a BatchSampler resolves the issue. I'm not sure if this is a mistake in the docs or the code, but it seems that the only way for a Dataset to be passed a list of indexes by PyTorch (instead of one index at a time) is to define a `__getitems__` method (note the plural) on the Dataset object, and since the HF Dataset doesn't have this, PyTorch executes [this line of code](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/_utils/fetch.py#L58), reverting to fetching one-by-one. ### Steps to reproduce the bug You can put a breakpoint in `Dataset.__getitem__()` or just print the args from there and see that it's called multiple times for a single `next(iter(dataloader))`, even when using the code from the docs: ```py from torch.utils.data.sampler import BatchSampler, RandomSampler batch_sampler = BatchSampler(RandomSampler(ds), batch_size=32, drop_last=False) dataloader = DataLoader(ds, batch_sampler=batch_sampler) ``` ### Expected behavior The expected behaviour would be for it to fetch batches from the dataset, rather than one-by-one. To demonstrate that there is room for improvement: once I have a HF dataset `ds`, if I just add this line: ```py ds.__getitems__ = ds.__getitem__ ``` ...then the time taken to loop over the dataset improves considerably (for wikitext-103, from one minute to 13 seconds with batch size 32). Probably not a big deal in the grand scheme of things, but seems like an easy win. ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31 - Python version: 3.10.8 - PyArrow version: 10.0.1 - Pandas version: 1.5.3
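To see why `__getitems__` flips the behaviour, the fetcher logic at the linked line can be paraphrased roughly as follows (a sketch of the control flow, not the verbatim PyTorch source; collation is omitted):

```python
def fetch(dataset, possibly_batched_index):
    # Batched path: only taken when the dataset defines __getitems__,
    # which the HF Dataset currently does not.
    if hasattr(dataset, "__getitems__") and dataset.__getitems__:
        return dataset.__getitems__(possibly_batched_index)
    # Fallback: one __getitem__ call per index, i.e. the one-by-one
    # reads described above.
    return [dataset[idx] for idx in possibly_batched_index]
```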
https://api.github.com/repos/huggingface/datasets
null
1,571,720,814
https://api.github.com/repos/huggingface/datasets/issues/5505/comments
I_kwDODunzps5dro5u
null
5,505
https://api.github.com/repos/huggingface/datasets/issues/5505/events
false
closed
2023-02-03T23:39:04Z
null
https://api.github.com/repos/huggingface/datasets/issues/5504
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5504/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5504/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4", "events_url": "https://api.github.com/users/dwyatte/events{/privacy}", "followers_url": "https://api.github.com/users/dwyatte/followers", "following_url": "https://api.github.com/users/dwyatte/following{/other_user}", "gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dwyatte", "id": 2512762, "login": "dwyatte", "node_id": "MDQ6VXNlcjI1MTI3NjI=", "organizations_url": "https://api.github.com/users/dwyatte/orgs", "received_events_url": "https://api.github.com/users/dwyatte/received_events", "repos_url": "https://api.github.com/users/dwyatte/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions", "type": "User", "url": "https://api.github.com/users/dwyatte" }
https://github.com/huggingface/datasets/pull/5504
[]
false
2023-02-08T17:28:50Z
2023-02-08T14:33:17Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008606 / 0.011353 (-0.002747) | 0.004659 / 0.011008 (-0.006349) | 0.101311 / 0.038508 (0.062802) | 0.029664 / 0.023109 (0.006555) | 0.321850 / 0.275898 (0.045952) | 0.380497 / 0.323480 (0.057017) | 0.007003 / 0.007986 (-0.000982) | 0.003393 / 0.004328 (-0.000936) | 0.078704 / 0.004250 (0.074453) | 0.035810 / 0.037052 (-0.001242) | 0.327271 / 0.258489 (0.068782) | 0.369302 / 0.293841 (0.075461) | 0.033625 / 0.128546 (-0.094921) | 0.011563 / 0.075646 (-0.064084) | 0.323950 / 0.419271 (-0.095322) | 0.040660 / 0.043533 (-0.002872) | 0.327211 / 0.255139 (0.072072) | 0.350325 / 0.283200 (0.067125) | 0.085427 / 0.141683 (-0.056256) | 1.464370 / 1.452155 (0.012216) | 1.490355 / 1.492716 (-0.002362) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202879 / 0.018006 (0.184873) | 0.419836 / 0.000490 (0.419346) | 0.000303 / 0.000200 (0.000103) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023336 / 0.037411 (-0.014075) | 0.096817 / 0.014526 (0.082291) | 0.103990 / 0.176557 (-0.072567) | 0.137749 / 0.737135 (-0.599386) | 0.108236 / 0.296338 (-0.188102) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420801 / 0.215209 (0.205592) | 4.205308 / 2.077655 (2.127653) | 
2.050363 / 1.504120 (0.546243) | 1.877390 / 1.541195 (0.336195) | 2.031060 / 1.468490 (0.562570) | 0.687950 / 4.584777 (-3.896827) | 3.363202 / 3.745712 (-0.382510) | 1.869482 / 5.269862 (-3.400379) | 1.159131 / 4.565676 (-3.406545) | 0.082374 / 0.424275 (-0.341901) | 0.012425 / 0.007607 (0.004818) | 0.519775 / 0.226044 (0.293731) | 5.244612 / 2.268929 (2.975684) | 2.371314 / 55.444624 (-53.073311) | 2.052713 / 6.876477 (-4.823764) | 2.190015 / 2.142072 (0.047942) | 0.803806 / 4.805227 (-4.001421) | 0.148110 / 6.500664 (-6.352554) | 0.064174 / 0.075469 (-0.011295) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.250424 / 1.841788 (-0.591364) | 13.487870 / 8.074308 (5.413561) | 13.080736 / 10.191392 (2.889344) | 0.147715 / 0.680424 (-0.532709) | 0.028409 / 0.534201 (-0.505792) | 0.397531 / 0.579283 (-0.181752) | 0.399458 / 0.434364 (-0.034905) | 0.461467 / 0.540337 (-0.078871) | 0.541639 / 1.386936 (-0.845297) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006753 / 0.011353 (-0.004600) | 0.004573 / 0.011008 (-0.006435) | 0.076122 / 0.038508 (0.037614) | 0.027529 / 0.023109 (0.004419) | 0.341291 / 0.275898 (0.065393) | 0.376889 / 0.323480 (0.053409) | 0.005032 / 0.007986 (-0.002953) | 0.003447 / 0.004328 (-0.000882) | 0.075186 / 0.004250 (0.070936) | 0.038516 / 0.037052 (0.001463) | 0.340927 / 0.258489 (0.082438) | 0.386626 / 0.293841 (0.092785) | 0.031929 / 0.128546 (-0.096617) | 0.011759 / 0.075646 (-0.063888) | 0.085616 / 0.419271 (-0.333656) | 0.042858 / 0.043533 (-0.000674) | 0.341881 / 0.255139 (0.086742) | 0.367502 / 0.283200 (0.084303) | 0.090788 / 0.141683 (-0.050895) | 1.472871 / 1.452155 (0.020716) | 1.577825 / 1.492716 (0.085109) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233137 / 0.018006 (0.215131) | 0.415016 / 0.000490 (0.414526) | 0.000379 / 0.000200 (0.000179) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024966 / 0.037411 (-0.012445) | 0.102794 / 0.014526 (0.088268) | 0.107543 / 0.176557 (-0.069014) | 0.143133 / 0.737135 (-0.594002) | 0.111494 / 0.296338 (-0.184845) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438354 / 0.215209 (0.223145) | 4.382244 / 2.077655 (2.304589) | 2.056340 / 1.504120 (0.552220) | 1.851524 / 1.541195 (0.310330) | 1.933147 / 1.468490 (0.464657) | 0.701446 / 4.584777 (-3.883331) | 3.396893 / 3.745712 (-0.348819) | 2.837516 / 5.269862 (-2.432346) | 1.538298 / 4.565676 (-3.027379) | 0.083449 / 0.424275 (-0.340826) | 0.012793 / 0.007607 (0.005186) | 0.539661 / 0.226044 (0.313616) | 5.428415 / 2.268929 (3.159487) | 2.527582 / 55.444624 (-52.917042) | 2.172795 / 6.876477 (-4.703682) | 2.220011 / 2.142072 (0.077938) | 0.814338 / 4.805227 (-3.990889) | 0.153468 / 6.500664 (-6.347196) | 0.069056 / 0.075469 (-0.006413) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.278434 / 1.841788 (-0.563354) | 14.284924 / 8.074308 (6.210616) | 13.486596 / 10.191392 (3.295203) | 0.138457 / 0.680424 (-0.541967) | 0.016609 / 0.534201 (-0.517592) | 0.382828 / 0.579283 (-0.196455) | 0.387604 / 0.434364 (-0.046760) | 0.478801 / 0.540337 (-0.061536) | 0.565352 / 1.386936 (-0.821584) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c39ba501daab763b9972f44f229c66d900d20bee \"CML watermark\")\n", "> Thanks! I modified the test a bit to make it more consistent with the rest of the \"extractor\" tests.\r\n\r\nAppreciate the assist on the tests! 🚀 " ]
null
[]
don't zero copy timestamps
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5504/timeline
Fixes https://github.com/huggingface/datasets/issues/5495. I'm not sure whether we prefer a test here, or if timestamps are known to be unsupported (like booleans). The current test at least covers the bug.
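For context, a minimal sketch of the behaviour being addressed (assuming an in-memory `pyarrow` array; the exact call site inside `datasets` may differ, and the epoch value is arbitrary):

```python
import pyarrow as pa

# A timestamp column with microsecond precision.
arr = pa.array([1_600_000_000_000_000], type=pa.timestamp("us"))

# Zero-copy conversion is not valid for every Arrow type/layout,
# so the safe path for timestamps is to allow a copy.
np_arr = arr.to_numpy(zero_copy_only=False)
```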
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5504.diff", "html_url": "https://github.com/huggingface/datasets/pull/5504", "merged_at": "2023-02-08T14:33:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/5504.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5504" }
1,570,621,242
https://api.github.com/repos/huggingface/datasets/issues/5504/comments
PR_kwDODunzps5JPoWy
null
5,504
https://api.github.com/repos/huggingface/datasets/issues/5504/events
true
closed
2023-02-03T16:17:00Z
null
https://api.github.com/repos/huggingface/datasets/issues/5502
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5502/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5502/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/7805682?v=4", "events_url": "https://api.github.com/users/MichlF/events{/privacy}", "followers_url": "https://api.github.com/users/MichlF/followers", "following_url": "https://api.github.com/users/MichlF/following{/other_user}", "gists_url": "https://api.github.com/users/MichlF/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MichlF", "id": 7805682, "login": "MichlF", "node_id": "MDQ6VXNlcjc4MDU2ODI=", "organizations_url": "https://api.github.com/users/MichlF/orgs", "received_events_url": "https://api.github.com/users/MichlF/received_events", "repos_url": "https://api.github.com/users/MichlF/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MichlF/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MichlF/subscriptions", "type": "User", "url": "https://api.github.com/users/MichlF" }
https://github.com/huggingface/datasets/pull/5502
[]
false
2023-02-21T14:46:49Z
2023-02-21T14:39:23Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Thanks! I've left some comments.\r\n> \r\n> We should also add some tests, mainly to make sure `reverse` behaves as expected. Let me know if you need help with that.\r\n\r\nThanks for the offer! I couldn't find any guidelines on how huggingface goes about testing, so it would indeed be great to get a few pointers on that. I assume I should expand on the `test_sort` function in `test_arrow_dataset.py` but since I am not very familiar with the `datasets` package, it isn't immediately for which cases I should test (i.e., expand on).", "@MichlF \r\n\r\nResolving a comment means that the comment has been addressed with the code change, so since this is not the case here, can you please \"unresolve\" the comments and address them adequately? \r\n\r\n> I assume I should expand on the `test_sort` function in `test_arrow_dataset.py`\r\n\r\nYes, that's correct. I think one test to check sorting on multiple keys and another one to check if an error is raised when `len(reverse)!=len(column_names)` should be enough.\r\n", "> Yes, that's correct. I think one test to check sorting on multiple keys and another one to check if an error is raised when `len(reverse)!=len(column_names)` should be enough.\r\n\r\nI have added the tests in https://github.com/huggingface/datasets/pull/5502/commits/0efa259732e822e94d67b96a70031a3daccedfc1 by keeping them in the same format of the tests of the old `sort` function. Let me know if they can be improved.\r\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010170 / 0.011353 (-0.001183) | 0.005891 / 0.011008 (-0.005117) | 0.100416 / 0.038508 (0.061908) | 0.041309 / 0.023109 (0.018200) | 0.300813 / 0.275898 (0.024915) | 0.376679 / 0.323480 (0.053199) | 0.008806 / 0.007986 (0.000821) | 0.005964 / 0.004328 (0.001636) | 0.075862 / 0.004250 (0.071611) | 0.050370 / 0.037052 (0.013318) | 0.313365 / 0.258489 (0.054876) | 0.351184 / 0.293841 (0.057343) | 0.039556 / 0.128546 (-0.088991) | 0.012462 / 0.075646 (-0.063185) | 0.337141 / 0.419271 (-0.082130) | 0.049678 / 0.043533 (0.006145) | 0.298547 / 0.255139 (0.043408) | 0.317547 / 0.283200 (0.034347) | 0.113595 / 0.141683 (-0.028088) | 1.448467 / 1.452155 (-0.003688) | 1.501303 / 1.492716 (0.008587) |\n\n### Benchmark: 
benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011005 / 0.018006 (-0.007002) | 0.527430 / 0.000490 (0.526940) | 0.005073 / 0.000200 (0.004873) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030377 / 0.037411 (-0.007034) | 0.116932 / 0.014526 (0.102406) | 0.124047 / 0.176557 (-0.052509) | 0.192358 / 0.737135 (-0.544777) | 0.130528 / 0.296338 (-0.165811) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401158 / 0.215209 (0.185949) | 4.005854 / 2.077655 (1.928200) | 1.810365 / 1.504120 (0.306245) | 1.626490 / 1.541195 (0.085295) | 1.752591 / 1.468490 (0.284101) | 0.709065 / 4.584777 (-3.875712) | 3.893356 / 3.745712 (0.147643) | 3.655180 / 5.269862 (-1.614682) | 1.873660 / 4.565676 (-2.692017) | 0.085860 / 0.424275 (-0.338415) | 0.012671 / 0.007607 (0.005063) | 0.512804 / 0.226044 (0.286759) | 5.103426 / 2.268929 (2.834497) | 2.336148 / 55.444624 (-53.108477) | 2.000140 / 6.876477 (-4.876336) | 2.095155 / 2.142072 (-0.046918) | 0.848612 / 4.805227 (-3.956615) | 0.171840 / 6.500664 (-6.328824) | 0.064144 / 0.075469 (-0.011325) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.222106 / 1.841788 (-0.619682) | 15.828559 / 8.074308 (7.754251) | 14.995298 / 10.191392 (4.803906) | 0.172783 / 0.680424 (-0.507641) | 0.029296 / 0.534201 (-0.504905) | 0.447469 / 0.579283 (-0.131814) | 0.658615 / 0.434364 (0.224251) | 1.527607 / 0.540337 (0.987270) | 1.830018 / 1.386936 (0.443082) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | 
read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007922 / 0.011353 (-0.003431) | 0.005369 / 0.011008 (-0.005639) | 0.076580 / 0.038508 (0.038071) | 0.038770 / 0.023109 (0.015661) | 0.338995 / 0.275898 (0.063097) | 0.380865 / 0.323480 (0.057385) | 0.006489 / 0.007986 (-0.001497) | 0.004421 / 0.004328 (0.000093) | 0.074143 / 0.004250 (0.069893) | 0.054224 / 0.037052 (0.017171) | 0.348887 / 0.258489 (0.090397) | 0.395044 / 0.293841 (0.101203) | 0.037040 / 0.128546 (-0.091507) | 0.012547 / 0.075646 (-0.063099) | 0.087521 / 0.419271 (-0.331751) | 0.049918 / 0.043533 (0.006385) | 0.342428 / 0.255139 (0.087289) | 0.362216 / 0.283200 (0.079016) | 0.107204 / 0.141683 (-0.034479) | 1.509206 / 1.452155 (0.057052) | 1.596010 / 1.492716 (0.103293) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246795 / 0.018006 (0.228788) | 0.505998 / 0.000490 (0.505509) | 0.000446 / 0.000200 (0.000246) | 0.000064 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031591 / 0.037411 (-0.005821) | 0.117595 / 0.014526 (0.103069) | 0.132500 / 0.176557 (-0.044056) | 0.202244 / 0.737135 (-0.534891) | 0.136624 / 0.296338 (-0.159715) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428235 / 0.215209 (0.213026) | 4.262691 / 2.077655 (2.185036) | 2.057348 / 1.504120 (0.553228) | 1.928559 / 1.541195 (0.387364) | 2.120838 / 1.468490 (0.652347) | 0.706300 / 4.584777 (-3.878477) | 3.951828 / 3.745712 (0.206115) | 2.144218 / 5.269862 (-3.125644) | 1.359500 / 4.565676 (-3.206177) | 0.085404 / 0.424275 (-0.338872) | 0.012363 / 0.007607 (0.004756) | 0.529985 / 0.226044 (0.303941) | 5.295831 / 2.268929 (3.026903) | 2.522602 / 55.444624 (-52.922022) | 2.182850 / 6.876477 (-4.693627) | 2.270187 / 2.142072 (0.128114) | 0.841676 / 4.805227 (-3.963551) | 0.168366 / 6.500664 (-6.332298) | 0.065371 / 0.075469 (-0.010098) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.261464 / 1.841788 (-0.580324) | 17.010125 / 8.074308 (8.935817) | 14.304453 / 10.191392 (4.113061) | 
0.177782 / 0.680424 (-0.502642) | 0.017762 / 0.534201 (-0.516439) | 0.427283 / 0.579283 (-0.152000) | 0.455176 / 0.434364 (0.020812) | 0.525962 / 0.540337 (-0.014375) | 0.625583 / 1.386936 (-0.761353) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3b2aba6637dc61f145acda40e4e7b028c3947d72 \"CML watermark\")\n" ]
null
[]
Added functionality: sort datasets by multiple keys
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5502/timeline
Implements the requested functionality: sorting datasets by multiple keys/columns, as discussed in https://github.com/huggingface/datasets/issues/5425.
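A usage sketch of the behaviour this adds (the column names and data are illustrative):

```python
from datasets import Dataset

ds = Dataset.from_dict({"score": [2, 1, 2], "name": ["b", "c", "a"]})

# Sort by "score" descending, breaking ties by "name" ascending.
ds = ds.sort(column_names=["score", "name"], reverse=[True, False])
```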
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5502.diff", "html_url": "https://github.com/huggingface/datasets/pull/5502", "merged_at": "2023-02-21T14:39:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/5502.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5502" }
1,570,091,225
https://api.github.com/repos/huggingface/datasets/issues/5502/comments
PR_kwDODunzps5JN0aX
null
5,502
https://api.github.com/repos/huggingface/datasets/issues/5502/events
true
open
2023-02-03T10:50:10Z
null
https://api.github.com/repos/huggingface/datasets/issues/5501
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5501/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5501/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Narsil", "id": 204321, "login": "Narsil", "node_id": "MDQ6VXNlcjIwNDMyMQ==", "organizations_url": "https://api.github.com/users/Narsil/orgs", "received_events_url": "https://api.github.com/users/Narsil/received_events", "repos_url": "https://api.github.com/users/Narsil/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "type": "User", "url": "https://api.github.com/users/Narsil" }
https://github.com/huggingface/datasets/pull/5501
[]
false
2023-02-09T11:04:11Z
null
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5501). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008407 / 0.011353 (-0.002946) | 0.004651 / 0.011008 (-0.006357) | 0.100367 / 0.038508 (0.061859) | 0.029107 / 0.023109 (0.005998) | 0.302798 / 0.275898 (0.026900) | 0.354379 / 0.323480 (0.030899) | 0.006985 / 0.007986 (-0.001001) | 0.003365 / 0.004328 (-0.000963) | 0.078312 / 0.004250 (0.074062) | 0.034205 / 0.037052 (-0.002847) | 0.310431 / 0.258489 (0.051941) | 0.346239 / 0.293841 (0.052398) | 0.033800 / 0.128546 (-0.094747) | 0.011515 / 0.075646 (-0.064131) | 0.323588 / 0.419271 (-0.095684) | 0.040766 / 0.043533 (-0.002767) | 0.300914 / 0.255139 (0.045775) | 0.332983 / 0.283200 (0.049784) | 0.087500 / 0.141683 (-0.054182) | 1.469505 / 1.452155 (0.017350) | 1.505119 / 1.492716 (0.012403) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187319 / 0.018006 (0.169313) | 0.405498 / 0.000490 (0.405008) | 0.001000 / 0.000200 (0.000800) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022583 / 0.037411 (-0.014828) | 0.098096 / 0.014526 (0.083570) | 0.104272 / 0.176557 (-0.072284) | 0.142801 / 0.737135 (-0.594335) | 0.109749 / 0.296338 (-0.186590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new 
/ old (diff) | 0.423343 / 0.215209 (0.208134) | 4.215116 / 2.077655 (2.137461) | 1.899714 / 1.504120 (0.395594) | 1.689579 / 1.541195 (0.148384) | 1.710292 / 1.468490 (0.241801) | 0.690976 / 4.584777 (-3.893801) | 3.432501 / 3.745712 (-0.313212) | 1.899600 / 5.269862 (-3.370261) | 1.279801 / 4.565676 (-3.285876) | 0.082763 / 0.424275 (-0.341512) | 0.012545 / 0.007607 (0.004938) | 0.531381 / 0.226044 (0.305336) | 5.320077 / 2.268929 (3.051148) | 2.370705 / 55.444624 (-53.073919) | 2.007089 / 6.876477 (-4.869388) | 2.062412 / 2.142072 (-0.079661) | 0.814998 / 4.805227 (-3.990229) | 0.149822 / 6.500664 (-6.350842) | 0.064399 / 0.075469 (-0.011070) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.226196 / 1.841788 (-0.615591) | 13.823443 / 8.074308 (5.749134) | 13.813667 / 10.191392 (3.622275) | 0.161289 / 0.680424 (-0.519135) | 0.028569 / 0.534201 (-0.505632) | 0.390360 / 0.579283 (-0.188923) | 0.396217 / 0.434364 (-0.038147) | 0.483120 / 0.540337 (-0.057217) | 0.570041 / 1.386936 (-0.816895) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006422 / 0.011353 (-0.004931) | 0.004528 / 0.011008 (-0.006481) | 0.076043 / 0.038508 (0.037535) | 0.027631 / 0.023109 (0.004522) | 0.340622 / 0.275898 (0.064724) | 0.376694 / 0.323480 (0.053214) | 0.004993 / 0.007986 (-0.002992) | 0.003403 / 0.004328 (-0.000926) | 0.074521 / 0.004250 (0.070270) | 0.037568 / 0.037052 (0.000516) | 0.343423 / 0.258489 (0.084934) | 0.387729 / 0.293841 (0.093888) | 0.031790 / 0.128546 (-0.096757) | 0.011767 / 0.075646 (-0.063879) | 0.085182 / 0.419271 (-0.334090) | 0.042867 / 0.043533 (-0.000666) | 0.341269 / 0.255139 (0.086130) | 0.368460 / 0.283200 (0.085261) | 0.090153 / 0.141683 (-0.051530) | 1.536490 / 1.452155 (0.084335) | 1.596403 / 1.492716 (0.103686) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222373 / 0.018006 (0.204367) | 0.396145 / 0.000490 (0.395655) | 0.000384 / 0.000200 
(0.000184) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024801 / 0.037411 (-0.012610) | 0.099711 / 0.014526 (0.085185) | 0.106094 / 0.176557 (-0.070463) | 0.147819 / 0.737135 (-0.589316) | 0.110065 / 0.296338 (-0.186274) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442863 / 0.215209 (0.227654) | 4.420043 / 2.077655 (2.342388) | 2.070136 / 1.504120 (0.566016) | 1.862363 / 1.541195 (0.321168) | 1.910890 / 1.468490 (0.442400) | 0.702570 / 4.584777 (-3.882207) | 3.435855 / 3.745712 (-0.309857) | 1.871290 / 5.269862 (-3.398572) | 1.169321 / 4.565676 (-3.396355) | 0.083674 / 0.424275 (-0.340601) | 0.012823 / 0.007607 (0.005216) | 0.539330 / 0.226044 (0.313285) | 5.403317 / 2.268929 (3.134389) | 2.536508 / 55.444624 (-52.908117) | 2.179629 / 6.876477 (-4.696847) | 2.207586 / 2.142072 (0.065514) | 0.812256 / 4.805227 (-3.992972) | 0.152915 / 6.500664 (-6.347749) | 0.068431 / 0.075469 (-0.007038) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294982 / 1.841788 (-0.546806) | 13.912811 / 8.074308 (5.838503) | 13.415658 / 10.191392 (3.224266) | 0.149531 / 0.680424 (-0.530893) | 0.016785 / 0.534201 (-0.517416) | 0.381055 / 0.579283 (-0.198228) | 0.392084 / 0.434364 (-0.042280) | 0.472614 / 0.540337 (-0.067724) | 0.559799 / 1.386936 (-0.827137) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6ef20f9b71acbb387caab2d297d8c22ba3db3633 \"CML watermark\")\n", "We simply do GET requests to hf.co to download files from the Hub right now. 
We may switch to hfh when we update how we do caching \r\n\r\nYou can try on any dataset hosted on the hub like `imagenet-1k`", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010931 / 0.011353 (-0.000422) | 0.005730 / 0.011008 (-0.005278) | 0.116653 / 0.038508 (0.078145) | 0.041439 / 0.023109 (0.018330) | 0.359559 / 0.275898 (0.083661) | 0.408398 / 0.323480 (0.084918) | 0.009193 / 0.007986 (0.001208) | 0.006024 / 0.004328 (0.001695) | 0.087743 / 0.004250 (0.083492) | 0.048636 / 0.037052 (0.011584) | 0.363133 / 0.258489 (0.104643) | 0.407144 / 0.293841 (0.113303) | 0.044610 / 0.128546 (-0.083936) | 0.014075 / 0.075646 (-0.061571) | 0.396506 / 0.419271 (-0.022766) | 0.057014 / 0.043533 (0.013482) | 0.358254 / 0.255139 (0.103115) | 0.399887 / 0.283200 (0.116687) | 0.115337 / 0.141683 (-0.026346) | 1.731655 / 1.452155 (0.279500) | 1.813276 / 1.492716 (0.320560) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210197 / 0.018006 (0.192191) | 0.475887 / 0.000490 (0.475397) | 0.003323 / 0.000200 (0.003123) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031686 / 0.037411 (-0.005725) | 0.131167 / 0.014526 (0.116641) | 0.137919 / 0.176557 (-0.038637) | 0.184843 / 0.737135 (-0.552293) | 0.144998 / 0.296338 (-0.151340) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471371 / 0.215209 
(0.256162) | 4.693739 / 2.077655 (2.616084) | 2.251567 / 1.504120 (0.747447) | 1.993653 / 1.541195 (0.452458) | 2.053236 / 1.468490 (0.584746) | 0.809226 / 4.584777 (-3.775551) | 4.494120 / 3.745712 (0.748408) | 2.436921 / 5.269862 (-2.832940) | 1.541973 / 4.565676 (-3.023704) | 0.098401 / 0.424275 (-0.325874) | 0.014329 / 0.007607 (0.006722) | 0.597813 / 0.226044 (0.371769) | 5.964035 / 2.268929 (3.695107) | 2.709283 / 55.444624 (-52.735341) | 2.323537 / 6.876477 (-4.552940) | 2.401707 / 2.142072 (0.259635) | 0.976379 / 4.805227 (-3.828848) | 0.194638 / 6.500664 (-6.306026) | 0.076904 / 0.075469 (0.001435) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.516877 / 1.841788 (-0.324911) | 18.228010 / 8.074308 (10.153702) | 16.631750 / 10.191392 (6.440358) | 0.176030 / 0.680424 (-0.504394) | 0.033769 / 0.534201 (-0.500432) | 0.520511 / 0.579283 (-0.058773) | 0.531764 / 0.434364 (0.097400) | 0.648658 / 0.540337 (0.108321) | 0.779124 / 1.386936 (-0.607812) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008635 / 0.011353 (-0.002718) | 0.005785 / 0.011008 (-0.005223) | 0.087042 / 0.038508 (0.048534) | 0.039632 / 0.023109 (0.016523) | 0.419719 / 0.275898 (0.143821) | 0.463860 / 0.323480 (0.140380) | 0.006621 / 0.007986 (-0.001364) | 0.004655 / 0.004328 (0.000327) | 0.087003 / 0.004250 (0.082753) | 0.057122 / 0.037052 (0.020069) | 0.417820 / 0.258489 (0.159331) | 0.485981 / 0.293841 (0.192140) | 0.042606 / 0.128546 (-0.085940) | 0.014369 / 0.075646 (-0.061278) | 0.101939 / 0.419271 (-0.317333) | 0.058303 / 0.043533 (0.014770) | 0.415053 / 0.255139 (0.159914) | 0.439914 / 0.283200 (0.156714) | 0.134628 / 0.141683 (-0.007055) | 1.765464 / 1.452155 (0.313309) | 1.843963 / 1.492716 (0.351247) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.307156 / 0.018006 (0.289150) | 0.476657 / 0.000490 (0.476167) | 0.019718 / 0.000200 (0.019518) | 0.000160 / 0.000054 
(0.000105) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035286 / 0.037411 (-0.002125) | 0.138094 / 0.014526 (0.123568) | 0.144768 / 0.176557 (-0.031789) | 0.191386 / 0.737135 (-0.545750) | 0.151988 / 0.296338 (-0.144350) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.504733 / 0.215209 (0.289523) | 5.027048 / 2.077655 (2.949394) | 2.441571 / 1.504120 (0.937451) | 2.198242 / 1.541195 (0.657047) | 2.298473 / 1.468490 (0.829983) | 0.848048 / 4.584777 (-3.736729) | 4.613102 / 3.745712 (0.867390) | 2.522824 / 5.269862 (-2.747037) | 1.610159 / 4.565676 (-2.955517) | 0.105197 / 0.424275 (-0.319078) | 0.015195 / 0.007607 (0.007588) | 0.626976 / 0.226044 (0.400932) | 6.268459 / 2.268929 (3.999530) | 3.014387 / 55.444624 (-52.430237) | 2.554102 / 6.876477 (-4.322375) | 2.656051 / 2.142072 (0.513979) | 1.027978 / 4.805227 (-3.777249) | 0.200686 / 6.500664 (-6.299978) | 0.077104 / 0.075469 (0.001635) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.485228 / 1.841788 (-0.356560) | 18.319949 / 8.074308 (10.245641) | 15.855739 / 10.191392 (5.664347) | 0.204365 / 0.680424 (-0.476059) | 0.023824 / 0.534201 (-0.510377) | 0.505000 / 0.579283 (-0.074283) | 0.502866 / 0.434364 (0.068502) | 0.629574 / 0.540337 (0.089237) | 0.746602 / 1.386936 (-0.640334) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#900d429d3601657f766737b8670f855033078d57 \"CML watermark\")\n" ]
null
[]
Increase chunk size to speed up file downloads
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5501/timeline
Original fix: https://github.com/huggingface/huggingface_hub/pull/1267. Not sure this function is actually still called, though. I haven't benchmarked this. Is there a dataset whose files are hosted on the Hub through CloudFront, so we can have the same setup as in `hf_hub`?
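For illustration, the kind of download loop a larger chunk size affects (a generic `requests` sketch, not the exact `datasets` code path; `url`, `path`, and the 10 MiB figure are placeholders):

```python
import requests

def download(url: str, path: str, chunk_size: int = 10 * 1024 * 1024) -> None:
    # Fewer, larger chunks mean fewer Python-level iterations and
    # progress updates per file, which is where the speedup in the
    # linked huggingface_hub fix comes from.
    with requests.get(url, stream=True, timeout=10.0) as resp:
        resp.raise_for_status()
        with open(path, "wb") as f:
            for chunk in resp.iter_content(chunk_size=chunk_size):
                f.write(chunk)
```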
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5501.diff", "html_url": "https://github.com/huggingface/datasets/pull/5501", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5501.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5501" }
1,569,644,159
https://api.github.com/repos/huggingface/datasets/issues/5501/comments
PR_kwDODunzps5JMTn8
null
5,501
https://api.github.com/repos/huggingface/datasets/issues/5501/events
true
closed
2023-02-03T05:45:37Z
null
https://api.github.com/repos/huggingface/datasets/issues/5500
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5500/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5500/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4", "events_url": "https://api.github.com/users/Hannibal046/events{/privacy}", "followers_url": "https://api.github.com/users/Hannibal046/followers", "following_url": "https://api.github.com/users/Hannibal046/following{/other_user}", "gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Hannibal046", "id": 38466901, "login": "Hannibal046", "node_id": "MDQ6VXNlcjM4NDY2OTAx", "organizations_url": "https://api.github.com/users/Hannibal046/orgs", "received_events_url": "https://api.github.com/users/Hannibal046/received_events", "repos_url": "https://api.github.com/users/Hannibal046/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions", "type": "User", "url": "https://api.github.com/users/Hannibal046" }
https://github.com/huggingface/datasets/issues/5500
[]
false
2023-02-03T05:52:56Z
2023-02-03T05:52:56Z
null
[ "I update the `datatsets` version and it works." ]
completed
[]
WMT19 custom download checksum error
NONE
https://api.github.com/repos/huggingface/datasets/issues/5500/timeline
### Describe the bug I use the following script to download data from WMT19: ```python import datasets from datasets import inspect_dataset, load_dataset_builder from wmt19.wmt_utils import _TRAIN_SUBSETS,_DEV_SUBSETS ## this is a must due to: https://discuss.huggingface.co/t/load-dataset-hangs-with-local-files/28034/3 if __name__ == '__main__': dev_subsets,train_subsets = [],[] for subset in _TRAIN_SUBSETS: if subset.target=='en' and 'de' in subset.sources: train_subsets.append(subset.name) for subset in _DEV_SUBSETS: if subset.target=='en' and 'de' in subset.sources: dev_subsets.append(subset.name) inspect_dataset("wmt19", "./wmt19") builder = load_dataset_builder( "./wmt19/wmt_utils.py", language_pair=("de", "en"), subsets={ datasets.Split.TRAIN: train_subsets, datasets.Split.VALIDATION: dev_subsets, }, ) builder.download_and_prepare() ds = builder.as_dataset() ds.to_json("../data/wmt19/ende/data.json") ``` And I got the following error: ``` Traceback (most recent call last): File "draft.py", line 26, in <module> builder.download_and_prepare() File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 605, in download_and_prepare self._download_and_prepare( File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 1104, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 676, in _download_and_prepare verify_checksums( File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 35, in verify_checksums raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums))) datasets.utils.info_utils.UnexpectedDownloadedFile: {'https://s3.amazonaws.com/web-language-models/paracrawl/release1/paracrawl-release1.en-de.zipporah0-dedup-clean.tgz', 'https://huggingface.co/datasets/wmt/wmt13/resolve/main-zip/training-parallel-europarl-v7.zip', 'https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/rapid2016.zip', 'https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/training-parallel-nc-v13.zip', 'https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/training-parallel-nc-v12.zip', 'https://huggingface.co/datasets/wmt/wmt14/resolve/main-zip/training-parallel-nc-v9.zip', 'https://huggingface.co/datasets/wmt/wmt15/resolve/main-zip/training-parallel-nc-v10.zip', 'https://huggingface.co/datasets/wmt/wmt16/resolve/main-zip/translation-task/training-parallel-nc-v11.zip'} ``` ### Steps to reproduce the bug See above. ### Expected behavior Data downloads successfully. ### Environment info datasets==2.1.0 python==3.8
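A hedged workaround sketch for this class of checksum error, assuming the mismatch is benign (e.g. the upstream files were simply re-hosted): skip verification entirely. This is not from the original report, whose author resolved the problem by upgrading `datasets`, as noted in the comments above.

```python
from datasets import load_dataset

# `ignore_verifications=True` skips the recorded checksum/size comparison that
# raises UnexpectedDownloadedFile (later versions replace this flag with
# `verification_mode="no_checks"`). Genuinely corrupted downloads will no
# longer be caught, so prefer upgrading if the checksums were simply stale.
ds = load_dataset(
    "./wmt19/wmt_utils.py",
    language_pair=("de", "en"),
    ignore_verifications=True,
)
```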
https://api.github.com/repos/huggingface/datasets
null
1,569,257,240
https://api.github.com/repos/huggingface/datasets/issues/5500/comments
I_kwDODunzps5diPcY
null
5,500
https://api.github.com/repos/huggingface/datasets/issues/5500/events
false
open
2023-02-02T23:34:50Z
null
https://api.github.com/repos/huggingface/datasets/issues/5499
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5499/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5499/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4", "events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}", "followers_url": "https://api.github.com/users/davidgilbertson/followers", "following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}", "gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davidgilbertson", "id": 4443482, "login": "davidgilbertson", "node_id": "MDQ6VXNlcjQ0NDM0ODI=", "organizations_url": "https://api.github.com/users/davidgilbertson/orgs", "received_events_url": "https://api.github.com/users/davidgilbertson/received_events", "repos_url": "https://api.github.com/users/davidgilbertson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions", "type": "User", "url": "https://api.github.com/users/davidgilbertson" }
https://github.com/huggingface/datasets/issues/5499
[]
false
2023-02-07T19:35:11Z
null
null
[ "Hi ! To skip the verification step that checks if newer data exist, you can enable offline mode with `HF_DATASETS_OFFLINE=1`.\r\n\r\nAlthough I agree this step should be much faster for datasets hosted on the HF Hub - we could just compare the commit hash from the local data and the remote git repository. We're not been leveraging the git commit hashes, since the library was built before we even had git repositories for each dataset on HF.", "Thanks @lhoestq, for memory when I recorded those times I had `HF_DATASETS_OFFLINE` set." ]
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
`load_dataset` has ~4 seconds of overhead for cached data
NONE
https://api.github.com/repos/huggingface/datasets/issues/5499/timeline
### Feature request When loading a dataset that has been cached locally, the `load_dataset` function takes a lot longer than it should to fetch the dataset from disk (or memory). This is particularly noticeable for smaller datasets. For example, on wikitext-2, comparing `load_dataset` (once cached) and `load_from_disk`, the `load_dataset` method takes 40 times longer: ⏱ 4.84s ⮜ load_dataset ⏱ 119ms ⮜ load_from_disk ### Motivation I assume this is doing something like checking for a newer version. If so, that's an age-old problem: do you make the user wait _every single time they load from cache_, or do you always load from cache, _then_ check for a newer version and alert them if they have stale data? The decision usually revolves around what percentage of the time the data will have been updated, and how dangerous old data is. For most datasets it's extremely unlikely that there will be a newer version on any given run, so 99% of the time this is just wasted time. Maybe you don't want to make that decision for all users, but at least having the _option_ to not wait for checks would be an improvement. ### Your contribution .
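A minimal sketch of the kind of timing comparison described above, assuming the dataset is already in the local cache; the `save_to_disk` path is an illustrative assumption and the exact numbers will vary by machine:

```python
import time
from datasets import load_dataset, load_from_disk

t0 = time.perf_counter()
ds = load_dataset("wikitext", "wikitext-2-raw-v1")  # cached, but still checks remotely
print(f"load_dataset:   {time.perf_counter() - t0:.2f}s")

ds.save_to_disk("wikitext2_local")  # one-time export for the comparison

t0 = time.perf_counter()
ds = load_from_disk("wikitext2_local")  # pure local read, no remote checks
print(f"load_from_disk: {time.perf_counter() - t0:.3f}s")
```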
https://api.github.com/repos/huggingface/datasets
null
1,568,937,026
https://api.github.com/repos/huggingface/datasets/issues/5499/comments
I_kwDODunzps5dhBRC
null
5,499
https://api.github.com/repos/huggingface/datasets/issues/5499/events
false