url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string) | active_lock_reason (null) | draft (bool) | pull_request (dict) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | is_pull_request (bool) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5519 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5519/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5519/comments | https://api.github.com/repos/huggingface/datasets/issues/5519/events | https://github.com/huggingface/datasets/pull/5519 | 1,578,341,785 | PR_kwDODunzps5JpGPl | 5,519 | Lint code with `ruff` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009729 / 0.011353 (-0.001624) | 0.005342 / 0.011008 (-0.005666) | 0.100194 / 0.038508 (0.061686) | 0.036391 / 0.023109 (0.013282) | 0.294163 / 0.275898 (0.018264) | 0.364117 / 0.323480 (0.040637) | 0.008231 / 0.007986 (0.000246) | 0.005954 / 0.004328 (0.001626) | 0.076484 / 0.004250 (0.072234) | 0.045028 / 0.037052 (0.007976) | 0.308163 / 0.258489 (0.049674) | 0.339473 / 0.293841 (0.045632) | 0.039268 / 0.128546 (-0.089279) | 0.012357 / 0.075646 (-0.063289) | 0.334176 / 0.419271 (-0.085096) | 0.049502 / 0.043533 (0.005969) | 0.294134 / 0.255139 (0.038995) | 0.319370 / 0.283200 (0.036170) | 0.113040 / 0.141683 (-0.028643) | 1.450750 / 1.452155 (-0.001405) | 1.490265 / 1.492716 (-0.002452) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252860 / 0.018006 (0.234854) | 0.554299 / 0.000490 (0.553810) | 0.002105 / 0.000200 (0.001905) | 0.000091 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026557 / 0.037411 (-0.010854) | 0.104464 / 0.014526 (0.089938) | 0.116724 / 0.176557 (-0.059833) | 0.154736 / 0.737135 (-0.582399) | 0.122017 / 0.296338 (-0.174322) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398170 / 0.215209 (0.182961) | 3.979309 / 2.077655 (1.901654) | 1.773051 / 1.504120 (0.268931) | 1.587247 / 1.541195 (0.046053) | 1.620446 / 1.468490 
(0.151956) | 0.692152 / 4.584777 (-3.892625) | 3.724821 / 3.745712 (-0.020891) | 2.133122 / 5.269862 (-3.136739) | 1.455612 / 4.565676 (-3.110065) | 0.084721 / 0.424275 (-0.339554) | 0.012461 / 0.007607 (0.004854) | 0.498909 / 0.226044 (0.272865) | 4.983837 / 2.268929 (2.714908) | 2.258489 / 55.444624 (-53.186135) | 1.891690 / 6.876477 (-4.984786) | 1.976944 / 2.142072 (-0.165128) | 0.836950 / 4.805227 (-3.968277) | 0.165401 / 6.500664 (-6.335263) | 0.061623 / 0.075469 (-0.013846) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.205945 / 1.841788 (-0.635842) | 15.101603 / 8.074308 (7.027295) | 14.393739 / 10.191392 (4.202347) | 0.176313 / 0.680424 (-0.504110) | 0.029102 / 0.534201 (-0.505099) | 0.439785 / 0.579283 (-0.139498) | 0.437360 / 0.434364 (0.002996) | 0.539668 / 0.540337 (-0.000669) | 0.641452 / 1.386936 (-0.745484) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007184 / 0.011353 (-0.004169) | 0.005215 / 0.011008 (-0.005793) | 0.074617 / 0.038508 (0.036109) | 0.033209 / 0.023109 (0.010100) | 0.334304 / 0.275898 (0.058406) | 0.370270 / 0.323480 (0.046790) | 0.005851 / 0.007986 (-0.002135) | 0.004106 / 0.004328 (-0.000222) | 0.075487 / 0.004250 (0.071237) | 0.051133 / 0.037052 (0.014080) | 0.335401 / 0.258489 (0.076912) | 0.391457 / 0.293841 (0.097616) | 0.036525 / 0.128546 (-0.092021) | 0.012423 / 0.075646 (-0.063223) | 0.086446 / 0.419271 (-0.332825) | 0.050707 / 0.043533 (0.007174) | 0.336186 / 0.255139 (0.081047) | 0.353273 / 0.283200 (0.070074) | 0.105625 / 0.141683 (-0.036057) | 1.486118 / 1.452155 (0.033963) | 1.584931 / 1.492716 (0.092214) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237589 / 0.018006 (0.219583) | 0.552030 / 0.000490 (0.551540) | 0.002863 / 0.000200 (0.002663) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028078 / 0.037411 (-0.009333) | 0.112516 / 0.014526 (0.097990) | 0.121119 / 0.176557 (-0.055438) | 0.158874 / 0.737135 (-0.578262) | 0.129501 / 0.296338 (-0.166837) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419479 / 0.215209 (0.204270) | 4.192216 / 2.077655 (2.114561) | 1.990513 / 1.504120 (0.486393) | 1.792892 / 1.541195 (0.251697) | 1.853904 / 1.468490 (0.385413) | 0.712702 / 4.584777 (-3.872074) | 3.820682 / 3.745712 (0.074970) | 2.143695 / 5.269862 (-3.126166) | 1.369621 / 4.565676 (-3.196055) | 0.087451 / 0.424275 (-0.336824) | 0.012622 / 0.007607 (0.005014) | 0.521056 / 0.226044 (0.295011) | 5.204873 / 2.268929 (2.935944) | 2.481169 / 55.444624 (-52.963455) | 2.112134 / 6.876477 (-4.764342) | 2.200681 / 2.142072 (0.058609) | 0.860323 / 4.805227 (-3.944904) | 0.171452 / 6.500664 (-6.329212) | 0.065235 / 0.075469 (-0.010234) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.241047 / 1.841788 (-0.600741) | 14.977890 / 8.074308 (6.903582) | 13.584265 / 10.191392 (3.392873) | 0.180050 / 0.680424 (-0.500374) | 0.018247 / 0.534201 (-0.515954) | 0.429585 / 0.579283 (-0.149698) | 0.429448 / 0.434364 (-0.004916) | 0.542663 / 0.540337 (0.002326) | 0.649525 / 1.386936 (-0.737411) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#26cf1d2548eb313a06565d36bd400436e350bc86 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011289 / 0.011353 (-0.000064) | 0.005841 / 0.011008 (-0.005167) | 0.120994 / 0.038508 (0.082486) | 0.043627 / 0.023109 (0.020517) | 0.353254 / 0.275898 (0.077356) | 0.394685 / 0.323480 (0.071205) | 0.009520 / 0.007986 (0.001535) | 0.004770 / 0.004328 (0.000442) | 0.088857 / 0.004250 (0.084607) | 0.048426 / 0.037052 (0.011373) | 0.353815 / 0.258489 (0.095326) | 0.404109 / 0.293841 (0.110268) | 0.060079 / 0.128546 (-0.068467) | 0.013840 / 0.075646 (-0.061806) | 0.403133 / 0.419271 (-0.016139) | 0.072227 / 0.043533 (0.028694) | 0.354585 / 0.255139 (0.099446) | 0.377937 / 0.283200 (0.094737) | 0.139080 / 0.141683 (-0.002602) | 1.733266 / 1.452155 (0.281112) | 1.828402 / 1.492716 (0.335686) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215095 / 0.018006 (0.197088) | 0.486669 / 0.000490 (0.486179) | 0.001425 / 0.000200 (0.001225) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032832 / 0.037411 (-0.004579) | 0.136335 / 0.014526 (0.121809) | 0.141827 / 0.176557 (-0.034730) | 0.185917 / 0.737135 (-0.551218) | 0.149046 / 0.296338 (-0.147293) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474587 / 0.215209 (0.259378) | 4.753686 / 2.077655 (2.676031) | 2.152147 / 1.504120 (0.648027) | 1.941762 / 1.541195 (0.400567) | 2.077493 / 1.468490 
(0.609003) | 0.822432 / 4.584777 (-3.762345) | 4.860151 / 3.745712 (1.114439) | 2.527292 / 5.269862 (-2.742569) | 1.580442 / 4.565676 (-2.985234) | 0.102104 / 0.424275 (-0.322171) | 0.015060 / 0.007607 (0.007453) | 0.598780 / 0.226044 (0.372736) | 5.998318 / 2.268929 (3.729390) | 2.754115 / 55.444624 (-52.690509) | 2.317509 / 6.876477 (-4.558967) | 2.409942 / 2.142072 (0.267870) | 1.008830 / 4.805227 (-3.796397) | 0.196203 / 6.500664 (-6.304461) | 0.075378 / 0.075469 (-0.000091) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.430676 / 1.841788 (-0.411112) | 19.597628 / 8.074308 (11.523320) | 17.364673 / 10.191392 (7.173281) | 0.216621 / 0.680424 (-0.463803) | 0.039505 / 0.534201 (-0.494696) | 0.529027 / 0.579283 (-0.050256) | 0.572014 / 0.434364 (0.137650) | 0.702898 / 0.540337 (0.162560) | 0.785748 / 1.386936 (-0.601188) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009150 / 0.011353 (-0.002203) | 0.006088 / 0.011008 (-0.004920) | 0.090629 / 0.038508 (0.052121) | 0.044284 / 0.023109 (0.021174) | 0.411363 / 0.275898 (0.135465) | 0.445499 / 0.323480 (0.122020) | 0.007129 / 0.007986 (-0.000856) | 0.004843 / 0.004328 (0.000515) | 0.087919 / 0.004250 (0.083668) | 0.060329 / 0.037052 (0.023277) | 0.405802 / 0.258489 (0.147313) | 0.468301 / 0.293841 (0.174460) | 0.044271 / 0.128546 (-0.084275) | 0.014895 / 0.075646 (-0.060751) | 0.103728 / 0.419271 (-0.315544) | 0.084190 / 0.043533 (0.040657) | 0.407210 / 0.255139 (0.152071) | 0.432585 / 0.283200 (0.149386) | 0.137132 / 0.141683 (-0.004550) | 1.720261 / 1.452155 (0.268107) | 1.858575 / 1.492716 (0.365858) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.331395 / 0.018006 (0.313389) | 0.494757 / 0.000490 (0.494267) | 0.043426 / 0.000200 (0.043226) | 0.000470 / 0.000054 (0.000415) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035288 / 0.037411 (-0.002123) | 0.140856 / 0.014526 (0.126330) | 0.146597 / 0.176557 (-0.029959) | 0.192775 / 0.737135 (-0.544360) | 0.155307 / 0.296338 (-0.141032) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.504000 / 0.215209 (0.288791) | 5.011081 / 2.077655 (2.933427) | 2.380420 / 1.504120 (0.876300) | 2.154819 / 1.541195 (0.613624) | 2.293883 / 1.468490 (0.825393) | 0.864429 / 4.584777 (-3.720348) | 5.134475 / 3.745712 (1.388763) | 4.984024 / 5.269862 (-0.285837) | 2.333754 / 4.565676 (-2.231923) | 0.105854 / 0.424275 (-0.318422) | 0.015833 / 0.007607 (0.008226) | 0.633614 / 0.226044 (0.407569) | 6.330974 / 2.268929 (4.062046) | 3.020498 / 55.444624 (-52.424126) | 2.578234 / 6.876477 (-4.298243) | 2.654429 / 2.142072 (0.512357) | 1.022041 / 4.805227 (-3.783186) | 0.205085 / 6.500664 (-6.295579) | 0.081122 / 0.075469 (0.005653) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.538929 / 1.841788 (-0.302859) | 19.907799 / 8.074308 (11.833490) | 17.174568 / 10.191392 (6.983176) | 0.228165 / 0.680424 (-0.452258) | 0.024688 / 0.534201 (-0.509513) | 0.508958 / 0.579283 (-0.070326) | 0.544469 / 0.434364 (0.110105) | 0.590805 / 0.540337 (0.050468) | 0.705947 / 1.386936 (-0.680989) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2573861afb170fd575dbe67270294a4e88ab4be6 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008377 / 0.011353 (-0.002975) | 0.004445 / 0.011008 (-0.006563) | 0.100671 / 0.038508 (0.062163) | 0.029216 / 0.023109 (0.006107) | 0.300311 / 0.275898 (0.024413) | 0.356907 / 0.323480 (0.033427) | 0.006921 / 0.007986 (-0.001065) | 0.003384 / 0.004328 (-0.000944) | 0.078529 / 0.004250 (0.074278) | 0.034689 / 0.037052 (-0.002364) | 0.304647 / 0.258489 (0.046158) | 0.343584 / 0.293841 (0.049743) | 0.032700 / 0.128546 (-0.095846) | 0.011403 / 0.075646 (-0.064244) | 0.321540 / 0.419271 (-0.097732) | 0.040770 / 0.043533 (-0.002762) | 0.306900 / 0.255139 (0.051761) | 0.322482 / 0.283200 (0.039282) | 0.085396 / 0.141683 (-0.056287) | 1.450735 / 1.452155 (-0.001419) | 1.491829 / 1.492716 (-0.000888) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.009439 / 0.018006 (-0.008567) | 0.406805 / 0.000490 (0.406315) | 0.002993 / 0.000200 (0.002793) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025034 / 0.037411 (-0.012378) | 0.100567 / 0.014526 (0.086042) | 0.107267 / 0.176557 (-0.069290) | 0.149945 / 0.737135 (-0.587190) | 0.111150 / 0.296338 (-0.185189) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418387 / 0.215209 (0.203178) | 4.177979 / 2.077655 (2.100324) | 1.886650 / 1.504120 (0.382530) | 1.685692 / 1.541195 (0.144497) | 1.728270 / 
1.468490 (0.259780) | 0.700904 / 4.584777 (-3.883873) | 3.379998 / 3.745712 (-0.365714) | 1.874779 / 5.269862 (-3.395083) | 1.170366 / 4.565676 (-3.395310) | 0.083190 / 0.424275 (-0.341085) | 0.012506 / 0.007607 (0.004899) | 0.528633 / 0.226044 (0.302589) | 5.301793 / 2.268929 (3.032865) | 2.334050 / 55.444624 (-53.110574) | 1.986988 / 6.876477 (-4.889488) | 2.020508 / 2.142072 (-0.121565) | 0.817227 / 4.805227 (-3.988000) | 0.150284 / 6.500664 (-6.350380) | 0.065489 / 0.075469 (-0.009980) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224216 / 1.841788 (-0.617572) | 13.729808 / 8.074308 (5.655500) | 14.283402 / 10.191392 (4.092010) | 0.159434 / 0.680424 (-0.520990) | 0.028471 / 0.534201 (-0.505730) | 0.395102 / 0.579283 (-0.184181) | 0.402733 / 0.434364 (-0.031631) | 0.470852 / 0.540337 (-0.069485) | 0.568530 / 1.386936 (-0.818406) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006750 / 0.011353 (-0.004603) | 0.004479 / 0.011008 (-0.006529) | 0.074926 / 0.038508 (0.036418) | 0.027619 / 0.023109 (0.004510) | 0.342070 / 0.275898 (0.066172) | 0.372452 / 0.323480 (0.048972) | 0.005094 / 0.007986 (-0.002892) | 0.003494 / 0.004328 (-0.000834) | 0.074963 / 0.004250 (0.070713) | 0.038457 / 0.037052 (0.001405) | 0.340587 / 0.258489 (0.082098) | 0.381212 / 0.293841 (0.087371) | 0.031597 / 0.128546 (-0.096950) | 0.011631 / 0.075646 (-0.064015) | 0.084646 / 0.419271 (-0.334626) | 0.042072 / 0.043533 (-0.001461) | 0.340977 / 0.255139 (0.085838) | 0.366502 / 0.283200 (0.083302) | 0.091181 / 0.141683 (-0.050502) | 1.435119 / 1.452155 (-0.017035) | 1.520426 / 1.492716 (0.027710) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211320 / 0.018006 (0.193313) | 0.466154 / 0.000490 (0.465664) | 0.002901 / 0.000200 (0.002701) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025122 / 0.037411 (-0.012289) | 0.098929 / 0.014526 (0.084403) | 0.106551 / 0.176557 (-0.070005) | 0.142820 / 0.737135 (-0.594316) | 0.110701 / 0.296338 (-0.185637) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445187 / 0.215209 (0.229978) | 4.457524 / 2.077655 (2.379870) | 2.088323 / 1.504120 (0.584203) | 1.888076 / 1.541195 (0.346881) | 1.923340 / 1.468490 (0.454850) | 0.723354 / 4.584777 (-3.861423) | 3.428479 / 3.745712 (-0.317233) | 1.914580 / 5.269862 (-3.355281) | 1.191810 / 4.565676 (-3.373866) | 0.087008 / 0.424275 (-0.337267) | 0.013431 / 0.007607 (0.005824) | 0.545089 / 0.226044 (0.319044) | 5.465887 / 2.268929 (3.196958) | 2.527431 / 55.444624 (-52.917194) | 2.240622 / 6.876477 (-4.635854) | 2.232472 / 2.142072 (0.090399) | 0.815968 / 4.805227 (-3.989259) | 0.152842 / 6.500664 (-6.347822) | 0.067152 / 0.075469 (-0.008317) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.328360 / 1.841788 (-0.513427) | 14.163349 / 8.074308 (6.089040) | 13.814255 / 10.191392 (3.622863) | 0.131684 / 0.680424 (-0.548740) | 0.016980 / 0.534201 (-0.517221) | 0.396045 / 0.579283 (-0.183238) | 0.395078 / 0.434364 (-0.039286) | 0.471728 / 0.540337 (-0.068609) | 0.567830 / 1.386936 (-0.819106) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#82331b032891671c334afe30c5f3cc21245b2d72 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012630 / 0.011353 (0.001277) | 0.007038 / 0.011008 (-0.003970) | 0.158816 / 0.038508 (0.120308) | 0.044142 / 0.023109 (0.021032) | 0.389393 / 0.275898 (0.113495) | 0.479745 / 0.323480 (0.156265) | 0.009335 / 0.007986 (0.001349) | 0.005434 / 0.004328 (0.001105) | 0.107747 / 0.004250 (0.103497) | 0.048382 / 0.037052 (0.011330) | 0.398144 / 0.258489 (0.139655) | 0.446373 / 0.293841 (0.152532) | 0.066285 / 0.128546 (-0.062261) | 0.021174 / 0.075646 (-0.054472) | 0.449176 / 0.419271 (0.029905) | 0.063044 / 0.043533 (0.019511) | 0.390523 / 0.255139 (0.135384) | 0.451435 / 0.283200 (0.168236) | 0.116369 / 0.141683 (-0.025314) | 1.881269 / 1.452155 (0.429114) | 1.944527 / 1.492716 (0.451811) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227989 / 0.018006 (0.209983) | 0.538514 / 0.000490 (0.538024) | 0.009404 / 0.000200 (0.009204) | 0.000510 / 0.000054 (0.000455) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029826 / 0.037411 (-0.007585) | 0.129623 / 0.014526 (0.115098) | 0.142067 / 0.176557 (-0.034489) | 0.218586 / 0.737135 (-0.518549) | 0.160524 / 0.296338 (-0.135814) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.667195 / 0.215209 (0.451986) | 6.694192 / 2.077655 (4.616537) | 2.542493 / 1.504120 (1.038373) | 2.124042 / 1.541195 (0.582847) | 2.024854 / 1.468490 
(0.556364) | 1.306222 / 4.584777 (-3.278555) | 5.631557 / 3.745712 (1.885845) | 3.405978 / 5.269862 (-1.863884) | 2.471399 / 4.565676 (-2.094278) | 0.165187 / 0.424275 (-0.259088) | 0.014880 / 0.007607 (0.007273) | 0.842718 / 0.226044 (0.616673) | 8.584358 / 2.268929 (6.315430) | 3.377228 / 55.444624 (-52.067396) | 2.667265 / 6.876477 (-4.209212) | 2.699462 / 2.142072 (0.557389) | 1.623115 / 4.805227 (-3.182112) | 0.253929 / 6.500664 (-6.246735) | 0.077189 / 0.075469 (0.001720) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.778962 / 1.841788 (-0.062825) | 18.997636 / 8.074308 (10.923328) | 24.255222 / 10.191392 (14.063830) | 0.304754 / 0.680424 (-0.375670) | 0.049656 / 0.534201 (-0.484545) | 0.590871 / 0.579283 (0.011588) | 0.649292 / 0.434364 (0.214928) | 0.751281 / 0.540337 (0.210943) | 0.872193 / 1.386936 (-0.514743) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010660 / 0.011353 (-0.000693) | 0.006492 / 0.011008 (-0.004516) | 0.112190 / 0.038508 (0.073682) | 0.045391 / 0.023109 (0.022281) | 0.439852 / 0.275898 (0.163954) | 0.486489 / 0.323480 (0.163009) | 0.007155 / 0.007986 (-0.000830) | 0.006323 / 0.004328 (0.001995) | 0.099775 / 0.004250 (0.095525) | 0.055762 / 0.037052 (0.018709) | 0.439457 / 0.258489 (0.180968) | 0.505322 / 0.293841 (0.211481) | 0.057019 / 0.128546 (-0.071527) | 0.031382 / 0.075646 (-0.044264) | 0.121211 / 0.419271 (-0.298061) | 0.066091 / 0.043533 (0.022558) | 0.499760 / 0.255139 (0.244622) | 0.508312 / 0.283200 (0.225113) | 0.146975 / 0.141683 (0.005292) | 1.916347 / 1.452155 (0.464193) | 2.065860 / 1.492716 (0.573144) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.247176 / 0.018006 (0.229170) | 0.565141 / 0.000490 (0.564652) | 0.004841 / 0.000200 (0.004641) | 0.000141 / 0.000054 (0.000087) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036378 / 0.037411 (-0.001033) | 0.143470 / 0.014526 (0.128944) | 0.148096 / 0.176557 (-0.028461) | 0.225877 / 0.737135 (-0.511258) | 0.147072 / 0.296338 (-0.149266) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.723119 / 0.215209 (0.507910) | 6.824981 / 2.077655 (4.747326) | 2.883840 / 1.504120 (1.379720) | 2.468707 / 1.541195 (0.927513) | 2.525549 / 1.468490 (1.057059) | 1.426640 / 4.584777 (-3.158137) | 5.816045 / 3.745712 (2.070333) | 5.727037 / 5.269862 (0.457175) | 2.650307 / 4.565676 (-1.915369) | 0.160306 / 0.424275 (-0.263970) | 0.015371 / 0.007607 (0.007764) | 0.835778 / 0.226044 (0.609733) | 8.622836 / 2.268929 (6.353907) | 3.616338 / 55.444624 (-51.828287) | 2.974243 / 6.876477 (-3.902234) | 2.884557 / 2.142072 (0.742485) | 1.734874 / 4.805227 (-3.070353) | 0.277474 / 6.500664 (-6.223190) | 0.094189 / 0.075469 (0.018720) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.785728 / 1.841788 (-0.056059) | 19.376490 / 8.074308 (11.302182) | 24.560403 / 10.191392 (14.369011) | 0.250686 / 0.680424 (-0.429738) | 0.034333 / 0.534201 (-0.499868) | 0.557331 / 0.579283 (-0.021952) | 0.641007 / 0.434364 (0.206643) | 0.657138 / 0.540337 (0.116800) | 0.759023 / 1.386936 (-0.627913) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#06ae3f678651bfbb3ca7dd3274ee2f38e0e0237e \"CML watermark\")\n",
"I am editing the title and description of this PR:\r\n- It should be \"Lint code\" instead of \"Format code\": formatting was still done with `black`\r\n- This PR uses `ruff` instead of `isort` and `flake8` (not `black`): note that `black` was still used for formatting"
] | 2023-02-09T17:50:21 | 2024-06-01T15:35:02 | 2023-02-14T16:18:38 | COLLABORATOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5519",
"html_url": "https://github.com/huggingface/datasets/pull/5519",
"diff_url": "https://github.com/huggingface/datasets/pull/5519.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5519.patch",
"merged_at": "2023-02-14T16:18:38"
} | EDIT:
Use `ruff` for linting instead of `isort` and `flake8` ~~`black`~~ to be consistent with [`transformers`](https://github.com/huggingface/transformers/pull/21480) and [`hfh`](https://github.com/huggingface/huggingface_hub/pull/1323).
TODO:
- [x] ~Merge the community contributors' PR to avoid having to run `make style` on their PR branches~ (we have some new PRs, but fixing those shouldn't be too big of a problem) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5519/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5518 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5518/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5518/comments | https://api.github.com/repos/huggingface/datasets/issues/5518/events | https://github.com/huggingface/datasets/pull/5518 | 1,578,203,962 | PR_kwDODunzps5Joom3 | 5,518 | Remove py.typed | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008283 / 0.011353 (-0.003070) | 0.004450 / 0.011008 (-0.006558) | 0.099773 / 0.038508 (0.061265) | 0.029068 / 0.023109 (0.005959) | 0.296799 / 0.275898 (0.020901) | 0.350946 / 0.323480 (0.027466) | 0.007331 / 0.007986 (-0.000655) | 0.004550 / 0.004328 (0.000222) | 0.077603 / 0.004250 (0.073352) | 0.034307 / 0.037052 (-0.002746) | 0.313174 / 0.258489 (0.054685) | 0.342270 / 0.293841 (0.048429) | 0.033463 / 0.128546 (-0.095083) | 0.011421 / 0.075646 (-0.064225) | 0.317188 / 0.419271 (-0.102083) | 0.040985 / 0.043533 (-0.002548) | 0.300800 / 0.255139 (0.045661) | 0.360171 / 0.283200 (0.076972) | 0.086702 / 0.141683 (-0.054981) | 1.474679 / 1.452155 (0.022525) | 1.518319 / 1.492716 (0.025603) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198059 / 0.018006 (0.180052) | 0.403502 / 0.000490 (0.403012) | 0.002663 / 0.000200 (0.002463) | 0.000218 / 0.000054 (0.000164) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022946 / 0.037411 (-0.014465) | 0.096466 / 0.014526 (0.081940) | 0.104092 / 0.176557 (-0.072465) | 0.138499 / 0.737135 (-0.598636) | 0.106941 / 0.296338 (-0.189397) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416000 / 0.215209 (0.200791) | 4.153120 / 2.077655 (2.075465) | 1.843957 / 1.504120 (0.339837) | 1.650391 / 1.541195 (0.109197) | 1.684765 / 1.468490 
(0.216275) | 0.688917 / 4.584777 (-3.895860) | 3.442797 / 3.745712 (-0.302916) | 1.834685 / 5.269862 (-3.435176) | 1.148046 / 4.565676 (-3.417631) | 0.082299 / 0.424275 (-0.341976) | 0.012399 / 0.007607 (0.004792) | 0.521099 / 0.226044 (0.295054) | 5.223695 / 2.268929 (2.954767) | 2.270970 / 55.444624 (-53.173654) | 1.921321 / 6.876477 (-4.955156) | 1.954675 / 2.142072 (-0.187398) | 0.809383 / 4.805227 (-3.995845) | 0.148562 / 6.500664 (-6.352102) | 0.064764 / 0.075469 (-0.010705) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212687 / 1.841788 (-0.629101) | 13.491641 / 8.074308 (5.417333) | 12.972926 / 10.191392 (2.781534) | 0.137036 / 0.680424 (-0.543388) | 0.028591 / 0.534201 (-0.505610) | 0.391980 / 0.579283 (-0.187303) | 0.394474 / 0.434364 (-0.039889) | 0.456582 / 0.540337 (-0.083755) | 0.535984 / 1.386936 (-0.850952) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006419 / 0.011353 (-0.004934) | 0.004295 / 0.011008 (-0.006713) | 0.077702 / 0.038508 (0.039194) | 0.027368 / 0.023109 (0.004259) | 0.336713 / 0.275898 (0.060815) | 0.370074 / 0.323480 (0.046594) | 0.004657 / 0.007986 (-0.003328) | 0.003308 / 0.004328 (-0.001021) | 0.075747 / 0.004250 (0.071496) | 0.037323 / 0.037052 (0.000271) | 0.342382 / 0.258489 (0.083893) | 0.381109 / 0.293841 (0.087269) | 0.031804 / 0.128546 (-0.096742) | 0.011761 / 0.075646 (-0.063885) | 0.086818 / 0.419271 (-0.332454) | 0.042058 / 0.043533 (-0.001475) | 0.346295 / 0.255139 (0.091156) | 0.366857 / 0.283200 (0.083658) | 0.088666 / 0.141683 (-0.053016) | 1.533711 / 1.452155 (0.081556) | 1.537422 / 1.492716 (0.044705) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220416 / 0.018006 (0.202410) | 0.387393 / 0.000490 (0.386903) | 0.003739 / 0.000200 (0.003539) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024083 / 0.037411 (-0.013329) | 0.098036 / 0.014526 (0.083510) | 0.102908 / 0.176557 (-0.073648) | 0.139512 / 0.737135 (-0.597623) | 0.107703 / 0.296338 (-0.188635) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437615 / 0.215209 (0.222406) | 4.373140 / 2.077655 (2.295486) | 2.065063 / 1.504120 (0.560943) | 1.863938 / 1.541195 (0.322743) | 1.907955 / 1.468490 (0.439465) | 0.695830 / 4.584777 (-3.888947) | 3.394248 / 3.745712 (-0.351464) | 1.842794 / 5.269862 (-3.427068) | 1.156928 / 4.565676 (-3.408748) | 0.082505 / 0.424275 (-0.341771) | 0.012405 / 0.007607 (0.004798) | 0.538041 / 0.226044 (0.311997) | 5.363508 / 2.268929 (3.094579) | 2.509383 / 55.444624 (-52.935241) | 2.160416 / 6.876477 (-4.716061) | 2.162054 / 2.142072 (0.019982) | 0.802419 / 4.805227 (-4.002809) | 0.150529 / 6.500664 (-6.350135) | 0.066418 / 0.075469 (-0.009051) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257221 / 1.841788 (-0.584567) | 13.748839 / 8.074308 (5.674531) | 13.310555 / 10.191392 (3.119163) | 0.152997 / 0.680424 (-0.527427) | 0.016618 / 0.534201 (-0.517583) | 0.375443 / 0.579283 (-0.203840) | 0.374942 / 0.434364 (-0.059422) | 0.466704 / 0.540337 (-0.073633) | 0.553563 / 1.386936 (-0.833373) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1ac8343af4e2dc6fe0771d0be70eaf8a6e5a8fbc \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009260 / 0.011353 (-0.002092) | 0.005213 / 0.011008 (-0.005795) | 0.102151 / 0.038508 (0.063643) | 0.035619 / 0.023109 (0.012510) | 0.296266 / 0.275898 (0.020368) | 0.359884 / 0.323480 (0.036404) | 0.008176 / 0.007986 (0.000190) | 0.005031 / 0.004328 (0.000703) | 0.077178 / 0.004250 (0.072927) | 0.041898 / 0.037052 (0.004846) | 0.305640 / 0.258489 (0.047151) | 0.346275 / 0.293841 (0.052434) | 0.037684 / 0.128546 (-0.090863) | 0.011816 / 0.075646 (-0.063831) | 0.334853 / 0.419271 (-0.084419) | 0.046535 / 0.043533 (0.003002) | 0.291544 / 0.255139 (0.036405) | 0.317194 / 0.283200 (0.033994) | 0.103212 / 0.141683 (-0.038471) | 1.424994 / 1.452155 (-0.027161) | 1.486216 / 1.492716 (-0.006501) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011816 / 0.018006 (-0.006190) | 0.442092 / 0.000490 (0.441602) | 0.001297 / 0.000200 (0.001097) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028277 / 0.037411 (-0.009134) | 0.110431 / 0.014526 (0.095905) | 0.118456 / 0.176557 (-0.058100) | 0.156778 / 0.737135 (-0.580357) | 0.123036 / 0.296338 (-0.173302) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399006 / 0.215209 (0.183797) | 3.990367 / 2.077655 (1.912712) | 1.798739 / 1.504120 (0.294620) | 1.607133 / 1.541195 (0.065938) | 1.748897 / 1.468490 
(0.280407) | 0.690666 / 4.584777 (-3.894111) | 3.795892 / 3.745712 (0.050180) | 3.479317 / 5.269862 (-1.790545) | 1.861268 / 4.565676 (-2.704409) | 0.085235 / 0.424275 (-0.339040) | 0.012997 / 0.007607 (0.005390) | 0.512489 / 0.226044 (0.286445) | 5.039515 / 2.268929 (2.770587) | 2.258079 / 55.444624 (-53.186545) | 1.907178 / 6.876477 (-4.969299) | 1.985953 / 2.142072 (-0.156119) | 0.843595 / 4.805227 (-3.961633) | 0.165286 / 6.500664 (-6.335378) | 0.063026 / 0.075469 (-0.012443) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.186680 / 1.841788 (-0.655108) | 14.976016 / 8.074308 (6.901708) | 14.436941 / 10.191392 (4.245549) | 0.172620 / 0.680424 (-0.507804) | 0.028760 / 0.534201 (-0.505441) | 0.443505 / 0.579283 (-0.135778) | 0.435665 / 0.434364 (0.001301) | 0.520164 / 0.540337 (-0.020174) | 0.608348 / 1.386936 (-0.778588) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007510 / 0.011353 (-0.003842) | 0.005012 / 0.011008 (-0.005996) | 0.077865 / 0.038508 (0.039357) | 0.033610 / 0.023109 (0.010500) | 0.365996 / 0.275898 (0.090098) | 0.416393 / 0.323480 (0.092913) | 0.005672 / 0.007986 (-0.002314) | 0.005334 / 0.004328 (0.001006) | 0.074948 / 0.004250 (0.070698) | 0.045962 / 0.037052 (0.008909) | 0.362209 / 0.258489 (0.103719) | 0.410522 / 0.293841 (0.116681) | 0.036247 / 0.128546 (-0.092299) | 0.012432 / 0.075646 (-0.063214) | 0.088754 / 0.419271 (-0.330517) | 0.048848 / 0.043533 (0.005315) | 0.370994 / 0.255139 (0.115855) | 0.382476 / 0.283200 (0.099277) | 0.103443 / 0.141683 (-0.038240) | 1.483127 / 1.452155 (0.030972) | 1.573366 / 1.492716 (0.080650) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224163 / 0.018006 (0.206157) | 0.475136 / 0.000490 (0.474646) | 0.000394 / 0.000200 (0.000194) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030612 / 0.037411 (-0.006799) | 0.113983 / 0.014526 (0.099457) | 0.121835 / 0.176557 (-0.054722) | 0.160092 / 0.737135 (-0.577043) | 0.127431 / 0.296338 (-0.168908) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421389 / 0.215209 (0.206179) | 4.207638 / 2.077655 (2.129984) | 2.040265 / 1.504120 (0.536145) | 1.868617 / 1.541195 (0.327422) | 1.979016 / 1.468490 (0.510526) | 0.712499 / 4.584777 (-3.872278) | 3.783091 / 3.745712 (0.037379) | 2.124293 / 5.269862 (-3.145569) | 1.382028 / 4.565676 (-3.183649) | 0.087133 / 0.424275 (-0.337142) | 0.012634 / 0.007607 (0.005027) | 0.518965 / 0.226044 (0.292920) | 5.188330 / 2.268929 (2.919401) | 2.556593 / 55.444624 (-52.888031) | 2.243081 / 6.876477 (-4.633396) | 2.340420 / 2.142072 (0.198347) | 0.858010 / 4.805227 (-3.947218) | 0.169165 / 6.500664 (-6.331499) | 0.065177 / 0.075469 (-0.010292) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.297350 / 1.841788 (-0.544438) | 15.404241 / 8.074308 (7.329933) | 13.806039 / 10.191392 (3.614647) | 0.182055 / 0.680424 (-0.498369) | 0.017789 / 0.534201 (-0.516412) | 0.422828 / 0.579283 (-0.156455) | 0.418269 / 0.434364 (-0.016095) | 0.521561 / 0.540337 (-0.018777) | 0.642526 / 1.386936 (-0.744410) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0009eea6819c32a888f65b0fdb5889b6d311c436 \"CML watermark\")\n"
] | 2023-02-09T16:22:29 | 2023-02-13T13:55:49 | 2023-02-13T13:48:40 | COLLABORATOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5518",
"html_url": "https://github.com/huggingface/datasets/pull/5518",
"diff_url": "https://github.com/huggingface/datasets/pull/5518.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5518.patch",
"merged_at": "2023-02-13T13:48:40"
} | Fix https://github.com/huggingface/datasets/issues/3841 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5518/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5517 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5517/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5517/comments | https://api.github.com/repos/huggingface/datasets/issues/5517/events | https://github.com/huggingface/datasets/issues/5517 | 1,577,976,608 | I_kwDODunzps5eDgMg | 5,517 | `with_format("numpy")` silently downcasts float64 to float32 features | {
"login": "ernestum",
"id": 1250234,
"node_id": "MDQ6VXNlcjEyNTAyMzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1250234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ernestum",
"html_url": "https://github.com/ernestum",
"followers_url": "https://api.github.com/users/ernestum/followers",
"following_url": "https://api.github.com/users/ernestum/following{/other_user}",
"gists_url": "https://api.github.com/users/ernestum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ernestum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ernestum/subscriptions",
"organizations_url": "https://api.github.com/users/ernestum/orgs",
"repos_url": "https://api.github.com/users/ernestum/repos",
"events_url": "https://api.github.com/users/ernestum/events{/privacy}",
"received_events_url": "https://api.github.com/users/ernestum/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/10",
"html_url": "https://github.com/huggingface/datasets/milestone/10",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels",
"id": 9038583,
"node_id": "MI_kwDODunzps4Aier3",
"number": 10,
"title": "3.0",
"description": "Next major release",
"creator": {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 5,
"closed_issues": 3,
"state": "open",
"created_at": "2023-02-13T16:22:42",
"updated_at": "2024-06-28T06:51:30",
"due_on": null,
"closed_at": null
} | [
"Hi! This behavior stems from these lines:\r\n\r\nhttps://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/np_formatter.py#L45-L46\r\n\r\nI agree we should preserve the original type whenever possible and downcast explicitly with a warning.\r\n\r\n@lhoestq Do you remember why we need this \"default dtype\" logic in our formatters?",
"I was also wondering why the default type logic is needed. Me just deleting it is probably too naive of a solution.",
"Hmm I think the idea was to end up with the usual default precision for deep learning models - no matter how the data was stored or where it comes from.\r\n\r\nFor example in NLP we store tokens using an optimized low precision to save disk space, but when we set the format to `torch` we actually need to get `int64`. Although the need for a default for integers also comes from numpy not returning the same integer precision depending on your machine. Finally I guess we added a default for floats as well for consistency.\r\n\r\nI'm a bit embarrassed by this though, as a user I'd have expected to get the same precision indeed as well and get a zero copy view.",
"Will you fix this or should I open a PR?",
"Unfortunately removing it for integers is a breaking change for most `transformers` + `datasets` users for NLP (which is a common case). Removing it for floats is a breaking change for `transformers` + `datasets` for ASR as well. And it also is a breaking change for the other users relying on this behavior.\r\n\r\nTherefore I think that the only short term solution is for the user to provide `dtype=` manually and document better this behavior. We could also extend `dtype` to accept a value that means \"return the same dtype as the underlying storage\" and make it easier to do zero copy.",
"@lhoestq It should be fine to remove this conversion in Datasets 3.0, no? For now, we can warn the user (with a log message) about the future change when the default type is changed.",
"Let's see with the transformers team if it sounds reasonable ? We'd have to fix multiple example scripts though.\r\n\r\nIf it's not ok we can also explore keeping this behavior only for tokens and audio data.",
"IMO being coupled with Transformers can lead to unexpected behavior when one tries to use our lib without pairing it with Transformers, so I think it's still important to \"fix\" this, even if it means we will need to update Transformers' example scripts afterward.\r\n",
"Ideally let's update the `transformers` example scripts before the change :P",
"For others that run into the same issue: A temporary workaround for me is this:\r\n```python\r\ndef numpy_transform(batch):\r\n return {key: np.asarray(val) for key, val in batch.items()}\r\n\r\ndataset = dataset.with_transform(numpy_transform)\r\n```",
"This behavior (silent upcast from `int32` to `int64`) is also unexpected for the user in https://discuss.huggingface.co/t/standard-getitem-returns-wrong-data-type-for-arrays/62470/2",
"Hi, I stumbled on a variation that upcasts uint8 to int64. I would expect the dtype to be the same as it was when I generated the dataset.\r\n\r\n```\r\nimport numpy as np\r\nimport datasets as ds\r\n\r\nfoo = np.random.randint(0, 256, size=(5, 10, 10), dtype=np.uint8)\r\n\r\nfeatures = ds.Features({\"foo\": ds.Array2D((10, 10), \"uint8\")})\r\ndataset = ds.Dataset.from_dict({\"foo\": foo}, features=features)\r\ndataset.set_format(\"torch\")\r\nprint(\"feature dtype:\", dataset.features[\"foo\"].dtype)\r\nprint(\"array dtype:\", dataset[\"foo\"].dtype)\r\n\r\n# feature dtype: uint8\r\n# array dtype: torch.int64\r\n```\r\n",
"workaround to remove torch upcasting\r\n\r\n```\r\nimport datasets as ds\r\nimport torch\r\n\r\nclass FixedTorchFormatter(ds.formatting.TorchFormatter):\r\n def _tensorize(self, value):\r\n return torch.from_numpy(value)\r\n\r\n\r\nds.formatting._register_formatter(FixedTorchFormatter, \"torch\")\r\n```"
] | 2023-02-09T14:18:00 | 2024-01-18T08:42:17 | null | NONE | null | null | null | ### Describe the bug
When I create a dataset with a `float64` feature and then apply numpy formatting, the returned numpy arrays are silently downcast to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print("feature dtype:", dataset.features['a'].dtype)
print("array dtype:", dataset['a'].dtype)
```
output:
```
feature dtype: float64
array dtype: float32
```
### Expected behavior
```
feature dtype: float64
array dtype: float64
```
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 10.0.1
- Pandas version: 1.4.4
### Suggested Fix
Changing [the `_tensorize` function of the numpy formatter](https://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/np_formatter.py#L32) to
```python
def _tensorize(self, value):
    if isinstance(value, (str, bytes, type(None))):
        return value
    elif isinstance(value, (np.character, np.ndarray)) and np.issubdtype(value.dtype, np.character):
        return value
    elif isinstance(value, np.number):
        return value
    return np.asarray(value, **self.np_array_kwargs)
```
fixes this particular issue for me. I am not sure whether this would break other tests. This change should also avoid unnecessary copying of the array.
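For reference, a quick sanity check for such a change (a minimal sketch; it assumes the `_tensorize` fix above is applied locally):
```python
import numpy as np
import datasets

dataset = datasets.Dataset.from_dict({"a": [1.0, 2.0, 3.0]}).with_format("numpy")

# With the suggested fix, the storage dtype should be preserved end to end.
assert dataset.features["a"].dtype == "float64"
assert dataset["a"].dtype == np.float64
```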
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5517/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5516 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5516/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5516/comments | https://api.github.com/repos/huggingface/datasets/issues/5516/events | https://github.com/huggingface/datasets/pull/5516 | 1,577,661,640 | PR_kwDODunzps5JmzPQ | 5,516 | Reload features from Parquet metadata | {
"login": "MFreidank",
"id": 6368040,
"node_id": "MDQ6VXNlcjYzNjgwNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MFreidank",
"html_url": "https://github.com/MFreidank",
"followers_url": "https://api.github.com/users/MFreidank/followers",
"following_url": "https://api.github.com/users/MFreidank/following{/other_user}",
"gists_url": "https://api.github.com/users/MFreidank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MFreidank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MFreidank/subscriptions",
"organizations_url": "https://api.github.com/users/MFreidank/orgs",
"repos_url": "https://api.github.com/users/MFreidank/repos",
"events_url": "https://api.github.com/users/MFreidank/events{/privacy}",
"received_events_url": "https://api.github.com/users/MFreidank/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks a lot for your help @lhoestq. I've simplified what turned out to be a simple fix and added the unit test.\r\n\r\nDoes this look ready to be merged or is there anything I'm still missing?",
"Cool ! I think you just need to remove the unused import in `io/parquet.py`\r\n```\r\nsrc/datasets/io/parquet.py:4:1: F401 'pyarrow as pa' imported but unused\r\n```\r\nand we're good to merge :)",
"_The documentation is not available anymore as the PR was closed or merged._",
"> Cool ! I think you just need to remove the unused import in `io/parquet.py`\r\n> \r\n> ```\r\n> src/datasets/io/parquet.py:4:1: F401 'pyarrow as pa' imported but unused\r\n> ```\r\n> \r\n> and we're good to merge :)\r\n\r\nDone! Thanks a lot, this was fun :)"
] | 2023-02-09T10:52:15 | 2023-02-12T16:00:00 | 2023-02-12T15:57:01 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5516",
"html_url": "https://github.com/huggingface/datasets/pull/5516",
"diff_url": "https://github.com/huggingface/datasets/pull/5516.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5516.patch",
"merged_at": "2023-02-12T15:57:01"
} | Resolves #5482.
Attaches feature metadata to parquet files serialised using `Dataset.to_parquet`.
This allows retrieving data with "rich" feature types (e.g., `datasets.features.image.Image` or `datasets.features.audio.Audio`) from parquet files without cumbersome casting (for an example, see #5482).
@lhoestq It seems that it is sufficient to attach the metadata to the schema prior to serialising; the features are then automatically loaded back with the correct types afterwards.
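To make the mechanism concrete, here is a minimal sketch of the idea (a sketch only: it reuses `ArrowWriter._build_metadata`, discussed in the open questions below, so treat the exact payload layout as an internal detail):
```python
import pyarrow as pa
import pyarrow.parquet as pq

import datasets
from datasets.arrow_writer import ArrowWriter

features = datasets.Features({"a": datasets.Value("float64")})
table = pa.table({"a": [1.0, 2.0]})

# Attach the same schema metadata the ArrowWriter produces, then write.
hf_metadata = ArrowWriter._build_metadata(datasets.DatasetInfo(features=features))
pq.write_table(table.replace_schema_metadata(hf_metadata), "ds.parquet")

# The features can be recovered from the Parquet schema metadata alone.
reloaded_features = datasets.Features.from_arrow_schema(pq.read_schema("ds.parquet"))
assert reloaded_features == features
```
The round trip relies on the reader picking the features back up from the schema metadata, which is what makes the reload automatic.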
I used the following script to test the implementation:
```python
from pathlib import Path
import datasets
dataset_name = "Maysee/tiny-imagenet"
ds = datasets.load_dataset(dataset_name, split=datasets.Split.TRAIN)
output_directory_path = Path(__file__).parent.joinpath("example_test_outputs", dataset_name.replace("/", "_"))
output_directory_path.mkdir(exist_ok=True, parents=True)
output_filepath = output_directory_path.joinpath("ds.parquet")
ds.to_parquet(str(output_filepath))
reloaded_ds = datasets.load_dataset(str(output_directory_path), split=datasets.Split.TRAIN)
assert ds.features == reloaded_ds.features
```
Prior to the change in this PR, this script raises an `AssertionError` and the `Image` features lose their type after serialisation. After the change, the assertion passes and manual inspection of the features shows type `Image` for the respective columns of `reloaded_ds`.
Some open questions:
* How/where can I best add new unit tests for this implementation?
* What dataset would I best use in the tests? I chose `Maysee/tiny-imagenet` mainly because it is small and contains an `Image` feature to test with, but I'd be happy for suggestions on a more suitable data source.
* Currently I'm calling `datasets.arrow_writer.ArrowWriter._build_metadata` as I need the same logic. However, I'm not happy with the coupling between `datasets.io.parquet` and `datasets.arrow_writer` this leaves me with. I suggest factoring this common logic out into a helper function and reusing it from both places. Do you agree, and if so, could you please guide me on where this function would best be placed?
Many thanks in advance and kind regards,
MFreidank
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5516/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5515 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5515/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5515/comments | https://api.github.com/repos/huggingface/datasets/issues/5515/events | https://github.com/huggingface/datasets/pull/5515 | 1,577,590,611 | PR_kwDODunzps5Jmj5X | 5,515 | Unify `load_from_cache_file` type and logic | {
"login": "HallerPatrick",
"id": 22773355,
"node_id": "MDQ6VXNlcjIyNzczMzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/22773355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HallerPatrick",
"html_url": "https://github.com/HallerPatrick",
"followers_url": "https://api.github.com/users/HallerPatrick/followers",
"following_url": "https://api.github.com/users/HallerPatrick/following{/other_user}",
"gists_url": "https://api.github.com/users/HallerPatrick/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HallerPatrick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HallerPatrick/subscriptions",
"organizations_url": "https://api.github.com/users/HallerPatrick/orgs",
"repos_url": "https://api.github.com/users/HallerPatrick/repos",
"events_url": "https://api.github.com/users/HallerPatrick/events{/privacy}",
"received_events_url": "https://api.github.com/users/HallerPatrick/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The commit also includes the changes to the `DatasetDict` methods or am I missing something?",
"Oh, indeed. Feel free to mark the PR as \"Ready for review\" then.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010149 / 0.011353 (-0.001204) | 0.005606 / 0.011008 (-0.005402) | 0.103455 / 0.038508 (0.064947) | 0.042934 / 0.023109 (0.019825) | 0.308365 / 0.275898 (0.032467) | 0.394188 / 0.323480 (0.070708) | 0.008760 / 0.007986 (0.000774) | 0.004567 / 0.004328 (0.000239) | 0.077959 / 0.004250 (0.073708) | 0.050115 / 0.037052 (0.013063) | 0.318009 / 0.258489 (0.059520) | 0.358578 / 0.293841 (0.064737) | 0.039231 / 0.128546 (-0.089315) | 0.012381 / 0.075646 (-0.063265) | 0.340046 / 0.419271 (-0.079226) | 0.048366 / 0.043533 (0.004834) | 0.307643 / 0.255139 (0.052504) | 0.342886 / 0.283200 (0.059687) | 0.109628 / 0.141683 (-0.032055) | 1.457297 / 1.452155 (0.005142) | 1.518067 / 1.492716 (0.025351) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.295590 / 0.018006 (0.277584) | 0.531515 / 0.000490 (0.531026) | 0.005677 / 0.000200 (0.005477) | 0.000095 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030901 / 0.037411 (-0.006511) | 0.118312 / 0.014526 (0.103786) | 0.123146 / 0.176557 (-0.053410) | 0.163608 / 0.737135 (-0.573527) | 0.128604 / 0.296338 (-0.167734) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404143 / 0.215209 (0.188934) | 4.000118 / 2.077655 (1.922464) | 1.804502 / 1.504120 (0.300382) | 1.597287 / 1.541195 (0.056093) | 1.738512 / 1.468490 
(0.270022) | 0.704658 / 4.584777 (-3.880119) | 3.830101 / 3.745712 (0.084389) | 2.186598 / 5.269862 (-3.083263) | 1.367873 / 4.565676 (-3.197804) | 0.085550 / 0.424275 (-0.338725) | 0.012226 / 0.007607 (0.004619) | 0.505760 / 0.226044 (0.279716) | 5.054583 / 2.268929 (2.785655) | 2.284942 / 55.444624 (-53.159682) | 1.961413 / 6.876477 (-4.915064) | 2.059449 / 2.142072 (-0.082623) | 0.845009 / 4.805227 (-3.960218) | 0.167204 / 6.500664 (-6.333460) | 0.065998 / 0.075469 (-0.009471) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.221861 / 1.841788 (-0.619927) | 15.925213 / 8.074308 (7.850905) | 15.359308 / 10.191392 (5.167916) | 0.171776 / 0.680424 (-0.508648) | 0.029234 / 0.534201 (-0.504967) | 0.446349 / 0.579283 (-0.132934) | 0.447873 / 0.434364 (0.013509) | 0.527400 / 0.540337 (-0.012937) | 0.610208 / 1.386936 (-0.776728) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008030 / 0.011353 (-0.003323) | 0.005686 / 0.011008 (-0.005322) | 0.076204 / 0.038508 (0.037696) | 0.037131 / 0.023109 (0.014022) | 0.341461 / 0.275898 (0.065563) | 0.378734 / 0.323480 (0.055255) | 0.006580 / 0.007986 (-0.001406) | 0.004379 / 0.004328 (0.000050) | 0.073983 / 0.004250 (0.069732) | 0.055895 / 0.037052 (0.018842) | 0.342667 / 0.258489 (0.084178) | 0.401464 / 0.293841 (0.107623) | 0.037710 / 0.128546 (-0.090837) | 0.012604 / 0.075646 (-0.063042) | 0.087563 / 0.419271 (-0.331709) | 0.050887 / 0.043533 (0.007354) | 0.333491 / 0.255139 (0.078352) | 0.357437 / 0.283200 (0.074237) | 0.109566 / 0.141683 (-0.032117) | 1.423372 / 1.452155 (-0.028783) | 1.569423 / 1.492716 (0.076706) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.340986 / 0.018006 (0.322980) | 0.530885 / 0.000490 (0.530395) | 0.004172 / 0.000200 (0.003972) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030424 / 0.037411 (-0.006987) | 0.121191 / 0.014526 (0.106666) | 0.129066 / 0.176557 (-0.047491) | 0.166938 / 0.737135 (-0.570198) | 0.132000 / 0.296338 (-0.164338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418718 / 0.215209 (0.203509) | 4.163973 / 2.077655 (2.086318) | 1.982665 / 1.504120 (0.478545) | 1.798866 / 1.541195 (0.257671) | 1.918867 / 1.468490 (0.450377) | 0.724634 / 4.584777 (-3.860143) | 3.864549 / 3.745712 (0.118837) | 3.697768 / 5.269862 (-1.572093) | 1.983942 / 4.565676 (-2.581735) | 0.086818 / 0.424275 (-0.337457) | 0.012336 / 0.007607 (0.004728) | 0.522314 / 0.226044 (0.296269) | 5.216813 / 2.268929 (2.947884) | 2.516187 / 55.444624 (-52.928437) | 2.172057 / 6.876477 (-4.704420) | 2.342773 / 2.142072 (0.200701) | 0.851805 / 4.805227 (-3.953422) | 0.170139 / 6.500664 (-6.330525) | 0.068494 / 0.075469 (-0.006975) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.307370 / 1.841788 (-0.534418) | 16.737937 / 8.074308 (8.663629) | 14.483384 / 10.191392 (4.291992) | 0.172418 / 0.680424 (-0.508006) | 0.018241 / 0.534201 (-0.515960) | 0.432049 / 0.579283 (-0.147234) | 0.447590 / 0.434364 (0.013227) | 0.550332 / 0.540337 (0.009994) | 0.646756 / 1.386936 (-0.740180) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#819bc6e9f88459f363e6fb6948e9cbe5c231500d \"CML watermark\")\n"
] | 2023-02-09T10:04:46 | 2023-02-14T15:38:13 | 2023-02-14T14:26:42 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5515",
"html_url": "https://github.com/huggingface/datasets/pull/5515",
"diff_url": "https://github.com/huggingface/datasets/pull/5515.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5515.patch",
"merged_at": "2023-02-14T14:26:42"
} | * Updated the type annotation for `load_from_cache_file`
* Added logic for cache checking if needed
* Updated documentation following the wording of `Dataset.map` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5515/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5514 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5514/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5514/comments | https://api.github.com/repos/huggingface/datasets/issues/5514/events | https://github.com/huggingface/datasets/issues/5514 | 1,576,453,837 | I_kwDODunzps5d9sbN | 5,514 | Improve inconsistency of `Dataset.map` interface for `load_from_cache_file` | {
"login": "HallerPatrick",
"id": 22773355,
"node_id": "MDQ6VXNlcjIyNzczMzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/22773355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HallerPatrick",
"html_url": "https://github.com/HallerPatrick",
"followers_url": "https://api.github.com/users/HallerPatrick/followers",
"following_url": "https://api.github.com/users/HallerPatrick/following{/other_user}",
"gists_url": "https://api.github.com/users/HallerPatrick/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HallerPatrick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HallerPatrick/subscriptions",
"organizations_url": "https://api.github.com/users/HallerPatrick/orgs",
"repos_url": "https://api.github.com/users/HallerPatrick/repos",
"events_url": "https://api.github.com/users/HallerPatrick/events{/privacy}",
"received_events_url": "https://api.github.com/users/HallerPatrick/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi, thanks for noticing this! We can't just remove the cache control as this allows us to control where the arrow files generated by the ops are written (cached on disk if enabled or a temporary directory if disabled). The right way to address this inconsistency would be by having `load_from_cache_file=None` by default everywhere.",
"Hi! Yes, this seems more plausible. I can implement that. One last thing is the type annotation `load_from_cache_file: bool = None`. Which I then would change to `load_from_cache_file: Optional[bool] = None`.",
"PR #5515 ",
"Yes, `Optional[bool]` is the correct type annotation and thanks for the PR."
] | 2023-02-08T16:40:44 | 2023-02-14T14:26:44 | 2023-02-14T14:26:44 | CONTRIBUTOR | null | null | null | ### Feature request
1. Change the `load_from_cache_file` default value to `True`.
2. Remove or alter the checks in the `is_caching_enabled` logic.
### Motivation
I stumbled over an inconsistency in the `Dataset.map` interface. The documentation (and source) states for the parameter `load_from_cache_file`:
```
load_from_cache_file (`bool`, defaults to `True` if caching is enabled):
If a cache file storing the current computation from `function`
can be identified, use it instead of recomputing.
```
Several things are inconsistent here:
1. The `load_from_cache_file` default value is `None`, while it is annotated as `bool`
2. It is inconsistent with other method signatures, like `filter`, which have the default value `True`
3. The logic is inconsistent, as the `map` method checks whether caching is enabled through `is_caching_enabled`, while similar methods do not.
### Your contribution
I am not fully aware of the logic behind the caching checks. If this is just an inconsistency that grew historically, I would suggest removing the `is_caching_enabled` logic as the "default" logic. Maybe someone can give insights into whether environment variables have a higher priority than local variables or vice versa.
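For concreteness, a minimal sketch of how the unified default could be resolved (`resolve_load_from_cache_file` is a hypothetical helper name, and this assumes `is_caching_enabled` keeps its current role):
```python
from typing import Optional

from datasets import is_caching_enabled


def resolve_load_from_cache_file(load_from_cache_file: Optional[bool]) -> bool:
    # None defers to the global caching flag, which users control via
    # datasets.enable_caching() / datasets.disable_caching().
    if load_from_cache_file is None:
        return is_caching_enabled()
    return load_from_cache_file
```
With `None` as the default everywhere, passing an explicit `True`/`False` keeps its current meaning, while omitting the argument defers to the global caching configuration.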
If this is clarified, I could adjust the source according to the "Feature request" section of this issue. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5514/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5513 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5513/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5513/comments | https://api.github.com/repos/huggingface/datasets/issues/5513/events | https://github.com/huggingface/datasets/issues/5513 | 1,576,300,803 | I_kwDODunzps5d9HED | 5,513 | Some functions use a param named `type` shouldn't that be avoided since it's a Python reserved name? | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! Let's not do this - renaming it would be a breaking change, and going through the deprecation cycle is only worth it if it improves user experience.",
"Hi @mariosasko, ok it makes sense. Anyway, don't you think it's worth it at some point to start a deprecation cycle e.g. `fs` in `load_from_disk`? It doesn't affect user experience but it's for sure a bad practice IMO, but's up to you 😄 Feel free to close this issue otherwise!",
"I don't think deprecating a param name in this particular instance is worth the hassle, so I'm closing the issue 🙂.",
"Sure, makes sense @mariosasko thanks!"
] | 2023-02-08T15:13:46 | 2023-07-24T16:02:18 | 2023-07-24T14:27:59 | MEMBER | null | null | null | Hi @mariosasko, @lhoestq, or whoever reads this! :)
After going through `ArrowDataset.set_format`, I found out that the `type` param is actually named `type`, which shadows the Python built-in `type`, as you may already know. Shouldn't that be renamed to `format_type` before the 3.0.0 release?
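As a toy illustration of the shadowing concern (not the actual `datasets` implementation):
```python
def set_format(type=None):
    # The parameter shadows the built-in `type`, so type(value) can no
    # longer be used inside this function to inspect a value's class.
    print(type)  # prints "numpy", not <class 'type'>


set_format(type="numpy")
```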
Just wanted to get your input, and if applicable, tackle this issue myself! Thanks 🤗 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5513/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5512 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5512/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5512/comments | https://api.github.com/repos/huggingface/datasets/issues/5512/events | https://github.com/huggingface/datasets/pull/5512 | 1,576,142,432 | PR_kwDODunzps5JhtQy | 5,512 | Speed up batched PyTorch DataLoader | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008882 / 0.011353 (-0.002471) | 0.004562 / 0.011008 (-0.006446) | 0.100035 / 0.038508 (0.061527) | 0.030654 / 0.023109 (0.007545) | 0.298745 / 0.275898 (0.022847) | 0.356869 / 0.323480 (0.033389) | 0.007170 / 0.007986 (-0.000815) | 0.003471 / 0.004328 (-0.000858) | 0.077975 / 0.004250 (0.073725) | 0.037861 / 0.037052 (0.000809) | 0.311643 / 0.258489 (0.053154) | 0.343504 / 0.293841 (0.049663) | 0.033768 / 0.128546 (-0.094778) | 0.011342 / 0.075646 (-0.064304) | 0.323953 / 0.419271 (-0.095319) | 0.040818 / 0.043533 (-0.002715) | 0.298492 / 0.255139 (0.043353) | 0.327292 / 0.283200 (0.044092) | 0.088423 / 0.141683 (-0.053260) | 1.489520 / 1.452155 (0.037366) | 1.532962 / 1.492716 (0.040245) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223654 / 0.018006 (0.205647) | 0.415134 / 0.000490 (0.414644) | 0.007394 / 0.000200 (0.007194) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023616 / 0.037411 (-0.013795) | 0.096652 / 0.014526 (0.082126) | 0.105239 / 0.176557 (-0.071318) | 0.148637 / 0.737135 (-0.588498) | 0.107937 / 0.296338 (-0.188402) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426816 / 0.215209 (0.211607) | 4.241533 / 2.077655 (2.163878) | 1.946493 / 1.504120 (0.442373) | 1.735765 / 1.541195 (0.194570) | 1.781424 / 1.468490 
(0.312934) | 0.688082 / 4.584777 (-3.896694) | 3.396444 / 3.745712 (-0.349268) | 1.920333 / 5.269862 (-3.349528) | 1.293833 / 4.565676 (-3.271843) | 0.081967 / 0.424275 (-0.342308) | 0.012911 / 0.007607 (0.005304) | 0.536928 / 0.226044 (0.310884) | 5.452327 / 2.268929 (3.183399) | 2.505785 / 55.444624 (-52.938840) | 2.173627 / 6.876477 (-4.702850) | 2.119978 / 2.142072 (-0.022095) | 0.809012 / 4.805227 (-3.996215) | 0.149124 / 6.500664 (-6.351540) | 0.066008 / 0.075469 (-0.009461) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.215702 / 1.841788 (-0.626085) | 13.757525 / 8.074308 (5.683217) | 13.999208 / 10.191392 (3.807816) | 0.164875 / 0.680424 (-0.515549) | 0.028517 / 0.534201 (-0.505684) | 0.394829 / 0.579283 (-0.184454) | 0.404962 / 0.434364 (-0.029401) | 0.484455 / 0.540337 (-0.055882) | 0.575008 / 1.386936 (-0.811928) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006754 / 0.011353 (-0.004598) | 0.004579 / 0.011008 (-0.006430) | 0.076617 / 0.038508 (0.038109) | 0.027902 / 0.023109 (0.004793) | 0.346278 / 0.275898 (0.070380) | 0.398060 / 0.323480 (0.074580) | 0.004938 / 0.007986 (-0.003047) | 0.004681 / 0.004328 (0.000353) | 0.076336 / 0.004250 (0.072086) | 0.038018 / 0.037052 (0.000966) | 0.358701 / 0.258489 (0.100212) | 0.408413 / 0.293841 (0.114572) | 0.031772 / 0.128546 (-0.096774) | 0.011604 / 0.075646 (-0.064042) | 0.085964 / 0.419271 (-0.333308) | 0.042030 / 0.043533 (-0.001502) | 0.343568 / 0.255139 (0.088429) | 0.381805 / 0.283200 (0.098605) | 0.090759 / 0.141683 (-0.050924) | 1.504553 / 1.452155 (0.052398) | 1.594006 / 1.492716 (0.101289) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227395 / 0.018006 (0.209389) | 0.403097 / 0.000490 (0.402608) | 0.000413 / 0.000200 (0.000213) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024693 / 0.037411 (-0.012718) | 0.100470 / 0.014526 (0.085944) | 0.108481 / 0.176557 (-0.068076) | 0.142791 / 0.737135 (-0.594345) | 0.109949 / 0.296338 (-0.186389) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443674 / 0.215209 (0.228465) | 4.412207 / 2.077655 (2.334553) | 2.073752 / 1.504120 (0.569632) | 1.863153 / 1.541195 (0.321958) | 1.940063 / 1.468490 (0.471573) | 0.696456 / 4.584777 (-3.888321) | 3.422120 / 3.745712 (-0.323592) | 1.902579 / 5.269862 (-3.367282) | 1.184948 / 4.565676 (-3.380729) | 0.083079 / 0.424275 (-0.341196) | 0.012649 / 0.007607 (0.005042) | 0.542035 / 0.226044 (0.315991) | 5.421826 / 2.268929 (3.152897) | 2.525092 / 55.444624 (-52.919532) | 2.177144 / 6.876477 (-4.699332) | 2.225224 / 2.142072 (0.083151) | 0.804739 / 4.805227 (-4.000488) | 0.151000 / 6.500664 (-6.349664) | 0.066987 / 0.075469 (-0.008482) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.277199 / 1.841788 (-0.564589) | 14.184146 / 8.074308 (6.109838) | 13.413348 / 10.191392 (3.221956) | 0.128551 / 0.680424 (-0.551872) | 0.016461 / 0.534201 (-0.517740) | 0.379963 / 0.579283 (-0.199320) | 0.381350 / 0.434364 (-0.053014) | 0.439044 / 0.540337 (-0.101293) | 0.521559 / 1.386936 (-0.865377) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4f3c152c1c35df250d2fbeb25d5823a65714f2d8 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008876 / 0.011353 (-0.002477) | 0.004629 / 0.011008 (-0.006379) | 0.101697 / 0.038508 (0.063189) | 0.030373 / 0.023109 (0.007264) | 0.302206 / 0.275898 (0.026308) | 0.365835 / 0.323480 (0.042355) | 0.007877 / 0.007986 (-0.000109) | 0.004473 / 0.004328 (0.000144) | 0.077334 / 0.004250 (0.073084) | 0.038066 / 0.037052 (0.001014) | 0.308064 / 0.258489 (0.049575) | 0.347329 / 0.293841 (0.053488) | 0.034478 / 0.128546 (-0.094068) | 0.011651 / 0.075646 (-0.063995) | 0.323481 / 0.419271 (-0.095791) | 0.043515 / 0.043533 (-0.000018) | 0.299885 / 0.255139 (0.044746) | 0.328959 / 0.283200 (0.045760) | 0.095308 / 0.141683 (-0.046375) | 1.474058 / 1.452155 (0.021903) | 1.535335 / 1.492716 (0.042619) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197416 / 0.018006 (0.179410) | 0.421935 / 0.000490 (0.421446) | 0.003490 / 0.000200 (0.003290) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024519 / 0.037411 (-0.012892) | 0.100710 / 0.014526 (0.086185) | 0.104520 / 0.176557 (-0.072036) | 0.142048 / 0.737135 (-0.595087) | 0.109274 / 0.296338 (-0.187064) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408766 / 0.215209 (0.193557) | 4.101720 / 2.077655 (2.024065) | 1.812375 / 1.504120 (0.308256) | 1.605819 / 1.541195 (0.064624) | 1.688923 / 1.468490 
(0.220433) | 0.691198 / 4.584777 (-3.893579) | 3.422137 / 3.745712 (-0.323575) | 1.921318 / 5.269862 (-3.348544) | 1.168770 / 4.565676 (-3.396906) | 0.082840 / 0.424275 (-0.341435) | 0.012740 / 0.007607 (0.005133) | 0.524333 / 0.226044 (0.298289) | 5.258077 / 2.268929 (2.989149) | 2.273177 / 55.444624 (-53.171447) | 1.931919 / 6.876477 (-4.944558) | 1.988415 / 2.142072 (-0.153658) | 0.812227 / 4.805227 (-3.993000) | 0.150043 / 6.500664 (-6.350622) | 0.066422 / 0.075469 (-0.009047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188069 / 1.841788 (-0.653718) | 13.942681 / 8.074308 (5.868373) | 14.104658 / 10.191392 (3.913266) | 0.151966 / 0.680424 (-0.528458) | 0.028833 / 0.534201 (-0.505368) | 0.395125 / 0.579283 (-0.184158) | 0.408512 / 0.434364 (-0.025852) | 0.487587 / 0.540337 (-0.052751) | 0.570023 / 1.386936 (-0.816913) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006860 / 0.011353 (-0.004493) | 0.004582 / 0.011008 (-0.006426) | 0.079902 / 0.038508 (0.041394) | 0.027565 / 0.023109 (0.004456) | 0.341393 / 0.275898 (0.065495) | 0.378911 / 0.323480 (0.055431) | 0.005847 / 0.007986 (-0.002138) | 0.004681 / 0.004328 (0.000353) | 0.079422 / 0.004250 (0.075171) | 0.039135 / 0.037052 (0.002083) | 0.342026 / 0.258489 (0.083537) | 0.387510 / 0.293841 (0.093669) | 0.031999 / 0.128546 (-0.096547) | 0.011782 / 0.075646 (-0.063865) | 0.088563 / 0.419271 (-0.330709) | 0.042435 / 0.043533 (-0.001098) | 0.343055 / 0.255139 (0.087916) | 0.367437 / 0.283200 (0.084237) | 0.091578 / 0.141683 (-0.050104) | 1.506828 / 1.452155 (0.054673) | 1.599590 / 1.492716 (0.106874) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217939 / 0.018006 (0.199932) | 0.408352 / 0.000490 (0.407863) | 0.000394 / 0.000200 (0.000194) | 0.000063 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026344 / 0.037411 (-0.011067) | 0.102968 / 0.014526 (0.088442) | 0.110340 / 0.176557 (-0.066217) | 0.145696 / 0.737135 (-0.591439) | 0.111632 / 0.296338 (-0.184707) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440764 / 0.215209 (0.225555) | 4.423179 / 2.077655 (2.345524) | 2.057016 / 1.504120 (0.552896) | 1.848741 / 1.541195 (0.307546) | 1.939827 / 1.468490 (0.471337) | 0.699370 / 4.584777 (-3.885407) | 3.472521 / 3.745712 (-0.273191) | 3.232557 / 5.269862 (-2.037305) | 1.755534 / 4.565676 (-2.810143) | 0.083469 / 0.424275 (-0.340807) | 0.012980 / 0.007607 (0.005373) | 0.557662 / 0.226044 (0.331618) | 5.435657 / 2.268929 (3.166729) | 2.545106 / 55.444624 (-52.899519) | 2.168047 / 6.876477 (-4.708430) | 2.234070 / 2.142072 (0.091997) | 0.804662 / 4.805227 (-4.000565) | 0.152832 / 6.500664 (-6.347833) | 0.069372 / 0.075469 (-0.006097) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.299189 / 1.841788 (-0.542598) | 14.752880 / 8.074308 (6.678572) | 13.607676 / 10.191392 (3.416284) | 0.150773 / 0.680424 (-0.529650) | 0.016701 / 0.534201 (-0.517500) | 0.379507 / 0.579283 (-0.199776) | 0.389401 / 0.434364 (-0.044963) | 0.444199 / 0.540337 (-0.096139) | 0.524264 / 1.386936 (-0.862672) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#12be850b36c0b9d4841af86c75e08c0a726ffb5c \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008694 / 0.011353 (-0.002659) | 0.004549 / 0.011008 (-0.006459) | 0.101164 / 0.038508 (0.062656) | 0.029644 / 0.023109 (0.006535) | 0.294849 / 0.275898 (0.018950) | 0.366755 / 0.323480 (0.043275) | 0.007205 / 0.007986 (-0.000780) | 0.004255 / 0.004328 (-0.000074) | 0.077433 / 0.004250 (0.073183) | 0.038024 / 0.037052 (0.000972) | 0.310380 / 0.258489 (0.051891) | 0.347093 / 0.293841 (0.053252) | 0.033232 / 0.128546 (-0.095314) | 0.011404 / 0.075646 (-0.064242) | 0.323341 / 0.419271 (-0.095930) | 0.040586 / 0.043533 (-0.002946) | 0.296083 / 0.255139 (0.040944) | 0.321870 / 0.283200 (0.038671) | 0.087377 / 0.141683 (-0.054306) | 1.466869 / 1.452155 (0.014715) | 1.514763 / 1.492716 (0.022046) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.010272 / 0.018006 (-0.007734) | 0.414645 / 0.000490 (0.414155) | 0.003730 / 0.000200 (0.003530) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024093 / 0.037411 (-0.013318) | 0.098718 / 0.014526 (0.084192) | 0.105526 / 0.176557 (-0.071030) | 0.141578 / 0.737135 (-0.595557) | 0.109679 / 0.296338 (-0.186660) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412907 / 0.215209 (0.197698) | 4.134934 / 2.077655 (2.057280) | 1.881180 / 1.504120 (0.377060) | 1.693207 / 1.541195 (0.152012) | 1.753725 / 1.468490 
(0.285235) | 0.693077 / 4.584777 (-3.891700) | 3.367409 / 3.745712 (-0.378303) | 2.749035 / 5.269862 (-2.520827) | 1.565015 / 4.565676 (-3.000662) | 0.082609 / 0.424275 (-0.341666) | 0.012500 / 0.007607 (0.004892) | 0.523619 / 0.226044 (0.297575) | 5.250188 / 2.268929 (2.981259) | 2.314255 / 55.444624 (-53.130369) | 1.962357 / 6.876477 (-4.914120) | 2.020632 / 2.142072 (-0.121441) | 0.812504 / 4.805227 (-3.992724) | 0.149921 / 6.500664 (-6.350743) | 0.065816 / 0.075469 (-0.009653) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.230811 / 1.841788 (-0.610977) | 14.008566 / 8.074308 (5.934258) | 14.371285 / 10.191392 (4.179893) | 0.166323 / 0.680424 (-0.514101) | 0.029702 / 0.534201 (-0.504499) | 0.408629 / 0.579283 (-0.170654) | 0.410529 / 0.434364 (-0.023835) | 0.484482 / 0.540337 (-0.055855) | 0.572360 / 1.386936 (-0.814576) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006873 / 0.011353 (-0.004480) | 0.004609 / 0.011008 (-0.006400) | 0.075492 / 0.038508 (0.036984) | 0.028560 / 0.023109 (0.005450) | 0.340321 / 0.275898 (0.064423) | 0.376758 / 0.323480 (0.053278) | 0.005271 / 0.007986 (-0.002715) | 0.004786 / 0.004328 (0.000457) | 0.074843 / 0.004250 (0.070592) | 0.041072 / 0.037052 (0.004019) | 0.339952 / 0.258489 (0.081463) | 0.384375 / 0.293841 (0.090534) | 0.031771 / 0.128546 (-0.096775) | 0.011607 / 0.075646 (-0.064039) | 0.084338 / 0.419271 (-0.334933) | 0.042251 / 0.043533 (-0.001282) | 0.338904 / 0.255139 (0.083765) | 0.365360 / 0.283200 (0.082160) | 0.093151 / 0.141683 (-0.048532) | 1.449833 / 1.452155 (-0.002322) | 1.601946 / 1.492716 (0.109229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225149 / 0.018006 (0.207142) | 0.409855 / 0.000490 (0.409365) | 0.000384 / 0.000200 (0.000184) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025914 / 0.037411 (-0.011497) | 0.100443 / 0.014526 (0.085917) | 0.108557 / 0.176557 (-0.067999) | 0.150338 / 0.737135 (-0.586798) | 0.111472 / 0.296338 (-0.184866) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440221 / 0.215209 (0.225012) | 4.409268 / 2.077655 (2.331613) | 2.096008 / 1.504120 (0.591888) | 1.849443 / 1.541195 (0.308248) | 1.934901 / 1.468490 (0.466410) | 0.704072 / 4.584777 (-3.880705) | 3.371370 / 3.745712 (-0.374343) | 3.185478 / 5.269862 (-2.084384) | 1.514541 / 4.565676 (-3.051135) | 0.083724 / 0.424275 (-0.340551) | 0.012674 / 0.007607 (0.005067) | 0.542155 / 0.226044 (0.316111) | 5.413456 / 2.268929 (3.144528) | 2.508567 / 55.444624 (-52.936057) | 2.163235 / 6.876477 (-4.713242) | 2.193914 / 2.142072 (0.051842) | 0.810955 / 4.805227 (-3.994272) | 0.152769 / 6.500664 (-6.347895) | 0.068009 / 0.075469 (-0.007460) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272511 / 1.841788 (-0.569276) | 14.334861 / 8.074308 (6.260553) | 13.555445 / 10.191392 (3.364053) | 0.160520 / 0.680424 (-0.519904) | 0.018363 / 0.534201 (-0.515838) | 0.384937 / 0.579283 (-0.194346) | 0.409138 / 0.434364 (-0.025225) | 0.484037 / 0.540337 (-0.056300) | 0.565595 / 1.386936 (-0.821341) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#23f076ef0187a4009d3c62b14a02e146baf0e35f \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010077 / 0.011353 (-0.001276) | 0.005650 / 0.011008 (-0.005359) | 0.101285 / 0.038508 (0.062777) | 0.039571 / 0.023109 (0.016462) | 0.291855 / 0.275898 (0.015957) | 0.363582 / 0.323480 (0.040102) | 0.008513 / 0.007986 (0.000527) | 0.004472 / 0.004328 (0.000144) | 0.077314 / 0.004250 (0.073064) | 0.050707 / 0.037052 (0.013654) | 0.317282 / 0.258489 (0.058792) | 0.342348 / 0.293841 (0.048507) | 0.042951 / 0.128546 (-0.085595) | 0.012295 / 0.075646 (-0.063351) | 0.337269 / 0.419271 (-0.082003) | 0.048953 / 0.043533 (0.005420) | 0.292547 / 0.255139 (0.037408) | 0.325436 / 0.283200 (0.042236) | 0.111859 / 0.141683 (-0.029824) | 1.501958 / 1.452155 (0.049804) | 1.522281 / 1.492716 (0.029565) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011775 / 0.018006 (-0.006231) | 0.513283 / 0.000490 (0.512793) | 0.002941 / 0.000200 (0.002741) | 0.000099 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028702 / 0.037411 (-0.008710) | 0.108465 / 0.014526 (0.093940) | 0.121806 / 0.176557 (-0.054750) | 0.158424 / 0.737135 (-0.578712) | 0.128077 / 0.296338 (-0.168262) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395392 / 0.215209 (0.180183) | 3.944138 / 2.077655 (1.866483) | 1.773698 / 1.504120 (0.269578) | 1.588907 / 1.541195 (0.047712) | 1.697794 / 1.468490 
(0.229304) | 0.690281 / 4.584777 (-3.894496) | 3.819661 / 3.745712 (0.073948) | 3.228006 / 5.269862 (-2.041856) | 1.755625 / 4.565676 (-2.810052) | 0.083169 / 0.424275 (-0.341106) | 0.012337 / 0.007607 (0.004730) | 0.504730 / 0.226044 (0.278686) | 5.016916 / 2.268929 (2.747988) | 2.245484 / 55.444624 (-53.199141) | 1.911682 / 6.876477 (-4.964795) | 1.957659 / 2.142072 (-0.184413) | 0.818361 / 4.805227 (-3.986866) | 0.162386 / 6.500664 (-6.338279) | 0.062461 / 0.075469 (-0.013008) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197654 / 1.841788 (-0.644134) | 15.465611 / 8.074308 (7.391303) | 14.409126 / 10.191392 (4.217734) | 0.171776 / 0.680424 (-0.508647) | 0.028749 / 0.534201 (-0.505452) | 0.439666 / 0.579283 (-0.139618) | 0.445159 / 0.434364 (0.010795) | 0.543992 / 0.540337 (0.003655) | 0.643911 / 1.386936 (-0.743025) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007036 / 0.011353 (-0.004317) | 0.005273 / 0.011008 (-0.005735) | 0.075314 / 0.038508 (0.036806) | 0.033075 / 0.023109 (0.009966) | 0.350133 / 0.275898 (0.074235) | 0.399366 / 0.323480 (0.075886) | 0.005945 / 0.007986 (-0.002041) | 0.004276 / 0.004328 (-0.000052) | 0.074975 / 0.004250 (0.070725) | 0.051758 / 0.037052 (0.014706) | 0.355077 / 0.258489 (0.096588) | 0.430296 / 0.293841 (0.136455) | 0.036257 / 0.128546 (-0.092290) | 0.012376 / 0.075646 (-0.063270) | 0.087441 / 0.419271 (-0.331830) | 0.049066 / 0.043533 (0.005534) | 0.339867 / 0.255139 (0.084728) | 0.384379 / 0.283200 (0.101179) | 0.104843 / 0.141683 (-0.036840) | 1.498897 / 1.452155 (0.046742) | 1.551400 / 1.492716 (0.058684) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.334504 / 0.018006 (0.316498) | 0.516551 / 0.000490 (0.516061) | 0.000450 / 0.000200 (0.000250) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029313 / 0.037411 (-0.008099) | 0.110667 / 0.014526 (0.096141) | 0.124001 / 0.176557 (-0.052556) | 0.159154 / 0.737135 (-0.577981) | 0.129503 / 0.296338 (-0.166836) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416749 / 0.215209 (0.201540) | 4.171163 / 2.077655 (2.093508) | 1.981071 / 1.504120 (0.476951) | 1.788303 / 1.541195 (0.247108) | 1.912118 / 1.468490 (0.443628) | 0.708764 / 4.584777 (-3.876013) | 3.815222 / 3.745712 (0.069510) | 2.121633 / 5.269862 (-3.148229) | 1.347866 / 4.565676 (-3.217811) | 0.086340 / 0.424275 (-0.337935) | 0.012646 / 0.007607 (0.005039) | 0.525286 / 0.226044 (0.299241) | 5.254922 / 2.268929 (2.985994) | 2.488743 / 55.444624 (-52.955881) | 2.128069 / 6.876477 (-4.748408) | 2.180358 / 2.142072 (0.038286) | 0.841011 / 4.805227 (-3.964216) | 0.168732 / 6.500664 (-6.331932) | 0.065559 / 0.075469 (-0.009910) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.270518 / 1.841788 (-0.571270) | 15.557563 / 8.074308 (7.483255) | 13.660757 / 10.191392 (3.469365) | 0.185636 / 0.680424 (-0.494788) | 0.018152 / 0.534201 (-0.516049) | 0.423553 / 0.579283 (-0.155730) | 0.412718 / 0.434364 (-0.021646) | 0.528455 / 0.540337 (-0.011882) | 0.635274 / 1.386936 (-0.751662) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d40f05ef827c52344a2c6e83f7c8d13bb6b660d3 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011194 / 0.011353 (-0.000159) | 0.006344 / 0.011008 (-0.004664) | 0.122013 / 0.038508 (0.083505) | 0.044323 / 0.023109 (0.021214) | 0.356665 / 0.275898 (0.080767) | 0.439871 / 0.323480 (0.116391) | 0.010694 / 0.007986 (0.002709) | 0.004648 / 0.004328 (0.000320) | 0.091140 / 0.004250 (0.086890) | 0.052457 / 0.037052 (0.015404) | 0.369282 / 0.258489 (0.110793) | 0.403279 / 0.293841 (0.109438) | 0.054075 / 0.128546 (-0.074472) | 0.014484 / 0.075646 (-0.061162) | 0.407932 / 0.419271 (-0.011340) | 0.060681 / 0.043533 (0.017148) | 0.350889 / 0.255139 (0.095750) | 0.392041 / 0.283200 (0.108841) | 0.121252 / 0.141683 (-0.020431) | 1.809527 / 1.452155 (0.357373) | 1.835141 / 1.492716 (0.342425) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227372 / 0.018006 (0.209366) | 0.481908 / 0.000490 (0.481418) | 0.007262 / 0.000200 (0.007062) | 0.000148 / 0.000054 (0.000093) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031039 / 0.037411 (-0.006372) | 0.133947 / 0.014526 (0.119421) | 0.141935 / 0.176557 (-0.034622) | 0.197854 / 0.737135 (-0.539281) | 0.152393 / 0.296338 (-0.143945) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.517400 / 0.215209 (0.302191) | 4.899972 / 2.077655 (2.822317) | 2.171023 / 1.504120 (0.666903) | 2.008706 / 1.541195 (0.467511) | 1.988777 / 1.468490 
(0.520287) | 0.859872 / 4.584777 (-3.724905) | 4.673923 / 3.745712 (0.928211) | 2.703189 / 5.269862 (-2.566672) | 1.891680 / 4.565676 (-2.673997) | 0.109601 / 0.424275 (-0.314674) | 0.014622 / 0.007607 (0.007015) | 0.618990 / 0.226044 (0.392946) | 6.255608 / 2.268929 (3.986679) | 2.822199 / 55.444624 (-52.622425) | 2.457684 / 6.876477 (-4.418793) | 2.500041 / 2.142072 (0.357968) | 1.054529 / 4.805227 (-3.750698) | 0.209501 / 6.500664 (-6.291163) | 0.074929 / 0.075469 (-0.000540) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.532780 / 1.841788 (-0.309008) | 19.159455 / 8.074308 (11.085147) | 17.817063 / 10.191392 (7.625671) | 0.194078 / 0.680424 (-0.486346) | 0.038211 / 0.534201 (-0.495990) | 0.537366 / 0.579283 (-0.041917) | 0.538995 / 0.434364 (0.104631) | 0.679431 / 0.540337 (0.139094) | 0.801960 / 1.386936 (-0.584976) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008729 / 0.011353 (-0.002624) | 0.005711 / 0.011008 (-0.005297) | 0.091570 / 0.038508 (0.053062) | 0.039805 / 0.023109 (0.016696) | 0.413507 / 0.275898 (0.137609) | 0.456342 / 0.323480 (0.132862) | 0.006201 / 0.007986 (-0.001785) | 0.009700 / 0.004328 (0.005372) | 0.089146 / 0.004250 (0.084896) | 0.057543 / 0.037052 (0.020490) | 0.420806 / 0.258489 (0.162317) | 0.471962 / 0.293841 (0.178121) | 0.043940 / 0.128546 (-0.084606) | 0.014457 / 0.075646 (-0.061190) | 0.106674 / 0.419271 (-0.312598) | 0.058930 / 0.043533 (0.015397) | 0.419111 / 0.255139 (0.163972) | 0.452974 / 0.283200 (0.169774) | 0.124573 / 0.141683 (-0.017110) | 1.864753 / 1.452155 (0.412599) | 1.935387 / 1.492716 (0.442670) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275657 / 0.018006 (0.257651) | 0.498096 / 0.000490 (0.497606) | 0.000480 / 0.000200 (0.000280) | 0.000066 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034377 / 0.037411 (-0.003035) | 0.138050 / 0.014526 (0.123524) | 0.153718 / 0.176557 (-0.022838) | 0.201445 / 0.737135 (-0.535690) | 0.160346 / 0.296338 (-0.135992) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.540670 / 0.215209 (0.325461) | 5.376291 / 2.077655 (3.298636) | 2.581799 / 1.504120 (1.077679) | 2.328858 / 1.541195 (0.787663) | 2.446458 / 1.468490 (0.977968) | 0.923005 / 4.584777 (-3.661772) | 4.815977 / 3.745712 (1.070265) | 4.205725 / 5.269862 (-1.064137) | 2.400466 / 4.565676 (-2.165211) | 0.107207 / 0.424275 (-0.317068) | 0.015427 / 0.007607 (0.007819) | 0.657267 / 0.226044 (0.431222) | 6.491256 / 2.268929 (4.222327) | 3.179099 / 55.444624 (-52.265525) | 2.722434 / 6.876477 (-4.154042) | 2.788202 / 2.142072 (0.646129) | 1.060016 / 4.805227 (-3.745211) | 0.206899 / 6.500664 (-6.293766) | 0.077868 / 0.075469 (0.002399) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.567894 / 1.841788 (-0.273893) | 19.314330 / 8.074308 (11.240022) | 17.597614 / 10.191392 (7.406222) | 0.195777 / 0.680424 (-0.484647) | 0.022160 / 0.534201 (-0.512041) | 0.530592 / 0.579283 (-0.048691) | 0.508591 / 0.434364 (0.074227) | 0.619794 / 0.540337 (0.079457) | 0.749773 / 1.386936 (-0.637163) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8637141a67639c510294620306c9bb25d31d34ef \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012431 / 0.011353 (0.001078) | 0.006526 / 0.011008 (-0.004482) | 0.132266 / 0.038508 (0.093757) | 0.043199 / 0.023109 (0.020089) | 0.405230 / 0.275898 (0.129332) | 0.494643 / 0.323480 (0.171163) | 0.009927 / 0.007986 (0.001941) | 0.005227 / 0.004328 (0.000899) | 0.110914 / 0.004250 (0.106664) | 0.047815 / 0.037052 (0.010763) | 0.419099 / 0.258489 (0.160610) | 0.463405 / 0.293841 (0.169564) | 0.057858 / 0.128546 (-0.070688) | 0.018918 / 0.075646 (-0.056728) | 0.450584 / 0.419271 (0.031313) | 0.060457 / 0.043533 (0.016924) | 0.408234 / 0.255139 (0.153095) | 0.433722 / 0.283200 (0.150523) | 0.119403 / 0.141683 (-0.022280) | 1.966742 / 1.452155 (0.514587) | 1.980685 / 1.492716 (0.487969) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292853 / 0.018006 (0.274847) | 0.619697 / 0.000490 (0.619207) | 0.002135 / 0.000200 (0.001935) | 0.000117 / 0.000054 (0.000062) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031283 / 0.037411 (-0.006129) | 0.128649 / 0.014526 (0.114123) | 0.150116 / 0.176557 (-0.026441) | 0.187605 / 0.737135 (-0.549530) | 0.153334 / 0.296338 (-0.143005) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.659660 / 0.215209 (0.444451) | 6.459749 / 2.077655 (4.382094) | 2.764566 / 1.504120 (1.260446) | 2.362630 / 1.541195 (0.821435) | 2.426421 / 1.468490 
(0.957931) | 1.282407 / 4.584777 (-3.302370) | 5.668865 / 3.745712 (1.923153) | 3.236255 / 5.269862 (-2.033606) | 2.248836 / 4.565676 (-2.316841) | 0.145861 / 0.424275 (-0.278414) | 0.015707 / 0.007607 (0.008100) | 0.805218 / 0.226044 (0.579174) | 8.146831 / 2.268929 (5.877903) | 3.506283 / 55.444624 (-51.938341) | 2.736682 / 6.876477 (-4.139795) | 2.959039 / 2.142072 (0.816967) | 1.528428 / 4.805227 (-3.276799) | 0.270980 / 6.500664 (-6.229684) | 0.086824 / 0.075469 (0.011355) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.682506 / 1.841788 (-0.159282) | 18.844103 / 8.074308 (10.769795) | 21.008471 / 10.191392 (10.817079) | 0.258372 / 0.680424 (-0.422052) | 0.046505 / 0.534201 (-0.487696) | 0.574760 / 0.579283 (-0.004523) | 0.663745 / 0.434364 (0.229381) | 0.702411 / 0.540337 (0.162074) | 0.824024 / 1.386936 (-0.562912) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010016 / 0.011353 (-0.001337) | 0.007459 / 0.011008 (-0.003549) | 0.103954 / 0.038508 (0.065446) | 0.036363 / 0.023109 (0.013254) | 0.464079 / 0.275898 (0.188181) | 0.504730 / 0.323480 (0.181250) | 0.007865 / 0.007986 (-0.000121) | 0.005210 / 0.004328 (0.000882) | 0.105018 / 0.004250 (0.100767) | 0.062191 / 0.037052 (0.025139) | 0.483304 / 0.258489 (0.224815) | 0.547030 / 0.293841 (0.253189) | 0.055436 / 0.128546 (-0.073110) | 0.021073 / 0.075646 (-0.054573) | 0.120952 / 0.419271 (-0.298319) | 0.075593 / 0.043533 (0.032060) | 0.459930 / 0.255139 (0.204791) | 0.486924 / 0.283200 (0.203724) | 0.129465 / 0.141683 (-0.012218) | 1.902322 / 1.452155 (0.450167) | 1.980809 / 1.492716 (0.488092) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.259263 / 0.018006 (0.241257) | 0.596703 / 0.000490 (0.596213) | 0.004520 / 0.000200 (0.004320) | 0.000124 / 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032802 / 0.037411 (-0.004609) | 0.138751 / 0.014526 (0.124225) | 0.147106 / 0.176557 (-0.029451) | 0.194791 / 0.737135 (-0.542345) | 0.152643 / 0.296338 (-0.143696) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.678455 / 0.215209 (0.463246) | 6.673643 / 2.077655 (4.595989) | 2.943368 / 1.504120 (1.439248) | 2.591223 / 1.541195 (1.050029) | 2.741097 / 1.468490 (1.272607) | 1.261178 / 4.584777 (-3.323599) | 5.773853 / 3.745712 (2.028141) | 3.171559 / 5.269862 (-2.098303) | 2.124898 / 4.565676 (-2.440779) | 0.161849 / 0.424275 (-0.262426) | 0.015498 / 0.007607 (0.007891) | 0.857984 / 0.226044 (0.631940) | 8.456946 / 2.268929 (6.188018) | 3.818787 / 55.444624 (-51.625837) | 3.009953 / 6.876477 (-3.866523) | 3.113006 / 2.142072 (0.970934) | 1.477299 / 4.805227 (-3.327929) | 0.267207 / 6.500664 (-6.233457) | 0.087590 / 0.075469 (0.012121) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.757389 / 1.841788 (-0.084398) | 19.287690 / 8.074308 (11.213381) | 21.601991 / 10.191392 (11.410599) | 0.260464 / 0.680424 (-0.419960) | 0.028552 / 0.534201 (-0.505649) | 0.558934 / 0.579283 (-0.020349) | 0.673651 / 0.434364 (0.239287) | 0.714448 / 0.540337 (0.174111) | 0.857608 / 1.386936 (-0.529328) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2d3bd0134de444ffd10c4a39873dbf9aa3732c08 \"CML watermark\")\n",
"Ready for review @mariosasko, LMKWYT :)\r\n\r\nSorry it tooks me a few tries to fix the CI - I ended up not trying to use the latest `torch` version in the CI.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009474 / 0.011353 (-0.001878) | 0.005507 / 0.011008 (-0.005501) | 0.101219 / 0.038508 (0.062711) | 0.035591 / 0.023109 (0.012481) | 0.305841 / 0.275898 (0.029943) | 0.339135 / 0.323480 (0.015656) | 0.007920 / 0.007986 (-0.000066) | 0.004252 / 0.004328 (-0.000077) | 0.076912 / 0.004250 (0.072662) | 0.041923 / 0.037052 (0.004871) | 0.301405 / 0.258489 (0.042916) | 0.356488 / 0.293841 (0.062647) | 0.039342 / 0.128546 (-0.089204) | 0.012711 / 0.075646 (-0.062935) | 0.334193 / 0.419271 (-0.085079) | 0.049112 / 0.043533 (0.005579) | 0.301484 / 0.255139 (0.046345) | 0.315306 / 0.283200 (0.032106) | 0.102959 / 0.141683 (-0.038724) | 1.420677 / 1.452155 (-0.031478) | 1.549493 / 1.492716 (0.056777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284639 / 0.018006 (0.266633) | 0.501226 / 0.000490 (0.500736) | 0.004328 / 0.000200 (0.004128) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027034 / 0.037411 (-0.010377) | 0.108066 / 0.014526 (0.093540) | 0.122106 / 0.176557 (-0.054451) | 0.162908 / 0.737135 (-0.574227) | 0.127233 / 0.296338 (-0.169105) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.394023 / 0.215209 (0.178813) | 3.932729 / 2.077655 (1.855075) | 1.771195 / 1.504120 (0.267075) | 1.582788 / 1.541195 (0.041594) | 1.703219 / 1.468490 
(0.234728) | 0.702629 / 4.584777 (-3.882148) | 3.780187 / 3.745712 (0.034475) | 2.180433 / 5.269862 (-3.089428) | 1.504806 / 4.565676 (-3.060871) | 0.085289 / 0.424275 (-0.338986) | 0.012580 / 0.007607 (0.004973) | 0.515408 / 0.226044 (0.289363) | 5.010613 / 2.268929 (2.741685) | 2.256648 / 55.444624 (-53.187976) | 1.914971 / 6.876477 (-4.961505) | 2.038436 / 2.142072 (-0.103636) | 0.846240 / 4.805227 (-3.958987) | 0.164920 / 6.500664 (-6.335744) | 0.063899 / 0.075469 (-0.011570) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224160 / 1.841788 (-0.617627) | 15.089995 / 8.074308 (7.015687) | 14.777003 / 10.191392 (4.585611) | 0.169873 / 0.680424 (-0.510551) | 0.029233 / 0.534201 (-0.504968) | 0.445424 / 0.579283 (-0.133859) | 0.439194 / 0.434364 (0.004830) | 0.536370 / 0.540337 (-0.003968) | 0.636694 / 1.386936 (-0.750242) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008230 / 0.011353 (-0.003122) | 0.005499 / 0.011008 (-0.005509) | 0.076108 / 0.038508 (0.037600) | 0.037444 / 0.023109 (0.014335) | 0.364420 / 0.275898 (0.088522) | 0.412308 / 0.323480 (0.088828) | 0.006704 / 0.007986 (-0.001282) | 0.004359 / 0.004328 (0.000031) | 0.075080 / 0.004250 (0.070830) | 0.057698 / 0.037052 (0.020646) | 0.366088 / 0.258489 (0.107599) | 0.409583 / 0.293841 (0.115742) | 0.037882 / 0.128546 (-0.090664) | 0.012421 / 0.075646 (-0.063225) | 0.087701 / 0.419271 (-0.331571) | 0.050669 / 0.043533 (0.007136) | 0.351139 / 0.255139 (0.096000) | 0.384340 / 0.283200 (0.101140) | 0.108097 / 0.141683 (-0.033586) | 1.445010 / 1.452155 (-0.007145) | 1.559570 / 1.492716 (0.066853) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.324114 / 0.018006 (0.306108) | 0.549134 / 0.000490 (0.548644) | 0.003544 / 0.000200 (0.003344) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030646 / 0.037411 (-0.006765) | 0.108573 / 0.014526 (0.094047) | 0.125291 / 0.176557 (-0.051266) | 0.174798 / 0.737135 (-0.562338) | 0.128000 / 0.296338 (-0.168338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428881 / 0.215209 (0.213672) | 4.282320 / 2.077655 (2.204665) | 2.061462 / 1.504120 (0.557342) | 1.858477 / 1.541195 (0.317283) | 1.971646 / 1.468490 (0.503156) | 0.723631 / 4.584777 (-3.861146) | 3.822376 / 3.745712 (0.076664) | 2.174427 / 5.269862 (-3.095434) | 1.386066 / 4.565676 (-3.179611) | 0.088391 / 0.424275 (-0.335884) | 0.012948 / 0.007607 (0.005341) | 0.524423 / 0.226044 (0.298378) | 5.249389 / 2.268929 (2.980460) | 2.528662 / 55.444624 (-52.915962) | 2.245329 / 6.876477 (-4.631147) | 2.402733 / 2.142072 (0.260660) | 0.868864 / 4.805227 (-3.936364) | 0.174066 / 6.500664 (-6.326598) | 0.066165 / 0.075469 (-0.009304) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.296922 / 1.841788 (-0.544865) | 15.814109 / 8.074308 (7.739801) | 14.086059 / 10.191392 (3.894667) | 0.190952 / 0.680424 (-0.489472) | 0.017679 / 0.534201 (-0.516522) | 0.428872 / 0.579283 (-0.150411) | 0.435399 / 0.434364 (0.001035) | 0.540856 / 0.540337 (0.000519) | 0.648904 / 1.386936 (-0.738032) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f401758c5019ede4404994d5d59220125984874d \"CML watermark\")\n"
] | 2023-02-08T13:38:59 | 2023-02-19T18:35:09 | 2023-02-19T18:27:29 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5512",
"html_url": "https://github.com/huggingface/datasets/pull/5512",
"diff_url": "https://github.com/huggingface/datasets/pull/5512.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5512.patch",
"merged_at": "2023-02-19T18:27:29"
} | I implemented `__getitems__` to speed up batched data loading in PyTorch.
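As a rough sketch of the idea (not the exact code merged in this PR), a map-style dataset that exposes `__getitems__` lets recent PyTorch fetchers pull a whole batch in one call instead of issuing one `__getitem__` per index; the class and row store below are made up for illustration:

```python
from typing import List

class BatchReadableDataset:
    """Minimal sketch: recent PyTorch DataLoader fetchers check for
    __getitems__ and, when present, pass the whole list of batch
    indices in a single call."""

    def __init__(self, rows: List[dict]):
        self.rows = rows  # stands in for the Arrow-backed storage

    def __len__(self) -> int:
        return len(self.rows)

    def __getitem__(self, key: int) -> dict:
        # single-row access, still needed as a fallback
        return self.rows[key]

    def __getitems__(self, keys: List[int]) -> List[dict]:
        # one batched read instead of len(keys) separate lookups
        return [self.rows[k] for k in keys]
```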
close https://github.com/huggingface/datasets/issues/5505 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5512/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5511 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5511/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5511/comments | https://api.github.com/repos/huggingface/datasets/issues/5511/events | https://github.com/huggingface/datasets/issues/5511 | 1,575,851,768 | I_kwDODunzps5d7Zb4 | 5,511 | Creating a dummy dataset from a bigger one | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Update `datasets` or downgrade `huggingface-hub` ;)\r\n\r\nThe `huggingface-hub` lib did a breaking change a few months ago, and you're using an old version of `datasets` that does't support it",
"Awesome thanks a lot! Everything works just fine with `datasets==2.9.0` :-) ",
"Getting same error with latest versions.\r\n\r\n\r\n```shell\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nCell In[99], line 1\r\n----> 1 dataset.push_to_hub(\"mirfan899/kids_phoneme_asr\")\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:3538, in Dataset.push_to_hub(self, repo_id, split, private, token, branch, shard_size, embed_external_files)\r\n 3493 def push_to_hub(\r\n 3494 self,\r\n 3495 repo_id: str,\r\n (...)\r\n 3501 embed_external_files: bool = True,\r\n 3502 ):\r\n 3503 \"\"\"Pushes the dataset to the hub.\r\n 3504 The dataset is pushed using HTTP requests and does not need to have neither git or git-lfs installed.\r\n 3505 \r\n (...)\r\n 3536 ```\r\n 3537 \"\"\"\r\n-> 3538 repo_id, split, uploaded_size, dataset_nbytes = self._push_parquet_shards_to_hub(\r\n 3539 repo_id=repo_id,\r\n 3540 split=split,\r\n 3541 private=private,\r\n 3542 token=token,\r\n 3543 branch=branch,\r\n 3544 shard_size=shard_size,\r\n 3545 embed_external_files=embed_external_files,\r\n 3546 )\r\n 3547 organization, dataset_name = repo_id.split(\"/\")\r\n 3548 info_to_dump = self.info.copy()\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:3474, in Dataset._push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, shard_size, embed_external_files)\r\n 3472 shard.to_parquet(buffer)\r\n 3473 uploaded_size += buffer.tell()\r\n-> 3474 _retry(\r\n 3475 api.upload_file,\r\n 3476 func_kwargs=dict(\r\n 3477 path_or_fileobj=buffer.getvalue(),\r\n 3478 path_in_repo=path_in_repo(index),\r\n 3479 repo_id=repo_id,\r\n 3480 token=token,\r\n 3481 repo_type=\"dataset\",\r\n 3482 revision=branch,\r\n 3483 identical_ok=True,\r\n 3484 ),\r\n 3485 exceptions=HTTPError,\r\n 3486 status_codes=[504],\r\n 3487 base_wait_time=2.0,\r\n 3488 max_retries=5,\r\n 3489 max_wait_time=20.0,\r\n 3490 )\r\n 3491 return repo_id, split, uploaded_size, dataset_nbytes\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py:330, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)\r\n 328 while True:\r\n 329 try:\r\n--> 330 return func(*func_args, **func_kwargs)\r\n 331 except exceptions as err:\r\n 332 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:120, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)\r\n 117 if check_use_auth_token:\r\n 118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)\r\n--> 120 return fn(*args, **kwargs)\r\n\r\nTypeError: HfApi.upload_file() got an unexpected keyword argument 'identical_ok'\r\n```",
"Feel free to update `datasets` and `huggingface-hub`, it should fix it :)",
"I went ahead and upgraded both datasets and hub and still getting the same error\r\n",
"Which version do you have ? It's been a while since it has been fixed",
"huggingface 0.0.1\r\nhuggingface-hub 0.17.1\r\ndatasets 2.14.5\r\n\r\nstill has the issue!!",
"I face the same issue even after upgrading :/"
] | 2023-02-08T10:18:41 | 2023-12-28T18:21:01 | 2023-02-08T10:35:48 | CONTRIBUTOR | null | null | null | ### Describe the bug
I often want to create a dummy dataset from a bigger dataset for fast iteration when training. However, I'm having a hard time doing this, especially when trying to upload the dataset to the Hub.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("lambdalabs/pokemon-blip-captions")
dataset["train"] = dataset["train"].select(range(20))
dataset.push_to_hub("patrickvonplaten/dummy_image_data")
```
gives:
```
~/python_bin/datasets/arrow_dataset.py in _push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, max_shard_size, embed_external_files)
4003 base_wait_time=2.0,
4004 max_retries=5,
-> 4005 max_wait_time=20.0,
4006 )
4007 return repo_id, split, uploaded_size, dataset_nbytes
~/python_bin/datasets/utils/file_utils.py in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)
328 while True:
329 try:
--> 330 return func(*func_args, **func_kwargs)
331 except exceptions as err:
332 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):
~/hf/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py in _inner_fn(*args, **kwargs)
122 )
123
--> 124 return fn(*args, **kwargs)
125
126 return _inner_fn # type: ignore
TypeError: upload_file() got an unexpected keyword argument 'identical_ok'
In [2]:
```
### Expected behavior
I would have expected this to work; it's the most intuitive way of creating a dummy dataset.
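As a stopgap sketch, assuming the dummy set is only needed locally, `save_to_disk`/`load_from_disk` avoid the Hub upload path that fails here (the directory name is arbitrary):

```python
from datasets import load_dataset, load_from_disk

dataset = load_dataset("lambdalabs/pokemon-blip-captions")
dummy = dataset["train"].select(range(20))
dummy.save_to_disk("dummy_pokemon")      # write the small slice locally
dummy = load_from_disk("dummy_pokemon")  # reload it for fast iteration
```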
### Environment info
```
- `datasets` version: 2.1.1.dev0
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.3
- PyArrow version: 11.0.0
- Pandas version: 1.3.5
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5511/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5510 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5510/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5510/comments | https://api.github.com/repos/huggingface/datasets/issues/5510/events | https://github.com/huggingface/datasets/pull/5510 | 1,575,191,549 | PR_kwDODunzps5JehbR | 5,510 | Milvus integration for search | {
"login": "filip-halt",
"id": 81822489,
"node_id": "MDQ6VXNlcjgxODIyNDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/81822489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/filip-halt",
"html_url": "https://github.com/filip-halt",
"followers_url": "https://api.github.com/users/filip-halt/followers",
"following_url": "https://api.github.com/users/filip-halt/following{/other_user}",
"gists_url": "https://api.github.com/users/filip-halt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/filip-halt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/filip-halt/subscriptions",
"organizations_url": "https://api.github.com/users/filip-halt/orgs",
"repos_url": "https://api.github.com/users/filip-halt/repos",
"events_url": "https://api.github.com/users/filip-halt/events{/privacy}",
"received_events_url": "https://api.github.com/users/filip-halt/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5510). All of your documentation changes will be reflected on that endpoint.",
"To the maintainer, sorry about the repeated run requests for formatting. Missed the `make style` outlined in contributing guidelines. ",
"Anything I can do to get the workflow to run? @lhoestq ",
"cc @mariosasko \r\n\r\n> Anything I can do to get the workflow to run?\r\n\r\nYou can merge `main` into your branch to fix code formatting (we switched from isort+flake8 to ruff this week), and then run `make style`",
"I believe that should be good. @mariosasko"
] | 2023-02-07T23:30:26 | 2023-02-24T16:45:09 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5510",
"html_url": "https://github.com/huggingface/datasets/pull/5510",
"diff_url": "https://github.com/huggingface/datasets/pull/5510.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5510.patch",
"merged_at": null
} | Signed-off-by: Filip Haltmayer <filip.haltmayer@zilliz.com> | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5510/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5509 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5509/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5509/comments | https://api.github.com/repos/huggingface/datasets/issues/5509/events | https://github.com/huggingface/datasets/pull/5509 | 1,574,177,320 | PR_kwDODunzps5JbH-u | 5,509 | Add a static `__all__` to `__init__.py` for typecheckers | {
"login": "LoicGrobol",
"id": 14248012,
"node_id": "MDQ6VXNlcjE0MjQ4MDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/14248012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LoicGrobol",
"html_url": "https://github.com/LoicGrobol",
"followers_url": "https://api.github.com/users/LoicGrobol/followers",
"following_url": "https://api.github.com/users/LoicGrobol/following{/other_user}",
"gists_url": "https://api.github.com/users/LoicGrobol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LoicGrobol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LoicGrobol/subscriptions",
"organizations_url": "https://api.github.com/users/LoicGrobol/orgs",
"repos_url": "https://api.github.com/users/LoicGrobol/repos",
"events_url": "https://api.github.com/users/LoicGrobol/events{/privacy}",
"received_events_url": "https://api.github.com/users/LoicGrobol/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5509). All of your documentation changes will be reflected on that endpoint.",
"Hi! I've commented on the original issue to provide some context. Feel free to share your opinion there."
] | 2023-02-07T11:42:40 | 2023-02-08T17:48:24 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5509",
"html_url": "https://github.com/huggingface/datasets/pull/5509",
"diff_url": "https://github.com/huggingface/datasets/pull/5509.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5509.patch",
"merged_at": null
} | This adds a static `__all__` field to `__init__.py`, allowing typecheckers to know which symbols are accessible from `datasets` at runtime. In particular [Pyright](https://github.com/microsoft/pylance-release/issues/2328#issuecomment-1029381258) seems to rely on this. At this point I have added all (modulo oversight) the symbols mentioned in the Reference part of [the docs](https://huggingface.co/docs/datasets), but that could be adjusted. As a side effect, only these symbols will be imported by `from datasets import *`, which may or may not be a good thing (and if it isn't, that's easy to fix).
Another option would be to add a pyi stub, but I think `__all__` should be the most pythonic solution.
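For illustration, a minimal sketch of the pattern (the handful of re-exports shown here are real `datasets` symbols, but far from the full list added in this PR):

```python
# src/datasets/__init__.py (sketch, heavily abridged)
from .arrow_dataset import Dataset
from .dataset_dict import DatasetDict
from .load import load_dataset, load_from_disk

# A static __all__ tells typecheckers such as Pyright which names are
# re-exported, and it limits `from datasets import *` to exactly these symbols.
__all__ = [
    "Dataset",
    "DatasetDict",
    "load_dataset",
    "load_from_disk",
]
```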
This should fix #3841. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5509/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5508 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5508/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5508/comments | https://api.github.com/repos/huggingface/datasets/issues/5508/events | https://github.com/huggingface/datasets/issues/5508 | 1,573,290,359 | I_kwDODunzps5dxoF3 | 5,508 | Saving a dataset after setting format to torch doesn't work, but only if filtering | {
"login": "joebhakim",
"id": 13984157,
"node_id": "MDQ6VXNlcjEzOTg0MTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/13984157?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joebhakim",
"html_url": "https://github.com/joebhakim",
"followers_url": "https://api.github.com/users/joebhakim/followers",
"following_url": "https://api.github.com/users/joebhakim/following{/other_user}",
"gists_url": "https://api.github.com/users/joebhakim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joebhakim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joebhakim/subscriptions",
"organizations_url": "https://api.github.com/users/joebhakim/orgs",
"repos_url": "https://api.github.com/users/joebhakim/repos",
"events_url": "https://api.github.com/users/joebhakim/events{/privacy}",
"received_events_url": "https://api.github.com/users/joebhakim/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hey, I'm a research engineer working on language modelling wanting to contribute to open source. I was wondering if I could give it a shot?",
"Hi! This issue was fixed in https://github.com/huggingface/datasets/pull/4972, so please install `datasets>=2.5.0` to avoid it."
] | 2023-02-06T21:08:58 | 2023-02-09T14:55:26 | 2023-02-09T14:55:26 | NONE | null | null | null | ### Describe the bug
Saving a dataset to disk fails after setting the format to torch, but only when the dataset has been filtered first.
### Steps to reproduce the bug
```python
from datasets import Dataset

a = Dataset.from_dict({"b": [1, 2]})
a.set_format('torch')
a.save_to_disk("test_save")  # saves successfully
a.filter(None).save_to_disk("test_save_filter")  # does not
# >> [...] TypeError: Provided `function` which is applied to all elements of table returns a `dict` of types [<class 'torch.Tensor'>]. When using `batched=True`, make sure provided `function` returns a `dict` of types like `(<class 'list'>, <class 'numpy.ndarray'>)`.
# note: skipping the format change to torch lets this work.
```
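As a possible workaround on affected versions (a sketch only, assuming the failure comes from `filter` running while the torch format is active; per the comments on this issue the bug was fixed in `datasets>=2.5.0`), you can filter and save an unformatted copy:

```python
from datasets import Dataset

a = Dataset.from_dict({"b": [1, 2]})
a.set_format('torch')

# `with_format(None)` returns a copy in the default python format and leaves
# `a` untouched, so filtering and saving avoid the torch-formatted code path
a.with_format(None).filter(None).save_to_disk("test_save_filter")
```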
### Expected behavior
Saving to work
### Environment info
- `datasets` version: 2.4.0
- Platform: Linux-6.1.9-arch1-1-x86_64-with-glibc2.36
- Python version: 3.10.9
- PyArrow version: 9.0.0
- Pandas version: 1.4.4 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5508/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5507 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5507/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5507/comments | https://api.github.com/repos/huggingface/datasets/issues/5507/events | https://github.com/huggingface/datasets/issues/5507 | 1,572,667,036 | I_kwDODunzps5dvP6c | 5,507 | Optimise behaviour in respect to indices mapping | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-02-06T14:25:55 | 2023-02-28T18:19:18 | null | COLLABORATOR | null | null | null | _Originally [posted](https://huggingface.slack.com/archives/C02V51Q3800/p1675443873878489?thread_ts=1675418893.373479&cid=C02V51Q3800) on Slack_
Considering all this, perhaps for Datasets 3.0, we can do the following:
* [ ] have `contiguous=True` by default in `.shard` (requested in the survey; it also makes more sense for us since it doesn't create an indices mapping)
* [x] allow calling `save_to_disk` on "unflattened" datasets
* [ ] remove "hidden" expensive calls in `save_to_disk`, `unique`, `concatenate_datasets`, etc. For instance, instead of silently calling `flatten_indices` where it's needed, it's probably better to be explicit (considering how expensive these ops can be) and raise an error instead | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5507/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5507/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5506/comments | https://api.github.com/repos/huggingface/datasets/issues/5506/events | https://github.com/huggingface/datasets/issues/5506 | 1,571,838,641 | I_kwDODunzps5dsFqx | 5,506 | IterableDataset and Dataset return different batch sizes when using Trainer with multiple GPUs | {
"login": "kheyer",
"id": 38166299,
"node_id": "MDQ6VXNlcjM4MTY2Mjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/38166299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kheyer",
"html_url": "https://github.com/kheyer",
"followers_url": "https://api.github.com/users/kheyer/followers",
"following_url": "https://api.github.com/users/kheyer/following{/other_user}",
"gists_url": "https://api.github.com/users/kheyer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kheyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kheyer/subscriptions",
"organizations_url": "https://api.github.com/users/kheyer/orgs",
"repos_url": "https://api.github.com/users/kheyer/repos",
"events_url": "https://api.github.com/users/kheyer/events{/privacy}",
"received_events_url": "https://api.github.com/users/kheyer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! `datasets` doesn't do batching - the PyTorch DataLoader does and is created by the `Trainer`. Do you pass other arguments to training_args with respect to data loading ?\r\n\r\nAlso we recently released `.to_iterable_dataset` that does pretty much what you implemented, but using contiguous shards to get a better speed:\r\n```python\r\nif use_iterable_dataset:\r\n num_shards = 100\r\n dataset = dataset.to_iterable_dataset(num_shards=num_shards)\r\n```",
"This is the full set of training args passed. No training args were changed when switching dataset types.\r\n\r\n```python\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./checkpoints\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=256,\r\n save_steps=2000,\r\n save_total_limit=4,\r\n prediction_loss_only=True,\r\n report_to='none',\r\n gradient_accumulation_steps=6,\r\n fp16=True,\r\n max_steps=60000,\r\n lr_scheduler_type='linear',\r\n warmup_ratio=0.1,\r\n logging_steps=100,\r\n weight_decay=0.01,\r\n adam_beta1=0.9,\r\n adam_beta2=0.98,\r\n adam_epsilon=1e-6,\r\n learning_rate=1e-4\r\n)\r\n```",
"I think the issue comes from `transformers`: https://github.com/huggingface/transformers/issues/21444",
"Makes sense. Given that it's a `transformers` issue and already being tracked, I'll close this out."
] | 2023-02-06T03:26:03 | 2023-02-08T18:30:08 | 2023-02-08T18:30:07 | NONE | null | null | null | ### Describe the bug
I am training a Roberta model using 2 GPUs and the `Trainer` API with a batch size of 256.
Initially I used a standard `Dataset`, but had issues with slow data loading. After reading [this issue](https://github.com/huggingface/datasets/issues/2252), I swapped to loading my dataset as contiguous shards and passing those to an `IterableDataset`. I observed an unexpected drop in GPU memory utilization, and found the batch size returned from the model had been cut in half.
When using `Trainer` with 2 GPUs and a batch size of 256, `Dataset` returns a batch of size 512 (256 per GPU), while `IterableDataset` returns a batch size of 256 (256 total). My guess is `IterableDataset` isn't accounting for multiple cards.
### Steps to reproduce the bug
```python
import datasets
from datasets import IterableDataset
from transformers import RobertaConfig
from transformers import RobertaTokenizerFast
from transformers import RobertaForMaskedLM
from transformers import DataCollatorForLanguageModeling
from transformers import Trainer, TrainingArguments
use_iterable_dataset = True
def gen_from_shards(shards):
    for shard in shards:
        for example in shard:
            yield example

dataset = datasets.load_from_disk('my_dataset.hf')

if use_iterable_dataset:
    n_shards = 100
    shards = [dataset.shard(num_shards=n_shards, index=i) for i in range(n_shards)]
    dataset = IterableDataset.from_generator(gen_from_shards, gen_kwargs={"shards": shards})

tokenizer = RobertaTokenizerFast.from_pretrained("./my_tokenizer", max_len=160, use_fast=True)

config = RobertaConfig(
    vocab_size=8248,
    max_position_embeddings=256,
    num_attention_heads=8,
    num_hidden_layers=6,
    type_vocab_size=1)

model = RobertaForMaskedLM(config=config)

data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

training_args = TrainingArguments(
    per_device_train_batch_size=256
    # other args removed for brevity
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=dataset,
)
trainer.train()
```
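As an aside, recent `datasets` releases expose `Dataset.to_iterable_dataset` (mentioned in the maintainer comments on this issue), which replaces the manual sharding above and uses contiguous shards; a minimal sketch, assuming a version that includes it:

```python
# sketch: equivalent of the gen_from_shards approach above
if use_iterable_dataset:
    dataset = dataset.to_iterable_dataset(num_shards=100)
```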
### Expected behavior
Expected `Dataset` and `IterableDataset` to have the same batch size behavior. If the current behavior is intentional, the batch size printout at the start of training should be updated. Currently, both dataset classes result in `Trainer` printing the same total batch size, even though the batch sizes actually sent to the GPUs are different.
### Environment info
datasets 2.7.1
transformers 4.25.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5506/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5505/comments | https://api.github.com/repos/huggingface/datasets/issues/5505/events | https://github.com/huggingface/datasets/issues/5505 | 1,571,720,814 | I_kwDODunzps5dro5u | 5,505 | PyTorch BatchSampler still loads from Dataset one-by-one | {
"login": "davidgilbertson",
"id": 4443482,
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidgilbertson",
"html_url": "https://github.com/davidgilbertson",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This change seems to come from a few months ago in the PyTorch side. That's good news and it means we may not need to pass a batch_sampler as soon as we add `Dataset.__getitems__` to get the optimal speed :)\r\n\r\nThanks for reporting ! Would you like to open a PR to add `__getitems__` and remove this outdated documentation ?",
"Yeah I figured this was the sort of thing that probably once worked. I can confirm that you no longer need the batch sampler, just `batch_size=n` in the `DataLoader`.\r\n\r\nI'll pass on the PR, I'm flat out right now, sorry."
] | 2023-02-06T01:14:55 | 2023-02-19T18:27:30 | 2023-02-19T18:27:30 | NONE | null | null | null | ### Describe the bug
In [the docs here](https://huggingface.co/docs/datasets/use_with_pytorch#use-a-batchsampler), it mentions the issue of the Dataset being read one-by-one, then states that using a BatchSampler resolves the issue.
I'm not sure if this is a mistake in the docs or the code, but it seems that the only way for a Dataset to be passed a list of indexes by PyTorch (instead of one index at a time) is to define a `__getitems__` method (note the plural) on the Dataset object, and since the HF Dataset doesn't have this, PyTorch executes [this line of code](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/_utils/fetch.py#L58), reverting to fetching one-by-one.
### Steps to reproduce the bug
You can put a breakpoint in `Dataset.__getitem__()` or just print the args from there and see that it's called multiple times for a single `next(iter(dataloader))`, even when using the code from the docs:
```py
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

# `ds` is a Hugging Face dataset loaded earlier
batch_sampler = BatchSampler(RandomSampler(ds), batch_size=32, drop_last=False)
dataloader = DataLoader(ds, batch_sampler=batch_sampler)
```
### Expected behavior
The expected behaviour would be for it to fetch batches from the dataset, rather than one-by-one.
To demonstrate that there is room for improvement: once I have an HF dataset `ds`, if I just add this line:
```py
ds.__getitems__ = ds.__getitem__
```
...then the time taken to loop over the dataset improves considerably (for wikitext-103, from one minute to 13 seconds with batch size 32). Probably not a big deal in the grand scheme of things, but seems like an easy win.
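For anyone who wants this without monkey-patching, a hedged sketch of a thin wrapper (the class name `BatchedHF` is made up for illustration):

```py
from torch.utils.data import Dataset as TorchDataset

class BatchedHF(TorchDataset):
    """Illustrative wrapper exposing __getitems__ so PyTorch fetches a whole
    batch with one Arrow query instead of one row at a time."""

    def __init__(self, hf_ds):
        self.hf_ds = hf_ds

    def __len__(self):
        return len(self.hf_ds)

    def __getitem__(self, idx):
        return self.hf_ds[idx]

    def __getitems__(self, indices):
        batch = self.hf_ds[indices]  # dict of column -> list, fetched in one call
        # the default collate_fn expects a list of per-example dicts
        return [{k: v[i] for k, v in batch.items()} for i in range(len(indices))]
```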
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5505/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5504 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5504/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5504/comments | https://api.github.com/repos/huggingface/datasets/issues/5504/events | https://github.com/huggingface/datasets/pull/5504 | 1,570,621,242 | PR_kwDODunzps5JPoWy | 5,504 | don't zero copy timestamps | {
"login": "dwyatte",
"id": 2512762,
"node_id": "MDQ6VXNlcjI1MTI3NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwyatte",
"html_url": "https://github.com/dwyatte",
"followers_url": "https://api.github.com/users/dwyatte/followers",
"following_url": "https://api.github.com/users/dwyatte/following{/other_user}",
"gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions",
"organizations_url": "https://api.github.com/users/dwyatte/orgs",
"repos_url": "https://api.github.com/users/dwyatte/repos",
"events_url": "https://api.github.com/users/dwyatte/events{/privacy}",
"received_events_url": "https://api.github.com/users/dwyatte/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008606 / 0.011353 (-0.002747) | 0.004659 / 0.011008 (-0.006349) | 0.101311 / 0.038508 (0.062802) | 0.029664 / 0.023109 (0.006555) | 0.321850 / 0.275898 (0.045952) | 0.380497 / 0.323480 (0.057017) | 0.007003 / 0.007986 (-0.000982) | 0.003393 / 0.004328 (-0.000936) | 0.078704 / 0.004250 (0.074453) | 0.035810 / 0.037052 (-0.001242) | 0.327271 / 0.258489 (0.068782) | 0.369302 / 0.293841 (0.075461) | 0.033625 / 0.128546 (-0.094921) | 0.011563 / 0.075646 (-0.064084) | 0.323950 / 0.419271 (-0.095322) | 0.040660 / 0.043533 (-0.002872) | 0.327211 / 0.255139 (0.072072) | 0.350325 / 0.283200 (0.067125) | 0.085427 / 0.141683 (-0.056256) | 1.464370 / 1.452155 (0.012216) | 1.490355 / 1.492716 (-0.002362) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202879 / 0.018006 (0.184873) | 0.419836 / 0.000490 (0.419346) | 0.000303 / 0.000200 (0.000103) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023336 / 0.037411 (-0.014075) | 0.096817 / 0.014526 (0.082291) | 0.103990 / 0.176557 (-0.072567) | 0.137749 / 0.737135 (-0.599386) | 0.108236 / 0.296338 (-0.188102) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420801 / 0.215209 (0.205592) | 4.205308 / 2.077655 (2.127653) | 2.050363 / 1.504120 (0.546243) | 1.877390 / 1.541195 (0.336195) | 2.031060 / 1.468490 
(0.562570) | 0.687950 / 4.584777 (-3.896827) | 3.363202 / 3.745712 (-0.382510) | 1.869482 / 5.269862 (-3.400379) | 1.159131 / 4.565676 (-3.406545) | 0.082374 / 0.424275 (-0.341901) | 0.012425 / 0.007607 (0.004818) | 0.519775 / 0.226044 (0.293731) | 5.244612 / 2.268929 (2.975684) | 2.371314 / 55.444624 (-53.073311) | 2.052713 / 6.876477 (-4.823764) | 2.190015 / 2.142072 (0.047942) | 0.803806 / 4.805227 (-4.001421) | 0.148110 / 6.500664 (-6.352554) | 0.064174 / 0.075469 (-0.011295) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.250424 / 1.841788 (-0.591364) | 13.487870 / 8.074308 (5.413561) | 13.080736 / 10.191392 (2.889344) | 0.147715 / 0.680424 (-0.532709) | 0.028409 / 0.534201 (-0.505792) | 0.397531 / 0.579283 (-0.181752) | 0.399458 / 0.434364 (-0.034905) | 0.461467 / 0.540337 (-0.078871) | 0.541639 / 1.386936 (-0.845297) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006753 / 0.011353 (-0.004600) | 0.004573 / 0.011008 (-0.006435) | 0.076122 / 0.038508 (0.037614) | 0.027529 / 0.023109 (0.004419) | 0.341291 / 0.275898 (0.065393) | 0.376889 / 0.323480 (0.053409) | 0.005032 / 0.007986 (-0.002953) | 0.003447 / 0.004328 (-0.000882) | 0.075186 / 0.004250 (0.070936) | 0.038516 / 0.037052 (0.001463) | 0.340927 / 0.258489 (0.082438) | 0.386626 / 0.293841 (0.092785) | 0.031929 / 0.128546 (-0.096617) | 0.011759 / 0.075646 (-0.063888) | 0.085616 / 0.419271 (-0.333656) | 0.042858 / 0.043533 (-0.000674) | 0.341881 / 0.255139 (0.086742) | 0.367502 / 0.283200 (0.084303) | 0.090788 / 0.141683 (-0.050895) | 1.472871 / 1.452155 (0.020716) | 1.577825 / 1.492716 (0.085109) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233137 / 0.018006 (0.215131) | 0.415016 / 0.000490 (0.414526) | 0.000379 / 0.000200 (0.000179) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024966 / 0.037411 (-0.012445) | 0.102794 / 0.014526 (0.088268) | 0.107543 / 0.176557 (-0.069014) | 0.143133 / 0.737135 (-0.594002) | 0.111494 / 0.296338 (-0.184845) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438354 / 0.215209 (0.223145) | 4.382244 / 2.077655 (2.304589) | 2.056340 / 1.504120 (0.552220) | 1.851524 / 1.541195 (0.310330) | 1.933147 / 1.468490 (0.464657) | 0.701446 / 4.584777 (-3.883331) | 3.396893 / 3.745712 (-0.348819) | 2.837516 / 5.269862 (-2.432346) | 1.538298 / 4.565676 (-3.027379) | 0.083449 / 0.424275 (-0.340826) | 0.012793 / 0.007607 (0.005186) | 0.539661 / 0.226044 (0.313616) | 5.428415 / 2.268929 (3.159487) | 2.527582 / 55.444624 (-52.917042) | 2.172795 / 6.876477 (-4.703682) | 2.220011 / 2.142072 (0.077938) | 0.814338 / 4.805227 (-3.990889) | 0.153468 / 6.500664 (-6.347196) | 0.069056 / 0.075469 (-0.006413) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.278434 / 1.841788 (-0.563354) | 14.284924 / 8.074308 (6.210616) | 13.486596 / 10.191392 (3.295203) | 0.138457 / 0.680424 (-0.541967) | 0.016609 / 0.534201 (-0.517592) | 0.382828 / 0.579283 (-0.196455) | 0.387604 / 0.434364 (-0.046760) | 0.478801 / 0.540337 (-0.061536) | 0.565352 / 1.386936 (-0.821584) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c39ba501daab763b9972f44f229c66d900d20bee \"CML watermark\")\n",
"> Thanks! I modified the test a bit to make it more consistent with the rest of the \"extractor\" tests.\r\n\r\nAppreciate the assist on the tests! 🚀 "
] | 2023-02-03T23:39:04 | 2023-02-08T17:28:50 | 2023-02-08T14:33:17 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5504",
"html_url": "https://github.com/huggingface/datasets/pull/5504",
"diff_url": "https://github.com/huggingface/datasets/pull/5504.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5504.patch",
"merged_at": "2023-02-08T14:33:17"
} | Fixes https://github.com/huggingface/datasets/issues/5495
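For context, a sketch of the failure mode being addressed (illustrative, not the test added here): converting an Arrow timestamp array to NumPy requires a copy, so the zero-copy path raises.

```python
import pyarrow as pa

arr = pa.array([0, 1], type=pa.timestamp("us"))
arr.to_numpy(zero_copy_only=False)  # works: returns datetime64[us] values
arr.to_numpy(zero_copy_only=True)   # raises ArrowInvalid: a copy is required
```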
I'm not sure whether we prefer a test here or if timestamps are known to be unsupported (like booleans). The current test at least covers the bug | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5504/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5502 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5502/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5502/comments | https://api.github.com/repos/huggingface/datasets/issues/5502/events | https://github.com/huggingface/datasets/pull/5502 | 1,570,091,225 | PR_kwDODunzps5JN0aX | 5,502 | Added functionality: sort datasets by multiple keys | {
"login": "MichlF",
"id": 7805682,
"node_id": "MDQ6VXNlcjc4MDU2ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7805682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MichlF",
"html_url": "https://github.com/MichlF",
"followers_url": "https://api.github.com/users/MichlF/followers",
"following_url": "https://api.github.com/users/MichlF/following{/other_user}",
"gists_url": "https://api.github.com/users/MichlF/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MichlF/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichlF/subscriptions",
"organizations_url": "https://api.github.com/users/MichlF/orgs",
"repos_url": "https://api.github.com/users/MichlF/repos",
"events_url": "https://api.github.com/users/MichlF/events{/privacy}",
"received_events_url": "https://api.github.com/users/MichlF/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks! I've left some comments.\r\n> \r\n> We should also add some tests, mainly to make sure `reverse` behaves as expected. Let me know if you need help with that.\r\n\r\nThanks for the offer! I couldn't find any guidelines on how huggingface goes about testing, so it would indeed be great to get a few pointers on that. I assume I should expand on the `test_sort` function in `test_arrow_dataset.py` but since I am not very familiar with the `datasets` package, it isn't immediately for which cases I should test (i.e., expand on).",
"@MichlF \r\n\r\nResolving a comment means that the comment has been addressed with the code change, so since this is not the case here, can you please \"unresolve\" the comments and address them adequately? \r\n\r\n> I assume I should expand on the `test_sort` function in `test_arrow_dataset.py`\r\n\r\nYes, that's correct. I think one test to check sorting on multiple keys and another one to check if an error is raised when `len(reverse)!=len(column_names)` should be enough.\r\n",
"> Yes, that's correct. I think one test to check sorting on multiple keys and another one to check if an error is raised when `len(reverse)!=len(column_names)` should be enough.\r\n\r\nI have added the tests in https://github.com/huggingface/datasets/pull/5502/commits/0efa259732e822e94d67b96a70031a3daccedfc1 by keeping them in the same format of the tests of the old `sort` function. Let me know if they can be improved.\r\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010170 / 0.011353 (-0.001183) | 0.005891 / 0.011008 (-0.005117) | 0.100416 / 0.038508 (0.061908) | 0.041309 / 0.023109 (0.018200) | 0.300813 / 0.275898 (0.024915) | 0.376679 / 0.323480 (0.053199) | 0.008806 / 0.007986 (0.000821) | 0.005964 / 0.004328 (0.001636) | 0.075862 / 0.004250 (0.071611) | 0.050370 / 0.037052 (0.013318) | 0.313365 / 0.258489 (0.054876) | 0.351184 / 0.293841 (0.057343) | 0.039556 / 0.128546 (-0.088991) | 0.012462 / 0.075646 (-0.063185) | 0.337141 / 0.419271 (-0.082130) | 0.049678 / 0.043533 (0.006145) | 0.298547 / 0.255139 (0.043408) | 0.317547 / 0.283200 (0.034347) | 0.113595 / 0.141683 (-0.028088) | 1.448467 / 1.452155 (-0.003688) | 1.501303 / 1.492716 (0.008587) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011005 / 0.018006 (-0.007002) | 0.527430 / 0.000490 (0.526940) | 0.005073 / 0.000200 (0.004873) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030377 / 0.037411 (-0.007034) | 0.116932 / 0.014526 (0.102406) | 0.124047 / 0.176557 (-0.052509) | 0.192358 / 0.737135 (-0.544777) | 0.130528 / 0.296338 (-0.165811) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401158 / 0.215209 (0.185949) | 4.005854 / 2.077655 (1.928200) | 1.810365 / 1.504120 (0.306245) | 1.626490 / 1.541195 (0.085295) | 1.752591 / 1.468490 
(0.284101) | 0.709065 / 4.584777 (-3.875712) | 3.893356 / 3.745712 (0.147643) | 3.655180 / 5.269862 (-1.614682) | 1.873660 / 4.565676 (-2.692017) | 0.085860 / 0.424275 (-0.338415) | 0.012671 / 0.007607 (0.005063) | 0.512804 / 0.226044 (0.286759) | 5.103426 / 2.268929 (2.834497) | 2.336148 / 55.444624 (-53.108477) | 2.000140 / 6.876477 (-4.876336) | 2.095155 / 2.142072 (-0.046918) | 0.848612 / 4.805227 (-3.956615) | 0.171840 / 6.500664 (-6.328824) | 0.064144 / 0.075469 (-0.011325) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.222106 / 1.841788 (-0.619682) | 15.828559 / 8.074308 (7.754251) | 14.995298 / 10.191392 (4.803906) | 0.172783 / 0.680424 (-0.507641) | 0.029296 / 0.534201 (-0.504905) | 0.447469 / 0.579283 (-0.131814) | 0.658615 / 0.434364 (0.224251) | 1.527607 / 0.540337 (0.987270) | 1.830018 / 1.386936 (0.443082) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007922 / 0.011353 (-0.003431) | 0.005369 / 0.011008 (-0.005639) | 0.076580 / 0.038508 (0.038071) | 0.038770 / 0.023109 (0.015661) | 0.338995 / 0.275898 (0.063097) | 0.380865 / 0.323480 (0.057385) | 0.006489 / 0.007986 (-0.001497) | 0.004421 / 0.004328 (0.000093) | 0.074143 / 0.004250 (0.069893) | 0.054224 / 0.037052 (0.017171) | 0.348887 / 0.258489 (0.090397) | 0.395044 / 0.293841 (0.101203) | 0.037040 / 0.128546 (-0.091507) | 0.012547 / 0.075646 (-0.063099) | 0.087521 / 0.419271 (-0.331751) | 0.049918 / 0.043533 (0.006385) | 0.342428 / 0.255139 (0.087289) | 0.362216 / 0.283200 (0.079016) | 0.107204 / 0.141683 (-0.034479) | 1.509206 / 1.452155 (0.057052) | 1.596010 / 1.492716 (0.103293) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246795 / 0.018006 (0.228788) | 0.505998 / 0.000490 (0.505509) | 0.000446 / 0.000200 (0.000246) | 0.000064 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031591 / 0.037411 (-0.005821) | 0.117595 / 0.014526 (0.103069) | 0.132500 / 0.176557 (-0.044056) | 0.202244 / 0.737135 (-0.534891) | 0.136624 / 0.296338 (-0.159715) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428235 / 0.215209 (0.213026) | 4.262691 / 2.077655 (2.185036) | 2.057348 / 1.504120 (0.553228) | 1.928559 / 1.541195 (0.387364) | 2.120838 / 1.468490 (0.652347) | 0.706300 / 4.584777 (-3.878477) | 3.951828 / 3.745712 (0.206115) | 2.144218 / 5.269862 (-3.125644) | 1.359500 / 4.565676 (-3.206177) | 0.085404 / 0.424275 (-0.338872) | 0.012363 / 0.007607 (0.004756) | 0.529985 / 0.226044 (0.303941) | 5.295831 / 2.268929 (3.026903) | 2.522602 / 55.444624 (-52.922022) | 2.182850 / 6.876477 (-4.693627) | 2.270187 / 2.142072 (0.128114) | 0.841676 / 4.805227 (-3.963551) | 0.168366 / 6.500664 (-6.332298) | 0.065371 / 0.075469 (-0.010098) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.261464 / 1.841788 (-0.580324) | 17.010125 / 8.074308 (8.935817) | 14.304453 / 10.191392 (4.113061) | 0.177782 / 0.680424 (-0.502642) | 0.017762 / 0.534201 (-0.516439) | 0.427283 / 0.579283 (-0.152000) | 0.455176 / 0.434364 (0.020812) | 0.525962 / 0.540337 (-0.014375) | 0.625583 / 1.386936 (-0.761353) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3b2aba6637dc61f145acda40e4e7b028c3947d72 \"CML watermark\")\n"
] | 2023-02-03T16:17:00 | 2023-02-21T14:46:49 | 2023-02-21T14:39:23 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5502",
"html_url": "https://github.com/huggingface/datasets/pull/5502",
"diff_url": "https://github.com/huggingface/datasets/pull/5502.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5502.patch",
"merged_at": "2023-02-21T14:39:23"
} | Added functionality implementation: sort datasets by multiple keys/columns as discussed in https://github.com/huggingface/datasets/issues/5425. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5502/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5501/comments | https://api.github.com/repos/huggingface/datasets/issues/5501/events | https://github.com/huggingface/datasets/pull/5501 | 1,569,644,159 | PR_kwDODunzps5JMTn8 | 5,501 | Increase chunk size for speeding up file downloads | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5501). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008407 / 0.011353 (-0.002946) | 0.004651 / 0.011008 (-0.006357) | 0.100367 / 0.038508 (0.061859) | 0.029107 / 0.023109 (0.005998) | 0.302798 / 0.275898 (0.026900) | 0.354379 / 0.323480 (0.030899) | 0.006985 / 0.007986 (-0.001001) | 0.003365 / 0.004328 (-0.000963) | 0.078312 / 0.004250 (0.074062) | 0.034205 / 0.037052 (-0.002847) | 0.310431 / 0.258489 (0.051941) | 0.346239 / 0.293841 (0.052398) | 0.033800 / 0.128546 (-0.094747) | 0.011515 / 0.075646 (-0.064131) | 0.323588 / 0.419271 (-0.095684) | 0.040766 / 0.043533 (-0.002767) | 0.300914 / 0.255139 (0.045775) | 0.332983 / 0.283200 (0.049784) | 0.087500 / 0.141683 (-0.054182) | 1.469505 / 1.452155 (0.017350) | 1.505119 / 1.492716 (0.012403) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187319 / 0.018006 (0.169313) | 0.405498 / 0.000490 (0.405008) | 0.001000 / 0.000200 (0.000800) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022583 / 0.037411 (-0.014828) | 0.098096 / 0.014526 (0.083570) | 0.104272 / 0.176557 (-0.072284) | 0.142801 / 0.737135 (-0.594335) | 0.109749 / 0.296338 (-0.186590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423343 / 0.215209 (0.208134) | 4.215116 / 2.077655 (2.137461) | 1.899714 / 1.504120 (0.395594) | 1.689579 / 1.541195 (0.148384) | 1.710292 / 1.468490 
(0.241801) | 0.690976 / 4.584777 (-3.893801) | 3.432501 / 3.745712 (-0.313212) | 1.899600 / 5.269862 (-3.370261) | 1.279801 / 4.565676 (-3.285876) | 0.082763 / 0.424275 (-0.341512) | 0.012545 / 0.007607 (0.004938) | 0.531381 / 0.226044 (0.305336) | 5.320077 / 2.268929 (3.051148) | 2.370705 / 55.444624 (-53.073919) | 2.007089 / 6.876477 (-4.869388) | 2.062412 / 2.142072 (-0.079661) | 0.814998 / 4.805227 (-3.990229) | 0.149822 / 6.500664 (-6.350842) | 0.064399 / 0.075469 (-0.011070) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.226196 / 1.841788 (-0.615591) | 13.823443 / 8.074308 (5.749134) | 13.813667 / 10.191392 (3.622275) | 0.161289 / 0.680424 (-0.519135) | 0.028569 / 0.534201 (-0.505632) | 0.390360 / 0.579283 (-0.188923) | 0.396217 / 0.434364 (-0.038147) | 0.483120 / 0.540337 (-0.057217) | 0.570041 / 1.386936 (-0.816895) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006422 / 0.011353 (-0.004931) | 0.004528 / 0.011008 (-0.006481) | 0.076043 / 0.038508 (0.037535) | 0.027631 / 0.023109 (0.004522) | 0.340622 / 0.275898 (0.064724) | 0.376694 / 0.323480 (0.053214) | 0.004993 / 0.007986 (-0.002992) | 0.003403 / 0.004328 (-0.000926) | 0.074521 / 0.004250 (0.070270) | 0.037568 / 0.037052 (0.000516) | 0.343423 / 0.258489 (0.084934) | 0.387729 / 0.293841 (0.093888) | 0.031790 / 0.128546 (-0.096757) | 0.011767 / 0.075646 (-0.063879) | 0.085182 / 0.419271 (-0.334090) | 0.042867 / 0.043533 (-0.000666) | 0.341269 / 0.255139 (0.086130) | 0.368460 / 0.283200 (0.085261) | 0.090153 / 0.141683 (-0.051530) | 1.536490 / 1.452155 (0.084335) | 1.596403 / 1.492716 (0.103686) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222373 / 0.018006 (0.204367) | 0.396145 / 0.000490 (0.395655) | 0.000384 / 0.000200 (0.000184) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024801 / 0.037411 (-0.012610) | 0.099711 / 0.014526 (0.085185) | 0.106094 / 0.176557 (-0.070463) | 0.147819 / 0.737135 (-0.589316) | 0.110065 / 0.296338 (-0.186274) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442863 / 0.215209 (0.227654) | 4.420043 / 2.077655 (2.342388) | 2.070136 / 1.504120 (0.566016) | 1.862363 / 1.541195 (0.321168) | 1.910890 / 1.468490 (0.442400) | 0.702570 / 4.584777 (-3.882207) | 3.435855 / 3.745712 (-0.309857) | 1.871290 / 5.269862 (-3.398572) | 1.169321 / 4.565676 (-3.396355) | 0.083674 / 0.424275 (-0.340601) | 0.012823 / 0.007607 (0.005216) | 0.539330 / 0.226044 (0.313285) | 5.403317 / 2.268929 (3.134389) | 2.536508 / 55.444624 (-52.908117) | 2.179629 / 6.876477 (-4.696847) | 2.207586 / 2.142072 (0.065514) | 0.812256 / 4.805227 (-3.992972) | 0.152915 / 6.500664 (-6.347749) | 0.068431 / 0.075469 (-0.007038) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294982 / 1.841788 (-0.546806) | 13.912811 / 8.074308 (5.838503) | 13.415658 / 10.191392 (3.224266) | 0.149531 / 0.680424 (-0.530893) | 0.016785 / 0.534201 (-0.517416) | 0.381055 / 0.579283 (-0.198228) | 0.392084 / 0.434364 (-0.042280) | 0.472614 / 0.540337 (-0.067724) | 0.559799 / 1.386936 (-0.827137) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6ef20f9b71acbb387caab2d297d8c22ba3db3633 \"CML watermark\")\n",
"We simply do GET requests to hf.co to download files from the Hub right now. We may switch to hfh when we update how we do caching \r\n\r\nYou can try on any dataset hosted on the hub like `imagenet-1k`",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010931 / 0.011353 (-0.000422) | 0.005730 / 0.011008 (-0.005278) | 0.116653 / 0.038508 (0.078145) | 0.041439 / 0.023109 (0.018330) | 0.359559 / 0.275898 (0.083661) | 0.408398 / 0.323480 (0.084918) | 0.009193 / 0.007986 (0.001208) | 0.006024 / 0.004328 (0.001695) | 0.087743 / 0.004250 (0.083492) | 0.048636 / 0.037052 (0.011584) | 0.363133 / 0.258489 (0.104643) | 0.407144 / 0.293841 (0.113303) | 0.044610 / 0.128546 (-0.083936) | 0.014075 / 0.075646 (-0.061571) | 0.396506 / 0.419271 (-0.022766) | 0.057014 / 0.043533 (0.013482) | 0.358254 / 0.255139 (0.103115) | 0.399887 / 0.283200 (0.116687) | 0.115337 / 0.141683 (-0.026346) | 1.731655 / 1.452155 (0.279500) | 1.813276 / 1.492716 (0.320560) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210197 / 0.018006 (0.192191) | 0.475887 / 0.000490 (0.475397) | 0.003323 / 0.000200 (0.003123) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031686 / 0.037411 (-0.005725) | 0.131167 / 0.014526 (0.116641) | 0.137919 / 0.176557 (-0.038637) | 0.184843 / 0.737135 (-0.552293) | 0.144998 / 0.296338 (-0.151340) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471371 / 0.215209 (0.256162) | 4.693739 / 2.077655 (2.616084) | 2.251567 / 1.504120 (0.747447) | 1.993653 / 1.541195 (0.452458) | 2.053236 / 1.468490 
(0.584746) | 0.809226 / 4.584777 (-3.775551) | 4.494120 / 3.745712 (0.748408) | 2.436921 / 5.269862 (-2.832940) | 1.541973 / 4.565676 (-3.023704) | 0.098401 / 0.424275 (-0.325874) | 0.014329 / 0.007607 (0.006722) | 0.597813 / 0.226044 (0.371769) | 5.964035 / 2.268929 (3.695107) | 2.709283 / 55.444624 (-52.735341) | 2.323537 / 6.876477 (-4.552940) | 2.401707 / 2.142072 (0.259635) | 0.976379 / 4.805227 (-3.828848) | 0.194638 / 6.500664 (-6.306026) | 0.076904 / 0.075469 (0.001435) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.516877 / 1.841788 (-0.324911) | 18.228010 / 8.074308 (10.153702) | 16.631750 / 10.191392 (6.440358) | 0.176030 / 0.680424 (-0.504394) | 0.033769 / 0.534201 (-0.500432) | 0.520511 / 0.579283 (-0.058773) | 0.531764 / 0.434364 (0.097400) | 0.648658 / 0.540337 (0.108321) | 0.779124 / 1.386936 (-0.607812) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008635 / 0.011353 (-0.002718) | 0.005785 / 0.011008 (-0.005223) | 0.087042 / 0.038508 (0.048534) | 0.039632 / 0.023109 (0.016523) | 0.419719 / 0.275898 (0.143821) | 0.463860 / 0.323480 (0.140380) | 0.006621 / 0.007986 (-0.001364) | 0.004655 / 0.004328 (0.000327) | 0.087003 / 0.004250 (0.082753) | 0.057122 / 0.037052 (0.020069) | 0.417820 / 0.258489 (0.159331) | 0.485981 / 0.293841 (0.192140) | 0.042606 / 0.128546 (-0.085940) | 0.014369 / 0.075646 (-0.061278) | 0.101939 / 0.419271 (-0.317333) | 0.058303 / 0.043533 (0.014770) | 0.415053 / 0.255139 (0.159914) | 0.439914 / 0.283200 (0.156714) | 0.134628 / 0.141683 (-0.007055) | 1.765464 / 1.452155 (0.313309) | 1.843963 / 1.492716 (0.351247) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.307156 / 0.018006 (0.289150) | 0.476657 / 0.000490 (0.476167) | 0.019718 / 0.000200 (0.019518) | 0.000160 / 0.000054 (0.000105) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035286 / 0.037411 (-0.002125) | 0.138094 / 0.014526 (0.123568) | 0.144768 / 0.176557 (-0.031789) | 0.191386 / 0.737135 (-0.545750) | 0.151988 / 0.296338 (-0.144350) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.504733 / 0.215209 (0.289523) | 5.027048 / 2.077655 (2.949394) | 2.441571 / 1.504120 (0.937451) | 2.198242 / 1.541195 (0.657047) | 2.298473 / 1.468490 (0.829983) | 0.848048 / 4.584777 (-3.736729) | 4.613102 / 3.745712 (0.867390) | 2.522824 / 5.269862 (-2.747037) | 1.610159 / 4.565676 (-2.955517) | 0.105197 / 0.424275 (-0.319078) | 0.015195 / 0.007607 (0.007588) | 0.626976 / 0.226044 (0.400932) | 6.268459 / 2.268929 (3.999530) | 3.014387 / 55.444624 (-52.430237) | 2.554102 / 6.876477 (-4.322375) | 2.656051 / 2.142072 (0.513979) | 1.027978 / 4.805227 (-3.777249) | 0.200686 / 6.500664 (-6.299978) | 0.077104 / 0.075469 (0.001635) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.485228 / 1.841788 (-0.356560) | 18.319949 / 8.074308 (10.245641) | 15.855739 / 10.191392 (5.664347) | 0.204365 / 0.680424 (-0.476059) | 0.023824 / 0.534201 (-0.510377) | 0.505000 / 0.579283 (-0.074283) | 0.502866 / 0.434364 (0.068502) | 0.629574 / 0.540337 (0.089237) | 0.746602 / 1.386936 (-0.640334) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#900d429d3601657f766737b8670f855033078d57 \"CML watermark\")\n"
] | 2023-02-03T10:50:10 | 2023-02-09T11:04:11 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5501",
"html_url": "https://github.com/huggingface/datasets/pull/5501",
"diff_url": "https://github.com/huggingface/datasets/pull/5501.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5501.patch",
"merged_at": null
} | Original fix: https://github.com/huggingface/huggingface_hub/pull/1267
Not sure this function is actually still called though.
I haven't done benchmarks on this. Is there a dataset whose files are hosted on the Hub through CloudFront, so we can have the same setup as in `hf_hub`? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5501/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5500/comments | https://api.github.com/repos/huggingface/datasets/issues/5500/events | https://github.com/huggingface/datasets/issues/5500 | 1,569,257,240 | I_kwDODunzps5diPcY | 5,500 | WMT19 custom download checksum error | {
"login": "Hannibal046",
"id": 38466901,
"node_id": "MDQ6VXNlcjM4NDY2OTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hannibal046",
"html_url": "https://github.com/Hannibal046",
"followers_url": "https://api.github.com/users/Hannibal046/followers",
"following_url": "https://api.github.com/users/Hannibal046/following{/other_user}",
"gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions",
"organizations_url": "https://api.github.com/users/Hannibal046/orgs",
"repos_url": "https://api.github.com/users/Hannibal046/repos",
"events_url": "https://api.github.com/users/Hannibal046/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hannibal046/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I update the `datatsets` version and it works."
] | 2023-02-03T05:45:37 | 2023-02-03T05:52:56 | 2023-02-03T05:52:56 | NONE | null | null | null | ### Describe the bug
I use the following script to download data from WMT19:
```python
import datasets
from datasets import inspect_dataset, load_dataset_builder
from wmt19.wmt_utils import _TRAIN_SUBSETS,_DEV_SUBSETS
## this is a must due to: https://discuss.huggingface.co/t/load-dataset-hangs-with-local-files/28034/3
if __name__ == '__main__':
dev_subsets,train_subsets = [],[]
for subset in _TRAIN_SUBSETS:
if subset.target=='en' and 'de' in subset.sources:
train_subsets.append(subset.name)
for subset in _DEV_SUBSETS:
if subset.target=='en' and 'de' in subset.sources:
dev_subsets.append(subset.name)
inspect_dataset("wmt19", "./wmt19")
builder = load_dataset_builder(
"./wmt19/wmt_utils.py",
language_pair=("de", "en"),
subsets={
datasets.Split.TRAIN: train_subsets,
datasets.Split.VALIDATION: dev_subsets,
},
)
builder.download_and_prepare()
ds = builder.as_dataset()
ds.to_json("../data/wmt19/ende/data.json")
```
And I got the following error:
```
Traceback (most recent call last):
  File "draft.py", line 26, in <module>
    builder.download_and_prepare()
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 605, in download_and_prepare
    self._download_and_prepare(
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 1104, in _download_and_prepare
    super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 676, in _download_and_prepare
    verify_checksums(
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 35, in verify_checksums
    raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums)))
datasets.utils.info_utils.UnexpectedDownloadedFile: {'https://s3.amazonaws.com/web-language-models/paracrawl/release1/paracrawl-release1.en-de.zipporah0-dedup-clean.tgz', 'https://huggingface.co/datasets/wmt/wmt13/resolve/main-zip/training-parallel-europarl-v7.zip', 'https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/rapid2016.zip', 'https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/training-parallel-nc-v13.zip', 'https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/training-parallel-nc-v12.zip', 'https://huggingface.co/datasets/wmt/wmt14/resolve/main-zip/training-parallel-nc-v9.zip', 'https://huggingface.co/datasets/wmt/wmt15/resolve/main-zip/training-parallel-nc-v10.zip', 'https://huggingface.co/datasets/wmt/wmt16/resolve/main-zip/translation-task/training-parallel-nc-v11.zip'}
```
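For reference, a hedged workaround sketch (not from the original report): the `datasets` 2.x releases used here accepted an `ignore_verifications` flag on `download_and_prepare` that skips these checksum/size checks. Upgrading `datasets`, as the comment above notes, is what actually resolved the mismatch.
```python
# Continuing from the script above; sketch only, assuming the 2.x
# ignore_verifications parameter of DatasetBuilder.download_and_prepare.
# This skips the verification step that raises UnexpectedDownloadedFile.
builder.download_and_prepare(ignore_verifications=True)
ds = builder.as_dataset()
```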
### Steps to reproduce the bug
see above
### Expected behavior
download data successfully
### Environment info
datasets==2.1.0
python==3.8
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5500/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5499 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5499/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5499/comments | https://api.github.com/repos/huggingface/datasets/issues/5499/events | https://github.com/huggingface/datasets/issues/5499 | 1,568,937,026 | I_kwDODunzps5dhBRC | 5,499 | `load_dataset` has ~4 seconds of overhead for cached data | {
"login": "davidgilbertson",
"id": 4443482,
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidgilbertson",
"html_url": "https://github.com/davidgilbertson",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! To skip the verification step that checks if newer data exist, you can enable offline mode with `HF_DATASETS_OFFLINE=1`.\r\n\r\nAlthough I agree this step should be much faster for datasets hosted on the HF Hub - we could just compare the commit hash from the local data and the remote git repository. We're not been leveraging the git commit hashes, since the library was built before we even had git repositories for each dataset on HF.",
"Thanks @lhoestq, for memory when I recorded those times I had `HF_DATASETS_OFFLINE` set."
] | 2023-02-02T23:34:50 | 2023-02-07T19:35:11 | null | NONE | null | null | null | ### Feature request
When loading a dataset that has been cached locally, the `load_dataset` function takes a lot longer than it should to fetch the dataset from disk (or memory).
This is particularly noticeable for smaller datasets. For example, with wikitext-2, comparing `load_dataset` (once cached) against `load_from_disk`, the `load_dataset` call takes 40 times longer:
⏱ 4.84s ⮜ load_dataset
⏱ 119ms ⮜ load_from_disk
### Motivation
I assume this is doing something like checking for a newer version.
If so, that's an age-old problem: do you make the user wait _every single time they load from cache_, or do you do something like load from cache always, _then_ check for a newer version and alert them if they have stale data?
For most datasets it's extremely unlikely that there will be a newer version on any given run, so 99% of the time this is just wasted time.
Maybe you don't want to make that decision for all users, but at least having the _option_ to not wait for checks would be an improvement.
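As a concrete illustration of the offline mode mentioned in the comments above, a minimal sketch (the dataset name is illustrative; `HF_DATASETS_OFFLINE` must be set before `datasets` is imported):
```python
import os

# Skip the remote freshness/verification step and load straight from cache.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
```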
### Your contribution
. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5499/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5498 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5498/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5498/comments | https://api.github.com/repos/huggingface/datasets/issues/5498/events | https://github.com/huggingface/datasets/issues/5498 | 1,568,190,529 | I_kwDODunzps5deLBB | 5,498 | TypeError: 'bool' object is not iterable when filtering a datasets.arrow_dataset.Dataset | {
"login": "vmuel",
"id": 91255010,
"node_id": "MDQ6VXNlcjkxMjU1MDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/91255010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vmuel",
"html_url": "https://github.com/vmuel",
"followers_url": "https://api.github.com/users/vmuel/followers",
"following_url": "https://api.github.com/users/vmuel/following{/other_user}",
"gists_url": "https://api.github.com/users/vmuel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vmuel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vmuel/subscriptions",
"organizations_url": "https://api.github.com/users/vmuel/orgs",
"repos_url": "https://api.github.com/users/vmuel/repos",
"events_url": "https://api.github.com/users/vmuel/events{/privacy}",
"received_events_url": "https://api.github.com/users/vmuel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! Instead of a single boolean, your filter function should return an iterable (of booleans) in the batched mode like so:\r\n```python\r\ntrain_dataset = train_dataset.filter(\r\n function=lambda batch: [image is not None for image in batch[\"image\"]], \r\n batched=True,\r\n batch_size=10)\r\n```\r\n\r\nPS: You can make this operation much faster by operating directly on the arrow data to skip the decoding part:\r\n```python\r\ntrain_dataset = train_dataset.with_format(\"arrow\")\r\ntrain_dataset = train_dataset.filter(\r\n function=lambda table: table[\"image\"].is_valid().to_pylist(), \r\n batched=True,\r\n batch_size=100)\r\ntrain_dataset = train_dataset.with_format(None)\r\n```",
"Thank a lot!",
"I hit the same issue and the error message isn't really clear on what's going wrong. It might be helpful to update the docs with a batched example."
] | 2023-02-02T14:46:49 | 2023-10-08T06:12:47 | 2023-02-04T17:19:36 | NONE | null | null | null | ### Describe the bug
Hi,
Thanks for the amazing work on the library!
I think I might have noticed a small bug in the filter method.
Having loaded a dataset using `load_dataset`, when I try to filter out empty entries with `batched=True`, I get a TypeError.
### Steps to reproduce the bug
```
train_dataset = train_dataset.filter(
function=lambda example: example["image"] is not None,
batched=True,
batch_size=10)
```
Error message:
```
File .../lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
...
-> 5666 indices_array = [i for i, to_keep in zip(indices, mask) if to_keep]
5667 if indices_mapping is not None:
5668 indices_array = pa.array(indices_array, type=pa.uint64())
TypeError: 'bool' object is not iterable
```
**Removing `batched=True` bypasses the issue.**
### Expected behavior
According to the doc, "[batch_size corresponds to the] number of examples per batch provided to function if batched = True", so we shouldn't need to remove the batched=True arg?
source: https://huggingface.co/docs/datasets/v2.9.0/en/package_reference/main_classes#datasets.Dataset.filter
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
- Python version: 3.9.10
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5498/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5497/comments | https://api.github.com/repos/huggingface/datasets/issues/5497/events | https://github.com/huggingface/datasets/pull/5497 | 1,567,601,264 | PR_kwDODunzps5JFhvc | 5,497 | Improved error message for gated/private repos | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009491 / 0.011353 (-0.001862) | 0.004690 / 0.011008 (-0.006319) | 0.111904 / 0.038508 (0.073396) | 0.030781 / 0.023109 (0.007671) | 0.309442 / 0.275898 (0.033544) | 0.389511 / 0.323480 (0.066031) | 0.007277 / 0.007986 (-0.000709) | 0.004364 / 0.004328 (0.000036) | 0.074501 / 0.004250 (0.070250) | 0.036799 / 0.037052 (-0.000254) | 0.320279 / 0.258489 (0.061790) | 0.353887 / 0.293841 (0.060046) | 0.047969 / 0.128546 (-0.080577) | 0.017281 / 0.075646 (-0.058366) | 0.339655 / 0.419271 (-0.079617) | 0.049317 / 0.043533 (0.005784) | 0.321221 / 0.255139 (0.066082) | 0.354743 / 0.283200 (0.071544) | 0.098634 / 0.141683 (-0.043049) | 1.408640 / 1.452155 (-0.043515) | 1.488361 / 1.492716 (-0.004356) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233677 / 0.018006 (0.215671) | 0.604424 / 0.000490 (0.603934) | 0.003834 / 0.000200 (0.003634) | 0.000103 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022682 / 0.037411 (-0.014729) | 0.103800 / 0.014526 (0.089274) | 0.113868 / 0.176557 (-0.062689) | 0.155111 / 0.737135 (-0.582025) | 0.111862 / 0.296338 (-0.184476) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474992 / 0.215209 (0.259783) | 4.755325 / 2.077655 (2.677670) | 1.889754 / 1.504120 (0.385634) | 1.597009 / 1.541195 (0.055814) | 1.639570 / 1.468490 
(0.171080) | 0.970681 / 4.584777 (-3.614096) | 4.782567 / 3.745712 (1.036855) | 4.350465 / 5.269862 (-0.919397) | 2.413533 / 4.565676 (-2.152144) | 0.115510 / 0.424275 (-0.308765) | 0.011663 / 0.007607 (0.004055) | 0.626450 / 0.226044 (0.400406) | 6.238147 / 2.268929 (3.969218) | 2.603070 / 55.444624 (-52.841555) | 2.030378 / 6.876477 (-4.846099) | 1.996883 / 2.142072 (-0.145190) | 1.206436 / 4.805227 (-3.598792) | 0.203018 / 6.500664 (-6.297646) | 0.060550 / 0.075469 (-0.014919) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259850 / 1.841788 (-0.581937) | 14.079936 / 8.074308 (6.005628) | 16.036329 / 10.191392 (5.844937) | 0.221546 / 0.680424 (-0.458878) | 0.042416 / 0.534201 (-0.491785) | 0.438851 / 0.579283 (-0.140432) | 0.507053 / 0.434364 (0.072689) | 0.518672 / 0.540337 (-0.021665) | 0.585278 / 1.386936 (-0.801659) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010718 / 0.011353 (-0.000635) | 0.005469 / 0.011008 (-0.005539) | 0.075624 / 0.038508 (0.037116) | 0.029103 / 0.023109 (0.005994) | 0.353294 / 0.275898 (0.077395) | 0.353674 / 0.323480 (0.030194) | 0.005678 / 0.007986 (-0.002308) | 0.004610 / 0.004328 (0.000282) | 0.075213 / 0.004250 (0.070963) | 0.040032 / 0.037052 (0.002980) | 0.344363 / 0.258489 (0.085874) | 0.376861 / 0.293841 (0.083020) | 0.043718 / 0.128546 (-0.084828) | 0.016057 / 0.075646 (-0.059589) | 0.087746 / 0.419271 (-0.331526) | 0.051380 / 0.043533 (0.007848) | 0.336904 / 0.255139 (0.081765) | 0.357636 / 0.283200 (0.074436) | 0.089425 / 0.141683 (-0.052258) | 1.377462 / 1.452155 (-0.074692) | 1.448844 / 1.492716 (-0.043872) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.259038 / 0.018006 (0.241031) | 0.512284 / 0.000490 (0.511794) | 0.005666 / 0.000200 (0.005466) | 0.000123 / 0.000054 (0.000068) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023669 / 0.037411 (-0.013742) | 0.097979 / 0.014526 (0.083453) | 0.117947 / 0.176557 (-0.058610) | 0.140764 / 0.737135 (-0.596372) | 0.114700 / 0.296338 (-0.181638) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.528844 / 0.215209 (0.313635) | 5.073828 / 2.077655 (2.996173) | 2.088738 / 1.504120 (0.584618) | 1.855820 / 1.541195 (0.314626) | 1.838639 / 1.468490 (0.370149) | 0.968228 / 4.584777 (-3.616549) | 4.589792 / 3.745712 (0.844079) | 2.586149 / 5.269862 (-2.683712) | 1.714241 / 4.565676 (-2.851435) | 0.124502 / 0.424275 (-0.299774) | 0.012115 / 0.007607 (0.004507) | 0.679539 / 0.226044 (0.453494) | 6.541335 / 2.268929 (4.272407) | 2.749153 / 55.444624 (-52.695471) | 2.124164 / 6.876477 (-4.752313) | 2.181249 / 2.142072 (0.039177) | 1.196846 / 4.805227 (-3.608381) | 0.213352 / 6.500664 (-6.287312) | 0.075021 / 0.075469 (-0.000448) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254301 / 1.841788 (-0.587487) | 14.494254 / 8.074308 (6.419946) | 16.619679 / 10.191392 (6.428287) | 0.205158 / 0.680424 (-0.475266) | 0.022181 / 0.534201 (-0.512019) | 0.422928 / 0.579283 (-0.156355) | 0.539825 / 0.434364 (0.105461) | 0.523165 / 0.540337 (-0.017173) | 0.615014 / 1.386936 (-0.771922) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e4d8a3d43569d61e73f7ab12ff3a6b48466afa8d \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011522 / 0.011353 (0.000169) | 0.006906 / 0.011008 (-0.004102) | 0.114692 / 0.038508 (0.076184) | 0.037686 / 0.023109 (0.014577) | 0.393662 / 0.275898 (0.117764) | 0.377730 / 0.323480 (0.054250) | 0.008212 / 0.007986 (0.000226) | 0.005470 / 0.004328 (0.001142) | 0.086962 / 0.004250 (0.082712) | 0.039085 / 0.037052 (0.002033) | 0.357565 / 0.258489 (0.099076) | 0.404384 / 0.293841 (0.110543) | 0.055523 / 0.128546 (-0.073023) | 0.018277 / 0.075646 (-0.057369) | 0.389812 / 0.419271 (-0.029459) | 0.058706 / 0.043533 (0.015173) | 0.344735 / 0.255139 (0.089597) | 0.395734 / 0.283200 (0.112535) | 0.096098 / 0.141683 (-0.045584) | 1.546654 / 1.452155 (0.094499) | 1.665314 / 1.492716 (0.172597) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255893 / 0.018006 (0.237887) | 0.589563 / 0.000490 (0.589074) | 0.005890 / 0.000200 (0.005690) | 0.000123 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029167 / 0.037411 (-0.008245) | 0.113561 / 0.014526 (0.099036) | 0.125361 / 0.176557 (-0.051195) | 0.182225 / 0.737135 (-0.554910) | 0.125147 / 0.296338 (-0.171192) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.596859 / 0.215209 (0.381650) | 5.797725 / 2.077655 (3.720071) | 2.238420 / 1.504120 (0.734300) | 1.933177 / 1.541195 (0.391982) | 2.030750 / 1.468490 
(0.562260) | 1.122655 / 4.584777 (-3.462122) | 5.247913 / 3.745712 (1.502201) | 2.792742 / 5.269862 (-2.477120) | 1.861487 / 4.565676 (-2.704190) | 0.133009 / 0.424275 (-0.291266) | 0.013219 / 0.007607 (0.005612) | 0.696905 / 0.226044 (0.470861) | 6.961298 / 2.268929 (4.692369) | 2.895352 / 55.444624 (-52.549273) | 2.353677 / 6.876477 (-4.522799) | 2.458804 / 2.142072 (0.316731) | 1.271905 / 4.805227 (-3.533322) | 0.224850 / 6.500664 (-6.275814) | 0.083773 / 0.075469 (0.008304) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.502425 / 1.841788 (-0.339363) | 16.959241 / 8.074308 (8.884933) | 19.865569 / 10.191392 (9.674177) | 0.228608 / 0.680424 (-0.451816) | 0.044035 / 0.534201 (-0.490166) | 0.545172 / 0.579283 (-0.034112) | 0.677193 / 0.434364 (0.242829) | 0.608988 / 0.540337 (0.068650) | 0.719210 / 1.386936 (-0.667726) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008297 / 0.011353 (-0.003056) | 0.005729 / 0.011008 (-0.005280) | 0.084762 / 0.038508 (0.046254) | 0.030622 / 0.023109 (0.007512) | 0.408017 / 0.275898 (0.132119) | 0.432114 / 0.323480 (0.108634) | 0.006965 / 0.007986 (-0.001021) | 0.004830 / 0.004328 (0.000502) | 0.087375 / 0.004250 (0.083124) | 0.048110 / 0.037052 (0.011058) | 0.414978 / 0.258489 (0.156489) | 0.446136 / 0.293841 (0.152295) | 0.064351 / 0.128546 (-0.064195) | 0.018273 / 0.075646 (-0.057374) | 0.114853 / 0.419271 (-0.304418) | 0.056962 / 0.043533 (0.013429) | 0.427791 / 0.255139 (0.172652) | 0.428829 / 0.283200 (0.145629) | 0.108004 / 0.141683 (-0.033679) | 1.639285 / 1.452155 (0.187130) | 1.652106 / 1.492716 (0.159390) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.359744 / 0.018006 (0.341738) | 0.596060 / 0.000490 (0.595570) | 0.025448 / 0.000200 (0.025248) | 0.000158 / 0.000054 (0.000104) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026348 / 0.037411 (-0.011064) | 0.119153 / 0.014526 (0.104628) | 0.129304 / 0.176557 (-0.047253) | 0.195670 / 0.737135 (-0.541465) | 0.135559 / 0.296338 (-0.160780) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.588963 / 0.215209 (0.373754) | 5.682957 / 2.077655 (3.605302) | 2.380178 / 1.504120 (0.876059) | 2.131299 / 1.541195 (0.590104) | 2.167839 / 1.468490 (0.699349) | 1.126418 / 4.584777 (-3.458359) | 5.289104 / 3.745712 (1.543392) | 2.952128 / 5.269862 (-2.317734) | 1.922974 / 4.565676 (-2.642702) | 0.143874 / 0.424275 (-0.280401) | 0.015399 / 0.007607 (0.007792) | 0.815675 / 0.226044 (0.589631) | 7.320146 / 2.268929 (5.051217) | 3.453670 / 55.444624 (-51.990954) | 2.579133 / 6.876477 (-4.297344) | 2.532331 / 2.142072 (0.390258) | 1.345881 / 4.805227 (-3.459347) | 0.242448 / 6.500664 (-6.258216) | 0.070007 / 0.075469 (-0.005462) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.433173 / 1.841788 (-0.408614) | 17.127287 / 8.074308 (9.052979) | 17.953878 / 10.191392 (7.762486) | 0.220035 / 0.680424 (-0.460389) | 0.028660 / 0.534201 (-0.505541) | 0.496233 / 0.579283 (-0.083050) | 0.591587 / 0.434364 (0.157223) | 0.635204 / 0.540337 (0.094867) | 0.702143 / 1.386936 (-0.684793) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7cfac43b980ab9e4a69c2328f085770996323005 \"CML watermark\")\n"
] | 2023-02-02T08:56:15 | 2023-02-02T11:26:08 | 2023-02-02T11:17:15 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5497",
"html_url": "https://github.com/huggingface/datasets/pull/5497",
"diff_url": "https://github.com/huggingface/datasets/pull/5497.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5497.patch",
"merged_at": "2023-02-02T11:17:14"
} | Using `use_auth_token=True` is not needed anymore. If a user is logged in, the token will be retrieved automatically. Also include a mention of gated repos.
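A hedged sketch of the flow this describes (the gated dataset name is just an example, not part of this PR):
```python
# After authenticating once (e.g. via `huggingface-cli login`), the stored
# token is retrieved automatically; no use_auth_token=True is needed,
# even for gated repos such as imagenet-1k.
from datasets import load_dataset

ds = load_dataset("imagenet-1k", split="validation")
```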
See https://github.com/huggingface/huggingface_hub/pull/1064 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5497/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5497/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5496 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5496/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5496/comments | https://api.github.com/repos/huggingface/datasets/issues/5496/events | https://github.com/huggingface/datasets/issues/5496 | 1,567,301,765 | I_kwDODunzps5dayCF | 5,496 | Add a `reduce` method | {
"login": "zhangir-azerbayev",
"id": 59542043,
"node_id": "MDQ6VXNlcjU5NTQyMDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/59542043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangir-azerbayev",
"html_url": "https://github.com/zhangir-azerbayev",
"followers_url": "https://api.github.com/users/zhangir-azerbayev/followers",
"following_url": "https://api.github.com/users/zhangir-azerbayev/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangir-azerbayev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangir-azerbayev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangir-azerbayev/subscriptions",
"organizations_url": "https://api.github.com/users/zhangir-azerbayev/orgs",
"repos_url": "https://api.github.com/users/zhangir-azerbayev/repos",
"events_url": "https://api.github.com/users/zhangir-azerbayev/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangir-azerbayev/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! Sure, feel free to open a PR, so we can see the API you have in mind.",
"I would like to give it a go! #self-assign",
"Closing as `Dataset.map` can be used instead (see https://github.com/huggingface/datasets/pull/5533#issuecomment-1440571658 and https://github.com/huggingface/datasets/pull/5533#issuecomment-1446403263)"
] | 2023-02-02T04:30:22 | 2023-07-21T14:24:32 | 2023-07-21T14:24:32 | NONE | null | null | null | ### Feature request
Right now the `Dataset` class implements `map()` and `filter()`, but leaves out the third functional idiom popular among Python users: `reduce`.
### Motivation
A `reduce` method is often useful when calculating dataset statistics, for example, the occurrence of a particular n-gram or the average line length of a code dataset.
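A minimal sketch of such an aggregation written with today's API, i.e. a plain fold over a column (the dataset and statistic are illustrative; per the comments above, `Dataset.map` was ultimately recommended over a dedicated method):
```python
from functools import reduce

from datasets import load_dataset

ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

# Fold over the "text" column to compute the average line length.
total_chars = reduce(lambda acc, text: acc + len(text), ds["text"], 0)
print(total_chars / len(ds))
```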
### Your contribution
I haven't contributed to `datasets` before, but I don't expect this will be too difficult, since the implementation will closely follow that of `map` and `filter`. I could have a crack over the weekend. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5496/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5495/comments | https://api.github.com/repos/huggingface/datasets/issues/5495/events | https://github.com/huggingface/datasets/issues/5495 | 1,566,803,452 | I_kwDODunzps5dY4X8 | 5,495 | to_tf_dataset fails with datetime UTC columns even if not included in columns argument | {
"login": "dwyatte",
"id": 2512762,
"node_id": "MDQ6VXNlcjI1MTI3NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwyatte",
"html_url": "https://github.com/dwyatte",
"followers_url": "https://api.github.com/users/dwyatte/followers",
"following_url": "https://api.github.com/users/dwyatte/following{/other_user}",
"gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions",
"organizations_url": "https://api.github.com/users/dwyatte/orgs",
"repos_url": "https://api.github.com/users/dwyatte/repos",
"events_url": "https://api.github.com/users/dwyatte/events{/privacy}",
"received_events_url": "https://api.github.com/users/dwyatte/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Hi! This is indeed a bug in our zero-copy logic.\r\n\r\nTo fix it, instead of the line:\r\nhttps://github.com/huggingface/datasets/blob/7cfac43b980ab9e4a69c2328f085770996323005/src/datasets/features/features.py#L702\r\n\r\nwe should have:\r\n```python\r\nreturn pa.types.is_primitive(pa_type) and not (pa.types.is_boolean(pa_type) or pa.types.is_temporal(pa_type))\r\n```",
"@mariosasko submitted a small PR [here](https://github.com/huggingface/datasets/pull/5504)"
] | 2023-02-01T20:47:33 | 2023-02-08T14:33:19 | 2023-02-08T14:33:19 | CONTRIBUTOR | null | null | null | ### Describe the bug
There appears to be some eager behavior in `to_tf_dataset` that runs against every column in a dataset, even columns that aren't included in the `columns` argument. This is problematic with UTC datetime columns because they do not support zero-copy conversion. If I don't have UTC information in my datetime column, everything works as expected.
### Steps to reproduce the bug
```python
import numpy as np
import pandas as pd
from datasets import Dataset
df = pd.DataFrame(np.random.rand(2, 1), columns=["x"])
# df["dt"] = pd.to_datetime(["2023-01-01", "2023-01-01"]) # works fine
df["dt"] = pd.to_datetime(["2023-01-01 00:00:00.00000+00:00", "2023-01-01 00:00:00.00000+00:00"])
df.to_parquet("test.pq")
ds = Dataset.from_parquet("test.pq")
tf_ds = ds.to_tf_dataset(columns=["x"], batch_size=2, shuffle=True)
```
```
ArrowInvalid Traceback (most recent call last)
Cell In[1], line 12
8 df.to_parquet("test.pq")
11 ds = Dataset.from_parquet("test.pq")
---> 12 tf_ds = ds.to_tf_dataset(columns=["x"], batch_size=2, shuffle=True)
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:411, in TensorflowDatasetMixin.to_tf_dataset(self, batch_size, columns, shuffle, collate_fn, drop_remainder, collate_fn_args, label_cols, prefetch, num_workers)
407 dataset = self
409 # TODO(Matt, QL): deprecate the retention of label_ids and label
--> 411 output_signature, columns_to_np_types = dataset._get_output_signature(
412 dataset,
413 collate_fn=collate_fn,
414 collate_fn_args=collate_fn_args,
415 cols_to_retain=cols_to_retain,
416 batch_size=batch_size if drop_remainder else None,
417 )
419 if "labels" in output_signature:
420 if ("label_ids" in columns or "label" in columns) and "labels" not in columns:
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:254, in TensorflowDatasetMixin._get_output_signature(dataset, collate_fn, collate_fn_args, cols_to_retain, batch_size, num_test_batches)
252 for _ in range(num_test_batches):
253 indices = sample(range(len(dataset)), test_batch_size)
--> 254 test_batch = dataset[indices]
255 if cols_to_retain is not None:
256 test_batch = {key: value for key, value in test_batch.items() if key in cols_to_retain}
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2590, in Dataset.__getitem__(self, key)
2588 def __getitem__(self, key): # noqa: F811
2589 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2590 return self._getitem(
2591 key,
2592 )
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2575, in Dataset._getitem(self, key, **kwargs)
2573 formatter = get_formatter(format_type, features=self.features, **format_kwargs)
2574 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2575 formatted_output = format_table(
2576 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2577 )
2578 return formatted_output
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:634, in format_table(table, key, formatter, format_columns, output_all_columns)
632 python_formatter = PythonFormatter(features=None)
633 if format_columns is None:
--> 634 return formatter(pa_table, query_type=query_type)
635 elif query_type == "column":
636 if key in format_columns:
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:410, in Formatter.__call__(self, pa_table, query_type)
408 return self.format_column(pa_table)
409 elif query_type == "batch":
--> 410 return self.format_batch(pa_table)
File ~/venv/lib/python3.8/site-packages/datasets/formatting/np_formatter.py:78, in NumpyFormatter.format_batch(self, pa_table)
77 def format_batch(self, pa_table: pa.Table) -> Mapping:
---> 78 batch = self.numpy_arrow_extractor().extract_batch(pa_table)
79 batch = self.python_features_decoder.decode_batch(batch)
80 batch = self.recursive_tensorize(batch)
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:164, in NumpyArrowExtractor.extract_batch(self, pa_table)
163 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 164 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:164, in <dictcomp>(.0)
163 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 164 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:185, in NumpyArrowExtractor._arrow_array_to_numpy(self, pa_array)
181 else:
182 zero_copy_only = _is_zero_copy_only(pa_array.type) and all(
183 not _is_array_with_nulls(chunk) for chunk in pa_array.chunks
184 )
--> 185 array: List = [
186 row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)
187 ]
188 else:
189 if isinstance(pa_array.type, _ArrayXDExtensionType):
190 # don't call to_pylist() to preserve dtype of the fixed-size array
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:186, in <listcomp>(.0)
181 else:
182 zero_copy_only = _is_zero_copy_only(pa_array.type) and all(
183 not _is_array_with_nulls(chunk) for chunk in pa_array.chunks
184 )
185 array: List = [
--> 186 row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)
187 ]
188 else:
189 if isinstance(pa_array.type, _ArrayXDExtensionType):
190 # don't call to_pylist() to preserve dtype of the fixed-size array
File ~/venv/lib/python3.8/site-packages/pyarrow/array.pxi:1475, in pyarrow.lib.Array.to_numpy()
File ~/venv/lib/python3.8/site-packages/pyarrow/error.pxi:100, in pyarrow.lib.check_status()
ArrowInvalid: Needed to copy 1 chunks with 0 nulls, but zero_copy_only was True
```
### Expected behavior
I think there are two potential issues/fixes (a possible interim workaround is sketched after this list):
1. Proper handling of datetime UTC columns (perhaps there is something incorrect with zero copy handling here)
2. Not eagerly running against every column in a dataset when the columns argument of `to_tf_dataset` specifies a subset of columns (although I'm not sure if this is unavoidable)
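In the meantime, a hedged workaround sketch, continuing from the reproduction above: drop the timezone-aware column before conversion, since it is not among the requested columns anyway.
```python
# Sketch only: remove_columns is a standard Dataset method. Dropping the
# timezone-aware "dt" column avoids the eager zero-copy conversion that
# raises ArrowInvalid above.
ds_without_dt = ds.remove_columns(["dt"])
tf_ds = ds_without_dt.to_tf_dataset(columns=["x"], batch_size=2, shuffle=True)
```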
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-13.2-x86_64-i386-64bit
- Python version: 3.8.12
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5495/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5494/comments | https://api.github.com/repos/huggingface/datasets/issues/5494/events | https://github.com/huggingface/datasets/issues/5494 | 1,566,655,348 | I_kwDODunzps5dYUN0 | 5,494 | Update audio installation doc page | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"Totally agree, the docs should be in sync with our code.\r\n\r\nIndeed to avoid confusing users, I think we should have updated the docs at the same time as this PR:\r\n- #5167",
"@albertvillanova yeah sure I should have, but I forgot back then, sorry for that 😶",
"No, @polinaeterna, nothing to be sorry about.\r\n\r\nMy comment was for all of us datasets team, as a reminder: when making a PR, but also when reviewing some other's PR, we should not forget to update the corresponding docstring and doc pages. It is something we can improve if we help each other in reminding about it... :hugs: ",
"@polinaeterna I think we can close this issue now as we no longer use `torchaudio` for decoding."
] | 2023-02-01T19:07:50 | 2023-03-02T16:08:17 | 2023-03-02T16:08:17 | CONTRIBUTOR | null | null | null | Our [installation documentation page](https://huggingface.co/docs/datasets/installation#audio) says that one can use Datasets for mp3 only with `torchaudio<0.12`. `torchaudio>0.12` is actually supported too, but it requires a specific version of ffmpeg that is not easily installed on all Linux versions. There is, however, a custom Ubuntu repo for it, and we have instructions in the code: https://github.com/huggingface/datasets/blob/main/src/datasets/features/audio.py#L327
So we should update the doc page, but first we should investigate [this issue](5488). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5494/reactions",
"total_count": 3,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/5494/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5493 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5493/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5493/comments | https://api.github.com/repos/huggingface/datasets/issues/5493/events | https://github.com/huggingface/datasets/pull/5493 | 1,566,637,806 | PR_kwDODunzps5JCSAZ | 5,493 | Remove unused `load_from_cache_file` arg from `Dataset.shard()` docstring | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5493). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008956 / 0.011353 (-0.002397) | 0.004590 / 0.011008 (-0.006418) | 0.101305 / 0.038508 (0.062797) | 0.030347 / 0.023109 (0.007237) | 0.302492 / 0.275898 (0.026594) | 0.335986 / 0.323480 (0.012506) | 0.007272 / 0.007986 (-0.000714) | 0.004303 / 0.004328 (-0.000025) | 0.078592 / 0.004250 (0.074341) | 0.035545 / 0.037052 (-0.001507) | 0.316052 / 0.258489 (0.057563) | 0.342523 / 0.293841 (0.048682) | 0.034128 / 0.128546 (-0.094419) | 0.011475 / 0.075646 (-0.064171) | 0.325272 / 0.419271 (-0.093999) | 0.041815 / 0.043533 (-0.001717) | 0.303093 / 0.255139 (0.047955) | 0.331987 / 0.283200 (0.048788) | 0.087264 / 0.141683 (-0.054419) | 1.476284 / 1.452155 (0.024129) | 1.562034 / 1.492716 (0.069318) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206502 / 0.018006 (0.188496) | 0.409893 / 0.000490 (0.409404) | 0.002479 / 0.000200 (0.002279) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022891 / 0.037411 (-0.014520) | 0.100209 / 0.014526 (0.085683) | 0.105576 / 0.176557 (-0.070981) | 0.141035 / 0.737135 (-0.596100) | 0.109733 / 0.296338 (-0.186606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413791 / 0.215209 (0.198582) | 4.125890 / 2.077655 (2.048235) | 1.833023 / 1.504120 (0.328903) | 1.631325 / 1.541195 (0.090130) | 1.708406 / 1.468490 
(0.239916) | 0.690100 / 4.584777 (-3.894677) | 3.379058 / 3.745712 (-0.366654) | 2.019044 / 5.269862 (-3.250818) | 1.323332 / 4.565676 (-3.242344) | 0.082709 / 0.424275 (-0.341566) | 0.012434 / 0.007607 (0.004827) | 0.527139 / 0.226044 (0.301095) | 5.271529 / 2.268929 (3.002601) | 2.297311 / 55.444624 (-53.147314) | 1.949021 / 6.876477 (-4.927456) | 2.001098 / 2.142072 (-0.140975) | 0.811591 / 4.805227 (-3.993636) | 0.149028 / 6.500664 (-6.351637) | 0.066233 / 0.075469 (-0.009236) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254276 / 1.841788 (-0.587512) | 13.638485 / 8.074308 (5.564177) | 13.943274 / 10.191392 (3.751882) | 0.147426 / 0.680424 (-0.532997) | 0.028602 / 0.534201 (-0.505599) | 0.398080 / 0.579283 (-0.181203) | 0.402178 / 0.434364 (-0.032186) | 0.477045 / 0.540337 (-0.063292) | 0.567731 / 1.386936 (-0.819205) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006936 / 0.011353 (-0.004417) | 0.004614 / 0.011008 (-0.006394) | 0.079779 / 0.038508 (0.041271) | 0.027941 / 0.023109 (0.004832) | 0.347224 / 0.275898 (0.071326) | 0.378183 / 0.323480 (0.054703) | 0.005249 / 0.007986 (-0.002737) | 0.004907 / 0.004328 (0.000579) | 0.078678 / 0.004250 (0.074428) | 0.041912 / 0.037052 (0.004860) | 0.347838 / 0.258489 (0.089349) | 0.386760 / 0.293841 (0.092919) | 0.032680 / 0.128546 (-0.095867) | 0.014321 / 0.075646 (-0.061325) | 0.087924 / 0.419271 (-0.331347) | 0.045060 / 0.043533 (0.001527) | 0.340986 / 0.255139 (0.085847) | 0.368689 / 0.283200 (0.085489) | 0.093274 / 0.141683 (-0.048409) | 1.474435 / 1.452155 (0.022281) | 1.569753 / 1.492716 (0.077037) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206789 / 0.018006 (0.188783) | 0.416518 / 0.000490 (0.416028) | 0.000404 / 0.000200 (0.000204) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026207 / 0.037411 (-0.011205) | 0.101914 / 0.014526 (0.087388) | 0.108585 / 0.176557 (-0.067972) | 0.150438 / 0.737135 (-0.586697) | 0.110744 / 0.296338 (-0.185594) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443571 / 0.215209 (0.228362) | 4.433139 / 2.077655 (2.355485) | 2.109525 / 1.504120 (0.605405) | 1.901484 / 1.541195 (0.360290) | 1.968812 / 1.468490 (0.500322) | 0.704334 / 4.584777 (-3.880443) | 3.392028 / 3.745712 (-0.353684) | 3.072693 / 5.269862 (-2.197168) | 1.552227 / 4.565676 (-3.013449) | 0.083741 / 0.424275 (-0.340534) | 0.012627 / 0.007607 (0.005020) | 0.544706 / 0.226044 (0.318662) | 5.462743 / 2.268929 (3.193815) | 2.551265 / 55.444624 (-52.893360) | 2.208075 / 6.876477 (-4.668401) | 2.259092 / 2.142072 (0.117020) | 0.810687 / 4.805227 (-3.994540) | 0.152347 / 6.500664 (-6.348317) | 0.068346 / 0.075469 (-0.007123) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.269716 / 1.841788 (-0.572072) | 14.215698 / 8.074308 (6.141390) | 13.691773 / 10.191392 (3.500381) | 0.152620 / 0.680424 (-0.527804) | 0.017219 / 0.534201 (-0.516982) | 0.382533 / 0.579283 (-0.196750) | 0.388994 / 0.434364 (-0.045370) | 0.479400 / 0.540337 (-0.060938) | 0.572699 / 1.386936 (-0.814237) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f2d90f14cd6e756abeb27045940a6756104cc2d6 \"CML watermark\")\n"
] | 2023-02-01T18:57:48 | 2023-02-08T15:10:46 | 2023-02-08T15:03:50 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5493",
"html_url": "https://github.com/huggingface/datasets/pull/5493",
"diff_url": "https://github.com/huggingface/datasets/pull/5493.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5493.patch",
"merged_at": "2023-02-08T15:03:50"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5493/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5492/comments | https://api.github.com/repos/huggingface/datasets/issues/5492/events | https://github.com/huggingface/datasets/issues/5492 | 1,566,604,216 | I_kwDODunzps5dYHu4 | 5,492 | Push_to_hub in a pull request | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
},
{
"login": "AJDERS",
"id": 38854604,
"node_id": "MDQ6VXNlcjM4ODU0NjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/38854604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AJDERS",
"html_url": "https://github.com/AJDERS",
"followers_url": "https://api.github.com/users/AJDERS/followers",
"following_url": "https://api.github.com/users/AJDERS/following{/other_user}",
"gists_url": "https://api.github.com/users/AJDERS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AJDERS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AJDERS/subscriptions",
"organizations_url": "https://api.github.com/users/AJDERS/orgs",
"repos_url": "https://api.github.com/users/AJDERS/repos",
"events_url": "https://api.github.com/users/AJDERS/events{/privacy}",
"received_events_url": "https://api.github.com/users/AJDERS/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Assigned to myself and will get to it in the next week, but if someone finds this issue annoying and wants to submit a PR before I do, just ping me here and I'll reassign :). ",
"I would like to be assigned to this issue, @nateraw . #self-assign"
] | 2023-02-01T18:32:14 | 2023-10-16T13:30:48 | 2023-10-16T13:30:48 | MEMBER | null | null | null | Right now `ds.push_to_hub()` can push a dataset to `main` or to a new branch with `branch=`, but there is no way to open a pull request. Even passing `branch=refs/pr/x` doesn't seem to work: it tries to create a branch with that name.
cc @nateraw
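For reference, `huggingface_hub`'s commit API already exposes a `create_pr` flag; a minimal sketch of the desired behavior, assuming the split is first serialized to a local parquet file (the repo id and file paths are placeholders):

```python
from datasets import load_dataset
from huggingface_hub import CommitOperationAdd, HfApi

ds = load_dataset("csv", data_files="data.csv", split="train")  # placeholder dataset
ds.to_parquet("train.parquet")  # serialize the shard locally first

api = HfApi()
api.create_commit(
    repo_id="username/my-dataset",  # placeholder repo id
    repo_type="dataset",
    operations=[CommitOperationAdd(path_in_repo="data/train.parquet", path_or_fileobj="train.parquet")],
    commit_message="Upload dataset",
    create_pr=True,  # open a pull request instead of committing to main
    # pushing to an existing PR should work by passing e.g. revision="refs/pr/1" instead
)
```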
It should be possible to tweak the use of `huggingface_hub` in `push_to_hub` to make it open a PR or push to an existing PR. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5492/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5491 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5491/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5491/comments | https://api.github.com/repos/huggingface/datasets/issues/5491/events | https://github.com/huggingface/datasets/pull/5491 | 1,566,235,012 | PR_kwDODunzps5JA9OD | 5,491 | [MINOR] Typo | {
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008726 / 0.011353 (-0.002627) | 0.004589 / 0.011008 (-0.006419) | 0.101078 / 0.038508 (0.062570) | 0.029732 / 0.023109 (0.006622) | 0.298309 / 0.275898 (0.022411) | 0.367800 / 0.323480 (0.044320) | 0.007025 / 0.007986 (-0.000961) | 0.003513 / 0.004328 (-0.000815) | 0.079531 / 0.004250 (0.075281) | 0.035588 / 0.037052 (-0.001465) | 0.307850 / 0.258489 (0.049361) | 0.351603 / 0.293841 (0.057762) | 0.033593 / 0.128546 (-0.094954) | 0.011669 / 0.075646 (-0.063977) | 0.323025 / 0.419271 (-0.096246) | 0.042047 / 0.043533 (-0.001486) | 0.300565 / 0.255139 (0.045426) | 0.329362 / 0.283200 (0.046163) | 0.089001 / 0.141683 (-0.052682) | 1.472799 / 1.452155 (0.020644) | 1.488902 / 1.492716 (-0.003814) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012491 / 0.018006 (-0.005515) | 0.408245 / 0.000490 (0.407755) | 0.003878 / 0.000200 (0.003678) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023698 / 0.037411 (-0.013713) | 0.100442 / 0.014526 (0.085916) | 0.108233 / 0.176557 (-0.068323) | 0.145308 / 0.737135 (-0.591827) | 0.113121 / 0.296338 (-0.183218) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420490 / 0.215209 (0.205281) | 4.179838 / 2.077655 (2.102183) | 2.156007 / 1.504120 (0.651887) | 1.911358 / 1.541195 (0.370163) | 1.867961 / 1.468490 
(0.399471) | 0.685254 / 4.584777 (-3.899523) | 3.382386 / 3.745712 (-0.363326) | 3.285657 / 5.269862 (-1.984205) | 1.693878 / 4.565676 (-2.871798) | 0.081680 / 0.424275 (-0.342595) | 0.012182 / 0.007607 (0.004575) | 0.526021 / 0.226044 (0.299977) | 5.276217 / 2.268929 (3.007289) | 2.541518 / 55.444624 (-52.903106) | 2.313452 / 6.876477 (-4.563025) | 2.340000 / 2.142072 (0.197928) | 0.807099 / 4.805227 (-3.998128) | 0.147587 / 6.500664 (-6.353077) | 0.064280 / 0.075469 (-0.011189) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223466 / 1.841788 (-0.618321) | 13.911365 / 8.074308 (5.837057) | 14.261550 / 10.191392 (4.070158) | 0.135922 / 0.680424 (-0.544502) | 0.028832 / 0.534201 (-0.505368) | 0.393142 / 0.579283 (-0.186141) | 0.400507 / 0.434364 (-0.033857) | 0.471792 / 0.540337 (-0.068546) | 0.558278 / 1.386936 (-0.828658) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006644 / 0.011353 (-0.004709) | 0.004531 / 0.011008 (-0.006478) | 0.076285 / 0.038508 (0.037777) | 0.027249 / 0.023109 (0.004140) | 0.343137 / 0.275898 (0.067239) | 0.378498 / 0.323480 (0.055018) | 0.004950 / 0.007986 (-0.003036) | 0.003422 / 0.004328 (-0.000907) | 0.075662 / 0.004250 (0.071412) | 0.039692 / 0.037052 (0.002640) | 0.343402 / 0.258489 (0.084913) | 0.385067 / 0.293841 (0.091226) | 0.032382 / 0.128546 (-0.096164) | 0.011577 / 0.075646 (-0.064069) | 0.085534 / 0.419271 (-0.333738) | 0.052139 / 0.043533 (0.008606) | 0.342176 / 0.255139 (0.087037) | 0.367298 / 0.283200 (0.084098) | 0.096088 / 0.141683 (-0.045595) | 1.470770 / 1.452155 (0.018615) | 1.567316 / 1.492716 (0.074600) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217664 / 0.018006 (0.199657) | 0.397807 / 0.000490 (0.397317) | 0.006864 / 0.000200 (0.006664) | 0.000099 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025064 / 0.037411 (-0.012348) | 0.100906 / 0.014526 (0.086380) | 0.107444 / 0.176557 (-0.069113) | 0.143679 / 0.737135 (-0.593457) | 0.112460 / 0.296338 (-0.183879) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442634 / 0.215209 (0.227425) | 4.410687 / 2.077655 (2.333032) | 2.067445 / 1.504120 (0.563325) | 1.860569 / 1.541195 (0.319374) | 1.943523 / 1.468490 (0.475033) | 0.694585 / 4.584777 (-3.890192) | 3.375906 / 3.745712 (-0.369806) | 3.483334 / 5.269862 (-1.786528) | 1.437700 / 4.565676 (-3.127977) | 0.083138 / 0.424275 (-0.341137) | 0.012979 / 0.007607 (0.005372) | 0.536414 / 0.226044 (0.310370) | 5.379872 / 2.268929 (3.110943) | 2.517907 / 55.444624 (-52.926717) | 2.164772 / 6.876477 (-4.711705) | 2.212839 / 2.142072 (0.070767) | 0.799675 / 4.805227 (-4.005553) | 0.150253 / 6.500664 (-6.350411) | 0.067033 / 0.075469 (-0.008436) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.295592 / 1.841788 (-0.546196) | 14.372932 / 8.074308 (6.298623) | 13.618423 / 10.191392 (3.427031) | 0.141212 / 0.680424 (-0.539212) | 0.016933 / 0.534201 (-0.517268) | 0.385664 / 0.579283 (-0.193619) | 0.386919 / 0.434364 (-0.047445) | 0.477022 / 0.540337 (-0.063315) | 0.565158 / 1.386936 (-0.821778) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#38c715cc787a81d0fd894205b4b24aca2f45f84b \"CML watermark\")\n"
] | 2023-02-01T14:39:39 | 2023-02-02T07:42:28 | 2023-02-02T07:35:14 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5491",
"html_url": "https://github.com/huggingface/datasets/pull/5491",
"diff_url": "https://github.com/huggingface/datasets/pull/5491.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5491.patch",
"merged_at": "2023-02-02T07:35:14"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5491/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5490 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5490/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5490/comments | https://api.github.com/repos/huggingface/datasets/issues/5490/events | https://github.com/huggingface/datasets/pull/5490 | 1,565,842,327 | PR_kwDODunzps5I_nz- | 5,490 | Do not add index column by default when exporting to CSV | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008581 / 0.011353 (-0.002772) | 0.004519 / 0.011008 (-0.006490) | 0.099721 / 0.038508 (0.061213) | 0.029217 / 0.023109 (0.006107) | 0.298229 / 0.275898 (0.022331) | 0.332605 / 0.323480 (0.009125) | 0.006880 / 0.007986 (-0.001106) | 0.003324 / 0.004328 (-0.001005) | 0.078143 / 0.004250 (0.073892) | 0.034262 / 0.037052 (-0.002790) | 0.304162 / 0.258489 (0.045673) | 0.342351 / 0.293841 (0.048510) | 0.033387 / 0.128546 (-0.095159) | 0.011397 / 0.075646 (-0.064249) | 0.321527 / 0.419271 (-0.097744) | 0.040886 / 0.043533 (-0.002647) | 0.299968 / 0.255139 (0.044829) | 0.322484 / 0.283200 (0.039285) | 0.083832 / 0.141683 (-0.057851) | 1.482241 / 1.452155 (0.030086) | 1.548438 / 1.492716 (0.055721) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191002 / 0.018006 (0.172996) | 0.403423 / 0.000490 (0.402933) | 0.002493 / 0.000200 (0.002293) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023720 / 0.037411 (-0.013691) | 0.100806 / 0.014526 (0.086281) | 0.105314 / 0.176557 (-0.071242) | 0.141490 / 0.737135 (-0.595645) | 0.108695 / 0.296338 (-0.187644) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412250 / 0.215209 (0.197041) | 4.124830 / 2.077655 (2.047175) | 1.851948 / 1.504120 (0.347828) | 1.651597 / 1.541195 (0.110403) | 1.712486 / 1.468490 
(0.243996) | 0.696634 / 4.584777 (-3.888143) | 3.304220 / 3.745712 (-0.441492) | 1.862776 / 5.269862 (-3.407086) | 1.159452 / 4.565676 (-3.406224) | 0.082930 / 0.424275 (-0.341345) | 0.012586 / 0.007607 (0.004979) | 0.524499 / 0.226044 (0.298455) | 5.249235 / 2.268929 (2.980307) | 2.293187 / 55.444624 (-53.151437) | 1.950101 / 6.876477 (-4.926376) | 2.008274 / 2.142072 (-0.133799) | 0.811641 / 4.805227 (-3.993586) | 0.148785 / 6.500664 (-6.351879) | 0.064461 / 0.075469 (-0.011008) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.232227 / 1.841788 (-0.609561) | 13.235896 / 8.074308 (5.161588) | 13.837420 / 10.191392 (3.646028) | 0.135586 / 0.680424 (-0.544838) | 0.028935 / 0.534201 (-0.505266) | 0.397064 / 0.579283 (-0.182220) | 0.393814 / 0.434364 (-0.040549) | 0.480450 / 0.540337 (-0.059887) | 0.561159 / 1.386936 (-0.825777) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006696 / 0.011353 (-0.004657) | 0.004528 / 0.011008 (-0.006480) | 0.077335 / 0.038508 (0.038827) | 0.027181 / 0.023109 (0.004072) | 0.345379 / 0.275898 (0.069481) | 0.372544 / 0.323480 (0.049064) | 0.006808 / 0.007986 (-0.001178) | 0.003284 / 0.004328 (-0.001045) | 0.077379 / 0.004250 (0.073129) | 0.039954 / 0.037052 (0.002901) | 0.348094 / 0.258489 (0.089605) | 0.382315 / 0.293841 (0.088474) | 0.031694 / 0.128546 (-0.096852) | 0.011714 / 0.075646 (-0.063933) | 0.086425 / 0.419271 (-0.332846) | 0.041778 / 0.043533 (-0.001754) | 0.342161 / 0.255139 (0.087022) | 0.363798 / 0.283200 (0.080599) | 0.091315 / 0.141683 (-0.050368) | 1.462066 / 1.452155 (0.009912) | 1.541417 / 1.492716 (0.048700) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235840 / 0.018006 (0.217834) | 0.397096 / 0.000490 (0.396606) | 0.004597 / 0.000200 (0.004397) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024296 / 0.037411 (-0.013115) | 0.099167 / 0.014526 (0.084641) | 0.108257 / 0.176557 (-0.068299) | 0.143434 / 0.737135 (-0.593701) | 0.111933 / 0.296338 (-0.184406) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440306 / 0.215209 (0.225096) | 4.374065 / 2.077655 (2.296410) | 2.072653 / 1.504120 (0.568533) | 1.864829 / 1.541195 (0.323635) | 1.927970 / 1.468490 (0.459479) | 0.710118 / 4.584777 (-3.874659) | 3.391216 / 3.745712 (-0.354496) | 1.888847 / 5.269862 (-3.381015) | 1.178740 / 4.565676 (-3.386936) | 0.083950 / 0.424275 (-0.340325) | 0.012567 / 0.007607 (0.004960) | 0.540557 / 0.226044 (0.314513) | 5.437621 / 2.268929 (3.168692) | 2.531165 / 55.444624 (-52.913460) | 2.181450 / 6.876477 (-4.695027) | 2.209108 / 2.142072 (0.067035) | 0.814236 / 4.805227 (-3.990991) | 0.153000 / 6.500664 (-6.347664) | 0.066769 / 0.075469 (-0.008700) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.301057 / 1.841788 (-0.540731) | 14.066786 / 8.074308 (5.992478) | 13.641455 / 10.191392 (3.450063) | 0.138838 / 0.680424 (-0.541586) | 0.016733 / 0.534201 (-0.517468) | 0.391823 / 0.579283 (-0.187460) | 0.390817 / 0.434364 (-0.043547) | 0.487682 / 0.540337 (-0.052656) | 0.581134 / 1.386936 (-0.805802) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b065547654efa0ec633cf373ac1512884c68b2e1 \"CML watermark\")\n"
] | 2023-02-01T10:20:55 | 2023-02-09T09:29:08 | 2023-02-09T09:22:23 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5490",
"html_url": "https://github.com/huggingface/datasets/pull/5490",
"diff_url": "https://github.com/huggingface/datasets/pull/5490.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5490.patch",
"merged_at": "2023-02-09T09:22:23"
} | As pointed out by @merveenoyan, the default behavior of `Dataset.to_csv` adds the index as an additional column without a name.
This PR changes the default behavior so that the index column is no longer written.
To add the index column, you now need to pass `index=True` and also `index_label=<name of the index column>` to name that column.
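For illustration, the two behaviors look like this from user code (a minimal sketch; the extra keyword arguments are forwarded to pandas' `to_csv`):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})

# New default: no unnamed index column is written.
ds.to_csv("data.csv")

# Opt back in: write the index and name its column.
ds.to_csv("data_with_index.csv", index=True, index_label="idx")
```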
CC: @merveenoyan | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5490/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5490/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5489 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5489/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5489/comments | https://api.github.com/repos/huggingface/datasets/issues/5489/events | https://github.com/huggingface/datasets/pull/5489 | 1,565,761,705 | PR_kwDODunzps5I_WPH | 5,489 | Pin dill lower version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008798 / 0.011353 (-0.002554) | 0.005313 / 0.011008 (-0.005695) | 0.099234 / 0.038508 (0.060726) | 0.033935 / 0.023109 (0.010826) | 0.306610 / 0.275898 (0.030712) | 0.373151 / 0.323480 (0.049671) | 0.008305 / 0.007986 (0.000320) | 0.004647 / 0.004328 (0.000319) | 0.079984 / 0.004250 (0.075733) | 0.042546 / 0.037052 (0.005493) | 0.355105 / 0.258489 (0.096616) | 0.332769 / 0.293841 (0.038928) | 0.037708 / 0.128546 (-0.090839) | 0.012141 / 0.075646 (-0.063505) | 0.365338 / 0.419271 (-0.053933) | 0.048875 / 0.043533 (0.005343) | 0.301771 / 0.255139 (0.046632) | 0.323301 / 0.283200 (0.040101) | 0.099116 / 0.141683 (-0.042566) | 1.463948 / 1.452155 (0.011793) | 1.563006 / 1.492716 (0.070290) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219799 / 0.018006 (0.201793) | 0.524126 / 0.000490 (0.523636) | 0.003899 / 0.000200 (0.003699) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028361 / 0.037411 (-0.009050) | 0.111386 / 0.014526 (0.096860) | 0.125749 / 0.176557 (-0.050807) | 0.167026 / 0.737135 (-0.570109) | 0.132082 / 0.296338 (-0.164257) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.385046 / 0.215209 (0.169837) | 3.933129 / 2.077655 (1.855475) | 1.823395 / 1.504120 (0.319276) | 1.646468 / 1.541195 (0.105273) | 1.658835 / 1.468490 
(0.190344) | 0.708300 / 4.584777 (-3.876477) | 4.001478 / 3.745712 (0.255766) | 2.221773 / 5.269862 (-3.048089) | 1.597925 / 4.565676 (-2.967751) | 0.088699 / 0.424275 (-0.335577) | 0.013575 / 0.007607 (0.005968) | 0.520577 / 0.226044 (0.294533) | 5.044313 / 2.268929 (2.775385) | 2.239862 / 55.444624 (-53.204763) | 2.060394 / 6.876477 (-4.816083) | 2.060684 / 2.142072 (-0.081389) | 0.844862 / 4.805227 (-3.960365) | 0.190321 / 6.500664 (-6.310343) | 0.071595 / 0.075469 (-0.003875) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.400048 / 1.841788 (-0.441740) | 15.684159 / 8.074308 (7.609851) | 14.369298 / 10.191392 (4.177906) | 0.164874 / 0.680424 (-0.515550) | 0.033219 / 0.534201 (-0.500982) | 0.449176 / 0.579283 (-0.130107) | 0.456560 / 0.434364 (0.022196) | 0.517978 / 0.540337 (-0.022359) | 0.635467 / 1.386936 (-0.751469) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007263 / 0.011353 (-0.004089) | 0.005451 / 0.011008 (-0.005558) | 0.078785 / 0.038508 (0.040277) | 0.032656 / 0.023109 (0.009546) | 0.346384 / 0.275898 (0.070486) | 0.390778 / 0.323480 (0.067299) | 0.005848 / 0.007986 (-0.002137) | 0.004565 / 0.004328 (0.000236) | 0.077903 / 0.004250 (0.073652) | 0.048659 / 0.037052 (0.011606) | 0.368629 / 0.258489 (0.110140) | 0.401632 / 0.293841 (0.107791) | 0.038516 / 0.128546 (-0.090030) | 0.011895 / 0.075646 (-0.063752) | 0.089185 / 0.419271 (-0.330086) | 0.049875 / 0.043533 (0.006342) | 0.344771 / 0.255139 (0.089632) | 0.378237 / 0.283200 (0.095038) | 0.099184 / 0.141683 (-0.042498) | 1.505058 / 1.452155 (0.052903) | 1.555330 / 1.492716 (0.062614) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209132 / 0.018006 (0.191126) | 0.479928 / 0.000490 (0.479438) | 0.005923 / 0.000200 (0.005723) | 0.000113 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029187 / 0.037411 (-0.008224) | 0.117026 / 0.014526 (0.102500) | 0.131834 / 0.176557 (-0.044722) | 0.172797 / 0.737135 (-0.564339) | 0.129098 / 0.296338 (-0.167240) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.450214 / 0.215209 (0.235005) | 4.323950 / 2.077655 (2.246295) | 2.210100 / 1.504120 (0.705980) | 2.058733 / 1.541195 (0.517538) | 1.968191 / 1.468490 (0.499701) | 0.694918 / 4.584777 (-3.889859) | 4.176559 / 3.745712 (0.430846) | 2.118211 / 5.269862 (-3.151651) | 1.410652 / 4.565676 (-3.155024) | 0.093606 / 0.424275 (-0.330669) | 0.013729 / 0.007607 (0.006122) | 0.528463 / 0.226044 (0.302418) | 5.311766 / 2.268929 (3.042837) | 2.522981 / 55.444624 (-52.921644) | 2.177191 / 6.876477 (-4.699285) | 2.211448 / 2.142072 (0.069375) | 0.824334 / 4.805227 (-3.980893) | 0.166642 / 6.500664 (-6.334022) | 0.062774 / 0.075469 (-0.012695) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.367573 / 1.841788 (-0.474215) | 15.913637 / 8.074308 (7.839328) | 13.397411 / 10.191392 (3.206019) | 0.162599 / 0.680424 (-0.517825) | 0.020325 / 0.534201 (-0.513876) | 0.438745 / 0.579283 (-0.140538) | 0.449892 / 0.434364 (0.015528) | 0.556226 / 0.540337 (0.015888) | 0.672661 / 1.386936 (-0.714275) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5f810b7011a8a4ab077a1847c024d2d9e267b065 \"CML watermark\")\n"
] | 2023-02-01T09:33:42 | 2023-02-02T07:48:09 | 2023-02-02T07:40:43 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5489",
"html_url": "https://github.com/huggingface/datasets/pull/5489",
"diff_url": "https://github.com/huggingface/datasets/pull/5489.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5489.patch",
"merged_at": "2023-02-02T07:40:43"
} | Pin a lower version bound on `dill` that is compatible with `datasets`.
Related to:
- #5487
- #288
Note that the required `dill._dill` module was introduced in dill-2.8.0; however, we have heuristically tested that `datasets` can only be installed with dill>=3.0.0 (otherwise pip hangs indefinitely while preparing metadata for multiprocess-0.70.7).
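As a sketch, such a lower bound is a one-line requirement specifier in `setup.py` (the bound below mirrors the text above and is illustrative, not the exact merged diff):

```python
# Illustrative sketch of the pin described above, not the exact merged diff.
from setuptools import setup

setup(
    name="datasets",
    install_requires=[
        # dill._dill is required, and older dill makes pip hang while preparing
        # metadata for multiprocess-0.70.7, hence the lower bound.
        "dill>=3.0.0",
    ],
)
```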
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5489/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5488 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5488/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5488/comments | https://api.github.com/repos/huggingface/datasets/issues/5488/events | https://github.com/huggingface/datasets/issues/5488 | 1,565,025,262 | I_kwDODunzps5dSGPu | 5,488 | Error loading MP3 files from CommonVoice | {
"login": "kradonneoh",
"id": 110259722,
"node_id": "U_kgDOBpJuCg",
"avatar_url": "https://avatars.githubusercontent.com/u/110259722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kradonneoh",
"html_url": "https://github.com/kradonneoh",
"followers_url": "https://api.github.com/users/kradonneoh/followers",
"following_url": "https://api.github.com/users/kradonneoh/following{/other_user}",
"gists_url": "https://api.github.com/users/kradonneoh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kradonneoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kradonneoh/subscriptions",
"organizations_url": "https://api.github.com/users/kradonneoh/orgs",
"repos_url": "https://api.github.com/users/kradonneoh/repos",
"events_url": "https://api.github.com/users/kradonneoh/events{/privacy}",
"received_events_url": "https://api.github.com/users/kradonneoh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @kradonneoh, thanks for reporting.\r\n\r\nPlease note that to work with audio datasets (and specifically with MP3 files) we have detailed installation instructions in our docs: https://huggingface.co/docs/datasets/installation#audio\r\n- one of the requirements is torchaudio<0.12.0\r\n\r\nLet us know if the problem persists after having followed them.",
"I saw that and have followed it (hence the Expected Behavior section of the bug report). \r\n\r\nIs there no intention of updating to the latest version? It does limit the version of `torch` I can use, which isn’t ideal.",
"@kradonneoh hey! actually with `ffmpeg4` loading of mp3 files should work, so this is a not expected behavior and we need to investigate it. It works on my side with `torchaudio==0.13` and `ffmpeg==4.2.7`. Which `torchaudio` version do you use?\r\n\r\n`datasets` should support decoding of mp3 files with `torchaudio` when its version is `>0.12` but as you noted it requires `ffmpeg>4`, we need to fix this in the documentation, thank you for pointing to this! \r\n\r\nBut according to your traceback it seems that it tries to use [`libsndfile`](https://github.com/libsndfile/libsndfile) backend for mp3 decoding. And `libsndfile` library supports mp3 decoding starting from version 1.1.0 which on Linux has to be compiled from source for now afaik. \r\n\r\nfyi - we are aiming at getting rid of `torchaudio` dependency at all by the next major library release in favor of `libsndfile` too.",
"We now decode MP3 with `soundfile`, so I'm closing this issue"
] | 2023-01-31T21:25:33 | 2023-03-02T16:25:14 | 2023-03-02T16:25:13 | NONE | null | null | null | ### Describe the bug
When loading a CommonVoice dataset with `datasets==2.9.0` and `torchaudio>=0.12.0`, I get an error reading the audio arrays:
```python
---------------------------------------------------------------------------
LibsndfileError Traceback (most recent call last)
~/.local/lib/python3.8/site-packages/datasets/features/audio.py in _decode_mp3(self, path_or_file)
310 try: # try torchaudio anyway because sometimes it works (depending on the os and os packages installed)
--> 311 array, sampling_rate = self._decode_mp3_torchaudio(path_or_file)
312 except RuntimeError:
~/.local/lib/python3.8/site-packages/datasets/features/audio.py in _decode_mp3_torchaudio(self, path_or_file)
351
--> 352 array, sampling_rate = torchaudio.load(path_or_file, format="mp3")
353 if self.sampling_rate and self.sampling_rate != sampling_rate:
~/.local/lib/python3.8/site-packages/torchaudio/backend/soundfile_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
204 """
--> 205 with soundfile.SoundFile(filepath, "r") as file_:
206 if file_.format != "WAV" or normalize:
~/.local/lib/python3.8/site-packages/soundfile.py in __init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)
654 format, subtype, endian)
--> 655 self._file = self._open(file, mode_int, closefd)
656 if set(mode).issuperset('r+') and self.seekable():
~/.local/lib/python3.8/site-packages/soundfile.py in _open(self, file, mode_int, closefd)
1212 err = _snd.sf_error(file_ptr)
-> 1213 raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
1214 if mode_int == _snd.SFM_WRITE:
LibsndfileError: Error opening <_io.BytesIO object at 0x7fa539462090>: File contains data in an unknown format.
```
I assume this is because there's some issue with the mp3 decoding process. I've verified that I have `ffmpeg>=4` (on a Linux distro), which appears to be the fallback backend for `torchaudio` (at least according to #4889).
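For reference, here is a quick diagnostic sketch to check which decoding path is actually available (the version thresholds come from the comment thread; `list_audio_backends` assumes a pre-2.0 `torchaudio`):
```python
import torchaudio
import soundfile

print(torchaudio.__version__)            # mp3 via ffmpeg needs torchaudio >= 0.12 and ffmpeg >= 4
print(torchaudio.list_audio_backends())  # available backends, e.g. ['sox_io', 'soundfile']
print(soundfile.__libsndfile_version__)  # libsndfile supports mp3 only from 1.1.0
```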
### Steps to reproduce the bug
```python
dataset = load_dataset("mozilla-foundation/common_voice_11_0", "be", split="train")
dataset[0]
```
### Expected behavior
Similar behavior to `torchaudio<0.12.0`, which doesn't result in a `LibsndfileError`
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 10.0.1
- Pandas version: 1.5.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5488/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5488/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5487 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5487/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5487/comments | https://api.github.com/repos/huggingface/datasets/issues/5487/events | https://github.com/huggingface/datasets/issues/5487 | 1,564,480,121 | I_kwDODunzps5dQBJ5 | 5,487 | Incorrect filepath for dill module | {
"login": "avivbrokman",
"id": 35349273,
"node_id": "MDQ6VXNlcjM1MzQ5Mjcz",
"avatar_url": "https://avatars.githubusercontent.com/u/35349273?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avivbrokman",
"html_url": "https://github.com/avivbrokman",
"followers_url": "https://api.github.com/users/avivbrokman/followers",
"following_url": "https://api.github.com/users/avivbrokman/following{/other_user}",
"gists_url": "https://api.github.com/users/avivbrokman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avivbrokman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avivbrokman/subscriptions",
"organizations_url": "https://api.github.com/users/avivbrokman/orgs",
"repos_url": "https://api.github.com/users/avivbrokman/repos",
"events_url": "https://api.github.com/users/avivbrokman/events{/privacy}",
"received_events_url": "https://api.github.com/users/avivbrokman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! The correct path is still `dill._dill.XXXX` in the latest release. What do you get when you run `python -c \"import dill; print(dill.__version__)\"` in your environment?",
"`0.3.6` I feel like that's bad news, because it's probably not the issue.\r\n\r\nMy mistake, about the wrong path guess. I think I didn't notice that the first `dill` in the path isn't supposed to be included in the path specification in python.\r\n<img width=\"146\" alt=\"Screen Shot 2023-01-31 at 12 58 32 PM\" src=\"https://user-images.githubusercontent.com/35349273/215844209-74af6a8f-9bff-4c75-9495-44c658c8e9f7.png\">\r\n",
"Hi, @avivbrokman, this issue you report appeared only with old versions of dill. See:\r\n- #288\r\n\r\nAre you sure you are in the right Python environment?\r\n- Please note that Jupyter (where I guess you get the error) may have multiple execution backends (IPython kernels) that might be different from the Python environment your are using to get the dill version\r\n - Have you run `import dill; print(dill.__version__)` in the same Jupyter/IPython that you were using when you got the error while executing `import datasets`?",
"I'm using spyder, and I am still getting `0.3.6` for `dill`, so unfortunately #288 isn't applicable, I think. However, I found something odd that I believe is a clue: \r\n\r\n```\r\nimport inspect\r\nimport dill\r\n\r\ninspect.getfile(dill)\r\n>>> '/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/dill/__init__.py'\r\n```\r\n\r\nI checked out the directory, and there is no `dill` subdirectory within '/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/dill`, as there should be. Rather, `_dill.py` is in '/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/dill` itself. \r\n\r\n If I run `pip install dill` or `pip install --upgrade dill`, I get the message `Requirement already satisfied: dill in ./opt/anaconda3/lib/python3.9/site-packages (0.3.6)`. If I run `conda upgrade dill`, I get the message `Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.` a couple of times, followed by\r\n\r\n```\r\nSolving environment: failed\r\nSolving environment: / \r\nFound conflicts! Looking for incompatible packages.\r\n```\r\n\r\nAnd then terminal proceeds to list conflicts between different packages I have.\r\n\r\nThis is all very strange to me because I recently uninstalled and reinstalled `anaconda`.\r\n",
"As I said above, I guess this is not a problem with `datasets`. I think you have different Python environments: one with the new dill version (the one you get while using pip) and other with the old dill version (the one where you get the AttributeError).\r\n\r\nYou should update `dill` in the Python environment you are using within spyder.\r\n\r\nPlease note that the `_dill` module is present in the `dill` package since their 2.8.0 version."
] | 2023-01-31T15:01:08 | 2023-02-24T16:18:36 | 2023-02-24T16:18:36 | NONE | null | null | null | ### Describe the bug
I installed the `datasets` package and when I try to `import` it, I get the following error:
```
Traceback (most recent call last):
File "/var/folders/jt/zw5g74ln6tqfdzsl8tx378j00000gn/T/ipykernel_3805/3458380017.py", line 1, in <module>
import datasets
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/__init__.py", line 43, in <module>
from .arrow_dataset import Dataset
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 66, in <module>
from .arrow_writer import ArrowWriter, OptimizedTypedSequence
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 27, in <module>
from .features import Features, Image, Value
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/features/__init__.py", line 17, in <module>
from .audio import Audio
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/features/audio.py", line 12, in <module>
from ..download.streaming_download_manager import xopen
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/download/__init__.py", line 9, in <module>
from .download_manager import DownloadManager, DownloadMode
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/download/download_manager.py", line 36, in <module>
from ..utils.py_utils import NestedDataStructure, map_nested, size_str
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 602, in <module>
class Pickler(dill.Pickler):
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 605, in Pickler
dispatch = dill._dill.MetaCatchingDict(dill.Pickler.dispatch.copy())
AttributeError: module 'dill' has no attribute '_dill'
```
Looking at the GitHub source code for `dill`, it appears that `datasets` has a bug or is not compatible with the latest `dill`. Specifically, rather than `dill._dill.XXXX`, it should be `dill.dill._dill.XXXX`. But given the popularity of `datasets`, I would be surprised to be the first person to hit this, which makes me wonder if I'm misdiagnosing the issue.
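A quick way to check which `dill` installation is actually being imported (a diagnostic sketch; the `_dill` submodule has been part of `dill` since version 2.8.0):
```python
import inspect

import dill

print(dill.__version__)        # recent dill versions expose dill._dill
print(inspect.getfile(dill))   # which installation is actually on sys.path
print(hasattr(dill, "_dill"))  # False suggests a shadowed or broken install
```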
### Steps to reproduce the bug
Install `dill` and `datasets` packages and then `import datasets`
### Expected behavior
I expect `datasets` to import.
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.13
- PyArrow version: 11.0.0
- Pandas version: 1.4.4 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5487/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5486 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5486/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5486/comments | https://api.github.com/repos/huggingface/datasets/issues/5486/events | https://github.com/huggingface/datasets/issues/5486 | 1,564,059,749 | I_kwDODunzps5dOahl | 5,486 | Adding `sep` to TextConfig | {
"login": "omar-araboghli",
"id": 29576434,
"node_id": "MDQ6VXNlcjI5NTc2NDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/29576434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omar-araboghli",
"html_url": "https://github.com/omar-araboghli",
"followers_url": "https://api.github.com/users/omar-araboghli/followers",
"following_url": "https://api.github.com/users/omar-araboghli/following{/other_user}",
"gists_url": "https://api.github.com/users/omar-araboghli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omar-araboghli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omar-araboghli/subscriptions",
"organizations_url": "https://api.github.com/users/omar-araboghli/orgs",
"repos_url": "https://api.github.com/users/omar-araboghli/repos",
"events_url": "https://api.github.com/users/omar-araboghli/events{/privacy}",
"received_events_url": "https://api.github.com/users/omar-araboghli/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @omar-araboghli, thanks for your proposal.\r\n\r\nHave you tried to use \"csv\" loader instead of \"text\"? That already has a `sep` argument.",
"Hi @albertvillanova, thanks for the quick response!\r\n\r\nIndeed, I have been trying to use `csv` instead of `text`. However I am still not able to define range of rows as one sequence, that is achievable with passing `sample_by='paragraph'` to the `TextConfig`\r\n\r\nFor instance, the below code\r\n\r\n```python\r\nimport datasets\r\n\r\ndataset = datasets.load_dataset(\r\n path='csv',\r\n data_files={'train': TRAINING_SET_PATH},\r\n sep='\\t',\r\n header=None,\r\n column_names=['tokens', 'pos_tags', 'chunk_tags', 'ner_tags']\r\n)\r\n```\r\n\r\nleads to \r\n\r\n```python\r\ndataset\r\n>>> DatasetDict({\r\n train: Dataset({\r\n features: ['tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 62543\r\n })\r\n})\r\n\r\ndataset['train'][0]\r\n>>> {'tokens': 'Distribution',\r\n 'pos_tags': 'NN',\r\n 'chunk_tags': 'O',\r\n 'ner_tags': 'O'\r\n}\r\n```\r\nIs there a way to deal with multiple csv rows as one dataset instance, where each column is a sequence of those rows ?"
] | 2023-01-31T10:39:53 | 2023-01-31T14:50:18 | null | NONE | null | null | null | I have a local `.txt` file that follows the `CONLL2003` format, which I need to load using `load_script`. However, by using `sample_by='line'`, one can only split the dataset into lines without splitting each line into columns. Would it be reasonable to add a `sep` argument in combination with `sample_by='paragraph'` to parse a paragraph into an array for each column? If so, I am happy to contribute!
## Environment
* `python 3.8.10`
* `datasets 2.9.0`
## Snippet of `train.txt`
```txt
Distribution NN O O
and NN O O
dynamics NN O O
of NN O O
electron NN O B-RP
complexes NN O I-RP
in NN O O
cyanobacterial NN O B-R
membranes NN O I-R
The NN O O
occurrence NN O O
of NN O O
prostaglandin NN O B-R
F2α NN O I-R
in NN O O
Pharbitis NN O B-R
seedlings NN O I-R
grown NN O O
under NN O O
short NN O B-P
days NN O I-P
or NN O I-P
days NN O I-P
```
## Current Behaviour
```python
# defining 4 features ['tokens', 'pos_tags', 'chunk_tags', 'ner_tags'] here would fail with `ValueError: Length of names (4) does not match length of arrays (1)`
dataset = datasets.load_dataset(path='text', features=features, data_files={'train': 'train.txt'}, sample_by='line')
dataset['train']['tokens'][0]
>>> 'Distribution\tNN\tO\tO'
```
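In the meantime, a workaround sketch using `Dataset.from_generator` (a hypothetical helper; it assumes the usual CoNLL convention of blank lines separating paragraphs, and a tab separator):
```python
import datasets

COLUMNS = ['tokens', 'pos_tags', 'chunk_tags', 'ner_tags']

def read_conll(path='train.txt', sep='\t'):
    example = {name: [] for name in COLUMNS}
    for line in open(path, encoding='utf-8'):
        line = line.rstrip('\n')
        if not line:  # a blank line closes the current paragraph
            if example['tokens']:
                yield example
            example = {name: [] for name in COLUMNS}
        else:
            for name, value in zip(COLUMNS, line.split(sep)):
                example[name].append(value)
    if example['tokens']:  # flush the last paragraph
        yield example

dataset = datasets.Dataset.from_generator(read_conll)
```
If the 4 features are defined as above, they can also be passed explicitly via `from_generator(read_conll, features=features)`.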
## Expected Behaviour / Suggestion
```python
# suppose we defined 4 features ['tokens', 'pos_tags', 'chunk_tags', 'ner_tags']
dataset = datasets.load_dataset(path='text', features=features, data_files={'train': 'train.txt'}, sample_by='paragraph', sep='\t')
dataset['train']['tokens'][0]
>>> ['Distribution', 'and', 'dynamics', ... ]
dataset['train']['ner_tags'][0]
>>> ['O', 'O', 'O', ... ]
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5486/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5485 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5485/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5485/comments | https://api.github.com/repos/huggingface/datasets/issues/5485/events | https://github.com/huggingface/datasets/pull/5485 | 1,563,002,829 | PR_kwDODunzps5I2ER2 | 5,485 | Add section in tutorial for IterableDataset | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008492 / 0.011353 (-0.002861) | 0.004717 / 0.011008 (-0.006292) | 0.101111 / 0.038508 (0.062602) | 0.029129 / 0.023109 (0.006019) | 0.307564 / 0.275898 (0.031666) | 0.367038 / 0.323480 (0.043558) | 0.007105 / 0.007986 (-0.000881) | 0.003622 / 0.004328 (-0.000706) | 0.078370 / 0.004250 (0.074120) | 0.036960 / 0.037052 (-0.000093) | 0.315612 / 0.258489 (0.057123) | 0.353601 / 0.293841 (0.059760) | 0.032900 / 0.128546 (-0.095647) | 0.011405 / 0.075646 (-0.064241) | 0.322331 / 0.419271 (-0.096940) | 0.040823 / 0.043533 (-0.002710) | 0.306734 / 0.255139 (0.051595) | 0.328155 / 0.283200 (0.044955) | 0.087169 / 0.141683 (-0.054514) | 1.460543 / 1.452155 (0.008389) | 1.498094 / 1.492716 (0.005378) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011863 / 0.018006 (-0.006143) | 0.416315 / 0.000490 (0.415826) | 0.003463 / 0.000200 (0.003263) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023219 / 0.037411 (-0.014192) | 0.096469 / 0.014526 (0.081943) | 0.105960 / 0.176557 (-0.070596) | 0.148993 / 0.737135 (-0.588142) | 0.108112 / 0.296338 (-0.188226) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415662 / 0.215209 (0.200453) | 4.155111 / 2.077655 (2.077456) | 1.834943 / 1.504120 (0.330823) | 1.622752 / 1.541195 (0.081557) | 1.701630 / 1.468490 
(0.233140) | 0.690596 / 4.584777 (-3.894181) | 3.399385 / 3.745712 (-0.346327) | 3.140521 / 5.269862 (-2.129341) | 1.609152 / 4.565676 (-2.956524) | 0.082132 / 0.424275 (-0.342143) | 0.012343 / 0.007607 (0.004735) | 0.532715 / 0.226044 (0.306670) | 5.323032 / 2.268929 (3.054104) | 2.326625 / 55.444624 (-53.118000) | 1.944263 / 6.876477 (-4.932213) | 1.994015 / 2.142072 (-0.148058) | 0.813805 / 4.805227 (-3.991422) | 0.149233 / 6.500664 (-6.351431) | 0.065318 / 0.075469 (-0.010151) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212441 / 1.841788 (-0.629347) | 13.979069 / 8.074308 (5.904761) | 14.003998 / 10.191392 (3.812606) | 0.146956 / 0.680424 (-0.533468) | 0.028564 / 0.534201 (-0.505637) | 0.392370 / 0.579283 (-0.186913) | 0.399695 / 0.434364 (-0.034669) | 0.473481 / 0.540337 (-0.066856) | 0.562625 / 1.386936 (-0.824311) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006821 / 0.011353 (-0.004532) | 0.004570 / 0.011008 (-0.006438) | 0.076217 / 0.038508 (0.037709) | 0.028888 / 0.023109 (0.005779) | 0.345431 / 0.275898 (0.069533) | 0.389246 / 0.323480 (0.065766) | 0.005939 / 0.007986 (-0.002046) | 0.003356 / 0.004328 (-0.000973) | 0.075880 / 0.004250 (0.071629) | 0.041427 / 0.037052 (0.004374) | 0.344481 / 0.258489 (0.085992) | 0.398508 / 0.293841 (0.104667) | 0.031801 / 0.128546 (-0.096745) | 0.011763 / 0.075646 (-0.063884) | 0.085600 / 0.419271 (-0.333672) | 0.042656 / 0.043533 (-0.000876) | 0.345893 / 0.255139 (0.090754) | 0.376910 / 0.283200 (0.093711) | 0.092451 / 0.141683 (-0.049232) | 1.461222 / 1.452155 (0.009068) | 1.555822 / 1.492716 (0.063106) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235781 / 0.018006 (0.217774) | 0.418485 / 0.000490 (0.417995) | 0.005560 / 0.000200 (0.005360) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025410 / 0.037411 (-0.012001) | 0.103780 / 0.014526 (0.089254) | 0.110183 / 0.176557 (-0.066374) | 0.151097 / 0.737135 (-0.586039) | 0.112539 / 0.296338 (-0.183799) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436686 / 0.215209 (0.221477) | 4.341594 / 2.077655 (2.263940) | 2.062309 / 1.504120 (0.558190) | 1.857461 / 1.541195 (0.316267) | 1.947204 / 1.468490 (0.478713) | 0.699641 / 4.584777 (-3.885136) | 3.406983 / 3.745712 (-0.338729) | 3.294705 / 5.269862 (-1.975157) | 1.360582 / 4.565676 (-3.205095) | 0.083025 / 0.424275 (-0.341250) | 0.012461 / 0.007607 (0.004854) | 0.537767 / 0.226044 (0.311722) | 5.393316 / 2.268929 (3.124387) | 2.516692 / 55.444624 (-52.927932) | 2.163987 / 6.876477 (-4.712490) | 2.220480 / 2.142072 (0.078408) | 0.810648 / 4.805227 (-3.994579) | 0.151820 / 6.500664 (-6.348844) | 0.068080 / 0.075469 (-0.007389) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.279382 / 1.841788 (-0.562405) | 13.989947 / 8.074308 (5.915638) | 14.039229 / 10.191392 (3.847836) | 0.141071 / 0.680424 (-0.539352) | 0.017118 / 0.534201 (-0.517083) | 0.381558 / 0.579283 (-0.197725) | 0.390407 / 0.434364 (-0.043957) | 0.440920 / 0.540337 (-0.099418) | 0.525478 / 1.386936 (-0.861458) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#eeedb5167d150888a640cd70ca63d6d72bbe1043 \"CML watermark\")\n"
] | 2023-01-30T18:43:04 | 2023-02-01T18:15:38 | 2023-02-01T18:08:46 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5485",
"html_url": "https://github.com/huggingface/datasets/pull/5485",
"diff_url": "https://github.com/huggingface/datasets/pull/5485.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5485.patch",
"merged_at": "2023-02-01T18:08:46"
} | Introduces an `IterableDataset` and how to access it in the tutorial section. It also adds a brief next step section at the end to provide a path for users who want more explanation and a path for users who want something more practical and learn how to preprocess these dataset types. It'll complement the awesome new doc introduced in:
- #5410 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5485/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5484 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5484/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5484/comments | https://api.github.com/repos/huggingface/datasets/issues/5484/events | https://github.com/huggingface/datasets/pull/5484 | 1,562,877,070 | PR_kwDODunzps5I1oaq | 5,484 | Update docs for `nyu_depth_v2` dataset | {
"login": "awsaf49",
"id": 36858976,
"node_id": "MDQ6VXNlcjM2ODU4OTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/36858976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/awsaf49",
"html_url": "https://github.com/awsaf49",
"followers_url": "https://api.github.com/users/awsaf49/followers",
"following_url": "https://api.github.com/users/awsaf49/following{/other_user}",
"gists_url": "https://api.github.com/users/awsaf49/gists{/gist_id}",
"starred_url": "https://api.github.com/users/awsaf49/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/awsaf49/subscriptions",
"organizations_url": "https://api.github.com/users/awsaf49/orgs",
"repos_url": "https://api.github.com/users/awsaf49/repos",
"events_url": "https://api.github.com/users/awsaf49/events{/privacy}",
"received_events_url": "https://api.github.com/users/awsaf49/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think I need to create another PR on https://huggingface.co/datasets/huggingface/documentation-images/tree/main/datasets for hosting the images there?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the update @awsaf49 !",
"> Thanks a lot for the updates!\r\n> \r\n> Just some minor things remain and the we should be good to ship this 🚀\r\n\r\n@sayakpaul I have updated the minor things. Please approve the workflows",
"I think this PR is good to go..\r\n@sayakpaul @lhoestq ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009064 / 0.011353 (-0.002289) | 0.005262 / 0.011008 (-0.005746) | 0.099608 / 0.038508 (0.061100) | 0.035015 / 0.023109 (0.011906) | 0.296501 / 0.275898 (0.020602) | 0.353619 / 0.323480 (0.030139) | 0.007903 / 0.007986 (-0.000083) | 0.004093 / 0.004328 (-0.000235) | 0.075260 / 0.004250 (0.071009) | 0.043142 / 0.037052 (0.006089) | 0.307755 / 0.258489 (0.049266) | 0.336340 / 0.293841 (0.042499) | 0.038596 / 0.128546 (-0.089950) | 0.011861 / 0.075646 (-0.063786) | 0.334226 / 0.419271 (-0.085045) | 0.051472 / 0.043533 (0.007940) | 0.298539 / 0.255139 (0.043400) | 0.316856 / 0.283200 (0.033656) | 0.108620 / 0.141683 (-0.033063) | 1.434901 / 1.452155 (-0.017254) | 1.468368 / 1.492716 (-0.024348) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208402 / 0.018006 (0.190395) | 0.445799 / 0.000490 (0.445309) | 0.003704 / 0.000200 (0.003504) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025435 / 0.037411 (-0.011976) | 0.105874 / 0.014526 (0.091348) | 0.115652 / 0.176557 (-0.060905) | 0.150872 / 0.737135 (-0.586263) | 0.121705 / 0.296338 (-0.174633) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397816 / 0.215209 (0.182607) | 3.977766 / 2.077655 (1.900111) | 1.850848 / 1.504120 (0.346728) | 1.686062 / 1.541195 (0.144867) | 1.786277 / 1.468490 
(0.317787) | 0.696250 / 4.584777 (-3.888527) | 3.785255 / 3.745712 (0.039543) | 3.355013 / 5.269862 (-1.914849) | 1.818232 / 4.565676 (-2.747444) | 0.085408 / 0.424275 (-0.338867) | 0.012567 / 0.007607 (0.004960) | 0.524185 / 0.226044 (0.298140) | 5.061975 / 2.268929 (2.793047) | 2.299866 / 55.444624 (-53.144758) | 1.966709 / 6.876477 (-4.909768) | 2.018760 / 2.142072 (-0.123313) | 0.841341 / 4.805227 (-3.963886) | 0.166374 / 6.500664 (-6.334290) | 0.061854 / 0.075469 (-0.013615) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.221666 / 1.841788 (-0.620122) | 14.373194 / 8.074308 (6.298886) | 14.253614 / 10.191392 (4.062222) | 0.172979 / 0.680424 (-0.507445) | 0.029176 / 0.534201 (-0.505025) | 0.447399 / 0.579283 (-0.131884) | 0.443663 / 0.434364 (0.009299) | 0.537071 / 0.540337 (-0.003267) | 0.640539 / 1.386936 (-0.746397) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007019 / 0.011353 (-0.004334) | 0.005091 / 0.011008 (-0.005917) | 0.074588 / 0.038508 (0.036080) | 0.032391 / 0.023109 (0.009282) | 0.340548 / 0.275898 (0.064650) | 0.367159 / 0.323480 (0.043679) | 0.005594 / 0.007986 (-0.002392) | 0.004003 / 0.004328 (-0.000325) | 0.073946 / 0.004250 (0.069695) | 0.045921 / 0.037052 (0.008868) | 0.340245 / 0.258489 (0.081756) | 0.397958 / 0.293841 (0.104117) | 0.036539 / 0.128546 (-0.092007) | 0.012258 / 0.075646 (-0.063388) | 0.087406 / 0.419271 (-0.331865) | 0.049276 / 0.043533 (0.005743) | 0.345235 / 0.255139 (0.090096) | 0.361250 / 0.283200 (0.078050) | 0.100757 / 0.141683 (-0.040926) | 1.464644 / 1.452155 (0.012489) | 1.545852 / 1.492716 (0.053136) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222952 / 0.018006 (0.204945) | 0.434607 / 0.000490 (0.434117) | 0.000438 / 0.000200 (0.000238) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028834 / 0.037411 (-0.008577) | 0.107523 / 0.014526 (0.092997) | 0.122077 / 0.176557 (-0.054479) | 0.156574 / 0.737135 (-0.580561) | 0.122917 / 0.296338 (-0.173421) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417292 / 0.215209 (0.202083) | 4.165980 / 2.077655 (2.088325) | 1.996731 / 1.504120 (0.492611) | 1.802946 / 1.541195 (0.261751) | 1.878456 / 1.468490 (0.409966) | 0.711035 / 4.584777 (-3.873742) | 3.847357 / 3.745712 (0.101644) | 2.088354 / 5.269862 (-3.181508) | 1.344763 / 4.565676 (-3.220913) | 0.086356 / 0.424275 (-0.337919) | 0.012530 / 0.007607 (0.004923) | 0.511693 / 0.226044 (0.285648) | 5.126093 / 2.268929 (2.857165) | 2.490023 / 55.444624 (-52.954602) | 2.180274 / 6.876477 (-4.696202) | 2.221511 / 2.142072 (0.079438) | 0.836348 / 4.805227 (-3.968879) | 0.169554 / 6.500664 (-6.331110) | 0.064555 / 0.075469 (-0.010914) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.293466 / 1.841788 (-0.548321) | 14.785700 / 8.074308 (6.711392) | 13.858493 / 10.191392 (3.667101) | 0.161777 / 0.680424 (-0.518646) | 0.017794 / 0.534201 (-0.516407) | 0.426286 / 0.579283 (-0.152997) | 0.422517 / 0.434364 (-0.011847) | 0.530777 / 0.540337 (-0.009560) | 0.634822 / 1.386936 (-0.752114) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c6e08fcfc3a04e53430c26fa7c07da4cb18d977d \"CML watermark\")\n"
] | 2023-01-30T17:37:08 | 2023-09-29T06:43:11 | 2023-02-05T14:15:04 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5484",
"html_url": "https://github.com/huggingface/datasets/pull/5484",
"diff_url": "https://github.com/huggingface/datasets/pull/5484.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5484.patch",
"merged_at": "2023-02-05T14:15:04"
} | This PR will fix the issue mentioned in #5461. Here is a brief overview:
## Bug:
Discrepancy between the depth maps of the `nyu_depth_v2` dataset shown [here](https://huggingface.co/docs/datasets/main/en/depth_estimation) and the actual depth maps: depth values somehow got **discretized/clipped**, resulting in depth maps that differ from the actual ones. Here is a side-by-side comparison:
![image](https://user-images.githubusercontent.com/36858976/214381162-1d9582c2-6750-4114-a01a-61ca1cd5f872.png)
## Fix:
When I first loaded the dataset from HF I noticed it was 30GB, but in DenseDepth the data is only 4GB with `dtype=uint8`. This means the data from fast-depth (before loading to HF) must have higher precision. When I dug deeper and loaded the depth maps directly with `h5py`, they came back as `float32`. But when the data is processed in HF with `datasets.Image()`, it is converted directly from `float32` to `uint8`, hence the **discretized** depth maps.
https://github.com/huggingface/datasets/blob/c78559cacbb0ca6e0bc8bfc313cc0359f8c23ead/src/datasets/features/image.py#L91-L93
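To make the precision loss concrete, a small sketch (the `.h5` keys and the sample path are assumptions based on the fast-depth files):
```python
import h5py
import numpy as np

# hypothetical sample path; fast-depth ships rgb/depth pairs as .h5 files
with h5py.File("nyudepthv2/train/study_0300/00001.h5", "r") as f:
    depth = np.array(f["depth"], dtype=np.float32)  # metric depth, float32

# what a float32 -> uint8 conversion (as in datasets.Image via PIL) does to it:
lossy = depth.astype(np.uint8)   # fractional precision is truncated away
print(depth.max(), lossy.max())  # e.g. ~9.99 vs 9
```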
cc: @sayakpaul @lhoestq
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5484/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5483 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5483/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5483/comments | https://api.github.com/repos/huggingface/datasets/issues/5483/events | https://github.com/huggingface/datasets/issues/5483 | 1,560,894,690 | I_kwDODunzps5dCVzi | 5,483 | Unable to upload dataset | {
"login": "yuvalkirstain",
"id": 57996478,
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuvalkirstain",
"html_url": "https://github.com/yuvalkirstain",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Seems to work now, perhaps it was something internal with our university's network."
] | 2023-01-28T15:18:26 | 2023-01-29T08:09:49 | 2023-01-29T08:09:49 | NONE | null | null | null | ### Describe the bug
Uploading a simple dataset ends with an exception
### Steps to reproduce the bug
I created a new conda env with Python 3.10, pip-installed `datasets`, and ran:
```python
>>> from datasets import load_dataset, load_from_disk, Dataset
>>> d = Dataset.from_dict({"text": ["hello"] * 2})
>>> d.push_to_hub("ttt111")
/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_hf_folder.py:92: UserWarning: A token has been found in `/a/home/cc/students/cs/kirstain/.huggingface/token`. This is the old path where tokens were stored. The new location is `/home/olab/kirstain/.cache/huggingface/token` which is configurable using `HF_HOME` environment variable. Your token has been copied to this new location. You can now safely delete the old token file manually or use `huggingface-cli logout`.
warnings.warn(
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 279.94ba/s]
Upload 1 LFS files: 0%| | 0/1 [00:02<?, ?it/s]
Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:04<?, ?it/s]
Traceback (most recent call last):
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 264, in hf_raise_for_status
response.raise_for_status()
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://s3.us-east-1.amazonaws.com/lfs.huggingface.co/repos/cf/0c/cf0c5ab8a3f729e5f57a8b79a36ecea64a31126f13218591c27ed9a1c7bd9b41/ece885a4bb6bbc8c1bb51b45542b805283d74590f72cd4c45d3ba76628570386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230128%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230128T151640Z&X-Amz-Expires=900&X-Amz-Signature=89e78e9a9d70add7ed93d453334f4f93c6f29d889d46750a1f2da04af73978db&X-Amz-SignedHeaders=host&x-amz-storage-class=INTELLIGENT_TIERING&x-id=PutObject
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 334, in _inner_upload_lfs_object
return _upload_lfs_object(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 391, in _upload_lfs_object
lfs_upload(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/lfs.py", line 273, in lfs_upload
_upload_single_part(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/lfs.py", line 305, in _upload_single_part
hf_raise_for_status(upload_res)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 318, in hf_raise_for_status
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 403 Client Error: Forbidden for url: https://s3.us-east-1.amazonaws.com/lfs.huggingface.co/repos/cf/0c/cf0c5ab8a3f729e5f57a8b79a36ecea64a31126f13218591c27ed9a1c7bd9b41/ece885a4bb6bbc8c1bb51b45542b805283d74590f72cd4c45d3ba76628570386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230128%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230128T151640Z&X-Amz-Expires=900&X-Amz-Signature=89e78e9a9d70add7ed93d453334f4f93c6f29d889d46750a1f2da04af73978db&X-Amz-SignedHeaders=host&x-amz-storage-class=INTELLIGENT_TIERING&x-id=PutObject
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 4909, in push_to_hub
repo_id, split, uploaded_size, dataset_nbytes, repo_files, deleted_size = self._push_parquet_shards_to_hub(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 4804, in _push_parquet_shards_to_hub
_retry(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 281, in _retry
return func(*func_args, **func_kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2537, in upload_file
commit_info = self.create_commit(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2346, in create_commit
upload_lfs_files(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 346, in upload_lfs_files
thread_map(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/contrib/concurrent.py", line 94, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/contrib/concurrent.py", line 76, in _executor_map
return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs))
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
yield _result_or_cancel(fs.pop())
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
return fut.result(timeout)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 338, in _inner_upload_lfs_object
raise RuntimeError(
RuntimeError: Error while uploading 'data/train-00000-of-00001-6df93048e66df326.parquet' to the Hub.
```
### Expected behavior
The dataset should be uploaded without any exceptions
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-4.15.0-65-generic-x86_64-with-glibc2.27
- Python version: 3.10.9
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5483/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5482 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5482/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5482/comments | https://api.github.com/repos/huggingface/datasets/issues/5482/events | https://github.com/huggingface/datasets/issues/5482 | 1,560,853,137 | I_kwDODunzps5dCLqR | 5,482 | Reload features from Parquet metadata | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3761482852,
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue",
"name": "good second issue",
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues"
}
] | closed | false | {
"login": "MFreidank",
"id": 6368040,
"node_id": "MDQ6VXNlcjYzNjgwNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MFreidank",
"html_url": "https://github.com/MFreidank",
"followers_url": "https://api.github.com/users/MFreidank/followers",
"following_url": "https://api.github.com/users/MFreidank/following{/other_user}",
"gists_url": "https://api.github.com/users/MFreidank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MFreidank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MFreidank/subscriptions",
"organizations_url": "https://api.github.com/users/MFreidank/orgs",
"repos_url": "https://api.github.com/users/MFreidank/repos",
"events_url": "https://api.github.com/users/MFreidank/events{/privacy}",
"received_events_url": "https://api.github.com/users/MFreidank/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "MFreidank",
"id": 6368040,
"node_id": "MDQ6VXNlcjYzNjgwNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MFreidank",
"html_url": "https://github.com/MFreidank",
"followers_url": "https://api.github.com/users/MFreidank/followers",
"following_url": "https://api.github.com/users/MFreidank/following{/other_user}",
"gists_url": "https://api.github.com/users/MFreidank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MFreidank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MFreidank/subscriptions",
"organizations_url": "https://api.github.com/users/MFreidank/orgs",
"repos_url": "https://api.github.com/users/MFreidank/repos",
"events_url": "https://api.github.com/users/MFreidank/events{/privacy}",
"received_events_url": "https://api.github.com/users/MFreidank/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I'd be happy to have a look, if nobody else has started working on this yet @lhoestq. \r\n\r\nIt seems to me that for the `arrow` format features are currently attached as metadata [in `datasets.arrow_writer`](https://github.com/huggingface/datasets/blob/5f810b7011a8a4ab077a1847c024d2d9e267b065/src/datasets/arrow_writer.py#L412) and retrieved from the metadata at `load_dataset` time using [`datasets.features.features.from_arrow_schema`](https://github.com/huggingface/datasets/blob/5f810b7011a8a4ab077a1847c024d2d9e267b065/src/datasets/features/features.py#L1602). \r\n\r\nThis will need to be replicated for `parquet` via calls to [this api](https://arrow.apache.org/docs/python/generated/pyarrow.parquet.write_metadata.html) from `io.parquet.ParquetWriter` and `io.parquet.ParquetReader` [respectively](https://github.com/huggingface/datasets/blob/5f810b7011a8a4ab077a1847c024d2d9e267b065/src/datasets/io/parquet.py#L104).\r\n\r\nAny other important considerations?\r\n",
"Thanks @MFreidank ! That's correct :)\r\n\r\nReading the metadata to infer the features can be ideally done in the `parquet.py` file in `packaged_builder` when a parquet file is read. You can cast the arrow table to the schema you get from the features.arrow_schema",
"#self-assign"
] | 2023-01-28T13:12:31 | 2023-02-12T15:57:02 | 2023-02-12T15:57:02 | MEMBER | null | null | null | The idea would be to allow this:
```python
ds.to_parquet("my_dataset/ds.parquet")
reloaded = load_dataset("my_dataset")
assert ds.features == reloaded.features
```
And it should also work with Image and Audio types (right now they're reloaded as a dict type).
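A minimal round-trip sketch with plain `pyarrow` (it leans on `Features.arrow_schema` / `Features.from_arrow_schema`, which already handle the `huggingface` schema-metadata key for Arrow files; treat the exact mechanism as an assumption, not the final design):
```python
import pyarrow as pa
import pyarrow.parquet as pq
from datasets import Features, Value

features = Features({"text": Value("string"), "label": Value("int64")})

# write: Features.arrow_schema embeds the feature spec in the schema metadata
table = pa.table({"text": ["hello"], "label": [0]}, schema=features.arrow_schema)
pq.write_table(table, "ds.parquet")

# read: recover the feature types from the metadata instead of re-inferring them
reloaded = Features.from_arrow_schema(pq.read_schema("ds.parquet"))
assert reloaded == features
```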
This can be implemented by storing and reading the feature types in the parquet metadata, as we do for arrow files. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5482/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5482/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5481 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5481/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5481/comments | https://api.github.com/repos/huggingface/datasets/issues/5481/events | https://github.com/huggingface/datasets/issues/5481 | 1,560,468,195 | I_kwDODunzps5dAtrj | 5,481 | Load a cached dataset as iterable | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3761482852,
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue",
"name": "good second issue",
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues"
}
] | open | false | null | [] | null | [
"Can I work on this issue? I am pretty new to this.",
"Hi ! Sure :) you can comment `#self-assign` to assign yourself to this issue.\r\n\r\nI can give you some pointers to get started:\r\n\r\n`load_dataset` works roughly this way:\r\n1. it instantiate a dataset builder using `load_dataset_builder()`\r\n2. the builder download and prepare the dataset as Arrow files in the cache using `download_and_prepare()`\r\n3. the builder returns a Dataset object with `as_dataset()`\r\n\r\nOne way to approach this would be to implement `as_iterable_dataset()` in `builder.py`.\r\n\r\nAnd similarly to `as_dataset()`, you can use the `ArrowReader`. It has a `get_file_instructions()` method that can be helpful. It gives you the files to read as list of dictionaries with those keys: `filename`, `skip` and `take`.\r\n\r\nThe `skip` and `take` arguments are used in case the user wants to load a subset of the dataset, e.g.\r\n```python\r\nload_dataset(..., split=\"train[:10]\")\r\n```\r\n\r\nLet me know if you have questions or if I can help :)",
"This use-case is a bit specific, and `load_dataset` already has enough parameters (plus, `streaming=True` also returns an iterable dataset, so we would have to explain the difference), so I think it would be better to add `IterableDataset.from_file` to the API (more flexible and aligned with the goal from https://github.com/huggingface/datasets/issues/3444) instead.",
"> This use-case is a bit specific\r\n\r\nThis allows to use `datasets` for large scale training where map-style datasets are too slow and use too much memory in PyTorch. So I would still consider adding it.\r\n\r\nAlternatively we could add this feature one level bellow:\r\n```python\r\nbuilder = load_dataset_builder(...)\r\nbuilder.download_and_prepare()\r\nids = builder.as_iterable_dataset()\r\n```",
"Yes, I see how this can be useful. Still, I think `Dataset.to_iterable` + `IterableDataset.from_file` would be much cleaner in terms of the API design (and more flexible since `load_dataset` can only access the \"initial\" (unprocessed) version of a dataset).\r\n\r\nAnd since it can be tricky to manually find the \"initial\" version of a dataset in the cache, maybe `load_dataset` could return an iterable dataset streamed from the cache if `streaming=True` and the cache is up-to-date. ",
"> This allows to use datasets for large scale training where map-style datasets are too slow and use too much memory in PyTorch.\r\n\r\nI second that. e.g. In my last experiment Oscar-en uses 16GB RSS RAM per process and when using multiple processes the host quickly runs out cpu memory. ",
">And since it can be tricky to manually find the \"initial\" version of a dataset in the cache, maybe load_dataset could return an iterable dataset streamed from the cache if streaming=True and the cache is up-to-date.\r\n\r\nThis is exactly the need on JeanZay (HPC) - I have the dataset cache ready, but the compute node is offline, so making streaming work off a local cache would address that need.\r\n\r\nIf you will have a working POC I can be the tester. ",
"> Yes, I see how this can be useful. Still, I think Dataset.to_iterable + IterableDataset.from_file would be much cleaner in terms of the API design (and more flexible since load_dataset can only access the \"initial\" (unprocessed) version of a dataset).\r\n\r\nI like `IterableDataset.from_file` as well. On the other hand `Dataset.to_iterable` first requires to load a Dataset object, which can take time depending on your hardware and your dataset size (sometimes 1h+).\r\n\r\n> And since it can be tricky to manually find the \"initial\" version of a dataset in the cache, maybe load_dataset could return an iterable dataset streamed from the cache if streaming=True and the cache is up-to-date.\r\n\r\nThat would definitely do the job. I was suggesting a different parameter just to make explicit the difference between\r\n- streaming from the raw data\r\n- streaming from the local cache\r\n\r\nBut I'd be fine with streaming from cache is the cache is up-to-date since it's always faster. We could log a message as usual to make it explicit that the cache is used",
"> I was suggesting a different parameter just to make explicit the difference between\r\n\r\nMosaicML's `streaming` library does the same (tries to stream from the local cache if possible), so logging a message should be explicit enough :).",
"Ok ! Sounds good then :)",
"Hi Both! It has been a while since my first issue so I am gonna go for this one ! #self-assign",
"#self-assign",
"I like idea of `IterableDataset.from_file`. ",
"https://github.com/huggingface/datasets/pull/5821 should be helpful to implement `IterableDataset.from_file`, since it defines a new ArrowExamplesIterable that takes an Arrow tables generator function (e.g. from a file) and can be used in an IterableDataset",
"@lhoestq I have just started working on this issue. ",
"@lhoestq Thank you for taking over.",
"So what's recommanded usage of `IterableDataset.from_file` and `load_dataset`? How about I have multiple arrow files and `load_dataset` is often convenient to handle that.",
"If you have multiple Arrow files you can load them using\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndata_files = {\"train\": [\"path/to/0.arrow\", \"path/to/1.arrow\", ..., \"path/to/n.arrow\"]}\r\n\r\nds = load_dataset(\"arrow\", data_files=data_files, streaming=True)\r\n```\r\n\r\nThis is equivalent to calling `IterableDataset.from_file` and `concatenate_datasets`."
] | 2023-01-27T21:43:51 | 2023-06-26T10:48:53 | null | MEMBER | null | null | null | The idea would be to allow something like
```python
ds = load_dataset("c4", "en", as_iterable=True)
```
This would be used to train models: it would load an IterableDataset from the cached Arrow files.
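A rough sketch of the builder-level variant suggested in the discussion (`as_iterable_dataset` is the proposed method name, not an existing API at the time of writing):
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("c4", "en")
builder.download_and_prepare()  # writes the Arrow files to the local cache
# Proposed: build an IterableDataset directly over the cached Arrow files,
# without materializing a map-style Dataset first.
ids = builder.as_iterable_dataset(split="train")
```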
Cc @stas00
Edit: from the discussions, we may load from the cache when streaming=True | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5481/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5481/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5480 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5480/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5480/comments | https://api.github.com/repos/huggingface/datasets/issues/5480/events | https://github.com/huggingface/datasets/pull/5480 | 1,560,364,866 | PR_kwDODunzps5ItY2y | 5,480 | Select columns of Dataset or DatasetDict | {
"login": "daskol",
"id": 9336514,
"node_id": "MDQ6VXNlcjkzMzY1MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9336514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daskol",
"html_url": "https://github.com/daskol",
"followers_url": "https://api.github.com/users/daskol/followers",
"following_url": "https://api.github.com/users/daskol/following{/other_user}",
"gists_url": "https://api.github.com/users/daskol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daskol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daskol/subscriptions",
"organizations_url": "https://api.github.com/users/daskol/orgs",
"repos_url": "https://api.github.com/users/daskol/repos",
"events_url": "https://api.github.com/users/daskol/events{/privacy}",
"received_events_url": "https://api.github.com/users/daskol/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009963 / 0.011353 (-0.001390) | 0.005512 / 0.011008 (-0.005496) | 0.100495 / 0.038508 (0.061987) | 0.039929 / 0.023109 (0.016820) | 0.299749 / 0.275898 (0.023850) | 0.372330 / 0.323480 (0.048850) | 0.008689 / 0.007986 (0.000703) | 0.004334 / 0.004328 (0.000006) | 0.076469 / 0.004250 (0.072218) | 0.048091 / 0.037052 (0.011039) | 0.303884 / 0.258489 (0.045395) | 0.352747 / 0.293841 (0.058906) | 0.038941 / 0.128546 (-0.089605) | 0.012541 / 0.075646 (-0.063105) | 0.334227 / 0.419271 (-0.085044) | 0.048802 / 0.043533 (0.005269) | 0.295800 / 0.255139 (0.040661) | 0.316222 / 0.283200 (0.033022) | 0.108246 / 0.141683 (-0.033437) | 1.452735 / 1.452155 (0.000580) | 1.466293 / 1.492716 (-0.026423) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.010497 / 0.018006 (-0.007510) | 0.507427 / 0.000490 (0.506937) | 0.003054 / 0.000200 (0.002854) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029529 / 0.037411 (-0.007883) | 0.114151 / 0.014526 (0.099625) | 0.120599 / 0.176557 (-0.055957) | 0.161881 / 0.737135 (-0.575255) | 0.127669 / 0.296338 (-0.168669) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399631 / 0.215209 (0.184421) | 3.992997 / 2.077655 (1.915343) | 1.803770 / 1.504120 (0.299650) | 1.612301 / 1.541195 (0.071106) | 1.717846 / 1.468490 
(0.249356) | 0.706753 / 4.584777 (-3.878024) | 3.798224 / 3.745712 (0.052512) | 2.169733 / 5.269862 (-3.100128) | 1.358264 / 4.565676 (-3.207413) | 0.086828 / 0.424275 (-0.337447) | 0.012606 / 0.007607 (0.004999) | 0.512085 / 0.226044 (0.286041) | 5.101491 / 2.268929 (2.832563) | 2.285688 / 55.444624 (-53.158936) | 1.955160 / 6.876477 (-4.921317) | 2.045887 / 2.142072 (-0.096186) | 0.878836 / 4.805227 (-3.926392) | 0.166483 / 6.500664 (-6.334181) | 0.062656 / 0.075469 (-0.012814) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.215152 / 1.841788 (-0.626636) | 15.436187 / 8.074308 (7.361879) | 14.489951 / 10.191392 (4.298559) | 0.199019 / 0.680424 (-0.481404) | 0.029148 / 0.534201 (-0.505053) | 0.440309 / 0.579283 (-0.138974) | 0.452041 / 0.434364 (0.017677) | 0.527102 / 0.540337 (-0.013236) | 0.634302 / 1.386936 (-0.752634) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007814 / 0.011353 (-0.003539) | 0.005582 / 0.011008 (-0.005427) | 0.075466 / 0.038508 (0.036958) | 0.034421 / 0.023109 (0.011312) | 0.342345 / 0.275898 (0.066447) | 0.389943 / 0.323480 (0.066463) | 0.006346 / 0.007986 (-0.001639) | 0.004442 / 0.004328 (0.000113) | 0.074440 / 0.004250 (0.070190) | 0.056383 / 0.037052 (0.019331) | 0.340293 / 0.258489 (0.081804) | 0.394416 / 0.293841 (0.100575) | 0.037217 / 0.128546 (-0.091330) | 0.012597 / 0.075646 (-0.063050) | 0.087005 / 0.419271 (-0.332267) | 0.051626 / 0.043533 (0.008094) | 0.336690 / 0.255139 (0.081551) | 0.369143 / 0.283200 (0.085943) | 0.110764 / 0.141683 (-0.030919) | 1.459003 / 1.452155 (0.006849) | 1.557333 / 1.492716 (0.064617) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.319596 / 0.018006 (0.301590) | 0.514697 / 0.000490 (0.514207) | 0.005286 / 0.000200 (0.005086) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032579 / 0.037411 (-0.004832) | 0.111094 / 0.014526 (0.096568) | 0.127827 / 0.176557 (-0.048730) | 0.169967 / 0.737135 (-0.567168) | 0.133149 / 0.296338 (-0.163189) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424637 / 0.215209 (0.209428) | 4.217889 / 2.077655 (2.140235) | 2.044844 / 1.504120 (0.540724) | 1.863513 / 1.541195 (0.322319) | 1.975674 / 1.468490 (0.507184) | 0.695493 / 4.584777 (-3.889284) | 3.815562 / 3.745712 (0.069850) | 3.534427 / 5.269862 (-1.735435) | 1.684874 / 4.565676 (-2.880802) | 0.085560 / 0.424275 (-0.338715) | 0.012439 / 0.007607 (0.004832) | 0.541231 / 0.226044 (0.315187) | 5.287166 / 2.268929 (3.018237) | 2.596622 / 55.444624 (-52.848002) | 2.315913 / 6.876477 (-4.560564) | 2.418454 / 2.142072 (0.276381) | 0.838947 / 4.805227 (-3.966281) | 0.168149 / 6.500664 (-6.332515) | 0.066439 / 0.075469 (-0.009030) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.264814 / 1.841788 (-0.576974) | 15.861324 / 8.074308 (7.787016) | 14.352515 / 10.191392 (4.161123) | 0.167032 / 0.680424 (-0.513391) | 0.017766 / 0.534201 (-0.516435) | 0.421821 / 0.579283 (-0.157462) | 0.426657 / 0.434364 (-0.007707) | 0.526742 / 0.540337 (-0.013595) | 0.623851 / 1.386936 (-0.763085) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#69b19755e9e37b746ef56780a62d21ef20c574d5 \"CML watermark\")\n"
] | 2023-01-27T20:06:16 | 2023-02-13T11:10:13 | 2023-02-13T09:59:35 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5480",
"html_url": "https://github.com/huggingface/datasets/pull/5480",
"diff_url": "https://github.com/huggingface/datasets/pull/5480.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5480.patch",
"merged_at": "2023-02-13T09:59:35"
} | Close #5474 and #5468. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5480/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5479 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5479/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5479/comments | https://api.github.com/repos/huggingface/datasets/issues/5479/events | https://github.com/huggingface/datasets/issues/5479 | 1,560,357,590 | I_kwDODunzps5dASrW | 5,479 | audiofolder works on local env, but creates empty dataset in a remote one, what dependencies could I be missing/outdated | {
"login": "jcho19",
"id": 107211437,
"node_id": "U_kgDOBmPqrQ",
"avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jcho19",
"html_url": "https://github.com/jcho19",
"followers_url": "https://api.github.com/users/jcho19/followers",
"following_url": "https://api.github.com/users/jcho19/following{/other_user}",
"gists_url": "https://api.github.com/users/jcho19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jcho19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcho19/subscriptions",
"organizations_url": "https://api.github.com/users/jcho19/orgs",
"repos_url": "https://api.github.com/users/jcho19/repos",
"events_url": "https://api.github.com/users/jcho19/events{/privacy}",
"received_events_url": "https://api.github.com/users/jcho19/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2023-01-27T20:01:22 | 2023-01-29T05:23:14 | 2023-01-29T05:23:14 | NONE | null | null | null | ### Describe the bug
I'm using a custom audio dataset (400+ audio files) in the correct format for audiofolder. Although loading the dataset with audiofolder works in one local setup, it doesn't in a remote one (it just creates an empty dataset). I have both ffmpeg and libsndfile installed on both computers; what could be missing or need updating on the one that doesn't work? On the remote env, libsndfile is 1.0.28 and ffmpeg is 4.2.1.
```python
from datasets import load_dataset

ds = load_dataset("audiofolder", data_dir="...")
```
Here is the output (should be generating 400+ rows):
```
Downloading and preparing dataset audiofolder/default to ...
Downloading data files: 0%| | 0/2 [00:00<?, ?it/s]
Downloading data files: 0it [00:00, ?it/s]
Extracting data files: 0it [00:00, ?it/s]
Generating train split: 0 examples [00:00, ? examples/s]
Dataset audiofolder downloaded and prepared to ... Subsequent calls will reuse this data.
0%| | 0/1 [00:00<?, ?it/s]
DatasetDict({
    train: Dataset({
        features: ['audio', 'transcription'],
        num_rows: 1
    })
})
```
Here is my pip environment in the one that doesn't work (uses torch 1.11.a0 from shared env):
Package Version
------------------- -------------------
aiofiles 22.1.0
aiohttp 3.8.3
aiosignal 1.3.1
altair 4.2.1
anyio 3.6.2
appdirs 1.4.4
argcomplete 2.0.0
argon2-cffi 20.1.0
astunparse 1.6.3
async-timeout 4.0.2
attrs 21.2.0
audioread 3.0.0
backcall 0.2.0
bleach 4.0.0
certifi 2021.10.8
cffi 1.14.6
charset-normalizer 2.0.12
click 8.1.3
contourpy 1.0.7
cycler 0.11.0
datasets 2.9.0
debugpy 1.4.1
decorator 5.0.9
defusedxml 0.7.1
dill 0.3.6
distlib 0.3.4
entrypoints 0.3
evaluate 0.4.0
expecttest 0.1.3
fastapi 0.89.1
ffmpy 0.3.0
filelock 3.6.0
fonttools 4.38.0
frozenlist 1.3.3
fsspec 2023.1.0
future 0.18.2
gradio 3.16.2
h11 0.14.0
httpcore 0.16.3
httpx 0.23.3
huggingface-hub 0.12.0
idna 3.3
ipykernel 6.2.0
ipython 7.26.0
ipython-genutils 0.2.0
ipywidgets 7.6.3
jedi 0.18.0
Jinja2 3.0.1
jiwer 2.5.1
joblib 1.2.0
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.1.12
jupyter-console 6.4.0
jupyter-core 4.7.1
jupyterlab-pygments 0.1.2
jupyterlab-widgets 1.0.0
kiwisolver 1.4.4
Levenshtein 0.20.2
librosa 0.9.2
linkify-it-py 1.0.3
llvmlite 0.39.1
markdown-it-py 2.1.0
MarkupSafe 2.0.1
matplotlib 3.6.3
matplotlib-inline 0.1.2
mdit-py-plugins 0.3.3
mdurl 0.1.2
mistune 0.8.4
multidict 6.0.4
multiprocess 0.70.14
nbclient 0.5.4
nbconvert 6.1.0
nbformat 5.1.3
nest-asyncio 1.5.1
notebook 6.4.3
numba 0.56.4
numpy 1.20.3
orjson 3.8.5
packaging 21.0
pandas 1.5.3
pandocfilters 1.4.3
parso 0.8.2
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.4.0
pip 22.3.1
pipx 1.1.0
platformdirs 2.5.2
pooch 1.6.0
prometheus-client 0.11.0
prompt-toolkit 3.0.19
psutil 5.9.0
ptyprocess 0.7.0
pyarrow 10.0.1
pycparser 2.20
pycryptodome 3.16.0
pydantic 1.10.4
pydub 0.25.1
Pygments 2.10.0
pyparsing 2.4.7
pyrsistent 0.18.0
python-dateutil 2.8.2
python-multipart 0.0.5
pytz 2022.7.1
PyYAML 6.0
pyzmq 22.2.1
qtconsole 5.1.1
QtPy 1.10.0
rapidfuzz 2.13.7
regex 2022.10.31
requests 2.27.1
resampy 0.4.2
responses 0.18.0
rfc3986 1.5.0
scikit-learn 1.2.1
scipy 1.6.3
Send2Trash 1.8.0
setuptools 65.5.1
shiboken6 6.3.1
shiboken6-generator 6.3.1
six 1.16.0
sniffio 1.3.0
soundfile 0.11.0
starlette 0.22.0
terminado 0.11.0
testpath 0.5.0
threadpoolctl 3.1.0
tokenizers 0.13.2
toolz 0.12.0
torch 1.11.0a0+gitunknown
tornado 6.1
tqdm 4.64.1
traitlets 5.0.5
transformers 4.27.0.dev0
types-dataclasses 0.6.4
typing_extensions 4.1.1
uc-micro-py 1.0.1
urllib3 1.26.9
userpath 1.8.0
uvicorn 0.20.0
virtualenv 20.14.1
wcwidth 0.2.5
webencodings 0.5.1
websockets 10.4
wheel 0.37.1
widgetsnbextension 3.5.1
xxhash 3.2.0
yarl 1.8.2
### Steps to reproduce the bug
Create a pip environment with the packages listed above (make sure ffmpeg and libsndfile are installed with the same versions listed above).
Create a custom audio dataset and load it in with load_dataset("audiofolder", ...)
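For reference, a sketch of the directory layout `audiofolder` expects (the file names here are illustrative; the `transcription` column comes from `metadata.csv`):
```
my_dataset/
├── metadata.csv        # header: file_name,transcription
├── audio_0001.wav
├── audio_0002.wav
└── ...
```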
### Expected behavior
load_dataset should create a dataset with 400+ rows.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.0
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5479/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5478 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5478/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5478/comments | https://api.github.com/repos/huggingface/datasets/issues/5478/events | https://github.com/huggingface/datasets/pull/5478 | 1,560,357,583 | PR_kwDODunzps5ItXQG | 5,478 | Tip for recomputing metadata | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008167 / 0.011353 (-0.003186) | 0.004404 / 0.011008 (-0.006605) | 0.100462 / 0.038508 (0.061954) | 0.028835 / 0.023109 (0.005726) | 0.326759 / 0.275898 (0.050861) | 0.355150 / 0.323480 (0.031670) | 0.007200 / 0.007986 (-0.000786) | 0.003293 / 0.004328 (-0.001035) | 0.078006 / 0.004250 (0.073756) | 0.033298 / 0.037052 (-0.003754) | 0.307119 / 0.258489 (0.048630) | 0.337689 / 0.293841 (0.043848) | 0.033016 / 0.128546 (-0.095530) | 0.011383 / 0.075646 (-0.064263) | 0.321989 / 0.419271 (-0.097283) | 0.039793 / 0.043533 (-0.003740) | 0.295388 / 0.255139 (0.040249) | 0.322694 / 0.283200 (0.039494) | 0.082989 / 0.141683 (-0.058694) | 1.496701 / 1.452155 (0.044546) | 1.548861 / 1.492716 (0.056145) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.176587 / 0.018006 (0.158580) | 0.397660 / 0.000490 (0.397170) | 0.001063 / 0.000200 (0.000863) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022386 / 0.037411 (-0.015025) | 0.096380 / 0.014526 (0.081854) | 0.103032 / 0.176557 (-0.073525) | 0.135050 / 0.737135 (-0.602086) | 0.105941 / 0.296338 (-0.190397) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430989 / 0.215209 (0.215780) | 4.310309 / 2.077655 (2.232654) | 2.142596 / 1.504120 (0.638477) | 1.952043 / 1.541195 (0.410848) | 1.817803 / 1.468490 
(0.349312) | 0.690026 / 4.584777 (-3.894751) | 3.315413 / 3.745712 (-0.430299) | 3.370336 / 5.269862 (-1.899525) | 1.668707 / 4.565676 (-2.896970) | 0.081860 / 0.424275 (-0.342415) | 0.012493 / 0.007607 (0.004886) | 0.527779 / 0.226044 (0.301735) | 5.318732 / 2.268929 (3.049804) | 2.467029 / 55.444624 (-52.977596) | 2.247171 / 6.876477 (-4.629306) | 2.270825 / 2.142072 (0.128752) | 0.802288 / 4.805227 (-4.002939) | 0.148895 / 6.500664 (-6.351770) | 0.064967 / 0.075469 (-0.010503) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259304 / 1.841788 (-0.582484) | 13.662441 / 8.074308 (5.588133) | 14.074662 / 10.191392 (3.883270) | 0.152907 / 0.680424 (-0.527516) | 0.028340 / 0.534201 (-0.505861) | 0.397356 / 0.579283 (-0.181927) | 0.392600 / 0.434364 (-0.041764) | 0.467935 / 0.540337 (-0.072402) | 0.539890 / 1.386936 (-0.847046) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006156 / 0.011353 (-0.005197) | 0.004371 / 0.011008 (-0.006637) | 0.076391 / 0.038508 (0.037883) | 0.026455 / 0.023109 (0.003346) | 0.339816 / 0.275898 (0.063917) | 0.370032 / 0.323480 (0.046552) | 0.004614 / 0.007986 (-0.003372) | 0.003200 / 0.004328 (-0.001129) | 0.075408 / 0.004250 (0.071157) | 0.034100 / 0.037052 (-0.002953) | 0.341232 / 0.258489 (0.082743) | 0.380290 / 0.293841 (0.086449) | 0.031021 / 0.128546 (-0.097525) | 0.011562 / 0.075646 (-0.064084) | 0.085564 / 0.419271 (-0.333708) | 0.041431 / 0.043533 (-0.002102) | 0.359570 / 0.255139 (0.104431) | 0.366919 / 0.283200 (0.083719) | 0.088242 / 0.141683 (-0.053441) | 1.460703 / 1.452155 (0.008548) | 1.534351 / 1.492716 (0.041635) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225703 / 0.018006 (0.207697) | 0.395014 / 0.000490 (0.394524) | 0.000385 / 0.000200 (0.000185) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023975 / 0.037411 (-0.013436) | 0.098658 / 0.014526 (0.084132) | 0.105043 / 0.176557 (-0.071513) | 0.139988 / 0.737135 (-0.597148) | 0.106854 / 0.296338 (-0.189484) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442454 / 0.215209 (0.227245) | 4.430860 / 2.077655 (2.353205) | 2.084823 / 1.504120 (0.580704) | 1.870421 / 1.541195 (0.329226) | 1.901618 / 1.468490 (0.433128) | 0.699214 / 4.584777 (-3.885563) | 3.336911 / 3.745712 (-0.408801) | 1.856479 / 5.269862 (-3.413383) | 1.166496 / 4.565676 (-3.399180) | 0.083189 / 0.424275 (-0.341086) | 0.012293 / 0.007607 (0.004686) | 0.543147 / 0.226044 (0.317102) | 5.452030 / 2.268929 (3.183101) | 2.506689 / 55.444624 (-52.937936) | 2.168186 / 6.876477 (-4.708291) | 2.172277 / 2.142072 (0.030205) | 0.813554 / 4.805227 (-3.991673) | 0.152074 / 6.500664 (-6.348590) | 0.066891 / 0.075469 (-0.008579) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.278635 / 1.841788 (-0.563153) | 13.690232 / 8.074308 (5.615924) | 13.403201 / 10.191392 (3.211809) | 0.128171 / 0.680424 (-0.552253) | 0.016687 / 0.534201 (-0.517514) | 0.378645 / 0.579283 (-0.200638) | 0.382922 / 0.434364 (-0.051442) | 0.467483 / 0.540337 (-0.072854) | 0.559026 / 1.386936 (-0.827910) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b262d411ec0e252615a140c4e3e60e7dbd38eef1 \"CML watermark\")\n"
] | 2023-01-27T20:01:22 | 2023-01-30T19:22:21 | 2023-01-30T19:15:26 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5478",
"html_url": "https://github.com/huggingface/datasets/pull/5478",
"diff_url": "https://github.com/huggingface/datasets/pull/5478.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5478.patch",
"merged_at": "2023-01-30T19:15:26"
} | From this [feedback](https://discuss.huggingface.co/t/nonmatchingsplitssizeserror/30033) on the forum, I thought I'd include a tip for recomputing the metadata numbers if it is your own dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5478/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5478/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5477 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5477/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5477/comments | https://api.github.com/repos/huggingface/datasets/issues/5477/events | https://github.com/huggingface/datasets/issues/5477 | 1,559,909,892 | I_kwDODunzps5c-lYE | 5,477 | Unpin sqlalchemy once issue is fixed | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@albertvillanova It looks like that issue has been fixed so I made a PR to unpin sqlalchemy! ",
"The source issue:\r\n- https://github.com/pandas-dev/pandas/issues/40686\r\n\r\nhas been fixed:\r\n- https://github.com/pandas-dev/pandas/pull/48576\r\n\r\nThe fix was released yesterday (2023-04-03) only in `pandas-2.0.0`:\r\n- https://github.com/pandas-dev/pandas/releases/tag/v2.0.0\r\n\r\nbut it will not be back-ported to `pandas-1`:\r\n- https://github.com/pandas-dev/pandas/pull/48576#issuecomment-1466467159\r\n\r\nAlso note that `pandas-2.0.0` dropped support for Python 3.7:\r\n- https://github.com/pandas-dev/pandas/issues/41678\r\n- https://github.com/pandas-dev/pandas/pull/41989\r\n\r\nTherefore, we cannot unpin `sqlalchemy` until we drop support for Python 3.7 (these Python users cannot use `pandas-2`)."
] | 2023-01-27T15:01:55 | 2024-01-26T14:50:45 | 2024-01-26T14:50:45 | MEMBER | null | null | null | Once the source issue is fixed:
- pandas-dev/pandas#51015
we should revert the pin introduced in:
- #5476 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5477/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5476 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5476/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5476/comments | https://api.github.com/repos/huggingface/datasets/issues/5476/events | https://github.com/huggingface/datasets/pull/5476 | 1,559,594,684 | PR_kwDODunzps5IqwC_ | 5,476 | Pin sqlalchemy | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012442 / 0.011353 (0.001089) | 0.006274 / 0.011008 (-0.004734) | 0.128249 / 0.038508 (0.089741) | 0.040117 / 0.023109 (0.017008) | 0.383725 / 0.275898 (0.107827) | 0.510494 / 0.323480 (0.187014) | 0.009037 / 0.007986 (0.001051) | 0.008256 / 0.004328 (0.003927) | 0.105329 / 0.004250 (0.101079) | 0.046909 / 0.037052 (0.009857) | 0.401980 / 0.258489 (0.143491) | 0.461332 / 0.293841 (0.167491) | 0.065629 / 0.128546 (-0.062917) | 0.020043 / 0.075646 (-0.055604) | 0.453773 / 0.419271 (0.034501) | 0.063456 / 0.043533 (0.019923) | 0.384458 / 0.255139 (0.129319) | 0.449699 / 0.283200 (0.166499) | 0.118197 / 0.141683 (-0.023486) | 1.915080 / 1.452155 (0.462925) | 1.957132 / 1.492716 (0.464416) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209657 / 0.018006 (0.191651) | 0.592478 / 0.000490 (0.591988) | 0.004137 / 0.000200 (0.003937) | 0.000124 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029607 / 0.037411 (-0.007804) | 0.129559 / 0.014526 (0.115033) | 0.148326 / 0.176557 (-0.028231) | 0.190506 / 0.737135 (-0.546629) | 0.143177 / 0.296338 (-0.153162) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.626166 / 0.215209 (0.410957) | 6.612680 / 2.077655 (4.535026) | 2.432354 / 1.504120 (0.928234) | 2.051482 / 1.541195 (0.510287) | 2.055822 / 1.468490 
(0.587332) | 1.210099 / 4.584777 (-3.374678) | 5.498117 / 3.745712 (1.752405) | 3.054838 / 5.269862 (-2.215024) | 2.182875 / 4.565676 (-2.382802) | 0.144518 / 0.424275 (-0.279757) | 0.014132 / 0.007607 (0.006525) | 0.801805 / 0.226044 (0.575761) | 7.911235 / 2.268929 (5.642307) | 3.372762 / 55.444624 (-52.071862) | 2.517266 / 6.876477 (-4.359210) | 2.515329 / 2.142072 (0.373256) | 1.501731 / 4.805227 (-3.303497) | 0.252569 / 6.500664 (-6.248096) | 0.080987 / 0.075469 (0.005518) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.709880 / 1.841788 (-0.131907) | 18.640340 / 8.074308 (10.566032) | 23.560908 / 10.191392 (13.369516) | 0.265680 / 0.680424 (-0.414744) | 0.046438 / 0.534201 (-0.487763) | 0.571973 / 0.579283 (-0.007310) | 0.642425 / 0.434364 (0.208061) | 0.698167 / 0.540337 (0.157830) | 0.842132 / 1.386936 (-0.544804) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009268 / 0.011353 (-0.002085) | 0.006052 / 0.011008 (-0.004956) | 0.133448 / 0.038508 (0.094939) | 0.034417 / 0.023109 (0.011308) | 0.435573 / 0.275898 (0.159675) | 0.479642 / 0.323480 (0.156162) | 0.008016 / 0.007986 (0.000030) | 0.006616 / 0.004328 (0.002288) | 0.106256 / 0.004250 (0.102005) | 0.048995 / 0.037052 (0.011942) | 0.450056 / 0.258489 (0.191567) | 0.511027 / 0.293841 (0.217187) | 0.052928 / 0.128546 (-0.075618) | 0.020824 / 0.075646 (-0.054822) | 0.450105 / 0.419271 (0.030834) | 0.062729 / 0.043533 (0.019196) | 0.438887 / 0.255139 (0.183748) | 0.468732 / 0.283200 (0.185532) | 0.116101 / 0.141683 (-0.025582) | 1.909689 / 1.452155 (0.457534) | 2.042007 / 1.492716 (0.549291) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198265 / 0.018006 (0.180259) | 0.541799 / 0.000490 (0.541309) | 0.003938 / 0.000200 (0.003738) | 0.000116 / 0.000054 (0.000062) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035933 / 0.037411 (-0.001478) | 0.130754 / 0.014526 (0.116229) | 0.146143 / 0.176557 (-0.030414) | 0.202042 / 0.737135 (-0.535094) | 0.155648 / 0.296338 (-0.140691) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.691123 / 0.215209 (0.475914) | 6.708370 / 2.077655 (4.630715) | 2.957120 / 1.504120 (1.453000) | 2.558350 / 1.541195 (1.017155) | 2.611271 / 1.468490 (1.142781) | 1.327355 / 4.584777 (-3.257422) | 5.755975 / 3.745712 (2.010263) | 3.295556 / 5.269862 (-1.974305) | 2.159831 / 4.565676 (-2.405845) | 0.161409 / 0.424275 (-0.262866) | 0.015470 / 0.007607 (0.007863) | 0.840611 / 0.226044 (0.614567) | 8.550064 / 2.268929 (6.281136) | 3.832013 / 55.444624 (-51.612612) | 3.032909 / 6.876477 (-3.843568) | 3.155651 / 2.142072 (1.013578) | 1.612486 / 4.805227 (-3.192741) | 0.273789 / 6.500664 (-6.226875) | 0.085618 / 0.075469 (0.010149) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.808376 / 1.841788 (-0.033412) | 18.267614 / 8.074308 (10.193306) | 21.047679 / 10.191392 (10.856286) | 0.259089 / 0.680424 (-0.421335) | 0.029211 / 0.534201 (-0.504990) | 0.556303 / 0.579283 (-0.022980) | 0.625264 / 0.434364 (0.190900) | 0.680814 / 0.540337 (0.140476) | 0.810146 / 1.386936 (-0.576790) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#20ea76c80e07acad78cf67198a4046a982feda21 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008779 / 0.011353 (-0.002574) | 0.004644 / 0.011008 (-0.006364) | 0.099814 / 0.038508 (0.061306) | 0.029830 / 0.023109 (0.006721) | 0.299159 / 0.275898 (0.023261) | 0.354815 / 0.323480 (0.031335) | 0.006968 / 0.007986 (-0.001018) | 0.003521 / 0.004328 (-0.000808) | 0.077687 / 0.004250 (0.073437) | 0.035019 / 0.037052 (-0.002034) | 0.309548 / 0.258489 (0.051059) | 0.345228 / 0.293841 (0.051387) | 0.033644 / 0.128546 (-0.094902) | 0.011564 / 0.075646 (-0.064083) | 0.321835 / 0.419271 (-0.097437) | 0.041798 / 0.043533 (-0.001735) | 0.298190 / 0.255139 (0.043051) | 0.328874 / 0.283200 (0.045674) | 0.088175 / 0.141683 (-0.053508) | 1.481755 / 1.452155 (0.029600) | 1.503085 / 1.492716 (0.010369) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.170930 / 0.018006 (0.152924) | 0.422155 / 0.000490 (0.421666) | 0.001708 / 0.000200 (0.001509) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022588 / 0.037411 (-0.014824) | 0.095775 / 0.014526 (0.081249) | 0.103939 / 0.176557 (-0.072618) | 0.138441 / 0.737135 (-0.598694) | 0.107896 / 0.296338 (-0.188442) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418243 / 0.215209 (0.203034) | 4.171432 / 2.077655 (2.093777) | 1.906029 / 1.504120 (0.401909) | 1.698174 / 1.541195 (0.156979) | 1.748339 / 1.468490 
(0.279849) | 0.691026 / 4.584777 (-3.893751) | 3.393354 / 3.745712 (-0.352358) | 2.722412 / 5.269862 (-2.547450) | 1.462439 / 4.565676 (-3.103238) | 0.084713 / 0.424275 (-0.339562) | 0.012131 / 0.007607 (0.004524) | 0.522153 / 0.226044 (0.296109) | 5.197916 / 2.268929 (2.928988) | 2.314270 / 55.444624 (-53.130354) | 1.986599 / 6.876477 (-4.889878) | 2.012757 / 2.142072 (-0.129315) | 0.802540 / 4.805227 (-4.002687) | 0.148673 / 6.500664 (-6.351991) | 0.065924 / 0.075469 (-0.009545) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263790 / 1.841788 (-0.577998) | 13.874784 / 8.074308 (5.800476) | 13.842276 / 10.191392 (3.650884) | 0.149002 / 0.680424 (-0.531422) | 0.028550 / 0.534201 (-0.505651) | 0.396913 / 0.579283 (-0.182370) | 0.401543 / 0.434364 (-0.032821) | 0.473754 / 0.540337 (-0.066583) | 0.560455 / 1.386936 (-0.826481) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006724 / 0.011353 (-0.004629) | 0.004507 / 0.011008 (-0.006502) | 0.098447 / 0.038508 (0.059939) | 0.027888 / 0.023109 (0.004779) | 0.428956 / 0.275898 (0.153058) | 0.451557 / 0.323480 (0.128077) | 0.005056 / 0.007986 (-0.002929) | 0.003363 / 0.004328 (-0.000965) | 0.075990 / 0.004250 (0.071740) | 0.038688 / 0.037052 (0.001635) | 0.421550 / 0.258489 (0.163061) | 0.459480 / 0.293841 (0.165639) | 0.031408 / 0.128546 (-0.097138) | 0.011559 / 0.075646 (-0.064088) | 0.320054 / 0.419271 (-0.099217) | 0.041917 / 0.043533 (-0.001616) | 0.420878 / 0.255139 (0.165739) | 0.444813 / 0.283200 (0.161613) | 0.090409 / 0.141683 (-0.051274) | 1.490058 / 1.452155 (0.037904) | 1.645206 / 1.492716 (0.152489) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221105 / 0.018006 (0.203099) | 0.407537 / 0.000490 (0.407047) | 0.000410 / 0.000200 (0.000210) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024658 / 0.037411 (-0.012754) | 0.099230 / 0.014526 (0.084705) | 0.107788 / 0.176557 (-0.068769) | 0.143040 / 0.737135 (-0.594096) | 0.109440 / 0.296338 (-0.186899) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.453303 / 0.215209 (0.238094) | 4.520376 / 2.077655 (2.442722) | 2.133909 / 1.504120 (0.629789) | 1.926996 / 1.541195 (0.385801) | 2.019870 / 1.468490 (0.551380) | 0.707423 / 4.584777 (-3.877354) | 3.391903 / 3.745712 (-0.353809) | 1.860661 / 5.269862 (-3.409201) | 1.159940 / 4.565676 (-3.405736) | 0.083773 / 0.424275 (-0.340502) | 0.012228 / 0.007607 (0.004621) | 0.554666 / 0.226044 (0.328622) | 5.567564 / 2.268929 (3.298636) | 2.636718 / 55.444624 (-52.807907) | 2.240215 / 6.876477 (-4.636262) | 2.218951 / 2.142072 (0.076879) | 0.817167 / 4.805227 (-3.988060) | 0.151633 / 6.500664 (-6.349032) | 0.066515 / 0.075469 (-0.008954) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.296665 / 1.841788 (-0.545123) | 13.997898 / 8.074308 (5.923590) | 13.286607 / 10.191392 (3.095215) | 0.148906 / 0.680424 (-0.531518) | 0.016600 / 0.534201 (-0.517601) | 0.377459 / 0.579283 (-0.201824) | 0.379938 / 0.434364 (-0.054426) | 0.461628 / 0.540337 (-0.078709) | 0.550592 / 1.386936 (-0.836344) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#053f51a3e2adb762236eb29dd02791307f45f02f \"CML watermark\")\n"
] | 2023-01-27T11:26:38 | 2023-01-27T12:06:51 | 2023-01-27T11:57:48 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5476",
"html_url": "https://github.com/huggingface/datasets/pull/5476",
"diff_url": "https://github.com/huggingface/datasets/pull/5476.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5476.patch",
"merged_at": "2023-01-27T11:57:48"
} | Since the SQLAlchemy 2.0.0 update, the CI started to fail: https://github.com/huggingface/datasets/actions/runs/4023742457/jobs/6914976514. The error comes from pandas: https://github.com/pandas-dev/pandas/issues/51015.
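A minimal sketch of the kind of pin that works around this (assuming the fix is an upper bound on the test requirement; `TESTS_REQUIRE` is a hypothetical name for the test-extras list in `setup.py`):

```python
# setup.py (sketch)
TESTS_REQUIRE = [
    # ... other test dependencies ...
    "sqlalchemy<2.0.0",  # assumption: stay below 2.0 until pandas supports SQLAlchemy 2.0
]
```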
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5476/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5475/comments | https://api.github.com/repos/huggingface/datasets/issues/5475/events | https://github.com/huggingface/datasets/issues/5475 | 1,559,030,149 | I_kwDODunzps5c7OmF | 5,475 | Dataset scan time is much slower than using native arrow | {
"login": "jonny-cyberhaven",
"id": 121845112,
"node_id": "U_kgDOB0M1eA",
"avatar_url": "https://avatars.githubusercontent.com/u/121845112?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonny-cyberhaven",
"html_url": "https://github.com/jonny-cyberhaven",
"followers_url": "https://api.github.com/users/jonny-cyberhaven/followers",
"following_url": "https://api.github.com/users/jonny-cyberhaven/following{/other_user}",
"gists_url": "https://api.github.com/users/jonny-cyberhaven/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonny-cyberhaven/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonny-cyberhaven/subscriptions",
"organizations_url": "https://api.github.com/users/jonny-cyberhaven/orgs",
"repos_url": "https://api.github.com/users/jonny-cyberhaven/repos",
"events_url": "https://api.github.com/users/jonny-cyberhaven/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonny-cyberhaven/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! In your code you only iterate on the Arrow buffers - you don't actually load the data as python objects. For a fair comparison, you can modify your code using:\r\n```diff\r\n- for _ in range(0, len(table), bsz):\r\n- _ = {k:table[k][_ : _ + bsz] for k in cols}\r\n+ for _ in range(0, len(table), bsz):\r\n+ _ = {k:table[k][_ : _ + bsz].to_pylist() for k in cols}\r\n```\r\n\r\nI re-ran your code and got a speed ratio of 1.00x and 1.02x",
"Ah I see, datasets is implicitly making this conversion. Thanks for pointing that out!\r\n\r\nIf it's not too much, I would also suggest updating some of your docs with the same `.to_pylist()` conversion in the code snippet that follows [here](https://huggingface.co/course/chapter5/4?fw=pt#:~:text=let%E2%80%99s%20run%20a%20little%20speed%20test%20by%20iterating%20over%20all%20the%20elements%20in%20the%20PubMed%20Abstracts%20dataset%3A).",
"This code snippet shows `datasets` code that reads the Arrow data as python objects already, there is no need to add to_pylist. Or were you thinking about something else ?"
] | 2023-01-27T01:32:25 | 2023-01-30T16:17:11 | 2023-01-30T16:17:11 | CONTRIBUTOR | null | null | null | ### Describe the bug
I'm basically running the same scanning experiment as in the tutorial https://huggingface.co/course/chapter5/4?fw=pt, except now I'm comparing it to a native pyarrow version.
I'm finding that the native pyarrow approach is much faster (by two orders of magnitude). Is there something I'm missing that explains this phenomenon?
### Steps to reproduce the bug
https://colab.research.google.com/drive/11EtHDaGAf1DKCpvYnAPJUW-LFfAcDzHY?usp=sharing
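For a self-contained version of what the colab measures, here is a minimal sketch (synthetic data stands in for the PubMed Abstracts set; as pointed out in the comments above, the `.to_pylist()` call is what makes the pyarrow loop materialize Python objects the way `datasets` does):

```python
import time

from datasets import Dataset

# synthetic stand-in for the PubMed Abstracts dataset used in the colab
ds = Dataset.from_dict({"text": ["lorem ipsum"] * 100_000, "idx": list(range(100_000))})
bsz, cols = 1000, ds.column_names

# scan with datasets: batched slicing returns python objects
start = time.time()
for i in range(0, len(ds), bsz):
    _ = ds[i : i + bsz]
print(f"datasets: {time.time() - start:.3f}s")

# scan the same underlying pyarrow table directly; without .to_pylist(),
# only zero-copy buffer slices are taken, which looks deceptively fast
table = ds.data.table  # the wrapped pyarrow.Table
start = time.time()
for i in range(0, len(table), bsz):
    _ = {k: table[k][i : i + bsz].to_pylist() for k in cols}
print(f"pyarrow:  {time.time() - start:.3f}s")
```

With the conversion in place, the two loops land in the same ballpark, matching the 1.00x/1.02x ratios reported in the comments above.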
### Expected behavior
I expect scan times to be on par with using pyarrow directly.
### Environment info
standard colab environment | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5475/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5474 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5474/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5474/comments | https://api.github.com/repos/huggingface/datasets/issues/5474/events | https://github.com/huggingface/datasets/issues/5474 | 1,558,827,155 | I_kwDODunzps5c6dCT | 5,474 | Column project operation on `datasets.Dataset` | {
"login": "daskol",
"id": 9336514,
"node_id": "MDQ6VXNlcjkzMzY1MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9336514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daskol",
"html_url": "https://github.com/daskol",
"followers_url": "https://api.github.com/users/daskol/followers",
"following_url": "https://api.github.com/users/daskol/following{/other_user}",
"gists_url": "https://api.github.com/users/daskol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daskol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daskol/subscriptions",
"organizations_url": "https://api.github.com/users/daskol/orgs",
"repos_url": "https://api.github.com/users/daskol/repos",
"events_url": "https://api.github.com/users/daskol/events{/privacy}",
"received_events_url": "https://api.github.com/users/daskol/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi ! This would be a nice addition indeed :) This sounds like a duplicate of https://github.com/huggingface/datasets/issues/5468\r\n\r\n> Not sure. Some of my PRs are still open and some do not have any discussions.\r\n\r\nSorry to hear that, feel free to ping me on those PRs"
] | 2023-01-26T21:47:53 | 2023-02-13T09:59:37 | 2023-02-13T09:59:37 | CONTRIBUTOR | null | null | null | ### Feature request
There is no operation to select a subset of columns of the original dataset. The expected API follows.
```python
from datasets import Dataset

a = Dataset.from_dict({
    'int': [0, 1, 2],
    'char': ['a', 'b', 'c'],
    'none': [None] * 3,
})
b = a.project('int', 'char') # usually, .select()
print(a.column_names) # stdout: ['int', 'char', 'none']
print(b.column_names) # stdout: ['int', 'char']
```
The `project` method could easily accept not only column names (as `str`) but also, for example, a univariate function applied to the corresponding column. Alternatively, keyword arguments could be used to rename columns in advance (see `pandas`, `pyspark`, `pyarrow`, and SQL). A workaround sketch with the current API follows.
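Until something like `project` exists, a minimal workaround sketch is to drop the complement of the wanted columns via `remove_columns` (which returns a new dataset):

```python
from datasets import Dataset

a = Dataset.from_dict({
    'int': [0, 1, 2],
    'char': ['a', 'b', 'c'],
    'none': [None] * 3,
})

# keep only the projected columns by removing everything else;
# remove_columns is non-destructive, so `a` is left untouched
keep = {'int', 'char'}
b = a.remove_columns([c for c in a.column_names if c not in keep])
print(a.column_names)  # ['int', 'char', 'none']
print(b.column_names)  # ['int', 'char']
```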
### Motivation
Projection is a typical operation in every data processing library, and it is a basic building block of well-known data manipulation languages like SQL. Without this operation, the `datasets.Dataset` interface is not complete.
### Your contribution
Not sure. Some of my PRs are still open and some do not have any discussions. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5474/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5473/comments | https://api.github.com/repos/huggingface/datasets/issues/5473/events | https://github.com/huggingface/datasets/pull/5473 | 1,558,668,197 | PR_kwDODunzps5Inm9h | 5,473 | Set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008959 / 0.011353 (-0.002394) | 0.004549 / 0.011008 (-0.006460) | 0.102012 / 0.038508 (0.063504) | 0.030122 / 0.023109 (0.007013) | 0.303731 / 0.275898 (0.027833) | 0.344418 / 0.323480 (0.020938) | 0.007199 / 0.007986 (-0.000787) | 0.003415 / 0.004328 (-0.000913) | 0.079784 / 0.004250 (0.075534) | 0.034894 / 0.037052 (-0.002158) | 0.304739 / 0.258489 (0.046250) | 0.359457 / 0.293841 (0.065616) | 0.034194 / 0.128546 (-0.094352) | 0.011348 / 0.075646 (-0.064298) | 0.324340 / 0.419271 (-0.094931) | 0.041071 / 0.043533 (-0.002461) | 0.304437 / 0.255139 (0.049298) | 0.335517 / 0.283200 (0.052317) | 0.087787 / 0.141683 (-0.053895) | 1.467293 / 1.452155 (0.015138) | 1.543529 / 1.492716 (0.050813) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187654 / 0.018006 (0.169648) | 0.426558 / 0.000490 (0.426068) | 0.003585 / 0.000200 (0.003385) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023410 / 0.037411 (-0.014001) | 0.097065 / 0.014526 (0.082539) | 0.105358 / 0.176557 (-0.071198) | 0.140941 / 0.737135 (-0.596195) | 0.109484 / 0.296338 (-0.186855) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420334 / 0.215209 (0.205125) | 4.223235 / 2.077655 (2.145581) | 1.866213 / 1.504120 (0.362093) | 1.673829 / 1.541195 (0.132634) | 1.757828 / 1.468490 
(0.289337) | 0.702203 / 4.584777 (-3.882574) | 3.426192 / 3.745712 (-0.319521) | 1.950392 / 5.269862 (-3.319470) | 1.286139 / 4.565676 (-3.279538) | 0.082858 / 0.424275 (-0.341417) | 0.012587 / 0.007607 (0.004980) | 0.531920 / 0.226044 (0.305876) | 5.344425 / 2.268929 (3.075497) | 2.337875 / 55.444624 (-53.106749) | 1.967713 / 6.876477 (-4.908764) | 2.022075 / 2.142072 (-0.119997) | 0.829267 / 4.805227 (-3.975961) | 0.151712 / 6.500664 (-6.348952) | 0.066617 / 0.075469 (-0.008852) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251867 / 1.841788 (-0.589921) | 13.861756 / 8.074308 (5.787448) | 14.236309 / 10.191392 (4.044917) | 0.138215 / 0.680424 (-0.542209) | 0.028600 / 0.534201 (-0.505601) | 0.395890 / 0.579283 (-0.183393) | 0.403971 / 0.434364 (-0.030393) | 0.479033 / 0.540337 (-0.061305) | 0.564019 / 1.386936 (-0.822917) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006845 / 0.011353 (-0.004508) | 0.004544 / 0.011008 (-0.006464) | 0.098719 / 0.038508 (0.060211) | 0.029082 / 0.023109 (0.005973) | 0.426011 / 0.275898 (0.150113) | 0.447185 / 0.323480 (0.123705) | 0.005203 / 0.007986 (-0.002783) | 0.004790 / 0.004328 (0.000462) | 0.076446 / 0.004250 (0.072196) | 0.040649 / 0.037052 (0.003596) | 0.414810 / 0.258489 (0.156321) | 0.452082 / 0.293841 (0.158241) | 0.031842 / 0.128546 (-0.096704) | 0.011575 / 0.075646 (-0.064071) | 0.320710 / 0.419271 (-0.098561) | 0.044994 / 0.043533 (0.001461) | 0.415645 / 0.255139 (0.160506) | 0.435235 / 0.283200 (0.152035) | 0.091756 / 0.141683 (-0.049927) | 1.493900 / 1.452155 (0.041746) | 1.592353 / 1.492716 (0.099637) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.264710 / 0.018006 (0.246703) | 0.410553 / 0.000490 (0.410064) | 0.024497 / 0.000200 (0.024297) | 0.000232 / 0.000054 (0.000178) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024452 / 0.037411 (-0.012959) | 0.102673 / 0.014526 (0.088147) | 0.107787 / 0.176557 (-0.068770) | 0.147368 / 0.737135 (-0.589767) | 0.112127 / 0.296338 (-0.184211) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471294 / 0.215209 (0.256085) | 4.711638 / 2.077655 (2.633983) | 2.436819 / 1.504120 (0.932699) | 2.238540 / 1.541195 (0.697345) | 2.334134 / 1.468490 (0.865644) | 0.697668 / 4.584777 (-3.887108) | 3.414332 / 3.745712 (-0.331380) | 2.783248 / 5.269862 (-2.486614) | 1.529599 / 4.565676 (-3.036078) | 0.082626 / 0.424275 (-0.341649) | 0.012385 / 0.007607 (0.004778) | 0.580486 / 0.226044 (0.354441) | 5.837914 / 2.268929 (3.568986) | 2.915129 / 55.444624 (-52.529495) | 2.606254 / 6.876477 (-4.270223) | 2.659031 / 2.142072 (0.516958) | 0.810431 / 4.805227 (-3.994796) | 0.151666 / 6.500664 (-6.348998) | 0.066873 / 0.075469 (-0.008596) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259933 / 1.841788 (-0.581855) | 14.052388 / 8.074308 (5.978080) | 13.356141 / 10.191392 (3.164749) | 0.138416 / 0.680424 (-0.542008) | 0.016582 / 0.534201 (-0.517619) | 0.378110 / 0.579283 (-0.201173) | 0.385089 / 0.434364 (-0.049275) | 0.465299 / 0.540337 (-0.075038) | 0.559780 / 1.386936 (-0.827156) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d2859fd4d4beca33f21539a6e1df9a7f012cbd10 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011945 / 0.011353 (0.000592) | 0.006128 / 0.011008 (-0.004880) | 0.128926 / 0.038508 (0.090418) | 0.037708 / 0.023109 (0.014599) | 0.373449 / 0.275898 (0.097551) | 0.423567 / 0.323480 (0.100088) | 0.009848 / 0.007986 (0.001863) | 0.006097 / 0.004328 (0.001769) | 0.098275 / 0.004250 (0.094024) | 0.043199 / 0.037052 (0.006147) | 0.376848 / 0.258489 (0.118359) | 0.441819 / 0.293841 (0.147978) | 0.055094 / 0.128546 (-0.073453) | 0.019704 / 0.075646 (-0.055942) | 0.422746 / 0.419271 (0.003474) | 0.061764 / 0.043533 (0.018231) | 0.381056 / 0.255139 (0.125917) | 0.419343 / 0.283200 (0.136144) | 0.116720 / 0.141683 (-0.024963) | 1.763913 / 1.452155 (0.311759) | 1.872306 / 1.492716 (0.379589) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198651 / 0.018006 (0.180645) | 0.560565 / 0.000490 (0.560075) | 0.004269 / 0.000200 (0.004069) | 0.000114 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027307 / 0.037411 (-0.010104) | 0.128276 / 0.014526 (0.113750) | 0.129015 / 0.176557 (-0.047542) | 0.167269 / 0.737135 (-0.569866) | 0.143955 / 0.296338 (-0.152384) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.564954 / 0.215209 (0.349745) | 5.810570 / 2.077655 (3.732916) | 2.456382 / 1.504120 (0.952262) | 2.115809 / 1.541195 (0.574614) | 2.097363 / 1.468490 
(0.628873) | 1.189712 / 4.584777 (-3.395065) | 5.318287 / 3.745712 (1.572575) | 2.965763 / 5.269862 (-2.304099) | 2.177958 / 4.565676 (-2.387719) | 0.144135 / 0.424275 (-0.280140) | 0.014348 / 0.007607 (0.006741) | 0.781715 / 0.226044 (0.555670) | 7.688349 / 2.268929 (5.419421) | 3.189260 / 55.444624 (-52.255365) | 2.552340 / 6.876477 (-4.324137) | 2.559312 / 2.142072 (0.417240) | 1.490755 / 4.805227 (-3.314473) | 0.257908 / 6.500664 (-6.242756) | 0.082016 / 0.075469 (0.006547) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.565735 / 1.841788 (-0.276053) | 17.660338 / 8.074308 (9.586030) | 19.493573 / 10.191392 (9.302181) | 0.241310 / 0.680424 (-0.439114) | 0.043485 / 0.534201 (-0.490716) | 0.557397 / 0.579283 (-0.021886) | 0.624385 / 0.434364 (0.190021) | 0.634601 / 0.540337 (0.094264) | 0.743140 / 1.386936 (-0.643796) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010134 / 0.011353 (-0.001219) | 0.005858 / 0.011008 (-0.005150) | 0.128741 / 0.038508 (0.090232) | 0.036769 / 0.023109 (0.013660) | 0.470894 / 0.275898 (0.194996) | 0.524302 / 0.323480 (0.200822) | 0.006830 / 0.007986 (-0.001156) | 0.006166 / 0.004328 (0.001838) | 0.094875 / 0.004250 (0.090625) | 0.051201 / 0.037052 (0.014148) | 0.493992 / 0.258489 (0.235503) | 0.510540 / 0.293841 (0.216699) | 0.056354 / 0.128546 (-0.072192) | 0.020512 / 0.075646 (-0.055134) | 0.417809 / 0.419271 (-0.001463) | 0.061941 / 0.043533 (0.018408) | 0.498883 / 0.255139 (0.243744) | 0.480762 / 0.283200 (0.197563) | 0.110753 / 0.141683 (-0.030930) | 1.914096 / 1.452155 (0.461941) | 1.941338 / 1.492716 (0.448622) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237955 / 0.018006 (0.219949) | 0.518136 / 0.000490 (0.517647) | 0.000475 / 0.000200 (0.000275) | 0.000095 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032947 / 0.037411 (-0.004465) | 0.127857 / 0.014526 (0.113331) | 0.133911 / 0.176557 (-0.042646) | 0.188406 / 0.737135 (-0.548729) | 0.143939 / 0.296338 (-0.152400) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.787553 / 0.215209 (0.572344) | 6.976572 / 2.077655 (4.898918) | 2.897964 / 1.504120 (1.393844) | 2.545906 / 1.541195 (1.004711) | 2.622111 / 1.468490 (1.153620) | 1.278283 / 4.584777 (-3.306494) | 5.650447 / 3.745712 (1.904734) | 4.955835 / 5.269862 (-0.314027) | 2.767946 / 4.565676 (-1.797731) | 0.149385 / 0.424275 (-0.274890) | 0.014340 / 0.007607 (0.006733) | 0.861774 / 0.226044 (0.635730) | 8.660985 / 2.268929 (6.392057) | 3.685611 / 55.444624 (-51.759014) | 2.963087 / 6.876477 (-3.913390) | 3.020746 / 2.142072 (0.878673) | 1.538908 / 4.805227 (-3.266319) | 0.285875 / 6.500664 (-6.214789) | 0.080337 / 0.075469 (0.004867) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.575155 / 1.841788 (-0.266633) | 17.548946 / 8.074308 (9.474638) | 19.954104 / 10.191392 (9.762712) | 0.242025 / 0.680424 (-0.438398) | 0.025586 / 0.534201 (-0.508615) | 0.515676 / 0.579283 (-0.063607) | 0.607035 / 0.434364 (0.172671) | 0.633597 / 0.540337 (0.093259) | 0.744577 / 1.386936 (-0.642359) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6529cada7879496bf18dd686e4d281de81d6203c \"CML watermark\")\n"
] | 2023-01-26T19:34:44 | 2023-01-26T19:47:34 | 2023-01-26T19:38:30 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5473",
"html_url": "https://github.com/huggingface/datasets/pull/5473",
"diff_url": "https://github.com/huggingface/datasets/pull/5473.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5473.patch",
"merged_at": "2023-01-26T19:38:30"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5473/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5472 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5472/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5472/comments | https://api.github.com/repos/huggingface/datasets/issues/5472/events | https://github.com/huggingface/datasets/pull/5472 | 1,558,662,251 | PR_kwDODunzps5Inlp8 | 5,472 | Release: 2.9.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008578 / 0.011353 (-0.002775) | 0.004535 / 0.011008 (-0.006473) | 0.100694 / 0.038508 (0.062186) | 0.029570 / 0.023109 (0.006460) | 0.296384 / 0.275898 (0.020486) | 0.354405 / 0.323480 (0.030925) | 0.006962 / 0.007986 (-0.001024) | 0.003405 / 0.004328 (-0.000924) | 0.077275 / 0.004250 (0.073025) | 0.036623 / 0.037052 (-0.000429) | 0.309844 / 0.258489 (0.051355) | 0.340343 / 0.293841 (0.046502) | 0.033626 / 0.128546 (-0.094920) | 0.011433 / 0.075646 (-0.064214) | 0.322659 / 0.419271 (-0.096612) | 0.040509 / 0.043533 (-0.003024) | 0.294002 / 0.255139 (0.038863) | 0.323259 / 0.283200 (0.040059) | 0.088023 / 0.141683 (-0.053660) | 1.462039 / 1.452155 (0.009885) | 1.495401 / 1.492716 (0.002684) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218614 / 0.018006 (0.200608) | 0.482359 / 0.000490 (0.481869) | 0.001216 / 0.000200 (0.001016) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023167 / 0.037411 (-0.014245) | 0.098468 / 0.014526 (0.083942) | 0.108273 / 0.176557 (-0.068284) | 0.139991 / 0.737135 (-0.597144) | 0.109032 / 0.296338 (-0.187307) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421526 / 0.215209 (0.206317) | 4.216808 / 2.077655 (2.139153) | 1.860550 / 1.504120 (0.356431) | 1.654518 / 1.541195 (0.113323) | 1.699064 / 1.468490 
(0.230574) | 0.691489 / 4.584777 (-3.893287) | 3.401885 / 3.745712 (-0.343827) | 2.792860 / 5.269862 (-2.477001) | 1.516269 / 4.565676 (-3.049408) | 0.081627 / 0.424275 (-0.342648) | 0.012556 / 0.007607 (0.004949) | 0.531535 / 0.226044 (0.305491) | 5.320752 / 2.268929 (3.051823) | 2.314502 / 55.444624 (-53.130123) | 1.967118 / 6.876477 (-4.909359) | 2.008252 / 2.142072 (-0.133821) | 0.809730 / 4.805227 (-3.995497) | 0.148112 / 6.500664 (-6.352552) | 0.064821 / 0.075469 (-0.010648) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.269754 / 1.841788 (-0.572033) | 13.884200 / 8.074308 (5.809892) | 13.914390 / 10.191392 (3.722998) | 0.150176 / 0.680424 (-0.530248) | 0.028463 / 0.534201 (-0.505738) | 0.398723 / 0.579283 (-0.180561) | 0.400433 / 0.434364 (-0.033931) | 0.485169 / 0.540337 (-0.055169) | 0.565995 / 1.386936 (-0.820941) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006479 / 0.011353 (-0.004874) | 0.004504 / 0.011008 (-0.006504) | 0.097905 / 0.038508 (0.059397) | 0.027140 / 0.023109 (0.004031) | 0.408742 / 0.275898 (0.132844) | 0.448707 / 0.323480 (0.125228) | 0.004819 / 0.007986 (-0.003166) | 0.004761 / 0.004328 (0.000433) | 0.075456 / 0.004250 (0.071205) | 0.036282 / 0.037052 (-0.000771) | 0.405961 / 0.258489 (0.147472) | 0.449411 / 0.293841 (0.155570) | 0.031159 / 0.128546 (-0.097387) | 0.011693 / 0.075646 (-0.063954) | 0.321124 / 0.419271 (-0.098147) | 0.041369 / 0.043533 (-0.002164) | 0.408070 / 0.255139 (0.152931) | 0.428704 / 0.283200 (0.145504) | 0.086839 / 0.141683 (-0.054844) | 1.477772 / 1.452155 (0.025617) | 1.555913 / 1.492716 (0.063197) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239494 / 0.018006 (0.221488) | 0.410785 / 0.000490 (0.410295) | 0.000989 / 0.000200 (0.000789) | 0.000072 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023805 / 0.037411 (-0.013607) | 0.097904 / 0.014526 (0.083378) | 0.106437 / 0.176557 (-0.070120) | 0.140555 / 0.737135 (-0.596580) | 0.107169 / 0.296338 (-0.189170) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.470233 / 0.215209 (0.255024) | 4.700451 / 2.077655 (2.622797) | 2.391712 / 1.504120 (0.887592) | 2.191125 / 1.541195 (0.649930) | 2.268924 / 1.468490 (0.800434) | 0.692421 / 4.584777 (-3.892356) | 3.387117 / 3.745712 (-0.358595) | 1.881731 / 5.269862 (-3.388130) | 1.155759 / 4.565676 (-3.409917) | 0.082040 / 0.424275 (-0.342236) | 0.012687 / 0.007607 (0.005080) | 0.567556 / 0.226044 (0.341511) | 5.701408 / 2.268929 (3.432480) | 2.864368 / 55.444624 (-52.580256) | 2.512073 / 6.876477 (-4.364404) | 2.546078 / 2.142072 (0.404005) | 0.795939 / 4.805227 (-4.009288) | 0.150078 / 6.500664 (-6.350586) | 0.067644 / 0.075469 (-0.007825) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281681 / 1.841788 (-0.560107) | 13.967107 / 8.074308 (5.892799) | 13.293648 / 10.191392 (3.102256) | 0.128027 / 0.680424 (-0.552397) | 0.016791 / 0.534201 (-0.517410) | 0.379400 / 0.579283 (-0.199884) | 0.386847 / 0.434364 (-0.047517) | 0.469859 / 0.540337 (-0.070478) | 0.564203 / 1.386936 (-0.822733) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#90832b5e33774ea8ec35ccb92ac14649a345bdbe \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008701 / 0.011353 (-0.002652) | 0.004564 / 0.011008 (-0.006444) | 0.100578 / 0.038508 (0.062070) | 0.029209 / 0.023109 (0.006100) | 0.315308 / 0.275898 (0.039410) | 0.381022 / 0.323480 (0.057542) | 0.007152 / 0.007986 (-0.000834) | 0.003511 / 0.004328 (-0.000817) | 0.078361 / 0.004250 (0.074110) | 0.035394 / 0.037052 (-0.001658) | 0.331076 / 0.258489 (0.072586) | 0.366613 / 0.293841 (0.072772) | 0.033466 / 0.128546 (-0.095080) | 0.011521 / 0.075646 (-0.064126) | 0.322178 / 0.419271 (-0.097093) | 0.040891 / 0.043533 (-0.002641) | 0.320418 / 0.255139 (0.065279) | 0.345199 / 0.283200 (0.062000) | 0.087906 / 0.141683 (-0.053777) | 1.476801 / 1.452155 (0.024646) | 1.497738 / 1.492716 (0.005022) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178094 / 0.018006 (0.160087) | 0.408317 / 0.000490 (0.407827) | 0.001825 / 0.000200 (0.001625) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022402 / 0.037411 (-0.015010) | 0.097104 / 0.014526 (0.082578) | 0.105361 / 0.176557 (-0.071196) | 0.139728 / 0.737135 (-0.597407) | 0.109613 / 0.296338 (-0.186725) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418245 / 0.215209 (0.203036) | 4.155655 / 2.077655 (2.078000) | 1.865892 / 1.504120 (0.361772) | 1.659003 / 1.541195 (0.117809) | 1.725649 / 1.468490 
(0.257159) | 0.688733 / 4.584777 (-3.896044) | 3.323529 / 3.745712 (-0.422184) | 1.867807 / 5.269862 (-3.402054) | 1.157740 / 4.565676 (-3.407936) | 0.081947 / 0.424275 (-0.342329) | 0.012471 / 0.007607 (0.004864) | 0.529333 / 0.226044 (0.303288) | 5.284898 / 2.268929 (3.015970) | 2.321741 / 55.444624 (-53.122883) | 1.975683 / 6.876477 (-4.900794) | 2.029691 / 2.142072 (-0.112381) | 0.810212 / 4.805227 (-3.995015) | 0.148185 / 6.500664 (-6.352479) | 0.064594 / 0.075469 (-0.010875) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.183391 / 1.841788 (-0.658396) | 13.574760 / 8.074308 (5.500452) | 14.215015 / 10.191392 (4.023623) | 0.150776 / 0.680424 (-0.529648) | 0.029058 / 0.534201 (-0.505143) | 0.404071 / 0.579283 (-0.175212) | 0.401289 / 0.434364 (-0.033075) | 0.490946 / 0.540337 (-0.049392) | 0.582292 / 1.386936 (-0.804644) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006695 / 0.011353 (-0.004658) | 0.004499 / 0.011008 (-0.006510) | 0.097633 / 0.038508 (0.059125) | 0.027606 / 0.023109 (0.004496) | 0.413191 / 0.275898 (0.137293) | 0.441896 / 0.323480 (0.118416) | 0.005703 / 0.007986 (-0.002283) | 0.004608 / 0.004328 (0.000280) | 0.074392 / 0.004250 (0.070141) | 0.037966 / 0.037052 (0.000913) | 0.410736 / 0.258489 (0.152247) | 0.448581 / 0.293841 (0.154740) | 0.031594 / 0.128546 (-0.096952) | 0.011597 / 0.075646 (-0.064049) | 0.319632 / 0.419271 (-0.099639) | 0.041189 / 0.043533 (-0.002343) | 0.407120 / 0.255139 (0.151981) | 0.433416 / 0.283200 (0.150216) | 0.089932 / 0.141683 (-0.051751) | 1.453919 / 1.452155 (0.001764) | 1.545892 / 1.492716 (0.053176) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224302 / 0.018006 (0.206296) | 0.415519 / 0.000490 (0.415029) | 0.000407 / 0.000200 (0.000207) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024104 / 0.037411 (-0.013307) | 0.098202 / 0.014526 (0.083676) | 0.106416 / 0.176557 (-0.070140) | 0.141090 / 0.737135 (-0.596045) | 0.110188 / 0.296338 (-0.186150) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478252 / 0.215209 (0.263043) | 4.739684 / 2.077655 (2.662029) | 2.419040 / 1.504120 (0.914920) | 2.217705 / 1.541195 (0.676510) | 2.303288 / 1.468490 (0.834798) | 0.696682 / 4.584777 (-3.888095) | 3.401962 / 3.745712 (-0.343750) | 1.886015 / 5.269862 (-3.383846) | 1.175084 / 4.565676 (-3.390592) | 0.083064 / 0.424275 (-0.341211) | 0.012613 / 0.007607 (0.005006) | 0.579105 / 0.226044 (0.353060) | 5.792119 / 2.268929 (3.523191) | 2.889778 / 55.444624 (-52.554846) | 2.537438 / 6.876477 (-4.339039) | 2.574814 / 2.142072 (0.432741) | 0.803438 / 4.805227 (-4.001789) | 0.151912 / 6.500664 (-6.348752) | 0.068291 / 0.075469 (-0.007178) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.286002 / 1.841788 (-0.555786) | 14.179443 / 8.074308 (6.105135) | 13.443939 / 10.191392 (3.252547) | 0.152427 / 0.680424 (-0.527996) | 0.017248 / 0.534201 (-0.516953) | 0.378734 / 0.579283 (-0.200549) | 0.382276 / 0.434364 (-0.052087) | 0.465323 / 0.540337 (-0.075014) | 0.556454 / 1.386936 (-0.830482) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b5672a956d5de864e6f5550e493527d962d6ae55 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008675 / 0.011353 (-0.002678) | 0.004537 / 0.011008 (-0.006471) | 0.100179 / 0.038508 (0.061671) | 0.029307 / 0.023109 (0.006198) | 0.294687 / 0.275898 (0.018789) | 0.356868 / 0.323480 (0.033388) | 0.006992 / 0.007986 (-0.000994) | 0.003380 / 0.004328 (-0.000949) | 0.076961 / 0.004250 (0.072710) | 0.036047 / 0.037052 (-0.001005) | 0.308037 / 0.258489 (0.049548) | 0.341089 / 0.293841 (0.047248) | 0.033416 / 0.128546 (-0.095131) | 0.011534 / 0.075646 (-0.064112) | 0.322976 / 0.419271 (-0.096296) | 0.040894 / 0.043533 (-0.002639) | 0.296501 / 0.255139 (0.041362) | 0.324605 / 0.283200 (0.041405) | 0.086713 / 0.141683 (-0.054970) | 1.502784 / 1.452155 (0.050630) | 1.535013 / 1.492716 (0.042297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.186647 / 0.018006 (0.168641) | 0.411003 / 0.000490 (0.410514) | 0.003594 / 0.000200 (0.003394) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023704 / 0.037411 (-0.013707) | 0.096154 / 0.014526 (0.081629) | 0.103671 / 0.176557 (-0.072885) | 0.138878 / 0.737135 (-0.598258) | 0.106947 / 0.296338 (-0.189391) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417180 / 0.215209 (0.201970) | 4.149579 / 2.077655 (2.071925) | 1.865763 / 1.504120 (0.361643) | 1.669722 / 1.541195 (0.128527) | 1.722345 / 1.468490 
(0.253855) | 0.695910 / 4.584777 (-3.888867) | 3.342266 / 3.745712 (-0.403446) | 1.884568 / 5.269862 (-3.385294) | 1.265013 / 4.565676 (-3.300664) | 0.081836 / 0.424275 (-0.342439) | 0.012371 / 0.007607 (0.004764) | 0.522997 / 0.226044 (0.296953) | 5.225434 / 2.268929 (2.956506) | 2.304701 / 55.444624 (-53.139924) | 1.949067 / 6.876477 (-4.927410) | 2.016347 / 2.142072 (-0.125725) | 0.809850 / 4.805227 (-3.995377) | 0.148396 / 6.500664 (-6.352268) | 0.063340 / 0.075469 (-0.012129) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224621 / 1.841788 (-0.617167) | 13.814223 / 8.074308 (5.739915) | 13.879728 / 10.191392 (3.688336) | 0.149530 / 0.680424 (-0.530894) | 0.028439 / 0.534201 (-0.505762) | 0.392726 / 0.579283 (-0.186557) | 0.396894 / 0.434364 (-0.037469) | 0.474395 / 0.540337 (-0.065943) | 0.569090 / 1.386936 (-0.817847) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006483 / 0.011353 (-0.004870) | 0.004527 / 0.011008 (-0.006481) | 0.098038 / 0.038508 (0.059530) | 0.027239 / 0.023109 (0.004130) | 0.441773 / 0.275898 (0.165875) | 0.471448 / 0.323480 (0.147968) | 0.005034 / 0.007986 (-0.002951) | 0.004732 / 0.004328 (0.000403) | 0.075036 / 0.004250 (0.070785) | 0.036711 / 0.037052 (-0.000341) | 0.442634 / 0.258489 (0.184145) | 0.476479 / 0.293841 (0.182638) | 0.031303 / 0.128546 (-0.097243) | 0.011642 / 0.075646 (-0.064005) | 0.320750 / 0.419271 (-0.098521) | 0.048698 / 0.043533 (0.005165) | 0.441205 / 0.255139 (0.186066) | 0.464845 / 0.283200 (0.181645) | 0.092716 / 0.141683 (-0.048967) | 1.510028 / 1.452155 (0.057874) | 1.574065 / 1.492716 (0.081349) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220756 / 0.018006 (0.202750) | 0.393971 / 0.000490 (0.393482) | 0.002506 / 0.000200 (0.002306) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024455 / 0.037411 (-0.012956) | 0.100164 / 0.014526 (0.085638) | 0.108053 / 0.176557 (-0.068504) | 0.142973 / 0.737135 (-0.594163) | 0.110108 / 0.296338 (-0.186231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.473639 / 0.215209 (0.258430) | 4.737521 / 2.077655 (2.659866) | 2.466208 / 1.504120 (0.962088) | 2.272608 / 1.541195 (0.731413) | 2.349255 / 1.468490 (0.880764) | 0.699928 / 4.584777 (-3.884849) | 3.348443 / 3.745712 (-0.397269) | 2.604611 / 5.269862 (-2.665250) | 1.543080 / 4.565676 (-3.022597) | 0.082627 / 0.424275 (-0.341648) | 0.012251 / 0.007607 (0.004644) | 0.569949 / 0.226044 (0.343905) | 5.732316 / 2.268929 (3.463388) | 2.913541 / 55.444624 (-52.531084) | 2.560584 / 6.876477 (-4.315892) | 2.615192 / 2.142072 (0.473120) | 0.803822 / 4.805227 (-4.001406) | 0.150821 / 6.500664 (-6.349843) | 0.067128 / 0.075469 (-0.008341) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272278 / 1.841788 (-0.569510) | 13.783339 / 8.074308 (5.709030) | 13.243601 / 10.191392 (3.052209) | 0.136421 / 0.680424 (-0.544003) | 0.016565 / 0.534201 (-0.517636) | 0.381102 / 0.579283 (-0.198181) | 0.386166 / 0.434364 (-0.048197) | 0.474249 / 0.540337 (-0.066089) | 0.566826 / 1.386936 (-0.820110) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b5672a956d5de864e6f5550e493527d962d6ae55 \"CML watermark\")\n"
] | 2023-01-26T19:29:42 | 2023-01-26T19:40:44 | 2023-01-26T19:33:00 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5472",
"html_url": "https://github.com/huggingface/datasets/pull/5472",
"diff_url": "https://github.com/huggingface/datasets/pull/5472.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5472.patch",
"merged_at": "2023-01-26T19:33:00"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5472/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5471 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5471/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5471/comments | https://api.github.com/repos/huggingface/datasets/issues/5471/events | https://github.com/huggingface/datasets/pull/5471 | 1,558,557,545 | PR_kwDODunzps5InPA7 | 5,471 | Add num_test_batches option | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I thought this issue was resolved in my parallel `to_tf_dataset` PR! I changed the default `num_test_batches` in `_get_output_signature` to 20 and used a test batch size of 1 to maximize variance to detect shorter samples. I think it's still okay to have this PR, though - but I'd use the new value of 20 as the default!",
"@Rocketknight1 You're right - I didn't have the most recent changes to the default values. Updated now to 20! I still think it would be good to have it configurable from the `to_tf_dataset` call so the user has the option to either make it more robust if many samples are needed, or faster if only one is needed. That, and I selfishly want it for faster tests. ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010441 / 0.011353 (-0.000912) | 0.005605 / 0.011008 (-0.005404) | 0.115712 / 0.038508 (0.077204) | 0.040907 / 0.023109 (0.017797) | 0.357673 / 0.275898 (0.081775) | 0.415427 / 0.323480 (0.091947) | 0.008827 / 0.007986 (0.000842) | 0.006069 / 0.004328 (0.001740) | 0.088985 / 0.004250 (0.084735) | 0.048461 / 0.037052 (0.011409) | 0.362065 / 0.258489 (0.103576) | 0.393643 / 0.293841 (0.099802) | 0.043844 / 0.128546 (-0.084703) | 0.013757 / 0.075646 (-0.061889) | 0.390993 / 0.419271 (-0.028278) | 0.053612 / 0.043533 (0.010079) | 0.348688 / 0.255139 (0.093549) | 0.377818 / 0.283200 (0.094619) | 0.115762 / 0.141683 (-0.025920) | 1.751826 / 1.452155 (0.299672) | 1.773326 / 1.492716 (0.280609) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220668 / 0.018006 (0.202662) | 0.536830 / 0.000490 (0.536340) | 0.000467 / 0.000200 (0.000267) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031500 / 0.037411 (-0.005911) | 0.125796 / 0.014526 (0.111270) | 0.137539 / 0.176557 (-0.039017) | 0.184651 / 0.737135 (-0.552484) | 0.145707 / 0.296338 (-0.150632) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.465876 / 0.215209 (0.250667) | 4.637711 / 2.077655 (2.560056) | 2.132335 / 1.504120 (0.628215) | 1.862593 / 1.541195 (0.321398) | 1.961701 / 1.468490 
(0.493211) | 0.800551 / 4.584777 (-3.784226) | 4.453321 / 3.745712 (0.707608) | 4.291030 / 5.269862 (-0.978832) | 2.256685 / 4.565676 (-2.308991) | 0.097787 / 0.424275 (-0.326488) | 0.014116 / 0.007607 (0.006509) | 0.593395 / 0.226044 (0.367351) | 5.885774 / 2.268929 (3.616845) | 2.666224 / 55.444624 (-52.778400) | 2.276673 / 6.876477 (-4.599803) | 2.358190 / 2.142072 (0.216117) | 0.981398 / 4.805227 (-3.823829) | 0.196997 / 6.500664 (-6.303668) | 0.077020 / 0.075469 (0.001550) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.365646 / 1.841788 (-0.476142) | 17.418157 / 8.074308 (9.343849) | 15.838749 / 10.191392 (5.647357) | 0.172749 / 0.680424 (-0.507675) | 0.033711 / 0.534201 (-0.500490) | 0.513306 / 0.579283 (-0.065978) | 0.503201 / 0.434364 (0.068837) | 0.608954 / 0.540337 (0.068616) | 0.734697 / 1.386936 (-0.652239) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008749 / 0.011353 (-0.002604) | 0.005738 / 0.011008 (-0.005270) | 0.084946 / 0.038508 (0.046438) | 0.040386 / 0.023109 (0.017277) | 0.398698 / 0.275898 (0.122800) | 0.435843 / 0.323480 (0.112363) | 0.006812 / 0.007986 (-0.001174) | 0.004567 / 0.004328 (0.000239) | 0.085857 / 0.004250 (0.081607) | 0.054791 / 0.037052 (0.017738) | 0.400381 / 0.258489 (0.141892) | 0.460313 / 0.293841 (0.166472) | 0.042299 / 0.128546 (-0.086247) | 0.014128 / 0.075646 (-0.061519) | 0.100497 / 0.419271 (-0.318775) | 0.058356 / 0.043533 (0.014823) | 0.399774 / 0.255139 (0.144635) | 0.428210 / 0.283200 (0.145011) | 0.122084 / 0.141683 (-0.019598) | 1.683519 / 1.452155 (0.231365) | 1.798024 / 1.492716 (0.305307) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255058 / 0.018006 (0.237051) | 0.488831 / 0.000490 (0.488342) | 0.008349 / 0.000200 (0.008149) | 0.000183 / 0.000054 (0.000129) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034870 / 0.037411 (-0.002541) | 0.131818 / 0.014526 (0.117292) | 0.143607 / 0.176557 (-0.032949) | 0.197413 / 0.737135 (-0.539722) | 0.148970 / 0.296338 (-0.147368) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492831 / 0.215209 (0.277622) | 4.963085 / 2.077655 (2.885430) | 2.367803 / 1.504120 (0.863683) | 2.145535 / 1.541195 (0.604340) | 2.289452 / 1.468490 (0.820962) | 0.812691 / 4.584777 (-3.772086) | 4.554068 / 3.745712 (0.808356) | 2.377126 / 5.269862 (-2.892735) | 1.537243 / 4.565676 (-3.028433) | 0.099742 / 0.424275 (-0.324534) | 0.014757 / 0.007607 (0.007149) | 0.628714 / 0.226044 (0.402670) | 6.240197 / 2.268929 (3.971268) | 2.961929 / 55.444624 (-52.482696) | 2.533436 / 6.876477 (-4.343040) | 2.642619 / 2.142072 (0.500547) | 0.976002 / 4.805227 (-3.829225) | 0.197912 / 6.500664 (-6.302752) | 0.078767 / 0.075469 (0.003297) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.522863 / 1.841788 (-0.318925) | 18.210504 / 8.074308 (10.136196) | 15.664172 / 10.191392 (5.472780) | 0.178510 / 0.680424 (-0.501914) | 0.020852 / 0.534201 (-0.513349) | 0.501757 / 0.579283 (-0.077526) | 0.496542 / 0.434364 (0.062178) | 0.624958 / 0.540337 (0.084620) | 0.746960 / 1.386936 (-0.639976) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#da7f09ed65411c5941de45c372a8aa8d5e55b431 \"CML watermark\")\n"
] | 2023-01-26T18:09:40 | 2023-01-27T18:16:45 | 2023-01-27T18:08:36 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5471",
"html_url": "https://github.com/huggingface/datasets/pull/5471",
"diff_url": "https://github.com/huggingface/datasets/pull/5471.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5471.patch",
"merged_at": "2023-01-27T18:08:36"
} | `to_tf_dataset` calls can be very costly because of the number of test batches drawn during `_get_output_signature`. The test batches are drawn in order to estimate the shapes when creating the TensorFlow dataset. This is necessary when the shapes can be irregular, but not in cases when the tensor shapes are the same across all samples. This PR adds an option to change the number of batches drawn, so the user can speed this conversion up.
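For context, the shape estimation works roughly like the following sketch (a simplified illustration, not the actual `_get_output_signature` implementation; `draw_batch` is a hypothetical stand-in for the collated-batch sampler, and the sketch assumes every batch yields the same columns with the same rank). Each extra test batch is one more chance to observe an axis varying across samples, which is exactly the robustness/speed trade-off that `num_test_batches` exposes:
```python
import numpy as np
import tensorflow as tf

def infer_output_signature(draw_batch, num_test_batches=20):
    # Sample a few collated batches and record each column's shape and dtype.
    observed, dtypes = {}, {}
    for _ in range(num_test_batches):
        for name, value in draw_batch().items():
            arr = np.asarray(value)
            observed.setdefault(name, []).append(arr.shape)
            dtypes[name] = arr.dtype
    # Any axis whose size differs across the sampled batches is marked as
    # dynamic (None); axes that never varied keep their fixed size.
    signature = {}
    for name, shapes in observed.items():
        dims = tuple(
            dim if all(s[axis] == dim for s in shapes) else None
            for axis, dim in enumerate(shapes[0])
        )
        signature[name] = tf.TensorSpec(shape=dims, dtype=tf.as_dtype(dtypes[name]))
    return signature
```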
Running the following and modifying `num_test_batches`:
```python
import time
from datasets import load_dataset
from transformers import DefaultDataCollator
data_collator = DefaultDataCollator()
dataset = load_dataset("beans")
dataset = dataset["train"].with_format("np")
start = time.time()
dataset = dataset.to_tf_dataset(
columns=["image"],
label_cols=["label"],
batch_size=8,
collate_fn=data_collator,
num_test_batches=NUM_TEST_BATCHES,
)
end = time.time()
print(end - start)
```
NUM_TEST_BATCHES=200: 0.8197s
NUM_TEST_BATCHES=50: 0.3070s
NUM_TEST_BATCHES=2: 0.1417s
NUM_TEST_BATCHES=1: 0.1352s | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5471/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5470/comments | https://api.github.com/repos/huggingface/datasets/issues/5470/events | https://github.com/huggingface/datasets/pull/5470 | 1,558,542,611 | PR_kwDODunzps5InLw9 | 5,470 | Update dataset card creation | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI failure is unrelated to your PR - feel free to merge :)",
"Haha thanks, you read my mind :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008332 / 0.011353 (-0.003021) | 0.004556 / 0.011008 (-0.006452) | 0.102239 / 0.038508 (0.063731) | 0.029332 / 0.023109 (0.006222) | 0.296189 / 0.275898 (0.020291) | 0.355746 / 0.323480 (0.032266) | 0.007705 / 0.007986 (-0.000281) | 0.003488 / 0.004328 (-0.000840) | 0.079142 / 0.004250 (0.074891) | 0.034980 / 0.037052 (-0.002073) | 0.307460 / 0.258489 (0.048971) | 0.345944 / 0.293841 (0.052103) | 0.033815 / 0.128546 (-0.094731) | 0.011603 / 0.075646 (-0.064044) | 0.322097 / 0.419271 (-0.097175) | 0.043753 / 0.043533 (0.000220) | 0.296706 / 0.255139 (0.041567) | 0.323195 / 0.283200 (0.039996) | 0.092295 / 0.141683 (-0.049388) | 1.542556 / 1.452155 (0.090401) | 1.571896 / 1.492716 (0.079180) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191075 / 0.018006 (0.173069) | 0.407394 / 0.000490 (0.406905) | 0.002033 / 0.000200 (0.001833) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023175 / 0.037411 (-0.014236) | 0.094774 / 0.014526 (0.080248) | 0.105782 / 0.176557 (-0.070775) | 0.146608 / 0.737135 (-0.590528) | 0.107519 / 0.296338 (-0.188819) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421516 / 0.215209 (0.206306) | 4.201091 / 2.077655 (2.123436) | 1.880285 / 1.504120 (0.376165) | 1.676333 / 1.541195 (0.135139) | 1.734301 / 1.468490 
(0.265811) | 0.688504 / 4.584777 (-3.896273) | 3.370289 / 3.745712 (-0.375423) | 3.127661 / 5.269862 (-2.142201) | 1.562570 / 4.565676 (-3.003106) | 0.081687 / 0.424275 (-0.342588) | 0.012334 / 0.007607 (0.004727) | 0.524125 / 0.226044 (0.298080) | 5.245595 / 2.268929 (2.976667) | 2.332622 / 55.444624 (-53.112002) | 1.973212 / 6.876477 (-4.903265) | 2.006507 / 2.142072 (-0.135565) | 0.807126 / 4.805227 (-3.998101) | 0.148254 / 6.500664 (-6.352411) | 0.064240 / 0.075469 (-0.011229) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206880 / 1.841788 (-0.634907) | 13.854877 / 8.074308 (5.780569) | 13.806772 / 10.191392 (3.615380) | 0.144380 / 0.680424 (-0.536044) | 0.028492 / 0.534201 (-0.505709) | 0.393854 / 0.579283 (-0.185429) | 0.402210 / 0.434364 (-0.032154) | 0.462138 / 0.540337 (-0.078199) | 0.537480 / 1.386936 (-0.849456) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006692 / 0.011353 (-0.004661) | 0.004529 / 0.011008 (-0.006479) | 0.077925 / 0.038508 (0.039417) | 0.027824 / 0.023109 (0.004715) | 0.342288 / 0.275898 (0.066390) | 0.375071 / 0.323480 (0.051591) | 0.004889 / 0.007986 (-0.003097) | 0.003353 / 0.004328 (-0.000975) | 0.076198 / 0.004250 (0.071947) | 0.037797 / 0.037052 (0.000744) | 0.347834 / 0.258489 (0.089345) | 0.384200 / 0.293841 (0.090359) | 0.032184 / 0.128546 (-0.096362) | 0.011674 / 0.075646 (-0.063972) | 0.086242 / 0.419271 (-0.333029) | 0.044465 / 0.043533 (0.000932) | 0.341712 / 0.255139 (0.086573) | 0.366908 / 0.283200 (0.083709) | 0.091526 / 0.141683 (-0.050156) | 1.495798 / 1.452155 (0.043643) | 1.571700 / 1.492716 (0.078984) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221962 / 0.018006 (0.203955) | 0.393095 / 0.000490 (0.392605) | 0.000385 / 0.000200 (0.000185) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024365 / 0.037411 (-0.013046) | 0.099278 / 0.014526 (0.084753) | 0.105940 / 0.176557 (-0.070617) | 0.141334 / 0.737135 (-0.595802) | 0.110898 / 0.296338 (-0.185440) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446150 / 0.215209 (0.230941) | 4.471441 / 2.077655 (2.393786) | 2.124864 / 1.504120 (0.620744) | 1.909950 / 1.541195 (0.368755) | 1.970085 / 1.468490 (0.501595) | 0.706711 / 4.584777 (-3.878066) | 3.380336 / 3.745712 (-0.365376) | 1.866106 / 5.269862 (-3.403756) | 1.160657 / 4.565676 (-3.405019) | 0.082786 / 0.424275 (-0.341489) | 0.012470 / 0.007607 (0.004862) | 0.537620 / 0.226044 (0.311575) | 5.390588 / 2.268929 (3.121659) | 2.539137 / 55.444624 (-52.905488) | 2.191867 / 6.876477 (-4.684610) | 2.236212 / 2.142072 (0.094139) | 0.810756 / 4.805227 (-3.994471) | 0.150933 / 6.500664 (-6.349731) | 0.066141 / 0.075469 (-0.009328) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.271595 / 1.841788 (-0.570193) | 13.840013 / 8.074308 (5.765705) | 13.334443 / 10.191392 (3.143051) | 0.150096 / 0.680424 (-0.530328) | 0.016919 / 0.534201 (-0.517282) | 0.375534 / 0.579283 (-0.203749) | 0.387203 / 0.434364 (-0.047161) | 0.463500 / 0.540337 (-0.076838) | 0.553496 / 1.386936 (-0.833440) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5f2e47230c13f977bcebdc4380623f59da67a75f \"CML watermark\")\n"
] | 2023-01-26T17:57:51 | 2023-01-27T16:27:00 | 2023-01-27T16:20:10 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5470",
"html_url": "https://github.com/huggingface/datasets/pull/5470",
"diff_url": "https://github.com/huggingface/datasets/pull/5470.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5470.patch",
"merged_at": "2023-01-27T16:20:10"
} | Encourages users to create a dataset card on the Hub directly with the new metadata ui + import dataset card template instead of telling users to manually create and upload one. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5470/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5469 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5469/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5469/comments | https://api.github.com/repos/huggingface/datasets/issues/5469/events | https://github.com/huggingface/datasets/pull/5469 | 1,558,346,906 | PR_kwDODunzps5Imhk2 | 5,469 | Remove deprecated `shard_size` arg from `.push_to_hub()` | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008272 / 0.011353 (-0.003081) | 0.004494 / 0.011008 (-0.006515) | 0.100764 / 0.038508 (0.062256) | 0.028741 / 0.023109 (0.005632) | 0.309020 / 0.275898 (0.033122) | 0.354184 / 0.323480 (0.030704) | 0.007455 / 0.007986 (-0.000531) | 0.003377 / 0.004328 (-0.000951) | 0.078472 / 0.004250 (0.074222) | 0.034719 / 0.037052 (-0.002333) | 0.312787 / 0.258489 (0.054298) | 0.342878 / 0.293841 (0.049037) | 0.033326 / 0.128546 (-0.095221) | 0.011519 / 0.075646 (-0.064127) | 0.323556 / 0.419271 (-0.095716) | 0.039929 / 0.043533 (-0.003604) | 0.304627 / 0.255139 (0.049488) | 0.322876 / 0.283200 (0.039677) | 0.086410 / 0.141683 (-0.055273) | 1.502607 / 1.452155 (0.050453) | 1.577953 / 1.492716 (0.085237) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192861 / 0.018006 (0.174855) | 0.406008 / 0.000490 (0.405519) | 0.001075 / 0.000200 (0.000875) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023351 / 0.037411 (-0.014060) | 0.096086 / 0.014526 (0.081561) | 0.104641 / 0.176557 (-0.071915) | 0.141940 / 0.737135 (-0.595195) | 0.109266 / 0.296338 (-0.187073) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416496 / 0.215209 (0.201287) | 4.161581 / 2.077655 (2.083926) | 1.815357 / 1.504120 (0.311238) | 1.609536 / 1.541195 (0.068341) | 1.654105 / 1.468490 
(0.185615) | 0.693947 / 4.584777 (-3.890830) | 3.349029 / 3.745712 (-0.396683) | 1.883968 / 5.269862 (-3.385893) | 1.287988 / 4.565676 (-3.277688) | 0.081765 / 0.424275 (-0.342511) | 0.012373 / 0.007607 (0.004766) | 0.517186 / 0.226044 (0.291142) | 5.200892 / 2.268929 (2.931964) | 2.247414 / 55.444624 (-53.197211) | 1.910601 / 6.876477 (-4.965876) | 1.965407 / 2.142072 (-0.176666) | 0.814386 / 4.805227 (-3.990841) | 0.149295 / 6.500664 (-6.351369) | 0.064667 / 0.075469 (-0.010802) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.247258 / 1.841788 (-0.594530) | 13.837355 / 8.074308 (5.763047) | 13.850454 / 10.191392 (3.659062) | 0.136078 / 0.680424 (-0.544346) | 0.028322 / 0.534201 (-0.505878) | 0.391394 / 0.579283 (-0.187889) | 0.407494 / 0.434364 (-0.026870) | 0.473784 / 0.540337 (-0.066554) | 0.562953 / 1.386936 (-0.823983) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006559 / 0.011353 (-0.004794) | 0.004546 / 0.011008 (-0.006462) | 0.099527 / 0.038508 (0.061019) | 0.027428 / 0.023109 (0.004319) | 0.344276 / 0.275898 (0.068377) | 0.377897 / 0.323480 (0.054417) | 0.004913 / 0.007986 (-0.003072) | 0.003338 / 0.004328 (-0.000990) | 0.077589 / 0.004250 (0.073339) | 0.038819 / 0.037052 (0.001766) | 0.343165 / 0.258489 (0.084676) | 0.386228 / 0.293841 (0.092387) | 0.031753 / 0.128546 (-0.096794) | 0.011756 / 0.075646 (-0.063890) | 0.322537 / 0.419271 (-0.096735) | 0.049865 / 0.043533 (0.006332) | 0.340493 / 0.255139 (0.085354) | 0.372179 / 0.283200 (0.088980) | 0.099669 / 0.141683 (-0.042013) | 1.487841 / 1.452155 (0.035686) | 1.527400 / 1.492716 (0.034683) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.180782 / 0.018006 (0.162776) | 0.393494 / 0.000490 (0.393004) | 0.003004 / 0.000200 (0.002804) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024997 / 0.037411 (-0.012415) | 0.098232 / 0.014526 (0.083707) | 0.107869 / 0.176557 (-0.068688) | 0.141042 / 0.737135 (-0.596093) | 0.109551 / 0.296338 (-0.186787) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.477115 / 0.215209 (0.261906) | 4.783928 / 2.077655 (2.706273) | 2.435725 / 1.504120 (0.931605) | 2.233111 / 1.541195 (0.691916) | 2.341097 / 1.468490 (0.872607) | 0.694304 / 4.584777 (-3.890473) | 3.345687 / 3.745712 (-0.400025) | 1.886932 / 5.269862 (-3.382929) | 1.155585 / 4.565676 (-3.410092) | 0.082867 / 0.424275 (-0.341408) | 0.012420 / 0.007607 (0.004813) | 0.576575 / 0.226044 (0.350530) | 5.777691 / 2.268929 (3.508762) | 2.882219 / 55.444624 (-52.562405) | 2.543613 / 6.876477 (-4.332864) | 2.578939 / 2.142072 (0.436866) | 0.803143 / 4.805227 (-4.002084) | 0.151929 / 6.500664 (-6.348735) | 0.067777 / 0.075469 (-0.007693) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.282711 / 1.841788 (-0.559077) | 13.942771 / 8.074308 (5.868463) | 13.376206 / 10.191392 (3.184814) | 0.152916 / 0.680424 (-0.527508) | 0.016619 / 0.534201 (-0.517582) | 0.375141 / 0.579283 (-0.204142) | 0.381660 / 0.434364 (-0.052704) | 0.465090 / 0.540337 (-0.075247) | 0.555068 / 1.386936 (-0.831868) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#10a6a638e0feb955f7b607b4433ee715c30acccf \"CML watermark\")\n"
] | 2023-01-26T15:40:56 | 2023-01-26T17:37:51 | 2023-01-26T17:30:59 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5469",
"html_url": "https://github.com/huggingface/datasets/pull/5469",
"diff_url": "https://github.com/huggingface/datasets/pull/5469.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5469.patch",
"merged_at": "2023-01-26T17:30:59"
} | The docstrings say that it was supposed to be deprecated since version 2.4.0, can we remove it? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5469/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5468 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5468/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5468/comments | https://api.github.com/repos/huggingface/datasets/issues/5468/events | https://github.com/huggingface/datasets/issues/5468 | 1,558,066,625 | I_kwDODunzps5c3jXB | 5,468 | Allow opposite of remove_columns on Dataset and DatasetDict | {
"login": "hollance",
"id": 346853,
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hollance",
"html_url": "https://github.com/hollance",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"repos_url": "https://api.github.com/users/hollance/repos",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Hi! I agree it would be nice to have a method like that. Instead of `keep_columns`, we can name it `select_columns` to be more aligned with PyArrow's naming convention (`pa.Table.select`).",
"Hi, I am a newbie to open source and would like to contribute. @mariosasko can I take up this issue ?",
"Hey, I also want to work on this issue I am a newbie to open source. ",
"This sounds related to https://github.com/huggingface/datasets/issues/5474\r\n\r\nI'm fine with `select_columns`, or we could also override `select` to also accept a list of columns maybe ?",
"@lhoestq, I am planning to add a member function to the dataset class to perform the selection operation. Do you think its the right way to proceed? or there is a better option ?",
"Unless @mariosasko thinks otherwise, I think it can go in `Dataset.select()` :)\r\nThough some parameters like keep_in_memory, indices_cache_file_name or writer_batch_size wouldn't when selecting columns, so we would need to update the docstring as well",
"If someone wants to give it a shot, feel free to comment `#self-assign` and it will assign the issue to you.\r\n\r\nFeel free to ping us here if you have questions or if we can help :)",
"I would rather have this functionality as a separate method. IMO it's always better to be explicit than to have an API where a single method can do different/uncorrelated things (somewhat reminds me of Pandas, and there is probably a good reason why PyArrow is more rigid in this aspect).",
"In the end I also think it would be nice to have it as a separate method, this way we can also have it for `IterableDataset` (which can't have `select` for indices)"
] | 2023-01-26T12:28:09 | 2023-02-13T09:59:38 | 2023-02-13T09:59:38 | NONE | null | null | null | ### Feature request
In this blog post https://huggingface.co/blog/audio-datasets, I noticed the following code:
```python
COLUMNS_TO_KEEP = ["text", "audio"]
all_columns = gigaspeech["train"].column_names
columns_to_remove = set(all_columns) - set(COLUMNS_TO_KEEP)
gigaspeech = gigaspeech.remove_columns(columns_to_remove)
```
This kind of thing happens a lot when you don't need to keep all columns from the dataset. It would be more convenient (and less error-prone) if you could just write:
```python
gigaspeech = gigaspeech.keep_columns(["text", "audio"])
```
Internally, `keep_columns` could still call `remove_columns`, but it expresses more clearly what the user's intent is.
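A minimal sketch of what that could look like, built only on the public `column_names` attribute and the existing `remove_columns` method (the `keep_columns` helper below is hypothetical, not part of the library):
```python
from datasets import Dataset

def keep_columns(dataset: Dataset, columns_to_keep: list) -> Dataset:
    # Delegate to remove_columns with the complement of the requested columns.
    columns_to_remove = [col for col in dataset.column_names if col not in columns_to_keep]
    return dataset.remove_columns(columns_to_remove)

# The gigaspeech example above would then collapse to a single call per split:
# gigaspeech["train"] = keep_columns(gigaspeech["train"], ["text", "audio"])
```
A `DatasetDict` variant could apply the same logic to every split.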
### Motivation
Less code to write for the user of the dataset.
### Your contribution
- | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5468/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5467 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5467/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5467/comments | https://api.github.com/repos/huggingface/datasets/issues/5467/events | https://github.com/huggingface/datasets/pull/5467 | 1,557,898,273 | PR_kwDODunzps5IlAlk | 5,467 | Fix conda command in readme | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"ah didn't read well - it's all good",
"or maybe it isn't ? `-c huggingface -c conda-forge` installs from HF or from conda-forge ?",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010196 / 0.011353 (-0.001157) | 0.005531 / 0.011008 (-0.005477) | 0.104601 / 0.038508 (0.066093) | 0.041322 / 0.023109 (0.018213) | 0.302080 / 0.275898 (0.026182) | 0.396579 / 0.323480 (0.073099) | 0.008874 / 0.007986 (0.000888) | 0.004482 / 0.004328 (0.000153) | 0.077487 / 0.004250 (0.073236) | 0.051113 / 0.037052 (0.014061) | 0.321850 / 0.258489 (0.063361) | 0.354946 / 0.293841 (0.061105) | 0.039822 / 0.128546 (-0.088724) | 0.012622 / 0.075646 (-0.063024) | 0.337898 / 0.419271 (-0.081374) | 0.048372 / 0.043533 (0.004839) | 0.299646 / 0.255139 (0.044507) | 0.321113 / 0.283200 (0.037914) | 0.114780 / 0.141683 (-0.026903) | 1.475750 / 1.452155 (0.023595) | 1.496307 / 1.492716 (0.003590) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.311443 / 0.018006 (0.293437) | 0.567268 / 0.000490 (0.566778) | 0.006149 / 0.000200 (0.005950) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029407 / 0.037411 (-0.008004) | 0.118611 / 0.014526 (0.104085) | 0.122247 / 0.176557 (-0.054309) | 0.164770 / 0.737135 (-0.572365) | 0.128561 / 0.296338 (-0.167778) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399185 / 0.215209 (0.183976) | 3.972995 / 2.077655 (1.895340) | 1.764638 / 1.504120 (0.260518) | 1.574058 / 1.541195 (0.032863) | 1.741695 / 1.468490 
(0.273205) | 0.705664 / 4.584777 (-3.879113) | 3.915399 / 3.745712 (0.169686) | 2.310154 / 5.269862 (-2.959707) | 1.554067 / 4.565676 (-3.011610) | 0.087133 / 0.424275 (-0.337142) | 0.012393 / 0.007607 (0.004786) | 0.510758 / 0.226044 (0.284713) | 5.114906 / 2.268929 (2.845977) | 2.304473 / 55.444624 (-53.140152) | 1.960768 / 6.876477 (-4.915709) | 2.092263 / 2.142072 (-0.049810) | 0.867973 / 4.805227 (-3.937255) | 0.170000 / 6.500664 (-6.330664) | 0.068358 / 0.075469 (-0.007111) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211022 / 1.841788 (-0.630765) | 16.777269 / 8.074308 (8.702961) | 15.272659 / 10.191392 (5.081267) | 0.182149 / 0.680424 (-0.498274) | 0.029577 / 0.534201 (-0.504624) | 0.446590 / 0.579283 (-0.132693) | 0.454724 / 0.434364 (0.020360) | 0.541938 / 0.540337 (0.001601) | 0.640886 / 1.386936 (-0.746050) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008441 / 0.011353 (-0.002912) | 0.006105 / 0.011008 (-0.004904) | 0.100349 / 0.038508 (0.061841) | 0.040675 / 0.023109 (0.017565) | 0.381775 / 0.275898 (0.105877) | 0.425246 / 0.323480 (0.101767) | 0.007197 / 0.007986 (-0.000789) | 0.004972 / 0.004328 (0.000644) | 0.075346 / 0.004250 (0.071096) | 0.065339 / 0.037052 (0.028286) | 0.379340 / 0.258489 (0.120851) | 0.435646 / 0.293841 (0.141805) | 0.038891 / 0.128546 (-0.089656) | 0.013079 / 0.075646 (-0.062568) | 0.339273 / 0.419271 (-0.079999) | 0.057478 / 0.043533 (0.013945) | 0.373516 / 0.255139 (0.118377) | 0.402388 / 0.283200 (0.119189) | 0.123145 / 0.141683 (-0.018538) | 1.503765 / 1.452155 (0.051610) | 1.609797 / 1.492716 (0.117081) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.420354 / 0.018006 (0.402348) | 0.589272 / 0.000490 (0.588782) | 0.045861 / 0.000200 (0.045662) | 0.000527 / 0.000054 (0.000473) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033918 / 0.037411 (-0.003493) | 0.128041 / 0.014526 (0.113515) | 0.130274 / 0.176557 (-0.046283) | 0.180605 / 0.737135 (-0.556530) | 0.136377 / 0.296338 (-0.159962) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440343 / 0.215209 (0.225133) | 4.390264 / 2.077655 (2.312610) | 2.218738 / 1.504120 (0.714618) | 2.052399 / 1.541195 (0.511204) | 2.231912 / 1.468490 (0.763422) | 0.716805 / 4.584777 (-3.867972) | 3.909277 / 3.745712 (0.163565) | 2.302121 / 5.269862 (-2.967740) | 1.419454 / 4.565676 (-3.146222) | 0.088067 / 0.424275 (-0.336208) | 0.012994 / 0.007607 (0.005387) | 0.548267 / 0.226044 (0.322223) | 5.462973 / 2.268929 (3.194044) | 2.768414 / 55.444624 (-52.676210) | 2.489320 / 6.876477 (-4.387157) | 2.569546 / 2.142072 (0.427474) | 0.853135 / 4.805227 (-3.952092) | 0.170618 / 6.500664 (-6.330046) | 0.069908 / 0.075469 (-0.005562) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.304726 / 1.841788 (-0.537062) | 17.335977 / 8.074308 (9.261669) | 15.088319 / 10.191392 (4.896927) | 0.190893 / 0.680424 (-0.489531) | 0.018133 / 0.534201 (-0.516068) | 0.429324 / 0.579283 (-0.149959) | 0.439212 / 0.434364 (0.004848) | 0.545312 / 0.540337 (0.004975) | 0.663972 / 1.386936 (-0.722964) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e7505adc37498f5e0cb3dd4c13bbb06696afdda5 \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2023-01-26T10:03:01 | 2023-09-24T10:06:59 | 2023-01-26T18:29:37 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5467",
"html_url": "https://github.com/huggingface/datasets/pull/5467",
"diff_url": "https://github.com/huggingface/datasets/pull/5467.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5467.patch",
"merged_at": null
} | The [conda-forge channel](https://anaconda.org/conda-forge/datasets) is lagging behind (as of right now, only 2.7.1 is available), so we should recommend using the [Hugging Face channel](https://anaconda.org/HuggingFace/datasets) that we are maintaining:
```
conda install -c huggingface datasets
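# (sketch, not from the original post) to upgrade an existing install
# from the same maintained channel, standard conda usage applies:
conda update -c huggingface datasets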
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5467/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5466 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5466/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5466/comments | https://api.github.com/repos/huggingface/datasets/issues/5466/events | https://github.com/huggingface/datasets/pull/5466 | 1,557,584,845 | PR_kwDODunzps5Ij-z1 | 5,466 | remove pathlib.Path with URIs | {
"login": "jonny-cyberhaven",
"id": 121845112,
"node_id": "U_kgDOB0M1eA",
"avatar_url": "https://avatars.githubusercontent.com/u/121845112?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonny-cyberhaven",
"html_url": "https://github.com/jonny-cyberhaven",
"followers_url": "https://api.github.com/users/jonny-cyberhaven/followers",
"following_url": "https://api.github.com/users/jonny-cyberhaven/following{/other_user}",
"gists_url": "https://api.github.com/users/jonny-cyberhaven/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonny-cyberhaven/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonny-cyberhaven/subscriptions",
"organizations_url": "https://api.github.com/users/jonny-cyberhaven/orgs",
"repos_url": "https://api.github.com/users/jonny-cyberhaven/repos",
"events_url": "https://api.github.com/users/jonny-cyberhaven/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonny-cyberhaven/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks !\r\n`os.path.join` will use a backslash `\\` on windows which will also fail. You can use this instead in `load_from_disk`:\r\n```python\r\nfrom .filesystems import is_remote_filesystem\r\n\r\nis_local = not is_remote_filesystem(fs)\r\npath_join = os.path.join if is_local else posixpath.join\r\n```",
"Thank you ! I did a minor change to not have to define a new function and I ran the CI. If it's green we can merge :)",
"_The documentation is not available anymore as the PR was closed or merged._",
"> \r\n\r\n\r\n\r\n> Thank you ! I did a minor change to not have to define a new function and I ran the CI. If it's green we can merge :)\r\n\r\nlol it's a battle of +1 imports or +1 functions. LGTM, I was editing fast and swapped which branch gets os vs Path. Should be ok now 🤙",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012043 / 0.011353 (0.000690) | 0.006585 / 0.011008 (-0.004423) | 0.149007 / 0.038508 (0.110499) | 0.039514 / 0.023109 (0.016405) | 0.403893 / 0.275898 (0.127995) | 0.431252 / 0.323480 (0.107772) | 0.009218 / 0.007986 (0.001233) | 0.006108 / 0.004328 (0.001779) | 0.114666 / 0.004250 (0.110416) | 0.044962 / 0.037052 (0.007910) | 0.411592 / 0.258489 (0.153103) | 0.461561 / 0.293841 (0.167721) | 0.059958 / 0.128546 (-0.068589) | 0.029047 / 0.075646 (-0.046599) | 0.456000 / 0.419271 (0.036728) | 0.060744 / 0.043533 (0.017211) | 0.415816 / 0.255139 (0.160677) | 0.430488 / 0.283200 (0.147289) | 0.122477 / 0.141683 (-0.019205) | 1.862910 / 1.452155 (0.410755) | 1.974698 / 1.492716 (0.481981) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257230 / 0.018006 (0.239224) | 0.606854 / 0.000490 (0.606364) | 0.006175 / 0.000200 (0.005975) | 0.000099 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030533 / 0.037411 (-0.006879) | 0.130702 / 0.014526 (0.116177) | 0.143781 / 0.176557 (-0.032775) | 0.183272 / 0.737135 (-0.553863) | 0.151267 / 0.296338 (-0.145071) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.637422 / 0.215209 (0.422213) | 6.503535 / 2.077655 (4.425880) | 2.630387 / 1.504120 (1.126267) | 2.281180 / 1.541195 (0.739985) | 2.354341 / 1.468490 
(0.885851) | 1.306497 / 4.584777 (-3.278280) | 5.837184 / 3.745712 (2.091472) | 3.257198 / 5.269862 (-2.012663) | 2.050681 / 4.565676 (-2.514995) | 0.146415 / 0.424275 (-0.277860) | 0.015386 / 0.007607 (0.007779) | 0.790146 / 0.226044 (0.564102) | 8.056137 / 2.268929 (5.787209) | 3.383566 / 55.444624 (-52.061059) | 2.707620 / 6.876477 (-4.168856) | 2.714857 / 2.142072 (0.572785) | 1.520847 / 4.805227 (-3.284380) | 0.266028 / 6.500664 (-6.234636) | 0.091422 / 0.075469 (0.015953) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.656148 / 1.841788 (-0.185640) | 18.833393 / 8.074308 (10.759085) | 21.360824 / 10.191392 (11.169432) | 0.227608 / 0.680424 (-0.452816) | 0.049018 / 0.534201 (-0.485183) | 0.593418 / 0.579283 (0.014135) | 0.656690 / 0.434364 (0.222326) | 0.709171 / 0.540337 (0.168833) | 0.828226 / 1.386936 (-0.558710) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010112 / 0.011353 (-0.001241) | 0.006761 / 0.011008 (-0.004247) | 0.146723 / 0.038508 (0.108215) | 0.038451 / 0.023109 (0.015342) | 0.524267 / 0.275898 (0.248369) | 0.609484 / 0.323480 (0.286004) | 0.008502 / 0.007986 (0.000516) | 0.006964 / 0.004328 (0.002635) | 0.111396 / 0.004250 (0.107146) | 0.056839 / 0.037052 (0.019787) | 0.514649 / 0.258489 (0.256160) | 0.604212 / 0.293841 (0.310372) | 0.061410 / 0.128546 (-0.067137) | 0.020396 / 0.075646 (-0.055250) | 0.505026 / 0.419271 (0.085754) | 0.067280 / 0.043533 (0.023747) | 0.522249 / 0.255139 (0.267110) | 0.559484 / 0.283200 (0.276284) | 0.120943 / 0.141683 (-0.020740) | 2.124323 / 1.452155 (0.672169) | 2.153397 / 1.492716 (0.660681) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216614 / 0.018006 (0.198608) | 0.594181 / 0.000490 (0.593692) | 0.004079 / 0.000200 (0.003879) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036925 / 0.037411 (-0.000486) | 0.131322 / 0.014526 (0.116797) | 0.148542 / 0.176557 (-0.028015) | 0.196045 / 0.737135 (-0.541090) | 0.156867 / 0.296338 (-0.139472) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.669722 / 0.215209 (0.454513) | 6.858856 / 2.077655 (4.781202) | 3.093969 / 1.504120 (1.589849) | 2.667385 / 1.541195 (1.126190) | 2.797192 / 1.468490 (1.328702) | 1.334759 / 4.584777 (-3.250018) | 6.024861 / 3.745712 (2.279149) | 3.257779 / 5.269862 (-2.012083) | 2.202816 / 4.565676 (-2.362860) | 0.147617 / 0.424275 (-0.276658) | 0.015451 / 0.007607 (0.007844) | 0.887015 / 0.226044 (0.660970) | 8.371288 / 2.268929 (6.102360) | 3.807451 / 55.444624 (-51.637173) | 3.079483 / 6.876477 (-3.796994) | 3.103321 / 2.142072 (0.961249) | 1.520272 / 4.805227 (-3.284955) | 0.273079 / 6.500664 (-6.227585) | 0.088613 / 0.075469 (0.013143) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.818913 / 1.841788 (-0.022875) | 19.274269 / 8.074308 (11.199960) | 19.871784 / 10.191392 (9.680392) | 0.250388 / 0.680424 (-0.430036) | 0.030562 / 0.534201 (-0.503638) | 0.560566 / 0.579283 (-0.018717) | 0.664701 / 0.434364 (0.230337) | 0.714513 / 0.540337 (0.174176) | 0.827227 / 1.386936 (-0.559710) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f7a9bf823ea41b85313c0392388ec68b3033ef29 \"CML watermark\")\n"
] | 2023-01-26T03:25:45 | 2023-01-26T17:08:57 | 2023-01-26T16:59:11 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5466",
"html_url": "https://github.com/huggingface/datasets/pull/5466",
"diff_url": "https://github.com/huggingface/datasets/pull/5466.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5466.patch",
"merged_at": "2023-01-26T16:59:11"
} | `pathlib` normalizes `"//"` to `"/"`, which mangles remote URIs (e.g. the `s3://` scheme separator) and causes retry errors when downloading from cloud storage.
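A minimal illustration of the normalization (a sketch assuming a POSIX system; the bucket/key names are placeholders, not from the original report):
```python
import posixpath
from pathlib import Path

# pathlib collapses the double slash in the URI scheme separator:
print(str(Path("s3://bucket/key")))          # -> s3:/bucket/key  (scheme broken)

# posixpath.join keeps the URI intact and uses '/' on every platform:
print(posixpath.join("s3://bucket", "key"))  # -> s3://bucket/key
``` | {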
"url": "https://api.github.com/repos/huggingface/datasets/issues/5466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5466/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5465 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5465/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5465/comments | https://api.github.com/repos/huggingface/datasets/issues/5465/events | https://github.com/huggingface/datasets/issues/5465 | 1,557,510,618 | I_kwDODunzps5c1bna | 5,465 | audiofolder creates empty dataset even though the dataset passed in follows the correct structure | {
"login": "jcho19",
"id": 107211437,
"node_id": "U_kgDOBmPqrQ",
"avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jcho19",
"html_url": "https://github.com/jcho19",
"followers_url": "https://api.github.com/users/jcho19/followers",
"following_url": "https://api.github.com/users/jcho19/following{/other_user}",
"gists_url": "https://api.github.com/users/jcho19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jcho19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcho19/subscriptions",
"organizations_url": "https://api.github.com/users/jcho19/orgs",
"repos_url": "https://api.github.com/users/jcho19/repos",
"events_url": "https://api.github.com/users/jcho19/events{/privacy}",
"received_events_url": "https://api.github.com/users/jcho19/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2023-01-26T01:45:45 | 2023-01-26T08:48:45 | 2023-01-26T08:48:45 | NONE | null | null | null | ### Describe the bug
The structure of my dataset folder called "my_dataset" is: a "data" subfolder plus a "metadata.csv" file at the top level.
The data folder contains all of the mp3 files, and metadata.csv consists of file locations like 'data/...mp3' and their transcriptions. There are 400+ mp3 files and corresponding transcriptions in my dataset.
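(For reference, a minimal sketch of the layout `audiofolder` expects in `metadata.csv` — it looks for a `file_name` column with paths relative to the metadata file; the rows below are illustrative placeholders, not the reporter's actual data:)
```csv
file_name,transcription
data/clip_0001.mp3,first example transcription
data/clip_0002.mp3,second example transcription
```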
When I run the following:
ds = load_dataset("audiofolder", data_dir="my_dataset")
I get:
Using custom data configuration default-...
Downloading and preparing dataset audiofolder/default to /...
Downloading data files: 0%| | 0/2 [00:00<?, ?it/s]
Downloading data files: 0it [00:00, ?it/s]
Extracting data files: 0it [00:00, ?it/s]
Generating train split: 0 examples [00:00, ? examples/s]
Dataset audiofolder downloaded and prepared to /.... Subsequent calls will reuse this data.
0%| | 0/1 [00:00<?, ?it/s]
DatasetDict({
train: Dataset({
features: ['audio', 'transcription'],
num_rows: 1
})
})
### Steps to reproduce the bug
Create a dataset folder called 'my_dataset' with a subfolder called 'data' that holds the mp3 files. Also, create a metadata.csv that lists file locations like 'data/...mp3' and their corresponding transcriptions.
Run:
ds = load_dataset("audiofolder", data_dir="my_dataset")
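A quick sanity check on the result (a sketch restating the reporter's observation; `my_dataset` as above):
```python
from datasets import load_dataset

ds = load_dataset("audiofolder", data_dir="my_dataset")
print(ds["train"].num_rows)  # expected: 400+, observed: 1
```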
### Expected behavior
It should generate a dataset with 400+ rows (one per mp3 file), not a single row.
### Environment info
Run in a Jupyter notebook | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5465/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5464 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5464/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5464/comments | https://api.github.com/repos/huggingface/datasets/issues/5464/events | https://github.com/huggingface/datasets/issues/5464 | 1,557,462,104 | I_kwDODunzps5c1PxY | 5,464 | NonMatchingChecksumError for hendrycks_test | {
"login": "sarahwie",
"id": 8027676,
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sarahwie",
"html_url": "https://github.com/sarahwie",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting, @sarahwie.\r\n\r\nPlease note this issue was already fixed in `datasets` 2.6.0 version:\r\n- #5040\r\n\r\nIf you update your `datasets` version, you will be able to load the dataset:\r\n```\r\npip install -U datasets\r\n```",
"Oops, missed that I needed to upgrade. Thanks!"
] | 2023-01-26T00:43:23 | 2023-01-27T05:44:31 | 2023-01-26T07:41:58 | NONE | null | null | null | ### Describe the bug
The checksum of the file has likely changed on the remote host.
### Steps to reproduce the bug
`dataset = nlp.load_dataset("hendrycks_test", "anatomy")`
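(Per the maintainer's reply in the comments, this was fixed in `datasets` 2.6.0; a sketch of the call after upgrading, using the current `datasets` import rather than the legacy `nlp` alias:)
```python
# pip install -U "datasets>=2.6.0"
from datasets import load_dataset

dataset = load_dataset("hendrycks_test", "anatomy")
```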
### Expected behavior
No error is thrown.
### Environment info
- `datasets` version: 2.2.1
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.5.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5464/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5463 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5463/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5463/comments | https://api.github.com/repos/huggingface/datasets/issues/5463/events | https://github.com/huggingface/datasets/pull/5463 | 1,557,021,041 | PR_kwDODunzps5IiGWb | 5,463 | Imagefolder docs: mention support of CSV and ZIP | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009559 / 0.011353 (-0.001794) | 0.006425 / 0.011008 (-0.004583) | 0.112951 / 0.038508 (0.074443) | 0.030835 / 0.023109 (0.007725) | 0.313846 / 0.275898 (0.037948) | 0.352780 / 0.323480 (0.029301) | 0.007740 / 0.007986 (-0.000246) | 0.006843 / 0.004328 (0.002515) | 0.082632 / 0.004250 (0.078382) | 0.039704 / 0.037052 (0.002652) | 0.328526 / 0.258489 (0.070037) | 0.369162 / 0.293841 (0.075321) | 0.047603 / 0.128546 (-0.080943) | 0.015834 / 0.075646 (-0.059812) | 0.385912 / 0.419271 (-0.033360) | 0.053838 / 0.043533 (0.010306) | 0.325778 / 0.255139 (0.070639) | 0.361863 / 0.283200 (0.078663) | 0.097388 / 0.141683 (-0.044295) | 1.510132 / 1.452155 (0.057978) | 1.555980 / 1.492716 (0.063264) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210792 / 0.018006 (0.192786) | 0.507270 / 0.000490 (0.506780) | 0.002383 / 0.000200 (0.002183) | 0.000095 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023057 / 0.037411 (-0.014355) | 0.103471 / 0.014526 (0.088945) | 0.111671 / 0.176557 (-0.064885) | 0.145665 / 0.737135 (-0.591470) | 0.131447 / 0.296338 (-0.164891) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.502979 / 0.215209 (0.287770) | 5.111471 / 2.077655 (3.033816) | 2.093604 / 1.504120 (0.589484) | 1.761342 / 1.541195 (0.220148) | 1.919485 / 1.468490 
(0.450995) | 1.065672 / 4.584777 (-3.519105) | 5.109746 / 3.745712 (1.364034) | 4.694027 / 5.269862 (-0.575835) | 2.438401 / 4.565676 (-2.127275) | 0.133579 / 0.424275 (-0.290696) | 0.012355 / 0.007607 (0.004748) | 0.669077 / 0.226044 (0.443033) | 6.533905 / 2.268929 (4.264976) | 2.698832 / 55.444624 (-52.745792) | 2.146377 / 6.876477 (-4.730100) | 2.220563 / 2.142072 (0.078491) | 1.287855 / 4.805227 (-3.517372) | 0.238221 / 6.500664 (-6.262443) | 0.071426 / 0.075469 (-0.004043) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.332659 / 1.841788 (-0.509129) | 15.610100 / 8.074308 (7.535791) | 16.691117 / 10.191392 (6.499725) | 0.226338 / 0.680424 (-0.454086) | 0.039964 / 0.534201 (-0.494237) | 0.462911 / 0.579283 (-0.116372) | 0.575923 / 0.434364 (0.141560) | 0.592583 / 0.540337 (0.052245) | 0.658552 / 1.386936 (-0.728384) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008388 / 0.011353 (-0.002965) | 0.005360 / 0.011008 (-0.005648) | 0.104574 / 0.038508 (0.066066) | 0.030109 / 0.023109 (0.007000) | 0.389294 / 0.275898 (0.113396) | 0.424813 / 0.323480 (0.101333) | 0.006629 / 0.007986 (-0.001356) | 0.005222 / 0.004328 (0.000893) | 0.080157 / 0.004250 (0.075907) | 0.045811 / 0.037052 (0.008759) | 0.398708 / 0.258489 (0.140219) | 0.429449 / 0.293841 (0.135608) | 0.052242 / 0.128546 (-0.076304) | 0.017439 / 0.075646 (-0.058207) | 0.362678 / 0.419271 (-0.056593) | 0.054151 / 0.043533 (0.010618) | 0.387932 / 0.255139 (0.132793) | 0.410544 / 0.283200 (0.127344) | 0.101210 / 0.141683 (-0.040473) | 1.486496 / 1.452155 (0.034341) | 1.576404 / 1.492716 (0.083687) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.259468 / 0.018006 (0.241461) | 0.521661 / 0.000490 (0.521172) | 0.000456 / 0.000200 (0.000256) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027045 / 0.037411 (-0.010366) | 0.107615 / 0.014526 (0.093089) | 0.133228 / 0.176557 (-0.043329) | 0.156807 / 0.737135 (-0.580328) | 0.125226 / 0.296338 (-0.171113) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.528804 / 0.215209 (0.313595) | 5.516402 / 2.077655 (3.438748) | 2.387531 / 1.504120 (0.883412) | 2.084734 / 1.541195 (0.543539) | 2.091894 / 1.468490 (0.623404) | 1.089761 / 4.584777 (-3.495016) | 5.093067 / 3.745712 (1.347355) | 2.670349 / 5.269862 (-2.599512) | 1.784723 / 4.565676 (-2.780953) | 0.125528 / 0.424275 (-0.298747) | 0.013702 / 0.007607 (0.006095) | 0.667755 / 0.226044 (0.441710) | 6.653900 / 2.268929 (4.384972) | 3.006058 / 55.444624 (-52.438567) | 2.512919 / 6.876477 (-4.363558) | 2.546824 / 2.142072 (0.404751) | 1.269008 / 4.805227 (-3.536219) | 0.234388 / 6.500664 (-6.266276) | 0.065675 / 0.075469 (-0.009795) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.372222 / 1.841788 (-0.469566) | 15.565156 / 8.074308 (7.490848) | 16.800666 / 10.191392 (6.609274) | 0.220656 / 0.680424 (-0.459768) | 0.023690 / 0.534201 (-0.510511) | 0.450049 / 0.579283 (-0.129234) | 0.580433 / 0.434364 (0.146069) | 0.558899 / 0.540337 (0.018561) | 0.676799 / 1.386936 (-0.710137) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6cc5dcacecf41efc566385b323a3ca72ab44db36 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009440 / 0.011353 (-0.001913) | 0.005159 / 0.011008 (-0.005849) | 0.099152 / 0.038508 (0.060644) | 0.035939 / 0.023109 (0.012830) | 0.300968 / 0.275898 (0.025070) | 0.365676 / 0.323480 (0.042196) | 0.008220 / 0.007986 (0.000235) | 0.004071 / 0.004328 (-0.000257) | 0.075216 / 0.004250 (0.070965) | 0.042173 / 0.037052 (0.005121) | 0.315055 / 0.258489 (0.056566) | 0.338287 / 0.293841 (0.044446) | 0.037789 / 0.128546 (-0.090758) | 0.011856 / 0.075646 (-0.063791) | 0.332975 / 0.419271 (-0.086297) | 0.047087 / 0.043533 (0.003554) | 0.295107 / 0.255139 (0.039968) | 0.315416 / 0.283200 (0.032217) | 0.102273 / 0.141683 (-0.039410) | 1.464908 / 1.452155 (0.012754) | 1.500281 / 1.492716 (0.007565) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208522 / 0.018006 (0.190516) | 0.446576 / 0.000490 (0.446086) | 0.005766 / 0.000200 (0.005566) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027924 / 0.037411 (-0.009487) | 0.111296 / 0.014526 (0.096771) | 0.119055 / 0.176557 (-0.057502) | 0.157755 / 0.737135 (-0.579381) | 0.125539 / 0.296338 (-0.170799) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395683 / 0.215209 (0.180474) | 3.962696 / 2.077655 (1.885042) | 1.789511 / 1.504120 (0.285391) | 1.591541 / 1.541195 (0.050346) | 1.661276 / 1.468490 
(0.192786) | 0.693524 / 4.584777 (-3.891253) | 3.836526 / 3.745712 (0.090813) | 2.187284 / 5.269862 (-3.082578) | 1.521420 / 4.565676 (-3.044257) | 0.084370 / 0.424275 (-0.339905) | 0.012083 / 0.007607 (0.004476) | 0.498017 / 0.226044 (0.271972) | 4.982356 / 2.268929 (2.713428) | 2.235881 / 55.444624 (-53.208743) | 1.912067 / 6.876477 (-4.964410) | 2.052172 / 2.142072 (-0.089900) | 0.836232 / 4.805227 (-3.968995) | 0.165234 / 6.500664 (-6.335431) | 0.062933 / 0.075469 (-0.012536) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197785 / 1.841788 (-0.644003) | 15.233655 / 8.074308 (7.159347) | 14.254450 / 10.191392 (4.063058) | 0.169149 / 0.680424 (-0.511274) | 0.028794 / 0.534201 (-0.505407) | 0.437214 / 0.579283 (-0.142069) | 0.434836 / 0.434364 (0.000472) | 0.531594 / 0.540337 (-0.008744) | 0.626266 / 1.386936 (-0.760670) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007394 / 0.011353 (-0.003959) | 0.005305 / 0.011008 (-0.005703) | 0.098888 / 0.038508 (0.060380) | 0.033069 / 0.023109 (0.009959) | 0.388427 / 0.275898 (0.112529) | 0.415216 / 0.323480 (0.091736) | 0.005610 / 0.007986 (-0.002375) | 0.004922 / 0.004328 (0.000593) | 0.073694 / 0.004250 (0.069443) | 0.047368 / 0.037052 (0.010315) | 0.379604 / 0.258489 (0.121115) | 0.424876 / 0.293841 (0.131035) | 0.039471 / 0.128546 (-0.089075) | 0.012219 / 0.075646 (-0.063427) | 0.345925 / 0.419271 (-0.073346) | 0.048981 / 0.043533 (0.005448) | 0.379303 / 0.255139 (0.124164) | 0.404682 / 0.283200 (0.121483) | 0.103932 / 0.141683 (-0.037751) | 1.490852 / 1.452155 (0.038697) | 1.578900 / 1.492716 (0.086183) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201393 / 0.018006 (0.183387) | 0.452484 / 0.000490 (0.451994) | 0.005627 / 0.000200 (0.005428) | 0.000129 / 0.000054 (0.000075) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029317 / 0.037411 (-0.008094) | 0.114904 / 0.014526 (0.100378) | 0.126678 / 0.176557 (-0.049878) | 0.178315 / 0.737135 (-0.558820) | 0.131603 / 0.296338 (-0.164736) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.459830 / 0.215209 (0.244621) | 4.595358 / 2.077655 (2.517703) | 2.383582 / 1.504120 (0.879462) | 2.181945 / 1.541195 (0.640750) | 2.309517 / 1.468490 (0.841027) | 0.704803 / 4.584777 (-3.879974) | 3.820411 / 3.745712 (0.074698) | 4.872173 / 5.269862 (-0.397689) | 2.266090 / 4.565676 (-2.299586) | 0.085805 / 0.424275 (-0.338470) | 0.012488 / 0.007607 (0.004881) | 0.557500 / 0.226044 (0.331456) | 5.570830 / 2.268929 (3.301901) | 2.836202 / 55.444624 (-52.608422) | 2.530534 / 6.876477 (-4.345943) | 2.599792 / 2.142072 (0.457720) | 0.843852 / 4.805227 (-3.961376) | 0.169427 / 6.500664 (-6.331237) | 0.065521 / 0.075469 (-0.009948) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.246014 / 1.841788 (-0.595774) | 15.455336 / 8.074308 (7.381028) | 13.559111 / 10.191392 (3.367719) | 0.169131 / 0.680424 (-0.511293) | 0.017812 / 0.534201 (-0.516389) | 0.421161 / 0.579283 (-0.158122) | 0.458286 / 0.434364 (0.023922) | 0.534692 / 0.540337 (-0.005645) | 0.639299 / 1.386936 (-0.747637) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2b7558953b5a071194356bbe4c596a2890a3b847 \"CML watermark\")\n"
] | 2023-01-25T17:24:01 | 2023-01-25T18:33:35 | 2023-01-25T18:26:15 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5463",
"html_url": "https://github.com/huggingface/datasets/pull/5463",
"diff_url": "https://github.com/huggingface/datasets/pull/5463.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5463.patch",
"merged_at": "2023-01-25T18:26:15"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5463/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5462 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5462/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5462/comments | https://api.github.com/repos/huggingface/datasets/issues/5462/events | https://github.com/huggingface/datasets/pull/5462 | 1,556,572,144 | PR_kwDODunzps5Iglqu | 5,462 | Concatenate on axis=1 with misaligned blocks | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008860 / 0.011353 (-0.002493) | 0.004564 / 0.011008 (-0.006444) | 0.101556 / 0.038508 (0.063048) | 0.030000 / 0.023109 (0.006891) | 0.304404 / 0.275898 (0.028506) | 0.366247 / 0.323480 (0.042767) | 0.007182 / 0.007986 (-0.000804) | 0.003583 / 0.004328 (-0.000746) | 0.079665 / 0.004250 (0.075415) | 0.036529 / 0.037052 (-0.000523) | 0.310998 / 0.258489 (0.052509) | 0.346954 / 0.293841 (0.053113) | 0.034098 / 0.128546 (-0.094448) | 0.011576 / 0.075646 (-0.064070) | 0.320448 / 0.419271 (-0.098824) | 0.043328 / 0.043533 (-0.000205) | 0.307317 / 0.255139 (0.052178) | 0.325071 / 0.283200 (0.041871) | 0.096406 / 0.141683 (-0.045277) | 1.540331 / 1.452155 (0.088176) | 1.589533 / 1.492716 (0.096817) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011034 / 0.018006 (-0.006972) | 0.422066 / 0.000490 (0.421577) | 0.002409 / 0.000200 (0.002209) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023703 / 0.037411 (-0.013708) | 0.099935 / 0.014526 (0.085409) | 0.105966 / 0.176557 (-0.070591) | 0.142259 / 0.737135 (-0.594876) | 0.109327 / 0.296338 (-0.187011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418381 / 0.215209 (0.203172) | 4.177564 / 2.077655 (2.099909) | 1.880196 / 1.504120 (0.376076) | 1.669169 / 1.541195 (0.127974) | 1.725989 / 1.468490 
(0.257499) | 0.689384 / 4.584777 (-3.895393) | 3.380963 / 3.745712 (-0.364749) | 1.884192 / 5.269862 (-3.385670) | 1.162409 / 4.565676 (-3.403268) | 0.082045 / 0.424275 (-0.342230) | 0.012575 / 0.007607 (0.004968) | 0.525824 / 0.226044 (0.299779) | 5.272574 / 2.268929 (3.003646) | 2.283492 / 55.444624 (-53.161132) | 1.947390 / 6.876477 (-4.929087) | 2.013790 / 2.142072 (-0.128283) | 0.806280 / 4.805227 (-3.998948) | 0.149267 / 6.500664 (-6.351397) | 0.066967 / 0.075469 (-0.008502) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.216511 / 1.841788 (-0.625277) | 13.869829 / 8.074308 (5.795521) | 14.189967 / 10.191392 (3.998575) | 0.148716 / 0.680424 (-0.531708) | 0.028324 / 0.534201 (-0.505877) | 0.390856 / 0.579283 (-0.188427) | 0.404389 / 0.434364 (-0.029975) | 0.456050 / 0.540337 (-0.084287) | 0.544139 / 1.386936 (-0.842797) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006727 / 0.011353 (-0.004626) | 0.004515 / 0.011008 (-0.006494) | 0.098791 / 0.038508 (0.060283) | 0.027596 / 0.023109 (0.004487) | 0.439066 / 0.275898 (0.163168) | 0.480555 / 0.323480 (0.157076) | 0.005066 / 0.007986 (-0.002920) | 0.004669 / 0.004328 (0.000341) | 0.075334 / 0.004250 (0.071084) | 0.039779 / 0.037052 (0.002726) | 0.439860 / 0.258489 (0.181371) | 0.480787 / 0.293841 (0.186946) | 0.031550 / 0.128546 (-0.096996) | 0.011668 / 0.075646 (-0.063978) | 0.317348 / 0.419271 (-0.101923) | 0.041312 / 0.043533 (-0.002220) | 0.442934 / 0.255139 (0.187795) | 0.463677 / 0.283200 (0.180478) | 0.090066 / 0.141683 (-0.051617) | 1.544152 / 1.452155 (0.091998) | 1.584455 / 1.492716 (0.091738) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224284 / 0.018006 (0.206278) | 0.406982 / 0.000490 (0.406492) | 0.000427 / 0.000200 (0.000227) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024914 / 0.037411 (-0.012497) | 0.102608 / 0.014526 (0.088082) | 0.106931 / 0.176557 (-0.069626) | 0.140828 / 0.737135 (-0.596308) | 0.112015 / 0.296338 (-0.184324) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471078 / 0.215209 (0.255869) | 4.705742 / 2.077655 (2.628088) | 2.437442 / 1.504120 (0.933322) | 2.242768 / 1.541195 (0.701573) | 2.302158 / 1.468490 (0.833668) | 0.697314 / 4.584777 (-3.887462) | 3.357730 / 3.745712 (-0.387982) | 1.913306 / 5.269862 (-3.356556) | 1.173879 / 4.565676 (-3.391798) | 0.083257 / 0.424275 (-0.341018) | 0.012480 / 0.007607 (0.004873) | 0.573407 / 0.226044 (0.347362) | 5.728650 / 2.268929 (3.459721) | 2.868863 / 55.444624 (-52.575761) | 2.548640 / 6.876477 (-4.327837) | 2.596622 / 2.142072 (0.454549) | 0.805563 / 4.805227 (-3.999664) | 0.150860 / 6.500664 (-6.349804) | 0.068344 / 0.075469 (-0.007125) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.300368 / 1.841788 (-0.541420) | 13.920451 / 8.074308 (5.846143) | 14.222430 / 10.191392 (4.031038) | 0.152497 / 0.680424 (-0.527927) | 0.017415 / 0.534201 (-0.516786) | 0.378827 / 0.579283 (-0.200456) | 0.384165 / 0.434364 (-0.050199) | 0.439364 / 0.540337 (-0.100973) | 0.525710 / 1.386936 (-0.861226) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2cd22277fa87e02ad9970483f5b75aacdfbf9a70 \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008482 / 0.011353 (-0.002871) | 0.004405 / 0.011008 (-0.006604) | 0.099662 / 0.038508 (0.061154) | 0.029062 / 0.023109 (0.005953) | 0.298329 / 0.275898 (0.022431) | 0.332837 / 0.323480 (0.009357) | 0.006760 / 0.007986 (-0.001225) | 0.003290 / 0.004328 (-0.001039) | 0.077659 / 0.004250 (0.073409) | 0.034745 / 0.037052 (-0.002307) | 0.303134 / 0.258489 (0.044644) | 0.346402 / 0.293841 (0.052561) | 0.033511 / 0.128546 (-0.095035) | 0.011464 / 0.075646 (-0.064183) | 0.322932 / 0.419271 (-0.096340) | 0.040697 / 0.043533 (-0.002836) | 0.301951 / 0.255139 (0.046812) | 0.328961 / 0.283200 (0.045761) | 0.084802 / 0.141683 (-0.056881) | 1.506247 / 1.452155 (0.054092) | 1.547631 / 1.492716 (0.054915) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.190370 / 0.018006 (0.172363) | 0.405786 / 0.000490 (0.405297) | 0.002196 / 0.000200 (0.001997) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022958 / 0.037411 (-0.014453) | 0.095736 / 0.014526 (0.081210) | 0.103684 / 0.176557 (-0.072872) | 0.138200 / 0.737135 (-0.598936) | 0.105618 / 0.296338 (-0.190721) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415239 / 0.215209 (0.200030) | 4.147223 / 2.077655 (2.069569) | 1.850322 / 1.504120 (0.346202) | 1.662815 / 1.541195 (0.121620) | 1.671563 / 1.468490 
(0.203073) | 0.693806 / 4.584777 (-3.890971) | 3.352938 / 3.745712 (-0.392774) | 1.849257 / 5.269862 (-3.420604) | 1.161603 / 4.565676 (-3.404074) | 0.081884 / 0.424275 (-0.342391) | 0.012726 / 0.007607 (0.005119) | 0.521105 / 0.226044 (0.295061) | 5.231910 / 2.268929 (2.962981) | 2.306073 / 55.444624 (-53.138551) | 1.950449 / 6.876477 (-4.926028) | 1.988433 / 2.142072 (-0.153640) | 0.811168 / 4.805227 (-3.994059) | 0.149960 / 6.500664 (-6.350704) | 0.064845 / 0.075469 (-0.010624) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.221487 / 1.841788 (-0.620301) | 13.756534 / 8.074308 (5.682226) | 13.825369 / 10.191392 (3.633977) | 0.155641 / 0.680424 (-0.524783) | 0.028444 / 0.534201 (-0.505757) | 0.390364 / 0.579283 (-0.188919) | 0.397592 / 0.434364 (-0.036772) | 0.455905 / 0.540337 (-0.084433) | 0.534606 / 1.386936 (-0.852330) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006281 / 0.011353 (-0.005071) | 0.004533 / 0.011008 (-0.006475) | 0.098328 / 0.038508 (0.059820) | 0.026998 / 0.023109 (0.003889) | 0.424814 / 0.275898 (0.148915) | 0.457653 / 0.323480 (0.134173) | 0.004617 / 0.007986 (-0.003368) | 0.003320 / 0.004328 (-0.001009) | 0.075884 / 0.004250 (0.071634) | 0.035865 / 0.037052 (-0.001187) | 0.431674 / 0.258489 (0.173185) | 0.468286 / 0.293841 (0.174445) | 0.031915 / 0.128546 (-0.096631) | 0.011680 / 0.075646 (-0.063967) | 0.319575 / 0.419271 (-0.099696) | 0.047792 / 0.043533 (0.004259) | 0.428191 / 0.255139 (0.173052) | 0.445657 / 0.283200 (0.162458) | 0.090464 / 0.141683 (-0.051218) | 1.465480 / 1.452155 (0.013326) | 1.548985 / 1.492716 (0.056268) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185671 / 0.018006 (0.167664) | 0.399274 / 0.000490 (0.398784) | 0.002822 / 0.000200 (0.002622) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025934 / 0.037411 (-0.011477) | 0.099480 / 0.014526 (0.084954) | 0.110264 / 0.176557 (-0.066293) | 0.140558 / 0.737135 (-0.596577) | 0.110832 / 0.296338 (-0.185507) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.473491 / 0.215209 (0.258282) | 4.722507 / 2.077655 (2.644852) | 2.456242 / 1.504120 (0.952122) | 2.255999 / 1.541195 (0.714804) | 2.300816 / 1.468490 (0.832326) | 0.698226 / 4.584777 (-3.886551) | 3.397296 / 3.745712 (-0.348416) | 2.741674 / 5.269862 (-2.528187) | 1.462103 / 4.565676 (-3.103573) | 0.082736 / 0.424275 (-0.341539) | 0.012183 / 0.007607 (0.004576) | 0.580144 / 0.226044 (0.354099) | 5.794351 / 2.268929 (3.525422) | 2.881201 / 55.444624 (-52.563423) | 2.544384 / 6.876477 (-4.332093) | 2.555227 / 2.142072 (0.413154) | 0.805849 / 4.805227 (-3.999378) | 0.151822 / 6.500664 (-6.348842) | 0.067477 / 0.075469 (-0.007992) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.300224 / 1.841788 (-0.541564) | 13.595361 / 8.074308 (5.521053) | 13.967622 / 10.191392 (3.776230) | 0.129222 / 0.680424 (-0.551202) | 0.016939 / 0.534201 (-0.517262) | 0.375190 / 0.579283 (-0.204094) | 0.383511 / 0.434364 (-0.050853) | 0.437179 / 0.540337 (-0.103158) | 0.525674 / 1.386936 (-0.861262) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7ed52db3d67cc8d0f2adfe53b2ec8d1124a174b8 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012364 / 0.011353 (0.001011) | 0.006098 / 0.011008 (-0.004911) | 0.158908 / 0.038508 (0.120400) | 0.039798 / 0.023109 (0.016689) | 0.383786 / 0.275898 (0.107888) | 0.533961 / 0.323480 (0.210481) | 0.012079 / 0.007986 (0.004094) | 0.006483 / 0.004328 (0.002155) | 0.109660 / 0.004250 (0.105410) | 0.048391 / 0.037052 (0.011339) | 0.447426 / 0.258489 (0.188937) | 0.477292 / 0.293841 (0.183451) | 0.066492 / 0.128546 (-0.062054) | 0.021155 / 0.075646 (-0.054492) | 0.474473 / 0.419271 (0.055202) | 0.063520 / 0.043533 (0.019987) | 0.444941 / 0.255139 (0.189802) | 0.450675 / 0.283200 (0.167475) | 0.129236 / 0.141683 (-0.012447) | 2.009362 / 1.452155 (0.557207) | 1.912067 / 1.492716 (0.419350) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260384 / 0.018006 (0.242378) | 0.577654 / 0.000490 (0.577165) | 0.004977 / 0.000200 (0.004777) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028101 / 0.037411 (-0.009310) | 0.161680 / 0.014526 (0.147154) | 0.146107 / 0.176557 (-0.030450) | 0.173878 / 0.737135 (-0.563257) | 0.186149 / 0.296338 (-0.110190) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.689835 / 0.215209 (0.474626) | 6.775888 / 2.077655 (4.698234) | 2.885499 / 1.504120 (1.381379) | 2.486855 / 1.541195 (0.945660) | 2.540831 / 1.468490 
(1.072341) | 1.328135 / 4.584777 (-3.256642) | 5.964983 / 3.745712 (2.219271) | 3.400713 / 5.269862 (-1.869149) | 2.423257 / 4.565676 (-2.142419) | 0.129767 / 0.424275 (-0.294508) | 0.017936 / 0.007607 (0.010328) | 0.909284 / 0.226044 (0.683239) | 8.778791 / 2.268929 (6.509863) | 3.890757 / 55.444624 (-51.553867) | 3.072116 / 6.876477 (-3.804360) | 3.085390 / 2.142072 (0.943318) | 1.571710 / 4.805227 (-3.233517) | 0.279290 / 6.500664 (-6.221374) | 0.087775 / 0.075469 (0.012306) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.751223 / 1.841788 (-0.090564) | 20.313135 / 8.074308 (12.238827) | 22.793800 / 10.191392 (12.602408) | 0.296052 / 0.680424 (-0.384372) | 0.053420 / 0.534201 (-0.480781) | 0.600626 / 0.579283 (0.021343) | 0.634505 / 0.434364 (0.200142) | 0.724000 / 0.540337 (0.183663) | 0.869283 / 1.386936 (-0.517653) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.014876 / 0.011353 (0.003523) | 0.008113 / 0.011008 (-0.002895) | 0.177038 / 0.038508 (0.138530) | 0.050825 / 0.023109 (0.027716) | 0.473989 / 0.275898 (0.198091) | 0.601058 / 0.323480 (0.277578) | 0.007536 / 0.007986 (-0.000450) | 0.006761 / 0.004328 (0.002432) | 0.105260 / 0.004250 (0.101010) | 0.073960 / 0.037052 (0.036908) | 0.447711 / 0.258489 (0.189222) | 0.609998 / 0.293841 (0.316157) | 0.061280 / 0.128546 (-0.067267) | 0.019370 / 0.075646 (-0.056276) | 0.510466 / 0.419271 (0.091194) | 0.062695 / 0.043533 (0.019162) | 0.436778 / 0.255139 (0.181639) | 0.489916 / 0.283200 (0.206717) | 0.137305 / 0.141683 (-0.004378) | 1.801554 / 1.452155 (0.349399) | 2.082409 / 1.492716 (0.589692) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.291304 / 0.018006 (0.273298) | 0.599041 / 0.000490 (0.598551) | 0.008017 / 0.000200 (0.007817) | 0.000127 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031243 / 0.037411 (-0.006169) | 0.139689 / 0.014526 (0.125163) | 0.138678 / 0.176557 (-0.037878) | 0.180458 / 0.737135 (-0.556677) | 0.149753 / 0.296338 (-0.146585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.699692 / 0.215209 (0.484482) | 7.273327 / 2.077655 (5.195672) | 3.222650 / 1.504120 (1.718530) | 2.679424 / 1.541195 (1.138229) | 2.842378 / 1.468490 (1.373888) | 1.394633 / 4.584777 (-3.190143) | 6.379970 / 3.745712 (2.634258) | 5.944663 / 5.269862 (0.674801) | 3.105214 / 4.565676 (-1.460462) | 0.138790 / 0.424275 (-0.285485) | 0.014211 / 0.007607 (0.006604) | 0.815275 / 0.226044 (0.589230) | 8.549334 / 2.268929 (6.280405) | 3.754795 / 55.444624 (-51.689829) | 3.125222 / 6.876477 (-3.751255) | 3.269639 / 2.142072 (1.127566) | 1.464187 / 4.805227 (-3.341040) | 0.314557 / 6.500664 (-6.186107) | 0.107354 / 0.075469 (0.031885) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.480793 / 1.841788 (-0.360995) | 16.770328 / 8.074308 (8.696019) | 18.054861 / 10.191392 (7.863469) | 0.198257 / 0.680424 (-0.482167) | 0.026493 / 0.534201 (-0.507708) | 0.489701 / 0.579283 (-0.089582) | 0.540890 / 0.434364 (0.106526) | 0.566675 / 0.540337 (0.026337) | 0.661918 / 1.386936 (-0.725018) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c4b839b50e9a81693e065f5299990026b97f6580 \"CML watermark\")\n"
] | 2023-01-25T12:33:22 | 2023-01-26T09:37:00 | 2023-01-26T09:27:19 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5462",
"html_url": "https://github.com/huggingface/datasets/pull/5462",
"diff_url": "https://github.com/huggingface/datasets/pull/5462.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5462.patch",
"merged_at": "2023-01-26T09:27:19"
} | Allow concatenating, on axis 1, two tables made of misaligned blocks.
For example, if the first table has 2 row blocks of 3 rows each, and the second table has 3 row blocks of 2 rows each.
To do that, I slice the row blocks so that the blocks of the two tables become aligned.
Fix https://github.com/huggingface/datasets/issues/5413 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5462/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5462/timeline | null | null | true |
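The PR body above says the fix works by slicing row blocks so that two tables with misaligned block boundaries can be concatenated on axis 1. Below is a minimal sketch of that re-alignment idea, not the actual implementation in `datasets.table` (which operates on its own block wrappers); the helper `realign_blocks` and the list-of-`pyarrow.Table` representation of row blocks are assumptions made for illustration.

```python
import pyarrow as pa

def realign_blocks(blocks_a, blocks_b):
    """Hypothetical helper: slice two lists of row blocks (with the same
    total row count) so that both lists share the same row boundaries."""
    def offsets(blocks):
        # Cumulative row offsets at which a new block starts
        out, total = [0], 0
        for block in blocks:
            total += block.num_rows
            out.append(total)
        return out

    # Every boundary used by either table becomes a cut point
    cuts = sorted(set(offsets(blocks_a)) | set(offsets(blocks_b)))

    def slice_on(blocks):
        # Simplification: concatenate then slice; real code would slice per block
        table = pa.concat_tables(blocks)
        return [table.slice(lo, hi - lo) for lo, hi in zip(cuts[:-1], cuts[1:])]

    return slice_on(blocks_a), slice_on(blocks_b)

# The example from the PR body: 2 blocks of 3 rows vs. 3 blocks of 2 rows
a = [pa.table({"x": list(range(i, i + 3))}) for i in (0, 3)]
b = [pa.table({"y": list(range(i, i + 2))}) for i in (0, 2, 4)]
aligned_a, aligned_b = realign_blocks(a, b)
assert [t.num_rows for t in aligned_a] == [t.num_rows for t in aligned_b] == [2, 1, 1, 2]
```

Once the boundaries match, each pair of aligned blocks has the same number of rows and can be concatenated column-wise.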
https://api.github.com/repos/huggingface/datasets/issues/5461 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5461/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5461/comments | https://api.github.com/repos/huggingface/datasets/issues/5461/events | https://github.com/huggingface/datasets/issues/5461 | 1,555,532,719 | I_kwDODunzps5ct4uv | 5,461 | Discrepancy in `nyu_depth_v2` dataset | {
"login": "awsaf49",
"id": 36858976,
"node_id": "MDQ6VXNlcjM2ODU4OTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/36858976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/awsaf49",
"html_url": "https://github.com/awsaf49",
"followers_url": "https://api.github.com/users/awsaf49/followers",
"following_url": "https://api.github.com/users/awsaf49/following{/other_user}",
"gists_url": "https://api.github.com/users/awsaf49/gists{/gist_id}",
"starred_url": "https://api.github.com/users/awsaf49/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/awsaf49/subscriptions",
"organizations_url": "https://api.github.com/users/awsaf49/orgs",
"repos_url": "https://api.github.com/users/awsaf49/repos",
"events_url": "https://api.github.com/users/awsaf49/events{/privacy}",
"received_events_url": "https://api.github.com/users/awsaf49/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Ccing @dwofk (the author of `fast-depth`). \r\n\r\nThanks, @awsaf49 for reporting this. I believe this is because the NYU Depth V2 shipped from `fast-depth` is already preprocessed. \r\n\r\nIf you think it might be better to have the NYU Depth V2 dataset from BTS [here](https://huggingface.co/datasets/sayakpaul/nyu_depth_v2) feel free to open a PR, I am happy to provide guidance :) ",
"Good catch ! Ideally it would be nice to have the datasets in the raw form, this way users can choose whatever processing they want to apply",
"> Ccing @dwofk (the author of `fast-depth`).\r\n> \r\n> Thanks, @awsaf49 for reporting this. I believe this is because the NYU Depth V2 shipped from `fast-depth` is already preprocessed.\r\n> \r\n> If you think it might be better to have the NYU Depth V2 dataset from BTS [here](https://huggingface.co/datasets/sayakpaul/nyu_depth_v2) feel free to open a PR, I am happy to provide guidance :)\r\n\r\n@sayakpaul I would love to create a PR on this. As this will be my first PR here, some guidance would be helpful.\r\n\r\nNeed a bit of advice on the dataset, there are three publicly available datasets. Which one should I consider for PR?\r\n1. [BTS](https://github.com/cleinc/bts): Containst train/test: 36K/654 data, dtype = `uint16` hence more precise\r\n2. [DenseDepth](https://github.com/ialhashim/DenseDepth) It contains train/test: 50K/654 data, dtype = `uint8` hence less precise\r\n3. [Official](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html#raw_parts): Size is big 400GB+, requires **MatLab** code for fixing **projection** and **sync**, DataType: `pgm` and `dump` hence can't be used directly.\r\n\r\ncc: @lhoestq\r\n\r\n",
"I think BTS. Repositories like https://github.com/vinvino02/GLPDepth usually use BTS. Also, just for clarity, the PR will be to https://huggingface.co/datasets/sayakpaul/nyu_depth_v2. Once we have worked it out, we can update the following things:\r\n\r\n* https://github.com/huggingface/blog/pull/718\r\n* https://huggingface.co/docs/datasets/main/en/depth_estimation\r\n\r\nDon't worry about it if it seems overwhelming. We will work it out together :) \r\n\r\n@lhoestq what do you think? ",
"@sayakpaul If I get this right I have to,\r\n1. Create a PR on https://huggingface.co/datasets/sayakpaul/nyu_depth_v2\r\n2. Create a PR on https://github.com/huggingface/blog\r\n3. Create a PR on https://github.com/huggingface/datasets to update https://github.com/huggingface/datasets/blob/main/docs/source/depth_estimation.mdx",
"The last two are low-hanging fruits. Don't worry about them. ",
"Yup opening a PR to use BTS on https://huggingface.co/datasets/sayakpaul/nyu_depth_v2 sounds good :) Thanks for the help !",
"Finally, I have found the origin of the **discretized depth map**. When I first loaded the datasets from HF I noticed it was 30GB but in DenseDepth data is only 4GB with dtype=uint8. This means data from fast-depth (before loading to HF) must have high precision. So when I tried to dig deeper by directly loading depth_map from `h5py`, I found depth_map from `h5py` came with `float32`. But when the data is processed in HF with `datasets.Image()` it was directly converted to `uint8` from `float32` hence the **discretized** depth map.\r\nhttps://github.com/huggingface/datasets/blob/c78559cacbb0ca6e0bc8bfc313cc0359f8c23ead/src/datasets/features/image.py#L91-L93\r\n\r\n## Solutions:\r\n\r\n#### 1. Array2D\r\nUse `Array2D` feature with `float32` for depth_map \r\n\r\n* Code:\r\n```py\r\nFeatures({'depth_map': Array2D(shape=(480, 640), dtype='float32')})\r\n```\r\n* Pros:\r\nNo precision loss.\r\n\r\n* Cons:\r\nAs depth_map is saved as Array I think it can't be visuzlied in [hf.co/dataset](https://huggingface.co/datasets/sayakpaul/nyu_depth_v2) page like segmentation mask.\r\n\r\n#### 2. Uint16\r\nUse `uint16` as dtype for Image in `_h5_loader` for saving depth maps and accept `uint16` dtype in `datasets.Image()` feature.\r\n\r\n* Code\r\n```py\r\ndepth = np.array(h5f[\"depth\"])\r\ndepth /= 10.0 # [0, max_depth] -> [0, 1]\r\ndepth *= (2**16 -1) # transform from [0, 1] -> [0, 2^16 - 1]\r\ndepth = depth.astype('uint16')\r\n```\r\n* Pros:\r\n * We can visualize depth map in hf.co/datasets page like segmentation mask.\r\n * No need for post-processing.\r\n\r\n* Cons:\r\n * We need to make two change\r\n * Modify `_h5_loader` in https://huggingface.co/datasets/sayakpaul/nyu_depth_v2 to convert depth_map from `float32` to `uint16`.\r\n * Make sure `datasets.Image()` converts `np.ndarray` to `uint16` checking max value\r\n * Precision loss due to `float32` to `uint16`\r\n * Post-processing required for depth_map to transform from `[0, 2^16 - 1]` to `[0, max_depth]` before feeding them to model.",
"Thanks so much for digging into this. \r\n\r\nSince the second solution entails changes to core datatypes in `datasets`, I think it's better to go with the first solution. \r\n\r\n@lhoestq WDYT?",
"@sayakpaul Yes, Solution 1 requires minimal change and provides no precision loss. But I think support for `uint16` image would be a great addition as many datasets come with `uint16` image. For example [UW-Madison GI Tract Image Segmentation](https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation) dataset, here the image itself comes with `uint16` dtype rather than mask. So, saving `uint16` image with `uint8` will result in precision loss.\r\n\r\nPerhaps we can adapt solution 1 for this issue and Add support for `uint16` image separately?",
"Using Array2D makes it not practical to use to train a model - in `transformers` we expect an image type.\r\n\r\nThere is a pull request to support more precision than uint8 in Image() here: https://github.com/huggingface/datasets/pull/5365/files\r\n\r\nwe can probably merge it today and do a release right away",
"Fantastic, @lhoestq! \r\n\r\n@awsaf49 then let's wait for the PR to get merged and then take the next steps? ",
"Sure",
"The PR adds support for uint16 which is ok for BTS if I understand correctly, would it be ok for you ?",
"If the main issue with the current version of NYU we have on the Hub is related to the precision loss stemming from `Image()`, I'd prefer if `Image()` supported float32 as well. ",
"I also prefer `float32` as it offers more precision. But I'm not sure if we'll be able to visualize image with `float32` precision.",
"We could have a separate loading for the float32 one using Array2D, but I feel like it's less convenient to use due to the amount of disk space and because it's not an Image() type. That's why I think uint16 is a better solution for users",
"A bit confused here, If https://github.com/huggingface/datasets/pull/5365 gets merged won't this issue will be resolved automatically?",
"Yes in theory :)",
"actually float32 also seems to work in this PR (it just doesn't work for multi-channel)",
"In that case, a new PR isn't necessary, right?",
"Yep. I just tested from the PR and it works:\r\n```python\r\n>>> train_dataset = load_dataset(\"sayakpaul/nyu_depth_v2\", split=\"train\", streaming=True) \r\nDownloading readme: 100%|██████████████████| 8.71k/8.71k [00:00<00:00, 3.60MB/s]\r\n>>> next(iter(train_dataset))\r\n{'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=640x480 at 0x1382ED7F0>,\r\n 'depth_map': <PIL.TiffImagePlugin.TiffImageFile image mode=F size=640x480 at 0x1382EDF28>}\r\n>>> x = next(iter(train_dataset))\r\n>>> np.asarray(x[\"depth_map\"]) \r\narray([[0. , 0. , 0. , ..., 0. , 0. ,\r\n 0. ],\r\n [0. , 0. , 0. , ..., 0. , 0. ,\r\n 0. ],\r\n [0. , 0. , 0. , ..., 0. , 0. ,\r\n 0. ],\r\n ...,\r\n [0. , 2.2861192, 2.2861192, ..., 2.234162 , 2.234162 ,\r\n 0. ],\r\n [0. , 2.2861192, 2.2861192, ..., 2.234162 , 2.234162 ,\r\n 0. ],\r\n [0. , 2.2861192, 2.2861192, ..., 2.234162 , 2.234162 ,\r\n 0. ]], dtype=float32)\r\n```",
"Great! the case is closed! This issue has been solved and I have to say, it was quite the thrill ride. I felt like Sherlock Holmes, solving a mystery and finding the bug🕵️♂️. But in all seriousness, it was a pleasure working on this issue and I'm glad we could get to the bottom of it.\r\n\r\nOn another note, should I consider closing the issue? I think we still need to make updates on https://github.com/huggingface/blog and https://github.com/huggingface/datasets/blob/main/docs/source/depth_estimation.mdx",
"Haha thanks Mr Holmes :p\r\n\r\nmaybe let's close this issue when we're done updating the blog post and the documentation",
"@awsaf49 thank you for your hard work! \r\n\r\nI am a little unsure why the other links need to be updated, though. They all rely on datasets internally. ",
"I think depth_map still shows discretized version. It would be nice to have corrected one.\r\n<img src=\"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/depth_est_target_viz.png\" width = 300>",
"Also, I think we need to make some changes in the code to visualize depth_map as it is `float32` . `plot.imshow()` supports either [0, 1] + float32 or [0. 255] + uint8",
"Oh yes! Do you want to start with the fixes? Please feel free to say no but I wanted to make sure your contributions are reflected properly in our doc and the blog :)",
"Yes I think that would be nice :)",
"I'll make the changes tomorrow. I hope it's okay..."
] | 2023-01-24T19:15:46 | 2023-02-06T20:52:00 | null | CONTRIBUTOR | null | null | null | ### Describe the bug
I think there is a discrepancy between the depth maps of the `nyu_depth_v2` dataset [here](https://huggingface.co/docs/datasets/main/en/depth_estimation) and the actual depth maps. Depth values somehow got **discretized/clipped**, resulting in depth maps that are different from the actual ones. Here is a side-by-side comparison:
![image](https://user-images.githubusercontent.com/36858976/214381162-1d9582c2-6750-4114-a01a-61ca1cd5f872.png)
I tried to find the origin of this issue but sadly, as I mentioned in tensorflow/datasets/issues/4674, the download link from `fast-depth` doesn't work anymore, hence I couldn't verify whether the error originated there or while porting the data from there to HF.
Hi @sayakpaul, as you worked on huggingface/datasets/issues/5255, if you still have access to that data, could you please share it or perhaps check out this issue?
### Steps to reproduce the bug
This [notebook](https://colab.research.google.com/drive/1K3ZU8XUPRDOYD38MQS9nreQXJYitlKSW?usp=sharing#scrollTo=UEW7QSh0jf0i) from @sayakpaul can be used to generate depth maps, and the actual ground truths can be checked against this [dataset](https://www.kaggle.com/datasets/awsaf49/nyuv2-bts-dataset) from the BTS repo.
> Note: the BTS dataset has only 36K samples compared to the 50K train-test data. They sampled the data because adjacent frames look quite the same.
### Expected behavior
The depth maps should be smooth rather than discretized/clipped.
### Environment info
- `datasets` version: 2.8.1.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5461/timeline | null | null | false |
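The thread above traces the discretization to `datasets.Image()` casting the `float32` depth maps to `uint8`. Here is a small self-contained numpy sketch (not taken from the thread; the 10 m maximum depth is an assumption roughly matching the NYU Depth V2 range) of why an 8-bit round trip produces the banded depth maps shown in the issue:

```python
import numpy as np

rng = np.random.default_rng(0)
depth = rng.uniform(0.5, 10.0, size=(480, 640)).astype("float32")  # depth in meters

# What an 8-bit image round trip does: scale to [0, 255], cast, scale back
as_uint8 = (depth / 10.0 * 255).astype("uint8")
restored = as_uint8.astype("float32") / 255 * 10.0

# uint8 leaves at most 256 distinct depth levels (~3.9 cm apart over 10 m),
# so nearby depths collapse onto the same value -- hence the visible banding
print(np.unique(as_uint8).size)        # <= 256
print(np.abs(depth - restored).max())  # quantization error, up to ~0.039 m
```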
https://api.github.com/repos/huggingface/datasets/issues/5460 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5460/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5460/comments | https://api.github.com/repos/huggingface/datasets/issues/5460/events | https://github.com/huggingface/datasets/pull/5460 | 1,555,387,532 | PR_kwDODunzps5Icn9C | 5,460 | Document that removing all the columns returns an empty document and the num_row is lost | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011812 / 0.011353 (0.000459) | 0.006878 / 0.011008 (-0.004130) | 0.128720 / 0.038508 (0.090212) | 0.038506 / 0.023109 (0.015397) | 0.359670 / 0.275898 (0.083772) | 0.422908 / 0.323480 (0.099428) | 0.010115 / 0.007986 (0.002129) | 0.004332 / 0.004328 (0.000004) | 0.096281 / 0.004250 (0.092031) | 0.048850 / 0.037052 (0.011798) | 0.373795 / 0.258489 (0.115306) | 0.414643 / 0.293841 (0.120802) | 0.057568 / 0.128546 (-0.070978) | 0.024135 / 0.075646 (-0.051512) | 0.411764 / 0.419271 (-0.007507) | 0.060167 / 0.043533 (0.016634) | 0.367119 / 0.255139 (0.111980) | 0.391813 / 0.283200 (0.108613) | 0.112125 / 0.141683 (-0.029558) | 1.869560 / 1.452155 (0.417406) | 1.845649 / 1.492716 (0.352932) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211449 / 0.018006 (0.193443) | 0.522453 / 0.000490 (0.521963) | 0.003984 / 0.000200 (0.003784) | 0.000096 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026015 / 0.037411 (-0.011397) | 0.117747 / 0.014526 (0.103221) | 0.125037 / 0.176557 (-0.051520) | 0.168351 / 0.737135 (-0.568785) | 0.132390 / 0.296338 (-0.163949) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.605653 / 0.215209 (0.390444) | 5.883452 / 2.077655 (3.805798) | 2.367052 / 1.504120 (0.862932) | 2.137671 / 1.541195 (0.596476) | 2.042370 / 1.468490 
(0.573880) | 1.168442 / 4.584777 (-3.416335) | 5.205236 / 3.745712 (1.459524) | 2.992514 / 5.269862 (-2.277348) | 2.191829 / 4.565676 (-2.373847) | 0.137702 / 0.424275 (-0.286574) | 0.015898 / 0.007607 (0.008291) | 0.783987 / 0.226044 (0.557942) | 7.768965 / 2.268929 (5.500036) | 3.249149 / 55.444624 (-52.195476) | 2.530687 / 6.876477 (-4.345790) | 2.675212 / 2.142072 (0.533140) | 1.482804 / 4.805227 (-3.322423) | 0.276845 / 6.500664 (-6.223819) | 0.080597 / 0.075469 (0.005128) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.519086 / 1.841788 (-0.322701) | 17.394093 / 8.074308 (9.319785) | 19.613554 / 10.191392 (9.422162) | 0.253291 / 0.680424 (-0.427133) | 0.047746 / 0.534201 (-0.486455) | 0.547114 / 0.579283 (-0.032170) | 0.623873 / 0.434364 (0.189509) | 0.631924 / 0.540337 (0.091586) | 0.744390 / 1.386936 (-0.642546) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009229 / 0.011353 (-0.002124) | 0.006206 / 0.011008 (-0.004802) | 0.121866 / 0.038508 (0.083357) | 0.033629 / 0.023109 (0.010519) | 0.435172 / 0.275898 (0.159274) | 0.472093 / 0.323480 (0.148613) | 0.006946 / 0.007986 (-0.001039) | 0.004848 / 0.004328 (0.000519) | 0.097289 / 0.004250 (0.093038) | 0.046982 / 0.037052 (0.009930) | 0.447365 / 0.258489 (0.188876) | 0.491213 / 0.293841 (0.197372) | 0.055486 / 0.128546 (-0.073060) | 0.019788 / 0.075646 (-0.055858) | 0.399830 / 0.419271 (-0.019441) | 0.058943 / 0.043533 (0.015411) | 0.447658 / 0.255139 (0.192519) | 0.465752 / 0.283200 (0.182552) | 0.110441 / 0.141683 (-0.031242) | 1.773155 / 1.452155 (0.321001) | 1.899370 / 1.492716 (0.406653) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191188 / 0.018006 (0.173181) | 0.523721 / 0.000490 (0.523232) | 0.004008 / 0.000200 (0.003808) | 0.000126 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032579 / 0.037411 (-0.004833) | 0.120870 / 0.014526 (0.106344) | 0.154991 / 0.176557 (-0.021565) | 0.175450 / 0.737135 (-0.561685) | 0.136526 / 0.296338 (-0.159813) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.627262 / 0.215209 (0.412052) | 6.457989 / 2.077655 (4.380334) | 2.935188 / 1.504120 (1.431068) | 2.558705 / 1.541195 (1.017510) | 2.669455 / 1.468490 (1.200965) | 1.228791 / 4.584777 (-3.355985) | 5.621262 / 3.745712 (1.875549) | 3.181775 / 5.269862 (-2.088086) | 2.115116 / 4.565676 (-2.450560) | 0.159348 / 0.424275 (-0.264927) | 0.013598 / 0.007607 (0.005991) | 0.834732 / 0.226044 (0.608687) | 8.051097 / 2.268929 (5.782168) | 3.761681 / 55.444624 (-51.682943) | 2.898158 / 6.876477 (-3.978319) | 2.936289 / 2.142072 (0.794217) | 1.476307 / 4.805227 (-3.328920) | 0.269845 / 6.500664 (-6.230819) | 0.087225 / 0.075469 (0.011756) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.632522 / 1.841788 (-0.209266) | 17.615297 / 8.074308 (9.540989) | 20.501172 / 10.191392 (10.309780) | 0.248845 / 0.680424 (-0.431579) | 0.024852 / 0.534201 (-0.509349) | 0.498957 / 0.579283 (-0.080326) | 0.588566 / 0.434364 (0.154202) | 0.611051 / 0.540337 (0.070714) | 0.726321 / 1.386936 (-0.660615) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#adaaf0b5ad596538c744d41bb56ce472834b6573 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008920 / 0.011353 (-0.002433) | 0.004666 / 0.011008 (-0.006342) | 0.098584 / 0.038508 (0.060076) | 0.030213 / 0.023109 (0.007103) | 0.298180 / 0.275898 (0.022282) | 0.358932 / 0.323480 (0.035452) | 0.007182 / 0.007986 (-0.000804) | 0.005430 / 0.004328 (0.001102) | 0.077962 / 0.004250 (0.073712) | 0.038516 / 0.037052 (0.001463) | 0.308840 / 0.258489 (0.050351) | 0.343678 / 0.293841 (0.049837) | 0.033701 / 0.128546 (-0.094845) | 0.011460 / 0.075646 (-0.064186) | 0.319809 / 0.419271 (-0.099462) | 0.040731 / 0.043533 (-0.002802) | 0.299772 / 0.255139 (0.044633) | 0.324292 / 0.283200 (0.041092) | 0.087755 / 0.141683 (-0.053928) | 1.493077 / 1.452155 (0.040922) | 1.527462 / 1.492716 (0.034746) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187927 / 0.018006 (0.169921) | 0.412785 / 0.000490 (0.412296) | 0.003235 / 0.000200 (0.003035) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023313 / 0.037411 (-0.014098) | 0.095663 / 0.014526 (0.081137) | 0.105094 / 0.176557 (-0.071463) | 0.140389 / 0.737135 (-0.596746) | 0.108477 / 0.296338 (-0.187861) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.410680 / 0.215209 (0.195471) | 4.109287 / 2.077655 (2.031632) | 1.833214 / 1.504120 (0.329094) | 1.622837 / 1.541195 (0.081642) | 1.679899 / 1.468490 
(0.211409) | 0.686920 / 4.584777 (-3.897857) | 3.463267 / 3.745712 (-0.282445) | 1.867035 / 5.269862 (-3.402826) | 1.150631 / 4.565676 (-3.415046) | 0.081209 / 0.424275 (-0.343066) | 0.012384 / 0.007607 (0.004777) | 0.521070 / 0.226044 (0.295026) | 5.208829 / 2.268929 (2.939900) | 2.289032 / 55.444624 (-53.155592) | 1.942976 / 6.876477 (-4.933501) | 1.990660 / 2.142072 (-0.151413) | 0.802976 / 4.805227 (-4.002252) | 0.148199 / 6.500664 (-6.352465) | 0.064644 / 0.075469 (-0.010825) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.277029 / 1.841788 (-0.564759) | 13.915489 / 8.074308 (5.841181) | 14.035486 / 10.191392 (3.844094) | 0.138205 / 0.680424 (-0.542219) | 0.028968 / 0.534201 (-0.505232) | 0.394275 / 0.579283 (-0.185008) | 0.399967 / 0.434364 (-0.034397) | 0.460595 / 0.540337 (-0.079742) | 0.537625 / 1.386936 (-0.849311) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006485 / 0.011353 (-0.004868) | 0.004534 / 0.011008 (-0.006474) | 0.097742 / 0.038508 (0.059234) | 0.027231 / 0.023109 (0.004122) | 0.431321 / 0.275898 (0.155423) | 0.469212 / 0.323480 (0.145732) | 0.004894 / 0.007986 (-0.003092) | 0.004147 / 0.004328 (-0.000181) | 0.073650 / 0.004250 (0.069400) | 0.037052 / 0.037052 (-0.000000) | 0.434196 / 0.258489 (0.175707) | 0.480539 / 0.293841 (0.186698) | 0.031923 / 0.128546 (-0.096623) | 0.011522 / 0.075646 (-0.064124) | 0.317062 / 0.419271 (-0.102209) | 0.041124 / 0.043533 (-0.002409) | 0.432013 / 0.255139 (0.176874) | 0.456760 / 0.283200 (0.173560) | 0.089757 / 0.141683 (-0.051925) | 1.497752 / 1.452155 (0.045597) | 1.585342 / 1.492716 (0.092626) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227784 / 0.018006 (0.209778) | 0.404570 / 0.000490 (0.404080) | 0.000556 / 0.000200 (0.000356) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025201 / 0.037411 (-0.012210) | 0.099348 / 0.014526 (0.084822) | 0.114984 / 0.176557 (-0.061573) | 0.147039 / 0.737135 (-0.590097) | 0.109727 / 0.296338 (-0.186611) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.468415 / 0.215209 (0.253206) | 4.692228 / 2.077655 (2.614573) | 2.403382 / 1.504120 (0.899262) | 2.196026 / 1.541195 (0.654832) | 2.234736 / 1.468490 (0.766246) | 0.703011 / 4.584777 (-3.881766) | 3.451513 / 3.745712 (-0.294199) | 2.596811 / 5.269862 (-2.673051) | 1.544079 / 4.565676 (-3.021598) | 0.083153 / 0.424275 (-0.341123) | 0.012605 / 0.007607 (0.004998) | 0.570265 / 0.226044 (0.344220) | 5.735996 / 2.268929 (3.467067) | 2.865336 / 55.444624 (-52.579288) | 2.508340 / 6.876477 (-4.368137) | 2.547144 / 2.142072 (0.405072) | 0.813018 / 4.805227 (-3.992210) | 0.150327 / 6.500664 (-6.350337) | 0.065837 / 0.075469 (-0.009632) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.268941 / 1.841788 (-0.572847) | 13.835698 / 8.074308 (5.761390) | 13.992726 / 10.191392 (3.801334) | 0.127751 / 0.680424 (-0.552673) | 0.016673 / 0.534201 (-0.517528) | 0.381921 / 0.579283 (-0.197362) | 0.390688 / 0.434364 (-0.043676) | 0.446234 / 0.540337 (-0.094103) | 0.532631 / 1.386936 (-0.854305) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1492df3311bfeac55aaedf34c93c014630c4403e \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008486 / 0.011353 (-0.002867) | 0.004573 / 0.011008 (-0.006435) | 0.100096 / 0.038508 (0.061588) | 0.029449 / 0.023109 (0.006340) | 0.298384 / 0.275898 (0.022486) | 0.361886 / 0.323480 (0.038406) | 0.006813 / 0.007986 (-0.001173) | 0.003394 / 0.004328 (-0.000935) | 0.077563 / 0.004250 (0.073312) | 0.035605 / 0.037052 (-0.001447) | 0.306864 / 0.258489 (0.048375) | 0.346438 / 0.293841 (0.052597) | 0.033156 / 0.128546 (-0.095390) | 0.011567 / 0.075646 (-0.064079) | 0.322189 / 0.419271 (-0.097083) | 0.040161 / 0.043533 (-0.003372) | 0.299329 / 0.255139 (0.044190) | 0.326375 / 0.283200 (0.043175) | 0.086572 / 0.141683 (-0.055111) | 1.502473 / 1.452155 (0.050319) | 1.528539 / 1.492716 (0.035823) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.008502 / 0.018006 (-0.009505) | 0.411045 / 0.000490 (0.410555) | 0.003179 / 0.000200 (0.002980) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023177 / 0.037411 (-0.014234) | 0.096948 / 0.014526 (0.082422) | 0.104068 / 0.176557 (-0.072489) | 0.138739 / 0.737135 (-0.598396) | 0.108241 / 0.296338 (-0.188097) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411156 / 0.215209 (0.195947) | 4.092992 / 2.077655 (2.015337) | 1.841903 / 1.504120 (0.337783) | 1.637449 / 1.541195 (0.096254) | 1.670968 / 1.468490 
(0.202478) | 0.697301 / 4.584777 (-3.887476) | 3.354717 / 3.745712 (-0.390995) | 1.851518 / 5.269862 (-3.418344) | 1.160367 / 4.565676 (-3.405309) | 0.082613 / 0.424275 (-0.341662) | 0.012477 / 0.007607 (0.004870) | 0.524839 / 0.226044 (0.298795) | 5.264173 / 2.268929 (2.995245) | 2.294530 / 55.444624 (-53.150094) | 1.933233 / 6.876477 (-4.943244) | 1.968959 / 2.142072 (-0.173113) | 0.817104 / 4.805227 (-3.988123) | 0.149072 / 6.500664 (-6.351592) | 0.064911 / 0.075469 (-0.010558) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.222215 / 1.841788 (-0.619573) | 13.607545 / 8.074308 (5.533237) | 13.990230 / 10.191392 (3.798838) | 0.150855 / 0.680424 (-0.529568) | 0.028844 / 0.534201 (-0.505357) | 0.396169 / 0.579283 (-0.183114) | 0.406957 / 0.434364 (-0.027407) | 0.464069 / 0.540337 (-0.076268) | 0.554027 / 1.386936 (-0.832909) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006296 / 0.011353 (-0.005057) | 0.004563 / 0.011008 (-0.006445) | 0.097719 / 0.038508 (0.059211) | 0.027106 / 0.023109 (0.003996) | 0.409333 / 0.275898 (0.133435) | 0.445397 / 0.323480 (0.121917) | 0.004906 / 0.007986 (-0.003080) | 0.003316 / 0.004328 (-0.001012) | 0.075363 / 0.004250 (0.071112) | 0.039366 / 0.037052 (0.002314) | 0.412710 / 0.258489 (0.154221) | 0.451789 / 0.293841 (0.157948) | 0.031810 / 0.128546 (-0.096736) | 0.011681 / 0.075646 (-0.063965) | 0.318484 / 0.419271 (-0.100788) | 0.046741 / 0.043533 (0.003208) | 0.411631 / 0.255139 (0.156492) | 0.435274 / 0.283200 (0.152074) | 0.092366 / 0.141683 (-0.049317) | 1.492243 / 1.452155 (0.040089) | 1.617603 / 1.492716 (0.124887) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217376 / 0.018006 (0.199369) | 0.400940 / 0.000490 (0.400450) | 0.003700 / 0.000200 (0.003500) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023733 / 0.037411 (-0.013678) | 0.098553 / 0.014526 (0.084027) | 0.105790 / 0.176557 (-0.070767) | 0.139537 / 0.737135 (-0.597598) | 0.109862 / 0.296338 (-0.186477) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.476562 / 0.215209 (0.261353) | 4.773469 / 2.077655 (2.695814) | 2.447302 / 1.504120 (0.943182) | 2.240596 / 1.541195 (0.699401) | 2.271370 / 1.468490 (0.802880) | 0.698913 / 4.584777 (-3.885864) | 3.345648 / 3.745712 (-0.400064) | 1.845008 / 5.269862 (-3.424854) | 1.163213 / 4.565676 (-3.402464) | 0.082456 / 0.424275 (-0.341819) | 0.012315 / 0.007607 (0.004708) | 0.575881 / 0.226044 (0.349836) | 5.769575 / 2.268929 (3.500647) | 2.909759 / 55.444624 (-52.534865) | 2.580259 / 6.876477 (-4.296218) | 2.590473 / 2.142072 (0.448401) | 0.802765 / 4.805227 (-4.002462) | 0.151514 / 6.500664 (-6.349150) | 0.067718 / 0.075469 (-0.007751) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.293014 / 1.841788 (-0.548773) | 13.934072 / 8.074308 (5.859763) | 13.538760 / 10.191392 (3.347368) | 0.126490 / 0.680424 (-0.553934) | 0.016653 / 0.534201 (-0.517548) | 0.381220 / 0.579283 (-0.198064) | 0.387571 / 0.434364 (-0.046793) | 0.444674 / 0.540337 (-0.095663) | 0.550802 / 1.386936 (-0.836134) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bed576f2205c96f6cb26b5c6522345cb8b06ecfc \"CML watermark\")\n"
] | 2023-01-24T17:33:38 | 2023-01-25T16:11:10 | 2023-01-25T16:04:03 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5460",
"html_url": "https://github.com/huggingface/datasets/pull/5460",
"diff_url": "https://github.com/huggingface/datasets/pull/5460.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5460.patch",
"merged_at": "2023-01-25T16:04:03"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5460/timeline | null | null | true |
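As context for the PR above, the behavior it documents can be illustrated as follows. This is a hedged sketch of the documented semantics rather than code from the PR, and the exact behavior may vary across `datasets` versions:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3], "b": ["x", "y", "z"]})
print(ds.num_rows)  # 3

# Map to an empty dict and drop every input column -> no columns remain
stripped = ds.map(lambda row: {}, remove_columns=ds.column_names)
print(stripped.column_names)  # []
print(stripped.num_rows)      # 0 -- the original row count is lost
```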
https://api.github.com/repos/huggingface/datasets/issues/5459 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5459/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5459/comments | https://api.github.com/repos/huggingface/datasets/issues/5459/events | https://github.com/huggingface/datasets/pull/5459 | 1,555,367,504 | PR_kwDODunzps5Icjwe | 5,459 | Disable aiohttp requoting of redirection URL | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Comment by @lhoestq:\r\n> Do you think we need this in `datasets` if it's fixed on the moon landing side ? In the aiohttp doc they consider those symbols as \"non-safe\" ",
"The lib `requests` does not perform that requote on redirect URLs.",
"Indeed, the `requests` library does perform a requoting, but this does not unquote `%27`:\r\n```python\r\nIn [1]: from requests.utils import requote_uri\r\n\r\nIn [2]: url = \"https://netloc/path?param=param%27%27value\"\r\n\r\nIn [3]: url\r\nOut[3]: 'https://netloc/path?param=param%27%27value'\r\n\r\nIn [4]: requote_uri(url)\r\nOut[4]: 'https://netloc/path?param=param%27%27value'\r\n```\r\n\r\nHowever, the `aiohttp` library uses `yarl.ULR` and this does unquote `%27`:\r\n```python\r\nIn [5]: from yarl import URL\r\n\r\nIn [6]: url\r\nOut[6]: 'https://netloc/path?param=param%27%27value'\r\n\r\nIn [7]: str(URL(url))\r\nOut[7]: \"https://netloc/path?param=param''value\"\r\n```\r\n\r\nIf we pass `requote_redirect_url=False` to `aiohttp`, then it passes `encoded=True` to `yarl.ULR`: https://github.com/aio-libs/aiohttp/blob/4635161ee8e7ad321cca46e01ce5bfeb1ad8bf26/aiohttp/client.py#L578-L580\r\n```python\r\nparsed_url = URL(\r\n r_url, encoded=not self._requote_redirect_url\r\n)\r\n```\r\nwhich does not unquote `%27`:\r\n```python\r\nIn [8]: url\r\nOut[8]: 'https://netloc/path?param=param%27%27value'\r\n\r\nIn [9]: str(URL(url, encoded=True))\r\nOut[9]: 'https://netloc/path?param=param%27%27value'\r\n```",
"See the issues we opened in the respective libraries:\r\n- aiohttp\r\n - aio-libs/aiohttp#7183\r\n- requests\r\n - psf/requests#6341",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012399 / 0.011353 (0.001047) | 0.006388 / 0.011008 (-0.004620) | 0.134173 / 0.038508 (0.095665) | 0.037059 / 0.023109 (0.013949) | 0.420697 / 0.275898 (0.144799) | 0.473981 / 0.323480 (0.150502) | 0.009857 / 0.007986 (0.001871) | 0.004791 / 0.004328 (0.000463) | 0.106886 / 0.004250 (0.102636) | 0.044871 / 0.037052 (0.007818) | 0.429843 / 0.258489 (0.171354) | 0.461569 / 0.293841 (0.167728) | 0.057285 / 0.128546 (-0.071261) | 0.018809 / 0.075646 (-0.056837) | 0.432613 / 0.419271 (0.013342) | 0.058086 / 0.043533 (0.014553) | 0.413064 / 0.255139 (0.157925) | 0.444407 / 0.283200 (0.161207) | 0.119102 / 0.141683 (-0.022581) | 1.875954 / 1.452155 (0.423799) | 1.916392 / 1.492716 (0.423676) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267489 / 0.018006 (0.249483) | 0.567554 / 0.000490 (0.567064) | 0.005901 / 0.000200 (0.005701) | 0.000134 / 0.000054 (0.000079) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031248 / 0.037411 (-0.006164) | 0.123014 / 0.014526 (0.108489) | 0.140001 / 0.176557 (-0.036556) | 0.191476 / 0.737135 (-0.545659) | 0.141687 / 0.296338 (-0.154652) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.637481 / 0.215209 (0.422272) | 6.255969 / 2.077655 (4.178314) | 2.559811 / 1.504120 (1.055691) | 2.118154 / 1.541195 (0.576960) | 2.079487 / 1.468490 
(0.610997) | 1.201079 / 4.584777 (-3.383698) | 5.592625 / 3.745712 (1.846913) | 5.143344 / 5.269862 (-0.126517) | 2.764716 / 4.565676 (-1.800960) | 0.142539 / 0.424275 (-0.281736) | 0.015541 / 0.007607 (0.007934) | 0.771407 / 0.226044 (0.545363) | 7.631657 / 2.268929 (5.362728) | 3.279684 / 55.444624 (-52.164940) | 2.587566 / 6.876477 (-4.288911) | 2.624622 / 2.142072 (0.482549) | 1.427878 / 4.805227 (-3.377350) | 0.257759 / 6.500664 (-6.242906) | 0.078616 / 0.075469 (0.003147) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.609305 / 1.841788 (-0.232483) | 18.258792 / 8.074308 (10.184484) | 20.345242 / 10.191392 (10.153850) | 0.267366 / 0.680424 (-0.413058) | 0.047035 / 0.534201 (-0.487166) | 0.568881 / 0.579283 (-0.010402) | 0.662763 / 0.434364 (0.228399) | 0.668927 / 0.540337 (0.128590) | 0.755766 / 1.386936 (-0.631170) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010017 / 0.011353 (-0.001336) | 0.006816 / 0.011008 (-0.004192) | 0.105038 / 0.038508 (0.066529) | 0.038689 / 0.023109 (0.015580) | 0.482113 / 0.275898 (0.206215) | 0.540072 / 0.323480 (0.216592) | 0.007738 / 0.007986 (-0.000248) | 0.005134 / 0.004328 (0.000806) | 0.102203 / 0.004250 (0.097953) | 0.054080 / 0.037052 (0.017028) | 0.501057 / 0.258489 (0.242568) | 0.567186 / 0.293841 (0.273345) | 0.060330 / 0.128546 (-0.068217) | 0.020059 / 0.075646 (-0.055587) | 0.123102 / 0.419271 (-0.296170) | 0.063426 / 0.043533 (0.019893) | 0.494171 / 0.255139 (0.239032) | 0.538238 / 0.283200 (0.255039) | 0.119613 / 0.141683 (-0.022069) | 1.853728 / 1.452155 (0.401574) | 1.984621 / 1.492716 (0.491904) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282511 / 0.018006 (0.264505) | 0.563190 / 0.000490 (0.562700) | 0.000465 / 0.000200 (0.000265) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029267 / 0.037411 (-0.008144) | 0.135618 / 0.014526 (0.121093) | 0.146286 / 0.176557 (-0.030271) | 0.188570 / 0.737135 (-0.548565) | 0.155839 / 0.296338 (-0.140499) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.671660 / 0.215209 (0.456451) | 6.718775 / 2.077655 (4.641120) | 3.004601 / 1.504120 (1.500481) | 2.640504 / 1.541195 (1.099309) | 2.666788 / 1.468490 (1.198298) | 1.242655 / 4.584777 (-3.342122) | 5.780119 / 3.745712 (2.034407) | 3.247935 / 5.269862 (-2.021927) | 2.114007 / 4.565676 (-2.451669) | 0.147546 / 0.424275 (-0.276729) | 0.014408 / 0.007607 (0.006801) | 0.824407 / 0.226044 (0.598362) | 8.278185 / 2.268929 (6.009257) | 3.733463 / 55.444624 (-51.711161) | 2.976732 / 6.876477 (-3.899745) | 3.132758 / 2.142072 (0.990686) | 1.446095 / 4.805227 (-3.359132) | 0.258628 / 6.500664 (-6.242036) | 0.085513 / 0.075469 (0.010043) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.702681 / 1.841788 (-0.139106) | 18.725123 / 8.074308 (10.650815) | 19.622808 / 10.191392 (9.431416) | 0.215845 / 0.680424 (-0.464579) | 0.029246 / 0.534201 (-0.504955) | 0.554819 / 0.579283 (-0.024464) | 0.630926 / 0.434364 (0.196562) | 0.637663 / 0.540337 (0.097325) | 0.837948 / 1.386936 (-0.548988) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c4a4f96ef0a4ec4b25f0872f160fa1eb9d2e711c \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008540 / 0.011353 (-0.002813) | 0.004538 / 0.011008 (-0.006470) | 0.101507 / 0.038508 (0.062999) | 0.029751 / 0.023109 (0.006641) | 0.292608 / 0.275898 (0.016710) | 0.354734 / 0.323480 (0.031254) | 0.007430 / 0.007986 (-0.000556) | 0.003365 / 0.004328 (-0.000964) | 0.078703 / 0.004250 (0.074452) | 0.034858 / 0.037052 (-0.002194) | 0.303518 / 0.258489 (0.045029) | 0.336523 / 0.293841 (0.042682) | 0.033741 / 0.128546 (-0.094805) | 0.011460 / 0.075646 (-0.064186) | 0.319551 / 0.419271 (-0.099721) | 0.041102 / 0.043533 (-0.002431) | 0.295914 / 0.255139 (0.040775) | 0.322142 / 0.283200 (0.038943) | 0.084694 / 0.141683 (-0.056989) | 1.481308 / 1.452155 (0.029153) | 1.530271 / 1.492716 (0.037554) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.180516 / 0.018006 (0.162510) | 0.405741 / 0.000490 (0.405251) | 0.002806 / 0.000200 (0.002606) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023359 / 0.037411 (-0.014052) | 0.096950 / 0.014526 (0.082424) | 0.103991 / 0.176557 (-0.072566) | 0.143700 / 0.737135 (-0.593435) | 0.106764 / 0.296338 (-0.189575) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416966 / 0.215209 (0.201757) | 4.145601 / 2.077655 (2.067946) | 1.838258 / 1.504120 (0.334139) | 1.629396 / 1.541195 (0.088201) | 1.649707 / 1.468490 
(0.181217) | 0.689624 / 4.584777 (-3.895153) | 3.414584 / 3.745712 (-0.331129) | 1.874295 / 5.269862 (-3.395566) | 1.251930 / 4.565676 (-3.313746) | 0.081782 / 0.424275 (-0.342493) | 0.012868 / 0.007607 (0.005261) | 0.523904 / 0.226044 (0.297859) | 5.251032 / 2.268929 (2.982104) | 2.301549 / 55.444624 (-53.143075) | 1.942110 / 6.876477 (-4.934367) | 2.023014 / 2.142072 (-0.119058) | 0.816492 / 4.805227 (-3.988736) | 0.150107 / 6.500664 (-6.350558) | 0.065118 / 0.075469 (-0.010351) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.226433 / 1.841788 (-0.615355) | 13.852569 / 8.074308 (5.778261) | 13.862779 / 10.191392 (3.671387) | 0.146361 / 0.680424 (-0.534062) | 0.028652 / 0.534201 (-0.505549) | 0.398251 / 0.579283 (-0.181032) | 0.403590 / 0.434364 (-0.030774) | 0.492184 / 0.540337 (-0.048154) | 0.581040 / 1.386936 (-0.805896) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006859 / 0.011353 (-0.004494) | 0.004632 / 0.011008 (-0.006376) | 0.076653 / 0.038508 (0.038145) | 0.027865 / 0.023109 (0.004755) | 0.354472 / 0.275898 (0.078573) | 0.385462 / 0.323480 (0.061982) | 0.005125 / 0.007986 (-0.002861) | 0.003420 / 0.004328 (-0.000909) | 0.076018 / 0.004250 (0.071768) | 0.040197 / 0.037052 (0.003144) | 0.353675 / 0.258489 (0.095186) | 0.394911 / 0.293841 (0.101070) | 0.032909 / 0.128546 (-0.095637) | 0.011713 / 0.075646 (-0.063933) | 0.085921 / 0.419271 (-0.333350) | 0.044462 / 0.043533 (0.000929) | 0.349997 / 0.255139 (0.094858) | 0.375207 / 0.283200 (0.092008) | 0.091288 / 0.141683 (-0.050394) | 1.536515 / 1.452155 (0.084361) | 1.581878 / 1.492716 (0.089162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273284 / 0.018006 (0.255277) | 0.424457 / 0.000490 (0.423967) | 0.044659 / 0.000200 (0.044459) | 0.000247 / 0.000054 (0.000192) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025473 / 0.037411 (-0.011938) | 0.100014 / 0.014526 (0.085488) | 0.108551 / 0.176557 (-0.068006) | 0.147913 / 0.737135 (-0.589223) | 0.112729 / 0.296338 (-0.183610) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448162 / 0.215209 (0.232953) | 4.472701 / 2.077655 (2.395046) | 2.078384 / 1.504120 (0.574264) | 1.861292 / 1.541195 (0.320097) | 1.920482 / 1.468490 (0.451991) | 0.706968 / 4.584777 (-3.877809) | 3.433109 / 3.745712 (-0.312603) | 1.898684 / 5.269862 (-3.371178) | 1.174375 / 4.565676 (-3.391302) | 0.083666 / 0.424275 (-0.340609) | 0.012388 / 0.007607 (0.004781) | 0.546011 / 0.226044 (0.319966) | 5.487514 / 2.268929 (3.218585) | 2.534124 / 55.444624 (-52.910500) | 2.168441 / 6.876477 (-4.708036) | 2.203458 / 2.142072 (0.061386) | 0.813333 / 4.805227 (-3.991894) | 0.153169 / 6.500664 (-6.347495) | 0.067151 / 0.075469 (-0.008318) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.277815 / 1.841788 (-0.563972) | 13.920545 / 8.074308 (5.846237) | 13.473801 / 10.191392 (3.282409) | 0.129035 / 0.680424 (-0.551389) | 0.016737 / 0.534201 (-0.517464) | 0.388413 / 0.579283 (-0.190870) | 0.388785 / 0.434364 (-0.045579) | 0.481735 / 0.540337 (-0.058602) | 0.576390 / 1.386936 (-0.810546) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c4a4f96ef0a4ec4b25f0872f160fa1eb9d2e711c \"CML watermark\")\n"
] | 2023-01-24T17:18:59 | 2023-02-01T08:45:33 | 2023-01-31T08:37:54 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5459",
"html_url": "https://github.com/huggingface/datasets/pull/5459",
"diff_url": "https://github.com/huggingface/datasets/pull/5459.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5459.patch",
"merged_at": "2023-01-31T08:37:54"
} | The library `aiohttp` performs a requoting of redirection URLs that unquotes the single quotation mark character: `%27` => `'`
This is a problem for our Hugging Face Hub, which requires the exact URL from the `Location` header.
Specifically, in the query component of the URL (`https://netloc/path?query`), the value for `response-content-disposition` contains `%27`:
```
response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27sample.jsonl.gz%3B+filename%3D%22sample.jsonl.gz%22%3B
```
and after the requoting, the `%27` characters get unquoted to `'`:
```
response-content-disposition=attachment%3B+filename*%3DUTF-8''sample.jsonl.gz%3B+filename%3D%22sample.jsonl.gz%22%3B
```
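For reference, here is a minimal standalone sketch of the `aiohttp` client option involved (assuming plain `aiohttp` usage; this is not the actual `datasets` patch, and the URL is only a placeholder). Passing `requote_redirect_url=False` makes the session treat redirect URLs as already encoded, so percent-escapes like `%27` survive:

```python
import asyncio

import aiohttp


async def fetch_final_url(url: str) -> str:
    # requote_redirect_url=False makes aiohttp build redirect targets with
    # yarl.URL(..., encoded=True), so percent-escapes such as %27 in the
    # Location header are kept verbatim instead of being unquoted to '
    async with aiohttp.ClientSession(requote_redirect_url=False) as session:
        async with session.get(url) as response:
            return str(response.url)


# example.com is just a placeholder endpoint for illustration
print(asyncio.run(fetch_final_url("https://example.com")))
```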
This PR disables the `aiohttp` requoting of redirection URLs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5459/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5458 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5458/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5458/comments | https://api.github.com/repos/huggingface/datasets/issues/5458/events | https://github.com/huggingface/datasets/issues/5458 | 1,555,054,737 | I_kwDODunzps5csECR | 5,458 | slice split while streaming | {
"login": "SvenDS9",
"id": 122370631,
"node_id": "U_kgDOB0s6Rw",
"avatar_url": "https://avatars.githubusercontent.com/u/122370631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SvenDS9",
"html_url": "https://github.com/SvenDS9",
"followers_url": "https://api.github.com/users/SvenDS9/followers",
"following_url": "https://api.github.com/users/SvenDS9/following{/other_user}",
"gists_url": "https://api.github.com/users/SvenDS9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SvenDS9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SvenDS9/subscriptions",
"organizations_url": "https://api.github.com/users/SvenDS9/orgs",
"repos_url": "https://api.github.com/users/SvenDS9/repos",
"events_url": "https://api.github.com/users/SvenDS9/events{/privacy}",
"received_events_url": "https://api.github.com/users/SvenDS9/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! Yes, that's correct. When `streaming` is `True`, only split names can be specified as `split`, and for slicing, you have to use `.skip`/`.take` instead.\r\n\r\nE.g. \r\n`load_dataset(\"lhoestq/demo1\",revision=None, streaming=True, split=\"train[:3]\")`\r\n\r\nrewritten with `.skip`/`.take`:\r\n`load_dataset(\"lhoestq/demo1\",revision=None, streaming=True, split=\"train\").take(3)`\r\n\r\n\r\n",
"Thank you for your quick response!"
] | 2023-01-24T14:08:17 | 2023-01-24T15:11:47 | 2023-01-24T15:11:47 | NONE | null | null | null | ### Describe the bug
When using the `load_dataset` function with streaming set to True, slicing splits is apparently not supported.
Did I miss this in the documentation?
### Steps to reproduce the bug
`load_dataset("lhoestq/demo1",revision=None, streaming=True, split="train[:3]")`
causes `ValueError: Bad split: train[:3]. Available splits: ['train', 'test']` in `builder.py`, line 1213, in `as_streaming_dataset`
### Expected behavior
The first 3 entries of the dataset as a stream
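For reference, a minimal sketch of the workaround suggested in the comments: with `streaming=True` the slice syntax is unsupported, so `.take()` on the returned `IterableDataset` plays the role of `train[:3]`:

```python
from datasets import load_dataset

# slicing ("train[:3]") is rejected in streaming mode, so take the
# first 3 examples from the iterable dataset instead
stream = load_dataset("lhoestq/demo1", streaming=True, split="train")
first_three = list(stream.take(3))
print(first_three)
```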
### Environment info
- `datasets` version: 2.8.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.9
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5458/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5457 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5457/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5457/comments | https://api.github.com/repos/huggingface/datasets/issues/5457/events | https://github.com/huggingface/datasets/issues/5457 | 1,554,171,264 | I_kwDODunzps5cosWA | 5,457 | prebuilt dataset relies on `downloads/extracted` | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi! \r\n\r\nThis issue is due to our audio/image datasets not being self-contained. This allows us to save disk space (files are written only once) but also leads to the issues like this one. We plan to make all our datasets self-contained in Datasets 3.0.\r\n\r\nIn the meantime, you can run the following map to ensure your dataset is self-contained:\r\n```python\r\nfrom datasets.table import embed_table_storage\r\n# load_dataset ...\r\ndset = dset.with_format(\"arrow\")\r\ndset.map(embed_table_storage, batched=True)\r\ndset = dset.with_format(\"python\")\r\n```\r\n",
"Understood. Thank you, Mario.\r\n\r\nPerhaps the solution could be very simple - move the extracted files into the directory of the cached dataset? Which would make it self-contained already and won't require waiting for a new major release. Unless I'm missing some back-compat nuance.\r\n\r\nBut regardless if X relies on Y - it could check if Y is still there when loading X. so not checking full consistency but just the top-level directory it relies on."
] | 2023-01-24T02:09:32 | 2023-01-24T18:14:10 | null | CONTRIBUTOR | null | null | null | ### Describe the bug
I pre-built the dataset:
```
python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing
```
and it can be used just fine.
Now I wipe out `downloads/extracted` and it no longer works.
```
rm -r ~/.cache/huggingface/datasets/downloads
```
That is, I can still load it:
```
python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing
No config specified, defaulting to: general-pmd-synthetic-testing/100.unique
Found cached dataset general-pmd-synthetic-testing (/home/stas/.cache/huggingface/datasets/HuggingFaceM4___general-pmd-synthetic-testing/100.unique/1.1.1/86bc445e3e48cb5ef79de109eb4e54ff85b318cd55c3835c4ee8f86eae33d9d2)
```
but if I try to use it:
```
E stderr: Traceback (most recent call last):
E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/main.py", line 116, in <module>
E stderr: train_loader, val_loader = get_dataloaders(
E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/dataset.py", line 170, in get_dataloaders
E stderr: train_loader = get_dataloader_from_config(
E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/dataset.py", line 443, in get_dataloader_from_config
E stderr: dataloader = get_dataloader(
E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/dataset.py", line 264, in get_dataloader
E stderr: is_pmd = "meta" in hf_dataset[0] and "source" in hf_dataset[0]
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 2601, in __getitem__
E stderr: return self._getitem(
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 2586, in _getitem
E stderr: formatted_output = format_table(
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 634, in format_table
E stderr: return formatter(pa_table, query_type=query_type)
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 406, in __call__
E stderr: return self.format_row(pa_table)
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 442, in format_row
E stderr: row = self.python_features_decoder.decode_row(row)
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 225, in decode_row
E stderr: return self.features.decode_example(row) if self.features else row
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1846, in decode_example
E stderr: return {
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1847, in <dictcomp>
E stderr: column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1304, in decode_nested_example
E stderr: return decode_nested_example([schema.feature], obj)
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1296, in decode_nested_example
E stderr: if decode_nested_example(sub_schema, first_elmt) != first_elmt:
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1309, in decode_nested_example
E stderr: return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/image.py", line 144, in decode_example
E stderr: image = PIL.Image.open(path)
E stderr: File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/PIL/Image.py", line 3092, in open
E stderr: fp = builtins.open(filename, "rb")
E stderr: FileNotFoundError: [Errno 2] No such file or directory: '/mnt/nvme0/code/data/cache/huggingface/datasets/downloads/extracted/134227b9b94c4eccf19b205bf3021d4492d0227b9be6c2ddb6bf517d8d55a8cb/data/101/images_01.jpg'
```
Only if I wipe out the cached dir and rebuild does it start working again, as `downloads/extracted` comes back with the extracted files.
```
rm -r ~/.cache/huggingface/datasets/HuggingFaceM4___general-pmd-synthetic-testing
python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing
```
I think there are 2 issues here:
1. why does it still rely on extracted files after the `arrow` files were produced - did I do something incorrectly when creating this dataset? (see the self-contained workaround sketched after this list)
2. why doesn't the dataset know that it has been gutted, and why does it load just fine? If it has a dependency on `downloads/extracted`, then `load_dataset` should check whether it's there and fail or force a rebuild. I am sure this could be a very expensive operation, and really solving item 1 would probably make this check unnecessary, so this second item is likely overkill, other than perhaps adding an optional `check_consistency` flag to do that.
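For readability, the maintainer's workaround from the comments, spelled out as a script (using the dataset from the reproduction above):

```python
from datasets import load_dataset
from datasets.table import embed_table_storage

# re-embed the image bytes into the Arrow data itself so the cached dataset
# stops depending on files under ~/.cache/huggingface/datasets/downloads/extracted
ds = load_dataset("HuggingFaceM4/general-pmd-synthetic-testing")
for split, dset in ds.items():
    dset = dset.with_format("arrow")
    dset = dset.map(embed_table_storage, batched=True)
    ds[split] = dset.with_format("python")
```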
### Environment info
datasets@main | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5457/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5456 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5456/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5456/comments | https://api.github.com/repos/huggingface/datasets/issues/5456/events | https://github.com/huggingface/datasets/pull/5456 | 1,553,905,148 | PR_kwDODunzps5IXq92 | 5,456 | feat: tqdm for `to_parquet` | {
"login": "zanussbaum",
"id": 33707069,
"node_id": "MDQ6VXNlcjMzNzA3MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zanussbaum",
"html_url": "https://github.com/zanussbaum",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions",
"organizations_url": "https://api.github.com/users/zanussbaum/orgs",
"repos_url": "https://api.github.com/users/zanussbaum/repos",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"received_events_url": "https://api.github.com/users/zanussbaum/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012395 / 0.011353 (0.001042) | 0.006466 / 0.011008 (-0.004542) | 0.127605 / 0.038508 (0.089097) | 0.044929 / 0.023109 (0.021820) | 0.399856 / 0.275898 (0.123958) | 0.491341 / 0.323480 (0.167861) | 0.009193 / 0.007986 (0.001207) | 0.005419 / 0.004328 (0.001090) | 0.100577 / 0.004250 (0.096327) | 0.045338 / 0.037052 (0.008286) | 0.409970 / 0.258489 (0.151481) | 0.452941 / 0.293841 (0.159100) | 0.054350 / 0.128546 (-0.074197) | 0.019069 / 0.075646 (-0.056578) | 0.427036 / 0.419271 (0.007765) | 0.073616 / 0.043533 (0.030083) | 0.395384 / 0.255139 (0.140245) | 0.442381 / 0.283200 (0.159181) | 0.123185 / 0.141683 (-0.018498) | 1.797640 / 1.452155 (0.345485) | 1.888860 / 1.492716 (0.396143) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211041 / 0.018006 (0.193035) | 0.539350 / 0.000490 (0.538860) | 0.001683 / 0.000200 (0.001483) | 0.000118 / 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031699 / 0.037411 (-0.005712) | 0.132696 / 0.014526 (0.118170) | 0.133710 / 0.176557 (-0.042846) | 0.190074 / 0.737135 (-0.547061) | 0.142919 / 0.296338 (-0.153420) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.643521 / 0.215209 (0.428312) | 6.137350 / 2.077655 (4.059695) | 2.463894 / 1.504120 (0.959774) | 2.120043 / 1.541195 (0.578848) | 2.121898 / 1.468490 
(0.653408) | 1.287319 / 4.584777 (-3.297458) | 5.517864 / 3.745712 (1.772151) | 5.070820 / 5.269862 (-0.199042) | 2.948967 / 4.565676 (-1.616710) | 0.175861 / 0.424275 (-0.248415) | 0.015292 / 0.007607 (0.007685) | 0.843195 / 0.226044 (0.617150) | 7.884275 / 2.268929 (5.615347) | 3.182821 / 55.444624 (-52.261803) | 2.576093 / 6.876477 (-4.300384) | 2.537160 / 2.142072 (0.395088) | 1.510029 / 4.805227 (-3.295198) | 0.249404 / 6.500664 (-6.251260) | 0.080434 / 0.075469 (0.004965) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.618695 / 1.841788 (-0.223093) | 18.879207 / 8.074308 (10.804899) | 21.075272 / 10.191392 (10.883880) | 0.260781 / 0.680424 (-0.419643) | 0.046387 / 0.534201 (-0.487813) | 0.570709 / 0.579283 (-0.008574) | 0.619050 / 0.434364 (0.184686) | 0.642295 / 0.540337 (0.101958) | 0.780070 / 1.386936 (-0.606866) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010418 / 0.011353 (-0.000935) | 0.006104 / 0.011008 (-0.004905) | 0.133609 / 0.038508 (0.095101) | 0.035101 / 0.023109 (0.011992) | 0.471931 / 0.275898 (0.196033) | 0.504498 / 0.323480 (0.181018) | 0.007388 / 0.007986 (-0.000598) | 0.004852 / 0.004328 (0.000523) | 0.094535 / 0.004250 (0.090284) | 0.056832 / 0.037052 (0.019779) | 0.470513 / 0.258489 (0.212024) | 0.531285 / 0.293841 (0.237444) | 0.058271 / 0.128546 (-0.070276) | 0.020523 / 0.075646 (-0.055123) | 0.437398 / 0.419271 (0.018126) | 0.065390 / 0.043533 (0.021857) | 0.503702 / 0.255139 (0.248563) | 0.515876 / 0.283200 (0.232677) | 0.118615 / 0.141683 (-0.023068) | 1.865380 / 1.452155 (0.413225) | 1.990316 / 1.492716 (0.497600) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246772 / 0.018006 (0.228766) | 0.560607 / 0.000490 (0.560118) | 0.005675 / 0.000200 (0.005475) | 0.000142 / 0.000054 (0.000088) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034692 / 0.037411 (-0.002719) | 0.174016 / 0.014526 (0.159490) | 0.179838 / 0.176557 (0.003282) | 0.217118 / 0.737135 (-0.520018) | 0.184811 / 0.296338 (-0.111527) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.675970 / 0.215209 (0.460760) | 6.787039 / 2.077655 (4.709384) | 2.932619 / 1.504120 (1.428499) | 2.545076 / 1.541195 (1.003882) | 2.566705 / 1.468490 (1.098215) | 1.287365 / 4.584777 (-3.297412) | 5.468441 / 3.745712 (1.722729) | 5.227726 / 5.269862 (-0.042136) | 2.868970 / 4.565676 (-1.696706) | 0.153535 / 0.424275 (-0.270740) | 0.020087 / 0.007607 (0.012480) | 0.860562 / 0.226044 (0.634518) | 8.656109 / 2.268929 (6.387180) | 3.749424 / 55.444624 (-51.695200) | 3.011337 / 6.876477 (-3.865139) | 3.119045 / 2.142072 (0.976973) | 1.562174 / 4.805227 (-3.243053) | 0.279161 / 6.500664 (-6.221504) | 0.084905 / 0.075469 (0.009436) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.638684 / 1.841788 (-0.203104) | 18.834760 / 8.074308 (10.760452) | 21.554310 / 10.191392 (11.362918) | 0.274518 / 0.680424 (-0.405906) | 0.030343 / 0.534201 (-0.503858) | 0.539094 / 0.579283 (-0.040189) | 0.627258 / 0.434364 (0.192895) | 0.624638 / 0.540337 (0.084301) | 0.742776 / 1.386936 (-0.644160) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#98c9b27be45e1f5bc8c18d8bb2414478efe68055 \"CML watermark\")\n"
] | 2023-01-23T22:05:38 | 2023-01-24T11:26:47 | 2023-01-24T11:17:12 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5456",
"html_url": "https://github.com/huggingface/datasets/pull/5456",
"diff_url": "https://github.com/huggingface/datasets/pull/5456.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5456.patch",
"merged_at": "2023-01-24T11:17:12"
} | As described in #5418
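For context, a small illustrative sketch with toy data (not part of this PR) comparing the two writers as of `datasets` 2.8:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})

# to_json can shard the export across processes via num_proc
ds.to_json("out.jsonl", num_proc=2)

# to_parquet takes no num_proc and writes batches sequentially; that
# sequential loop is what this PR wraps in a tqdm bar
ds.to_parquet("out.parquet")
```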
I also noticed that the `to_json` function supports multiple workers (as sketched above), whereas `to_parquet` does not; is that not possible/not needed with Parquet, or is it something that hasn't been implemented yet? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5456/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5455 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5455/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5455/comments | https://api.github.com/repos/huggingface/datasets/issues/5455/events | https://github.com/huggingface/datasets/pull/5455 | 1,553,040,080 | PR_kwDODunzps5IUvAZ | 5,455 | Single TQDM bar in multi-proc map | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008372 / 0.011353 (-0.002981) | 0.004658 / 0.011008 (-0.006350) | 0.102005 / 0.038508 (0.063497) | 0.029030 / 0.023109 (0.005920) | 0.296968 / 0.275898 (0.021070) | 0.364898 / 0.323480 (0.041418) | 0.006899 / 0.007986 (-0.001087) | 0.003410 / 0.004328 (-0.000919) | 0.079705 / 0.004250 (0.075455) | 0.034265 / 0.037052 (-0.002787) | 0.305695 / 0.258489 (0.047206) | 0.343275 / 0.293841 (0.049434) | 0.033783 / 0.128546 (-0.094763) | 0.011604 / 0.075646 (-0.064042) | 0.322577 / 0.419271 (-0.096694) | 0.040540 / 0.043533 (-0.002993) | 0.299176 / 0.255139 (0.044037) | 0.333157 / 0.283200 (0.049957) | 0.087460 / 0.141683 (-0.054223) | 1.494392 / 1.452155 (0.042237) | 1.539580 / 1.492716 (0.046863) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.176206 / 0.018006 (0.158200) | 0.413702 / 0.000490 (0.413212) | 0.002625 / 0.000200 (0.002425) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023886 / 0.037411 (-0.013525) | 0.099758 / 0.014526 (0.085232) | 0.104349 / 0.176557 (-0.072208) | 0.147138 / 0.737135 (-0.589998) | 0.108682 / 0.296338 (-0.187657) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411957 / 0.215209 (0.196748) | 4.110004 / 2.077655 (2.032349) | 1.820951 / 1.504120 (0.316831) | 1.629726 / 1.541195 (0.088532) | 1.672573 / 1.468490 
(0.204083) | 0.686627 / 4.584777 (-3.898150) | 3.382665 / 3.745712 (-0.363047) | 2.875908 / 5.269862 (-2.393954) | 1.475331 / 4.565676 (-3.090345) | 0.081353 / 0.424275 (-0.342922) | 0.012521 / 0.007607 (0.004914) | 0.516226 / 0.226044 (0.290182) | 5.157658 / 2.268929 (2.888729) | 2.302012 / 55.444624 (-53.142612) | 1.950831 / 6.876477 (-4.925646) | 1.962081 / 2.142072 (-0.179992) | 0.800007 / 4.805227 (-4.005221) | 0.148462 / 6.500664 (-6.352202) | 0.064448 / 0.075469 (-0.011021) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.227977 / 1.841788 (-0.613810) | 13.776087 / 8.074308 (5.701779) | 13.749825 / 10.191392 (3.558433) | 0.137034 / 0.680424 (-0.543390) | 0.028461 / 0.534201 (-0.505740) | 0.392335 / 0.579283 (-0.186948) | 0.397404 / 0.434364 (-0.036960) | 0.450831 / 0.540337 (-0.089507) | 0.533716 / 1.386936 (-0.853220) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006883 / 0.011353 (-0.004470) | 0.004625 / 0.011008 (-0.006383) | 0.099039 / 0.038508 (0.060531) | 0.028068 / 0.023109 (0.004958) | 0.419988 / 0.275898 (0.144090) | 0.449543 / 0.323480 (0.126063) | 0.005232 / 0.007986 (-0.002753) | 0.003527 / 0.004328 (-0.000801) | 0.076308 / 0.004250 (0.072057) | 0.040523 / 0.037052 (0.003471) | 0.420165 / 0.258489 (0.161676) | 0.463220 / 0.293841 (0.169379) | 0.032368 / 0.128546 (-0.096178) | 0.011784 / 0.075646 (-0.063863) | 0.320675 / 0.419271 (-0.098597) | 0.041861 / 0.043533 (-0.001672) | 0.424903 / 0.255139 (0.169764) | 0.443528 / 0.283200 (0.160328) | 0.090869 / 0.141683 (-0.050814) | 1.504757 / 1.452155 (0.052602) | 1.557824 / 1.492716 (0.065108) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224020 / 0.018006 (0.206014) | 0.404090 / 0.000490 (0.403601) | 0.000403 / 0.000200 (0.000203) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024556 / 0.037411 (-0.012855) | 0.101280 / 0.014526 (0.086754) | 0.108017 / 0.176557 (-0.068540) | 0.146679 / 0.737135 (-0.590456) | 0.111468 / 0.296338 (-0.184870) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478955 / 0.215209 (0.263746) | 4.769628 / 2.077655 (2.691973) | 2.473238 / 1.504120 (0.969118) | 2.263588 / 1.541195 (0.722393) | 2.285425 / 1.468490 (0.816935) | 0.699051 / 4.584777 (-3.885726) | 3.390495 / 3.745712 (-0.355217) | 1.858569 / 5.269862 (-3.411293) | 1.162081 / 4.565676 (-3.403596) | 0.083294 / 0.424275 (-0.340981) | 0.012410 / 0.007607 (0.004803) | 0.580786 / 0.226044 (0.354741) | 5.866868 / 2.268929 (3.597940) | 2.944358 / 55.444624 (-52.500266) | 2.596241 / 6.876477 (-4.280235) | 2.664464 / 2.142072 (0.522392) | 0.806751 / 4.805227 (-3.998476) | 0.152389 / 6.500664 (-6.348275) | 0.066945 / 0.075469 (-0.008524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.290545 / 1.841788 (-0.551243) | 14.005727 / 8.074308 (5.931419) | 14.478951 / 10.191392 (4.287559) | 0.127488 / 0.680424 (-0.552935) | 0.016929 / 0.534201 (-0.517272) | 0.378380 / 0.579283 (-0.200904) | 0.387499 / 0.434364 (-0.046865) | 0.440816 / 0.540337 (-0.099522) | 0.525794 / 1.386936 (-0.861142) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#07549c6fcb2dced59d7614b4b8264d54ef573407 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008704 / 0.011353 (-0.002649) | 0.004474 / 0.011008 (-0.006534) | 0.101720 / 0.038508 (0.063212) | 0.030426 / 0.023109 (0.007317) | 0.298944 / 0.275898 (0.023046) | 0.371491 / 0.323480 (0.048011) | 0.007042 / 0.007986 (-0.000944) | 0.003479 / 0.004328 (-0.000850) | 0.078086 / 0.004250 (0.073835) | 0.037014 / 0.037052 (-0.000038) | 0.312964 / 0.258489 (0.054475) | 0.351251 / 0.293841 (0.057410) | 0.033286 / 0.128546 (-0.095260) | 0.011468 / 0.075646 (-0.064179) | 0.321784 / 0.419271 (-0.097488) | 0.040700 / 0.043533 (-0.002832) | 0.303799 / 0.255139 (0.048660) | 0.336982 / 0.283200 (0.053782) | 0.089448 / 0.141683 (-0.052235) | 1.462430 / 1.452155 (0.010275) | 1.524448 / 1.492716 (0.031732) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178390 / 0.018006 (0.160384) | 0.402474 / 0.000490 (0.401984) | 0.002697 / 0.000200 (0.002497) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022679 / 0.037411 (-0.014733) | 0.097759 / 0.014526 (0.083234) | 0.105102 / 0.176557 (-0.071454) | 0.140720 / 0.737135 (-0.596415) | 0.109119 / 0.296338 (-0.187219) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414153 / 0.215209 (0.198944) | 4.131799 / 2.077655 (2.054144) | 1.852325 / 1.504120 (0.348205) | 1.646955 / 1.541195 (0.105760) | 1.662880 / 1.468490 
(0.194390) | 0.693823 / 4.584777 (-3.890954) | 3.378843 / 3.745712 (-0.366869) | 1.861324 / 5.269862 (-3.408538) | 1.156916 / 4.565676 (-3.408761) | 0.082385 / 0.424275 (-0.341890) | 0.012166 / 0.007607 (0.004559) | 0.528690 / 0.226044 (0.302646) | 5.286388 / 2.268929 (3.017459) | 2.319941 / 55.444624 (-53.124684) | 1.959462 / 6.876477 (-4.917014) | 1.995102 / 2.142072 (-0.146970) | 0.817158 / 4.805227 (-3.988069) | 0.149479 / 6.500664 (-6.351185) | 0.065668 / 0.075469 (-0.009801) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.240228 / 1.841788 (-0.601560) | 13.770357 / 8.074308 (5.696048) | 13.940638 / 10.191392 (3.749246) | 0.152589 / 0.680424 (-0.527835) | 0.028498 / 0.534201 (-0.505703) | 0.392579 / 0.579283 (-0.186704) | 0.402843 / 0.434364 (-0.031521) | 0.455429 / 0.540337 (-0.084909) | 0.541090 / 1.386936 (-0.845846) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006692 / 0.011353 (-0.004661) | 0.004514 / 0.011008 (-0.006495) | 0.097058 / 0.038508 (0.058550) | 0.027780 / 0.023109 (0.004671) | 0.415806 / 0.275898 (0.139908) | 0.443079 / 0.323480 (0.119599) | 0.005181 / 0.007986 (-0.002805) | 0.003408 / 0.004328 (-0.000921) | 0.075263 / 0.004250 (0.071013) | 0.038169 / 0.037052 (0.001116) | 0.417292 / 0.258489 (0.158803) | 0.461875 / 0.293841 (0.168034) | 0.032280 / 0.128546 (-0.096266) | 0.011571 / 0.075646 (-0.064075) | 0.319091 / 0.419271 (-0.100181) | 0.048295 / 0.043533 (0.004762) | 0.423619 / 0.255139 (0.168480) | 0.435064 / 0.283200 (0.151864) | 0.094869 / 0.141683 (-0.046814) | 1.523000 / 1.452155 (0.070846) | 1.583097 / 1.492716 (0.090381) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214326 / 0.018006 (0.196320) | 0.391623 / 0.000490 (0.391134) | 0.004602 / 0.000200 (0.004403) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024306 / 0.037411 (-0.013106) | 0.101178 / 0.014526 (0.086652) | 0.108504 / 0.176557 (-0.068053) | 0.144114 / 0.737135 (-0.593022) | 0.111088 / 0.296338 (-0.185250) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.472573 / 0.215209 (0.257364) | 4.748929 / 2.077655 (2.671274) | 2.441602 / 1.504120 (0.937482) | 2.238841 / 1.541195 (0.697647) | 2.303303 / 1.468490 (0.834813) | 0.696618 / 4.584777 (-3.888159) | 3.373867 / 3.745712 (-0.371845) | 2.809009 / 5.269862 (-2.460852) | 1.337240 / 4.565676 (-3.228437) | 0.082682 / 0.424275 (-0.341593) | 0.012834 / 0.007607 (0.005227) | 0.569686 / 0.226044 (0.343642) | 5.723407 / 2.268929 (3.454478) | 2.882944 / 55.444624 (-52.561680) | 2.543530 / 6.876477 (-4.332947) | 2.581856 / 2.142072 (0.439784) | 0.802353 / 4.805227 (-4.002874) | 0.149947 / 6.500664 (-6.350717) | 0.065865 / 0.075469 (-0.009604) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.282146 / 1.841788 (-0.559642) | 13.831344 / 8.074308 (5.757036) | 14.081550 / 10.191392 (3.890157) | 0.141735 / 0.680424 (-0.538689) | 0.016677 / 0.534201 (-0.517524) | 0.378967 / 0.579283 (-0.200316) | 0.383775 / 0.434364 (-0.050589) | 0.432892 / 0.540337 (-0.107446) | 0.518042 / 1.386936 (-0.868894) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#01b4a5a18b56fa7b648b0f131f6b5568b1fd436a \"CML watermark\")\n",
"Omg I love this ! cc @TevenLeScao @thomasw21 this will save your terminals from infinite streams of progress bars",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008680 / 0.011353 (-0.002673) | 0.004597 / 0.011008 (-0.006411) | 0.101154 / 0.038508 (0.062646) | 0.029831 / 0.023109 (0.006722) | 0.300619 / 0.275898 (0.024721) | 0.358259 / 0.323480 (0.034779) | 0.007284 / 0.007986 (-0.000701) | 0.003511 / 0.004328 (-0.000817) | 0.078805 / 0.004250 (0.074555) | 0.037192 / 0.037052 (0.000140) | 0.307241 / 0.258489 (0.048752) | 0.354648 / 0.293841 (0.060807) | 0.033696 / 0.128546 (-0.094851) | 0.011660 / 0.075646 (-0.063986) | 0.324266 / 0.419271 (-0.095006) | 0.043393 / 0.043533 (-0.000140) | 0.297503 / 0.255139 (0.042364) | 0.326037 / 0.283200 (0.042838) | 0.091165 / 0.141683 (-0.050517) | 1.479970 / 1.452155 (0.027816) | 1.508507 / 1.492716 (0.015791) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.179995 / 0.018006 (0.161989) | 0.464282 / 0.000490 (0.463793) | 0.003953 / 0.000200 (0.003753) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022696 / 0.037411 (-0.014715) | 0.099510 / 0.014526 (0.084984) | 0.103741 / 0.176557 (-0.072816) | 0.137837 / 0.737135 (-0.599299) | 0.108776 / 0.296338 (-0.187563) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417034 / 0.215209 (0.201825) | 4.183479 / 2.077655 (2.105824) | 1.855329 / 1.504120 (0.351209) | 1.660675 / 1.541195 (0.119481) | 1.723936 / 1.468490 
(0.255446) | 0.687815 / 4.584777 (-3.896962) | 3.331280 / 3.745712 (-0.414432) | 2.821430 / 5.269862 (-2.448432) | 1.542394 / 4.565676 (-3.023283) | 0.081665 / 0.424275 (-0.342610) | 0.012483 / 0.007607 (0.004875) | 0.524758 / 0.226044 (0.298713) | 5.277285 / 2.268929 (3.008357) | 2.278067 / 55.444624 (-53.166557) | 1.923232 / 6.876477 (-4.953245) | 1.978645 / 2.142072 (-0.163428) | 0.806225 / 4.805227 (-3.999002) | 0.147568 / 6.500664 (-6.353096) | 0.064206 / 0.075469 (-0.011263) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.175079 / 1.841788 (-0.666708) | 13.677443 / 8.074308 (5.603135) | 14.064103 / 10.191392 (3.872711) | 0.167462 / 0.680424 (-0.512962) | 0.028677 / 0.534201 (-0.505524) | 0.399090 / 0.579283 (-0.180193) | 0.398930 / 0.434364 (-0.035433) | 0.461604 / 0.540337 (-0.078733) | 0.540978 / 1.386936 (-0.845958) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006846 / 0.011353 (-0.004507) | 0.004452 / 0.011008 (-0.006556) | 0.076169 / 0.038508 (0.037661) | 0.028290 / 0.023109 (0.005181) | 0.341105 / 0.275898 (0.065207) | 0.381465 / 0.323480 (0.057986) | 0.005038 / 0.007986 (-0.002948) | 0.003298 / 0.004328 (-0.001031) | 0.075794 / 0.004250 (0.071544) | 0.039225 / 0.037052 (0.002173) | 0.342995 / 0.258489 (0.084506) | 0.384878 / 0.293841 (0.091037) | 0.031766 / 0.128546 (-0.096780) | 0.011597 / 0.075646 (-0.064049) | 0.084849 / 0.419271 (-0.334423) | 0.041795 / 0.043533 (-0.001737) | 0.341770 / 0.255139 (0.086631) | 0.383142 / 0.283200 (0.099942) | 0.088854 / 0.141683 (-0.052829) | 1.465116 / 1.452155 (0.012961) | 1.566888 / 1.492716 (0.074171) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225129 / 0.018006 (0.207123) | 0.394290 / 0.000490 (0.393801) | 0.000397 / 0.000200 (0.000197) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025492 / 0.037411 (-0.011919) | 0.100494 / 0.014526 (0.085968) | 0.110587 / 0.176557 (-0.065969) | 0.142715 / 0.737135 (-0.594420) | 0.110962 / 0.296338 (-0.185376) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437240 / 0.215209 (0.222031) | 4.379191 / 2.077655 (2.301536) | 2.055059 / 1.504120 (0.550939) | 1.844643 / 1.541195 (0.303448) | 1.914678 / 1.468490 (0.446188) | 0.695607 / 4.584777 (-3.889170) | 3.353845 / 3.745712 (-0.391867) | 1.837403 / 5.269862 (-3.432459) | 1.155518 / 4.565676 (-3.410158) | 0.082753 / 0.424275 (-0.341523) | 0.012812 / 0.007607 (0.005205) | 0.537304 / 0.226044 (0.311260) | 5.387425 / 2.268929 (3.118497) | 2.506986 / 55.444624 (-52.937638) | 2.159031 / 6.876477 (-4.717445) | 2.187844 / 2.142072 (0.045772) | 0.796880 / 4.805227 (-4.008347) | 0.151850 / 6.500664 (-6.348815) | 0.067577 / 0.075469 (-0.007892) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257779 / 1.841788 (-0.584009) | 13.968842 / 8.074308 (5.894534) | 13.544220 / 10.191392 (3.352828) | 0.149962 / 0.680424 (-0.530462) | 0.016875 / 0.534201 (-0.517326) | 0.394714 / 0.579283 (-0.184570) | 0.387845 / 0.434364 (-0.046519) | 0.481674 / 0.540337 (-0.058664) | 0.569820 / 1.386936 (-0.817116) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#71e50283422a93e805ea76722ce2520d1aae39c2 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009745 / 0.011353 (-0.001607) | 0.005307 / 0.011008 (-0.005702) | 0.104230 / 0.038508 (0.065722) | 0.039745 / 0.023109 (0.016635) | 0.306102 / 0.275898 (0.030204) | 0.384390 / 0.323480 (0.060910) | 0.008265 / 0.007986 (0.000279) | 0.005516 / 0.004328 (0.001187) | 0.076023 / 0.004250 (0.071772) | 0.048266 / 0.037052 (0.011213) | 0.315380 / 0.258489 (0.056891) | 0.365735 / 0.293841 (0.071895) | 0.038222 / 0.128546 (-0.090324) | 0.012397 / 0.075646 (-0.063249) | 0.348964 / 0.419271 (-0.070307) | 0.047668 / 0.043533 (0.004135) | 0.301037 / 0.255139 (0.045898) | 0.322982 / 0.283200 (0.039783) | 0.109307 / 0.141683 (-0.032376) | 1.420777 / 1.452155 (-0.031378) | 1.468290 / 1.492716 (-0.024426) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.262386 / 0.018006 (0.244380) | 0.557151 / 0.000490 (0.556661) | 0.000352 / 0.000200 (0.000152) | 0.000062 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029508 / 0.037411 (-0.007903) | 0.113960 / 0.014526 (0.099434) | 0.123176 / 0.176557 (-0.053381) | 0.161928 / 0.737135 (-0.575207) | 0.129196 / 0.296338 (-0.167142) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407051 / 0.215209 (0.191842) | 4.072550 / 2.077655 (1.994895) | 1.899809 / 1.504120 (0.395689) | 1.751981 / 1.541195 (0.210786) | 1.841361 / 1.468490 
(0.372871) | 0.713908 / 4.584777 (-3.870869) | 3.703339 / 3.745712 (-0.042373) | 2.091283 / 5.269862 (-3.178578) | 1.323810 / 4.565676 (-3.241866) | 0.084691 / 0.424275 (-0.339584) | 0.012685 / 0.007607 (0.005078) | 0.511301 / 0.226044 (0.285257) | 5.109741 / 2.268929 (2.840813) | 2.315073 / 55.444624 (-53.129551) | 2.012746 / 6.876477 (-4.863731) | 2.160074 / 2.142072 (0.018002) | 0.853025 / 4.805227 (-3.952202) | 0.165301 / 6.500664 (-6.335363) | 0.062244 / 0.075469 (-0.013225) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.219727 / 1.841788 (-0.622061) | 15.319675 / 8.074308 (7.245367) | 13.100883 / 10.191392 (2.909491) | 0.173451 / 0.680424 (-0.506973) | 0.029173 / 0.534201 (-0.505028) | 0.440162 / 0.579283 (-0.139122) | 0.429771 / 0.434364 (-0.004593) | 0.518689 / 0.540337 (-0.021648) | 0.608590 / 1.386936 (-0.778346) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007839 / 0.011353 (-0.003514) | 0.005409 / 0.011008 (-0.005599) | 0.076468 / 0.038508 (0.037960) | 0.036568 / 0.023109 (0.013459) | 0.337568 / 0.275898 (0.061670) | 0.379353 / 0.323480 (0.055873) | 0.006208 / 0.007986 (-0.001778) | 0.005971 / 0.004328 (0.001643) | 0.073765 / 0.004250 (0.069514) | 0.056609 / 0.037052 (0.019556) | 0.344578 / 0.258489 (0.086089) | 0.405249 / 0.293841 (0.111408) | 0.037652 / 0.128546 (-0.090894) | 0.012549 / 0.075646 (-0.063097) | 0.087086 / 0.419271 (-0.332186) | 0.056669 / 0.043533 (0.013136) | 0.334121 / 0.255139 (0.078983) | 0.354582 / 0.283200 (0.071383) | 0.113293 / 0.141683 (-0.028390) | 1.437327 / 1.452155 (-0.014828) | 1.574400 / 1.492716 (0.081684) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.325235 / 0.018006 (0.307229) | 0.535405 / 0.000490 (0.534915) | 0.014119 / 0.000200 (0.013919) | 0.000278 / 0.000054 (0.000224) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030826 / 0.037411 (-0.006585) | 0.114077 / 0.014526 (0.099552) | 0.128799 / 0.176557 (-0.047758) | 0.172164 / 0.737135 (-0.564971) | 0.133665 / 0.296338 (-0.162673) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430898 / 0.215209 (0.215689) | 4.285507 / 2.077655 (2.207853) | 2.089767 / 1.504120 (0.585647) | 1.899457 / 1.541195 (0.358262) | 2.042875 / 1.468490 (0.574385) | 0.690575 / 4.584777 (-3.894202) | 3.815905 / 3.745712 (0.070192) | 3.371085 / 5.269862 (-1.898776) | 1.865748 / 4.565676 (-2.699929) | 0.086678 / 0.424275 (-0.337597) | 0.013172 / 0.007607 (0.005565) | 0.552038 / 0.226044 (0.325994) | 5.275093 / 2.268929 (3.006165) | 2.561102 / 55.444624 (-52.883522) | 2.224235 / 6.876477 (-4.652242) | 2.330315 / 2.142072 (0.188243) | 0.845163 / 4.805227 (-3.960064) | 0.170675 / 6.500664 (-6.329989) | 0.068446 / 0.075469 (-0.007023) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.261213 / 1.841788 (-0.580575) | 15.354959 / 8.074308 (7.280651) | 15.034302 / 10.191392 (4.842910) | 0.146704 / 0.680424 (-0.533720) | 0.017986 / 0.534201 (-0.516215) | 0.425978 / 0.579283 (-0.153305) | 0.421806 / 0.434364 (-0.012558) | 0.494844 / 0.540337 (-0.045493) | 0.587870 / 1.386936 (-0.799066) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0933901bb757e9a386095aef0fb11de9f9a04085 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012765 / 0.011353 (0.001412) | 0.006429 / 0.011008 (-0.004579) | 0.133669 / 0.038508 (0.095161) | 0.041420 / 0.023109 (0.018311) | 0.419990 / 0.275898 (0.144092) | 0.505218 / 0.323480 (0.181738) | 0.010189 / 0.007986 (0.002204) | 0.005134 / 0.004328 (0.000805) | 0.100890 / 0.004250 (0.096640) | 0.045639 / 0.037052 (0.008587) | 0.440593 / 0.258489 (0.182103) | 0.476966 / 0.293841 (0.183125) | 0.059270 / 0.128546 (-0.069276) | 0.018625 / 0.075646 (-0.057021) | 0.444957 / 0.419271 (0.025686) | 0.060669 / 0.043533 (0.017136) | 0.415373 / 0.255139 (0.160234) | 0.461810 / 0.283200 (0.178610) | 0.116119 / 0.141683 (-0.025564) | 1.873691 / 1.452155 (0.421536) | 1.939891 / 1.492716 (0.447175) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.259529 / 0.018006 (0.241523) | 0.587213 / 0.000490 (0.586723) | 0.003729 / 0.000200 (0.003529) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032064 / 0.037411 (-0.005347) | 0.140228 / 0.014526 (0.125702) | 0.147139 / 0.176557 (-0.029417) | 0.193731 / 0.737135 (-0.543405) | 0.162126 / 0.296338 (-0.134213) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.639262 / 0.215209 (0.424053) | 6.496491 / 2.077655 (4.418836) | 2.602044 / 1.504120 (1.097924) | 2.245891 / 1.541195 (0.704696) | 2.301321 / 1.468490 
(0.832831) | 1.234088 / 4.584777 (-3.350689) | 5.883315 / 3.745712 (2.137603) | 3.166902 / 5.269862 (-2.102959) | 2.258279 / 4.565676 (-2.307398) | 0.146203 / 0.424275 (-0.278072) | 0.015490 / 0.007607 (0.007883) | 0.800188 / 0.226044 (0.574144) | 8.150866 / 2.268929 (5.881938) | 3.419508 / 55.444624 (-52.025117) | 2.712174 / 6.876477 (-4.164302) | 2.805059 / 2.142072 (0.662987) | 1.421047 / 4.805227 (-3.384180) | 0.254274 / 6.500664 (-6.246390) | 0.083886 / 0.075469 (0.008417) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.651962 / 1.841788 (-0.189826) | 19.453202 / 8.074308 (11.378894) | 24.643881 / 10.191392 (14.452489) | 0.263612 / 0.680424 (-0.416812) | 0.046913 / 0.534201 (-0.487288) | 0.579861 / 0.579283 (0.000578) | 0.695137 / 0.434364 (0.260773) | 0.705479 / 0.540337 (0.165142) | 0.806073 / 1.386936 (-0.580863) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010384 / 0.011353 (-0.000969) | 0.007460 / 0.011008 (-0.003548) | 0.107830 / 0.038508 (0.069322) | 0.036792 / 0.023109 (0.013682) | 0.469585 / 0.275898 (0.193687) | 0.521278 / 0.323480 (0.197798) | 0.007472 / 0.007986 (-0.000513) | 0.007774 / 0.004328 (0.003446) | 0.105405 / 0.004250 (0.101154) | 0.053732 / 0.037052 (0.016680) | 0.486299 / 0.258489 (0.227810) | 0.537067 / 0.293841 (0.243226) | 0.053378 / 0.128546 (-0.075168) | 0.022018 / 0.075646 (-0.053628) | 0.127765 / 0.419271 (-0.291507) | 0.063844 / 0.043533 (0.020311) | 0.479724 / 0.255139 (0.224585) | 0.511243 / 0.283200 (0.228043) | 0.123223 / 0.141683 (-0.018460) | 1.934167 / 1.452155 (0.482013) | 2.003168 / 1.492716 (0.510451) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227670 / 0.018006 (0.209664) | 0.609125 / 0.000490 (0.608635) | 0.004408 / 0.000200 (0.004208) | 0.000147 / 0.000054 (0.000092) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035905 / 0.037411 (-0.001506) | 0.142207 / 0.014526 (0.127681) | 0.154749 / 0.176557 (-0.021808) | 0.216191 / 0.737135 (-0.520944) | 0.156577 / 0.296338 (-0.139761) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.665085 / 0.215209 (0.449876) | 6.510923 / 2.077655 (4.433269) | 2.902438 / 1.504120 (1.398318) | 2.561427 / 1.541195 (1.020232) | 2.669556 / 1.468490 (1.201066) | 1.190340 / 4.584777 (-3.394437) | 5.933066 / 3.745712 (2.187354) | 5.627784 / 5.269862 (0.357922) | 2.971922 / 4.565676 (-1.593755) | 0.140884 / 0.424275 (-0.283391) | 0.015382 / 0.007607 (0.007775) | 0.810441 / 0.226044 (0.584396) | 8.255538 / 2.268929 (5.986609) | 3.819014 / 55.444624 (-51.625611) | 3.222479 / 6.876477 (-3.653998) | 3.181700 / 2.142072 (1.039627) | 1.483403 / 4.805227 (-3.321824) | 0.262726 / 6.500664 (-6.237939) | 0.090252 / 0.075469 (0.014783) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.748566 / 1.841788 (-0.093222) | 19.566894 / 8.074308 (11.492586) | 24.382155 / 10.191392 (14.190763) | 0.260118 / 0.680424 (-0.420305) | 0.028725 / 0.534201 (-0.505476) | 0.564875 / 0.579283 (-0.014408) | 0.666708 / 0.434364 (0.232344) | 0.691165 / 0.540337 (0.150827) | 0.837061 / 1.386936 (-0.549875) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fe6bf908e9f12e0b69b4059c392da8264881525d \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010098 / 0.011353 (-0.001255) | 0.005797 / 0.011008 (-0.005211) | 0.111262 / 0.038508 (0.072754) | 0.039687 / 0.023109 (0.016578) | 0.331081 / 0.275898 (0.055183) | 0.395878 / 0.323480 (0.072398) | 0.009244 / 0.007986 (0.001259) | 0.004498 / 0.004328 (0.000170) | 0.086129 / 0.004250 (0.081879) | 0.046662 / 0.037052 (0.009610) | 0.361926 / 0.258489 (0.103437) | 0.386155 / 0.293841 (0.092314) | 0.043657 / 0.128546 (-0.084889) | 0.013545 / 0.075646 (-0.062101) | 0.383735 / 0.419271 (-0.035537) | 0.055727 / 0.043533 (0.012194) | 0.355356 / 0.255139 (0.100217) | 0.358749 / 0.283200 (0.075550) | 0.123219 / 0.141683 (-0.018463) | 1.707982 / 1.452155 (0.255828) | 1.773342 / 1.492716 (0.280626) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238902 / 0.018006 (0.220896) | 0.495525 / 0.000490 (0.495036) | 0.001742 / 0.000200 (0.001542) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031276 / 0.037411 (-0.006135) | 0.124286 / 0.014526 (0.109760) | 0.136236 / 0.176557 (-0.040321) | 0.180257 / 0.737135 (-0.556879) | 0.141047 / 0.296338 (-0.155292) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.465075 / 0.215209 (0.249865) | 4.543997 / 2.077655 (2.466342) | 2.036632 / 1.504120 (0.532512) | 1.820356 / 1.541195 (0.279161) | 1.860692 / 1.468490 
(0.392202) | 0.807549 / 4.584777 (-3.777227) | 4.400369 / 3.745712 (0.654657) | 2.423372 / 5.269862 (-2.846490) | 1.741338 / 4.565676 (-2.824339) | 0.099457 / 0.424275 (-0.324818) | 0.014464 / 0.007607 (0.006857) | 0.599442 / 0.226044 (0.373398) | 5.867798 / 2.268929 (3.598870) | 2.641859 / 55.444624 (-52.802766) | 2.294246 / 6.876477 (-4.582231) | 2.329639 / 2.142072 (0.187567) | 0.981897 / 4.805227 (-3.823331) | 0.189278 / 6.500664 (-6.311386) | 0.071868 / 0.075469 (-0.003601) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.471800 / 1.841788 (-0.369988) | 17.149150 / 8.074308 (9.074841) | 15.818942 / 10.191392 (5.627550) | 0.174760 / 0.680424 (-0.505664) | 0.033507 / 0.534201 (-0.500694) | 0.511055 / 0.579283 (-0.068228) | 0.517107 / 0.434364 (0.082743) | 0.650813 / 0.540337 (0.110476) | 0.752515 / 1.386936 (-0.634421) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008651 / 0.011353 (-0.002702) | 0.005935 / 0.011008 (-0.005073) | 0.088589 / 0.038508 (0.050081) | 0.038796 / 0.023109 (0.015687) | 0.415430 / 0.275898 (0.139532) | 0.443693 / 0.323480 (0.120213) | 0.006631 / 0.007986 (-0.001354) | 0.004638 / 0.004328 (0.000309) | 0.085779 / 0.004250 (0.081529) | 0.053994 / 0.037052 (0.016942) | 0.408349 / 0.258489 (0.149860) | 0.475441 / 0.293841 (0.181600) | 0.042792 / 0.128546 (-0.085754) | 0.013938 / 0.075646 (-0.061709) | 0.102173 / 0.419271 (-0.317098) | 0.057940 / 0.043533 (0.014407) | 0.408967 / 0.255139 (0.153828) | 0.422741 / 0.283200 (0.139541) | 0.121844 / 0.141683 (-0.019839) | 1.772779 / 1.452155 (0.320625) | 1.837706 / 1.492716 (0.344989) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228896 / 0.018006 (0.210890) | 0.497964 / 0.000490 (0.497475) | 0.004402 / 0.000200 (0.004202) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035626 / 0.037411 (-0.001786) | 0.132021 / 0.014526 (0.117495) | 0.145599 / 0.176557 (-0.030957) | 0.192317 / 0.737135 (-0.544818) | 0.150165 / 0.296338 (-0.146174) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.500216 / 0.215209 (0.285007) | 5.002916 / 2.077655 (2.925262) | 2.502439 / 1.504120 (0.998319) | 2.353019 / 1.541195 (0.811825) | 2.485082 / 1.468490 (1.016592) | 0.827694 / 4.584777 (-3.757083) | 4.569319 / 3.745712 (0.823607) | 3.739820 / 5.269862 (-1.530042) | 2.097857 / 4.565676 (-2.467819) | 0.098636 / 0.424275 (-0.325639) | 0.014608 / 0.007607 (0.007001) | 0.604411 / 0.226044 (0.378366) | 6.131702 / 2.268929 (3.862774) | 3.043988 / 55.444624 (-52.400637) | 2.642427 / 6.876477 (-4.234050) | 2.687223 / 2.142072 (0.545151) | 0.968808 / 4.805227 (-3.836419) | 0.193876 / 6.500664 (-6.306788) | 0.076931 / 0.075469 (0.001462) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.511820 / 1.841788 (-0.329968) | 17.971574 / 8.074308 (9.897265) | 16.512738 / 10.191392 (6.321346) | 0.223702 / 0.680424 (-0.456722) | 0.020191 / 0.534201 (-0.514010) | 0.511045 / 0.579283 (-0.068238) | 0.499813 / 0.434364 (0.065449) | 0.642147 / 0.540337 (0.101810) | 0.756029 / 1.386936 (-0.630907) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1f6c7b9eb4bca89ec90c465623f7a2e6f5251062 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008909 / 0.011353 (-0.002444) | 0.005096 / 0.011008 (-0.005912) | 0.098568 / 0.038508 (0.060060) | 0.034548 / 0.023109 (0.011438) | 0.294762 / 0.275898 (0.018864) | 0.366093 / 0.323480 (0.042613) | 0.007476 / 0.007986 (-0.000510) | 0.003982 / 0.004328 (-0.000347) | 0.075975 / 0.004250 (0.071725) | 0.040499 / 0.037052 (0.003446) | 0.315050 / 0.258489 (0.056561) | 0.351273 / 0.293841 (0.057433) | 0.038327 / 0.128546 (-0.090219) | 0.011943 / 0.075646 (-0.063703) | 0.332148 / 0.419271 (-0.087124) | 0.047648 / 0.043533 (0.004115) | 0.295817 / 0.255139 (0.040678) | 0.322704 / 0.283200 (0.039504) | 0.100830 / 0.141683 (-0.040853) | 1.422162 / 1.452155 (-0.029993) | 1.468972 / 1.492716 (-0.023744) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201164 / 0.018006 (0.183158) | 0.435425 / 0.000490 (0.434935) | 0.001576 / 0.000200 (0.001376) | 0.000218 / 0.000054 (0.000163) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026667 / 0.037411 (-0.010744) | 0.106161 / 0.014526 (0.091636) | 0.115836 / 0.176557 (-0.060720) | 0.151511 / 0.737135 (-0.585624) | 0.122248 / 0.296338 (-0.174091) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395974 / 0.215209 (0.180765) | 3.952958 / 2.077655 (1.875303) | 1.772111 / 1.504120 (0.267991) | 1.581370 / 1.541195 (0.040175) | 1.602811 / 1.468490 
(0.134321) | 0.694072 / 4.584777 (-3.890705) | 3.640238 / 3.745712 (-0.105474) | 2.028865 / 5.269862 (-3.240997) | 1.419182 / 4.565676 (-3.146495) | 0.084078 / 0.424275 (-0.340197) | 0.012248 / 0.007607 (0.004641) | 0.499768 / 0.226044 (0.273723) | 4.997449 / 2.268929 (2.728521) | 2.280711 / 55.444624 (-53.163913) | 1.971701 / 6.876477 (-4.904776) | 1.983248 / 2.142072 (-0.158824) | 0.831030 / 4.805227 (-3.974198) | 0.163008 / 6.500664 (-6.337656) | 0.061887 / 0.075469 (-0.013582) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.191744 / 1.841788 (-0.650043) | 14.424546 / 8.074308 (6.350238) | 14.530127 / 10.191392 (4.338735) | 0.165793 / 0.680424 (-0.514631) | 0.029099 / 0.534201 (-0.505102) | 0.447830 / 0.579283 (-0.131453) | 0.441036 / 0.434364 (0.006672) | 0.554697 / 0.540337 (0.014360) | 0.668854 / 1.386936 (-0.718082) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006825 / 0.011353 (-0.004528) | 0.004998 / 0.011008 (-0.006010) | 0.074197 / 0.038508 (0.035689) | 0.032381 / 0.023109 (0.009272) | 0.335745 / 0.275898 (0.059847) | 0.360474 / 0.323480 (0.036994) | 0.005420 / 0.007986 (-0.002566) | 0.005121 / 0.004328 (0.000792) | 0.074980 / 0.004250 (0.070730) | 0.046392 / 0.037052 (0.009340) | 0.338693 / 0.258489 (0.080204) | 0.383679 / 0.293841 (0.089838) | 0.035380 / 0.128546 (-0.093166) | 0.012197 / 0.075646 (-0.063449) | 0.085738 / 0.419271 (-0.333533) | 0.049990 / 0.043533 (0.006458) | 0.342640 / 0.255139 (0.087501) | 0.355139 / 0.283200 (0.071939) | 0.102992 / 0.141683 (-0.038690) | 1.451900 / 1.452155 (-0.000254) | 1.550919 / 1.492716 (0.058202) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223241 / 0.018006 (0.205235) | 0.436954 / 0.000490 (0.436464) | 0.003319 / 0.000200 (0.003120) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028042 / 0.037411 (-0.009370) | 0.106079 / 0.014526 (0.091554) | 0.122713 / 0.176557 (-0.053843) | 0.156543 / 0.737135 (-0.580593) | 0.122424 / 0.296338 (-0.173914) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439482 / 0.215209 (0.224273) | 4.283112 / 2.077655 (2.205457) | 2.139705 / 1.504120 (0.635585) | 1.940898 / 1.541195 (0.399703) | 2.003906 / 1.468490 (0.535416) | 0.703269 / 4.584777 (-3.881508) | 3.780391 / 3.745712 (0.034679) | 2.079963 / 5.269862 (-3.189898) | 1.330669 / 4.565676 (-3.235007) | 0.086582 / 0.424275 (-0.337693) | 0.012497 / 0.007607 (0.004890) | 0.519329 / 0.226044 (0.293284) | 5.218117 / 2.268929 (2.949189) | 2.635982 / 55.444624 (-52.808643) | 2.301111 / 6.876477 (-4.575366) | 2.341312 / 2.142072 (0.199239) | 0.840157 / 4.805227 (-3.965070) | 0.166174 / 6.500664 (-6.334490) | 0.062890 / 0.075469 (-0.012579) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257672 / 1.841788 (-0.584116) | 14.983374 / 8.074308 (6.909066) | 14.284441 / 10.191392 (4.093049) | 0.176077 / 0.680424 (-0.504347) | 0.017544 / 0.534201 (-0.516657) | 0.429619 / 0.579283 (-0.149664) | 0.426371 / 0.434364 (-0.007993) | 0.534832 / 0.540337 (-0.005506) | 0.643322 / 1.386936 (-0.743614) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0409b1435876fa97b3674b0275285e84b49d83f8 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010622 / 0.011353 (-0.000731) | 0.005856 / 0.011008 (-0.005152) | 0.108608 / 0.038508 (0.070100) | 0.039868 / 0.023109 (0.016759) | 0.327853 / 0.275898 (0.051955) | 0.396721 / 0.323480 (0.073241) | 0.008916 / 0.007986 (0.000930) | 0.004590 / 0.004328 (0.000261) | 0.085020 / 0.004250 (0.080770) | 0.046608 / 0.037052 (0.009555) | 0.356369 / 0.258489 (0.097880) | 0.391142 / 0.293841 (0.097301) | 0.040579 / 0.128546 (-0.087967) | 0.012249 / 0.075646 (-0.063397) | 0.387740 / 0.419271 (-0.031532) | 0.057794 / 0.043533 (0.014262) | 0.335763 / 0.255139 (0.080624) | 0.369847 / 0.283200 (0.086647) | 0.121276 / 0.141683 (-0.020407) | 1.605406 / 1.452155 (0.153251) | 1.709524 / 1.492716 (0.216808) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226688 / 0.018006 (0.208681) | 0.493320 / 0.000490 (0.492831) | 0.002825 / 0.000200 (0.002626) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031874 / 0.037411 (-0.005538) | 0.117365 / 0.014526 (0.102840) | 0.127697 / 0.176557 (-0.048859) | 0.175589 / 0.737135 (-0.561546) | 0.137731 / 0.296338 (-0.158608) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.472563 / 0.215209 (0.257354) | 4.744383 / 2.077655 (2.666728) | 2.152015 / 1.504120 (0.647895) | 1.925398 / 1.541195 (0.384203) | 2.054613 / 1.468490 
(0.586123) | 0.821703 / 4.584777 (-3.763074) | 4.468177 / 3.745712 (0.722465) | 4.687682 / 5.269862 (-0.582179) | 2.379674 / 4.565676 (-2.186003) | 0.101325 / 0.424275 (-0.322950) | 0.014891 / 0.007607 (0.007284) | 0.593161 / 0.226044 (0.367117) | 5.641670 / 2.268929 (3.372741) | 2.460206 / 55.444624 (-52.984419) | 2.131148 / 6.876477 (-4.745329) | 2.351067 / 2.142072 (0.208994) | 0.997634 / 4.805227 (-3.807593) | 0.195338 / 6.500664 (-6.305326) | 0.075540 / 0.075469 (0.000071) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.411585 / 1.841788 (-0.430203) | 17.055689 / 8.074308 (8.981381) | 16.544028 / 10.191392 (6.352636) | 0.180840 / 0.680424 (-0.499584) | 0.034549 / 0.534201 (-0.499652) | 0.510256 / 0.579283 (-0.069027) | 0.525632 / 0.434364 (0.091268) | 0.601206 / 0.540337 (0.060868) | 0.668468 / 1.386936 (-0.718469) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008989 / 0.011353 (-0.002364) | 0.006065 / 0.011008 (-0.004943) | 0.088294 / 0.038508 (0.049786) | 0.040404 / 0.023109 (0.017295) | 0.405622 / 0.275898 (0.129724) | 0.454519 / 0.323480 (0.131039) | 0.006919 / 0.007986 (-0.001067) | 0.004545 / 0.004328 (0.000217) | 0.087023 / 0.004250 (0.082772) | 0.055962 / 0.037052 (0.018910) | 0.400942 / 0.258489 (0.142453) | 0.490670 / 0.293841 (0.196829) | 0.044086 / 0.128546 (-0.084461) | 0.014485 / 0.075646 (-0.061162) | 0.103333 / 0.419271 (-0.315938) | 0.059663 / 0.043533 (0.016130) | 0.404944 / 0.255139 (0.149805) | 0.425763 / 0.283200 (0.142563) | 0.123989 / 0.141683 (-0.017694) | 1.777244 / 1.452155 (0.325089) | 1.879884 / 1.492716 (0.387167) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226440 / 0.018006 (0.208434) | 0.492688 / 0.000490 (0.492198) | 0.004691 / 0.000200 (0.004491) | 0.000110 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035123 / 0.037411 (-0.002288) | 0.134288 / 0.014526 (0.119762) | 0.145542 / 0.176557 (-0.031015) | 0.195372 / 0.737135 (-0.541764) | 0.152551 / 0.296338 (-0.143787) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.468615 / 0.215209 (0.253406) | 4.813363 / 2.077655 (2.735708) | 2.333606 / 1.504120 (0.829486) | 2.107344 / 1.541195 (0.566149) | 2.109109 / 1.468490 (0.640619) | 0.783779 / 4.584777 (-3.800998) | 4.521448 / 3.745712 (0.775736) | 2.290532 / 5.269862 (-2.979329) | 1.553488 / 4.565676 (-3.012189) | 0.088786 / 0.424275 (-0.335489) | 0.013091 / 0.007607 (0.005484) | 0.567165 / 0.226044 (0.341120) | 5.974315 / 2.268929 (3.705386) | 2.815018 / 55.444624 (-52.629606) | 2.488954 / 6.876477 (-4.387522) | 2.461849 / 2.142072 (0.319776) | 0.934487 / 4.805227 (-3.870740) | 0.190209 / 6.500664 (-6.310455) | 0.074811 / 0.075469 (-0.000658) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.513476 / 1.841788 (-0.328311) | 17.902599 / 8.074308 (9.828291) | 14.308027 / 10.191392 (4.116635) | 0.201992 / 0.680424 (-0.478432) | 0.018678 / 0.534201 (-0.515523) | 0.454707 / 0.579283 (-0.124576) | 0.470643 / 0.434364 (0.036279) | 0.612534 / 0.540337 (0.072197) | 0.685773 / 1.386936 (-0.701163) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c4a66da3633a811eb8ea01d23469c41dfec0ffb8 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009385 / 0.011353 (-0.001968) | 0.005220 / 0.011008 (-0.005788) | 0.098722 / 0.038508 (0.060214) | 0.035382 / 0.023109 (0.012273) | 0.297114 / 0.275898 (0.021216) | 0.371443 / 0.323480 (0.047963) | 0.008070 / 0.007986 (0.000084) | 0.004204 / 0.004328 (-0.000125) | 0.075621 / 0.004250 (0.071370) | 0.046015 / 0.037052 (0.008963) | 0.304569 / 0.258489 (0.046080) | 0.345598 / 0.293841 (0.051757) | 0.037946 / 0.128546 (-0.090600) | 0.011972 / 0.075646 (-0.063674) | 0.331993 / 0.419271 (-0.087279) | 0.047250 / 0.043533 (0.003717) | 0.296588 / 0.255139 (0.041449) | 0.316070 / 0.283200 (0.032870) | 0.108211 / 0.141683 (-0.033472) | 1.447619 / 1.452155 (-0.004535) | 1.481243 / 1.492716 (-0.011473) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274860 / 0.018006 (0.256854) | 0.503139 / 0.000490 (0.502649) | 0.003598 / 0.000200 (0.003398) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026752 / 0.037411 (-0.010660) | 0.109008 / 0.014526 (0.094482) | 0.119109 / 0.176557 (-0.057448) | 0.158462 / 0.737135 (-0.578673) | 0.126171 / 0.296338 (-0.170168) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396396 / 0.215209 (0.181187) | 3.963055 / 2.077655 (1.885400) | 1.796308 / 1.504120 (0.292188) | 1.600565 / 1.541195 (0.059370) | 1.742409 / 1.468490 
(0.273919) | 0.690942 / 4.584777 (-3.893835) | 3.713343 / 3.745712 (-0.032369) | 2.066804 / 5.269862 (-3.203058) | 1.292946 / 4.565676 (-3.272730) | 0.084344 / 0.424275 (-0.339931) | 0.012473 / 0.007607 (0.004865) | 0.513109 / 0.226044 (0.287065) | 5.175141 / 2.268929 (2.906213) | 2.266559 / 55.444624 (-53.178066) | 1.935737 / 6.876477 (-4.940740) | 2.028911 / 2.142072 (-0.113161) | 0.831191 / 4.805227 (-3.974036) | 0.163155 / 6.500664 (-6.337509) | 0.063414 / 0.075469 (-0.012055) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.195429 / 1.841788 (-0.646358) | 15.257933 / 8.074308 (7.183625) | 14.358815 / 10.191392 (4.167423) | 0.152677 / 0.680424 (-0.527747) | 0.028890 / 0.534201 (-0.505311) | 0.455342 / 0.579283 (-0.123941) | 0.442602 / 0.434364 (0.008238) | 0.526833 / 0.540337 (-0.013505) | 0.618296 / 1.386936 (-0.768640) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007613 / 0.011353 (-0.003740) | 0.005515 / 0.011008 (-0.005493) | 0.073759 / 0.038508 (0.035251) | 0.033944 / 0.023109 (0.010835) | 0.347764 / 0.275898 (0.071866) | 0.371143 / 0.323480 (0.047664) | 0.005997 / 0.007986 (-0.001988) | 0.004322 / 0.004328 (-0.000006) | 0.073002 / 0.004250 (0.068751) | 0.053051 / 0.037052 (0.015999) | 0.340345 / 0.258489 (0.081856) | 0.383761 / 0.293841 (0.089920) | 0.037734 / 0.128546 (-0.090813) | 0.012815 / 0.075646 (-0.062831) | 0.086998 / 0.419271 (-0.332273) | 0.050165 / 0.043533 (0.006632) | 0.343864 / 0.255139 (0.088725) | 0.356734 / 0.283200 (0.073534) | 0.108955 / 0.141683 (-0.032728) | 1.464558 / 1.452155 (0.012403) | 1.560084 / 1.492716 (0.067368) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.327885 / 0.018006 (0.309878) | 0.515515 / 0.000490 (0.515025) | 0.000439 / 0.000200 (0.000239) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030741 / 0.037411 (-0.006670) | 0.107634 / 0.014526 (0.093108) | 0.127121 / 0.176557 (-0.049436) | 0.164044 / 0.737135 (-0.573092) | 0.129097 / 0.296338 (-0.167242) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.435690 / 0.215209 (0.220481) | 4.350705 / 2.077655 (2.273050) | 2.199597 / 1.504120 (0.695477) | 2.022715 / 1.541195 (0.481521) | 2.265907 / 1.468490 (0.797417) | 0.695817 / 4.584777 (-3.888960) | 3.795207 / 3.745712 (0.049494) | 3.061587 / 5.269862 (-2.208274) | 1.872213 / 4.565676 (-2.693463) | 0.085265 / 0.424275 (-0.339010) | 0.012243 / 0.007607 (0.004636) | 0.547209 / 0.226044 (0.321164) | 5.383626 / 2.268929 (3.114698) | 2.707439 / 55.444624 (-52.737185) | 2.393773 / 6.876477 (-4.482703) | 2.481385 / 2.142072 (0.339312) | 0.826169 / 4.805227 (-3.979059) | 0.166643 / 6.500664 (-6.334021) | 0.065817 / 0.075469 (-0.009652) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.274469 / 1.841788 (-0.567318) | 15.565025 / 8.074308 (7.490717) | 14.254192 / 10.191392 (4.062800) | 0.166785 / 0.680424 (-0.513639) | 0.017830 / 0.534201 (-0.516371) | 0.430406 / 0.579283 (-0.148877) | 0.435655 / 0.434364 (0.001292) | 0.530605 / 0.540337 (-0.009732) | 0.636355 / 1.386936 (-0.750581) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#146983fdc70b9fe2cc38109368e185b6ffa7a05e \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008466 / 0.011353 (-0.002887) | 0.004679 / 0.011008 (-0.006329) | 0.100534 / 0.038508 (0.062025) | 0.029513 / 0.023109 (0.006403) | 0.302866 / 0.275898 (0.026968) | 0.352816 / 0.323480 (0.029336) | 0.006912 / 0.007986 (-0.001074) | 0.003513 / 0.004328 (-0.000815) | 0.078625 / 0.004250 (0.074375) | 0.036725 / 0.037052 (-0.000327) | 0.312135 / 0.258489 (0.053646) | 0.344579 / 0.293841 (0.050738) | 0.033870 / 0.128546 (-0.094677) | 0.011563 / 0.075646 (-0.064083) | 0.318982 / 0.419271 (-0.100290) | 0.043002 / 0.043533 (-0.000531) | 0.301956 / 0.255139 (0.046817) | 0.330798 / 0.283200 (0.047599) | 0.091755 / 0.141683 (-0.049927) | 1.458577 / 1.452155 (0.006422) | 1.532642 / 1.492716 (0.039926) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194853 / 0.018006 (0.176847) | 0.396844 / 0.000490 (0.396354) | 0.004401 / 0.000200 (0.004201) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022971 / 0.037411 (-0.014441) | 0.096595 / 0.014526 (0.082069) | 0.106104 / 0.176557 (-0.070452) | 0.144815 / 0.737135 (-0.592320) | 0.110036 / 0.296338 (-0.186303) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415025 / 0.215209 (0.199816) | 4.138136 / 2.077655 (2.060481) | 1.861253 / 1.504120 (0.357133) | 1.653420 / 1.541195 (0.112226) | 1.703784 / 1.468490 
(0.235294) | 0.698261 / 4.584777 (-3.886516) | 3.357240 / 3.745712 (-0.388472) | 3.025790 / 5.269862 (-2.244072) | 1.637191 / 4.565676 (-2.928485) | 0.085620 / 0.424275 (-0.338655) | 0.012454 / 0.007607 (0.004846) | 0.524708 / 0.226044 (0.298663) | 5.269234 / 2.268929 (3.000306) | 2.290612 / 55.444624 (-53.154012) | 1.936107 / 6.876477 (-4.940370) | 1.968216 / 2.142072 (-0.173856) | 0.810438 / 4.805227 (-3.994789) | 0.154133 / 6.500664 (-6.346531) | 0.064978 / 0.075469 (-0.010491) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.231782 / 1.841788 (-0.610006) | 13.545573 / 8.074308 (5.471264) | 14.558765 / 10.191392 (4.367373) | 0.140763 / 0.680424 (-0.539661) | 0.029259 / 0.534201 (-0.504942) | 0.407776 / 0.579283 (-0.171507) | 0.410244 / 0.434364 (-0.024120) | 0.477313 / 0.540337 (-0.063024) | 0.551465 / 1.386936 (-0.835471) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006272 / 0.011353 (-0.005081) | 0.004397 / 0.011008 (-0.006611) | 0.077496 / 0.038508 (0.038988) | 0.026946 / 0.023109 (0.003837) | 0.342992 / 0.275898 (0.067094) | 0.374407 / 0.323480 (0.050927) | 0.004849 / 0.007986 (-0.003136) | 0.004549 / 0.004328 (0.000220) | 0.076439 / 0.004250 (0.072189) | 0.035829 / 0.037052 (-0.001224) | 0.343483 / 0.258489 (0.084994) | 0.385581 / 0.293841 (0.091740) | 0.031745 / 0.128546 (-0.096801) | 0.011617 / 0.075646 (-0.064030) | 0.087207 / 0.419271 (-0.332064) | 0.042252 / 0.043533 (-0.001281) | 0.343223 / 0.255139 (0.088084) | 0.368707 / 0.283200 (0.085508) | 0.093259 / 0.141683 (-0.048424) | 1.506904 / 1.452155 (0.054750) | 1.567583 / 1.492716 (0.074867) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.158962 / 0.018006 (0.140955) | 0.395982 / 0.000490 (0.395492) | 0.003604 / 0.000200 (0.003404) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025003 / 0.037411 (-0.012408) | 0.101176 / 0.014526 (0.086650) | 0.104494 / 0.176557 (-0.072062) | 0.140414 / 0.737135 (-0.596722) | 0.108398 / 0.296338 (-0.187941) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436849 / 0.215209 (0.221640) | 4.369428 / 2.077655 (2.291774) | 2.070613 / 1.504120 (0.566493) | 1.867511 / 1.541195 (0.326317) | 1.866589 / 1.468490 (0.398099) | 0.700036 / 4.584777 (-3.884741) | 3.407513 / 3.745712 (-0.338199) | 3.022409 / 5.269862 (-2.247453) | 1.581423 / 4.565676 (-2.984253) | 0.083425 / 0.424275 (-0.340850) | 0.012380 / 0.007607 (0.004773) | 0.535087 / 0.226044 (0.309043) | 5.374814 / 2.268929 (3.105886) | 2.504841 / 55.444624 (-52.939784) | 2.166484 / 6.876477 (-4.709993) | 2.166363 / 2.142072 (0.024291) | 0.803692 / 4.805227 (-4.001535) | 0.150873 / 6.500664 (-6.349791) | 0.066253 / 0.075469 (-0.009216) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291256 / 1.841788 (-0.550532) | 13.827843 / 8.074308 (5.753535) | 13.839334 / 10.191392 (3.647942) | 0.153530 / 0.680424 (-0.526894) | 0.016896 / 0.534201 (-0.517305) | 0.379937 / 0.579283 (-0.199346) | 0.396241 / 0.434364 (-0.038123) | 0.461808 / 0.540337 (-0.078530) | 0.553023 / 1.386936 (-0.833913) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#779ddc5c7ebbd406b2a6c9092c3f455a2cc7f5e7 \"CML watermark\")\n"
] | 2023-01-23T12:49:40 | 2023-02-13T20:23:34 | 2023-02-13T20:16:38 | COLLABORATOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5455",
"html_url": "https://github.com/huggingface/datasets/pull/5455",
"diff_url": "https://github.com/huggingface/datasets/pull/5455.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5455.patch",
"merged_at": "2023-02-13T20:16:38"
} | Use the "shard generator approach with periodic progress updates" (already used in `save_to_disk` and multiprocess `load_dataset`) in `Dataset.map` to enable a single tqdm progress bar in multiprocessing mode.
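For illustration, the pattern could look roughly like this (a minimal sketch; names such as `process_shard` and `update_every` are assumptions, not the PR's actual code):

```python
import multiprocessing as mp

from tqdm import tqdm


def process_shard(shard, queue, update_every=100):
    # Each worker maps over its own shard and periodically reports progress.
    done = 0
    for example in shard:
        _ = example  # the user-provided map function would be applied here
        done += 1
        if done % update_every == 0:
            queue.put(update_every)
    if done % update_every:
        queue.put(done % update_every)  # flush the remainder


if __name__ == "__main__":
    shards = [range(1_000) for _ in range(4)]
    total = sum(len(shard) for shard in shards)
    with mp.Manager() as manager:
        queue = manager.Queue()
        workers = [mp.Process(target=process_shard, args=(shard, queue)) for shard in shards]
        for worker in workers:
            worker.start()
        # A single tqdm bar in the main process aggregates all workers' updates.
        with tqdm(total=total) as pbar:
            remaining = total
            while remaining > 0:
                n = queue.get()
                pbar.update(n)
                remaining -= n
        for worker in workers:
            worker.join()
```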
Closes https://github.com/huggingface/datasets/issues/771, closes https://github.com/huggingface/datasets/issues/3177
TODO:
- [x] cleaner refactor of the `_map_single` decorators now that they also have to wrap generator functions (decorate `map` instead of `map_single` with the `transmit_` decorators and predict the shards' fingerprint in `map`) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5455/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5455/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5454 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5454/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5454/comments | https://api.github.com/repos/huggingface/datasets/issues/5454/events | https://github.com/huggingface/datasets/issues/5454 | 1,552,890,419 | I_kwDODunzps5cjzoz | 5,454 | Save and resume the state of a DataLoader | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | null | [] | null | [
"Something that'd be nice to have is \"manual update of state\". One of the learning from training LLMs is the ability to skip some batches whenever we notice huge spike might be handy.",
"Your outline spec is very sound and clear, @lhoestq - thank you!\r\n\r\n@thomasw21, indeed that would be a wonderful extra feature. In Megatron-Deepspeed we manually drained the dataloader for the range we wanted. I wasn't very satisfied with the way we did it, since its behavior would change if you were to do multiple range skips. I think it should remember all the ranges it skipped and not just skip the last range - since otherwise the data is inconsistent (but we probably should discuss this in a separate issue not to derail this much bigger one).",
"Hi there! I think this is a critical issue and have an urgent need for it, in my attempt to train on a super large-scale dataset using `datasets`. It is impossible to resume a time-consuming (like one month) experiment by iterating all seen data again, which could possibly cost several days.\r\n\r\n@stas00 @thomasw21 @lhoestq Any updates on this problem after 1 year passed?",
"any update?",
"No update so far, I wonder if someone implemented a resumable pytorch Sampler somwhere.\r\n\r\nThen regarding resuming a streaming dataset, we'd first like to have an efficient way to skip shards automatically but this is not implemented yet",
"I opened a draft here for IterableDataset: https://github.com/huggingface/datasets/pull/6658\r\n\r\n\r\n\r\n```python\r\n\"\"\"Requires https://github.com/huggingface/datasets/pull/6658 (WIP)\"\"\"\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data import DataLoader\r\n\r\nds = load_dataset(..., streaming=True)\r\n# ds = ds.map(tokenize)\r\n# ds = ds.shuffle(seed=42, buffer_size=1000)\r\n\r\n# Init the dataset state_dict, or load it from a checkpoint\r\ndataset_state_dict = ds.state_dict()\r\n\r\n# Resumable training loop\r\nds.load_state_dict(dataset_state_dict)\r\ndataloader = DataLoader(ds, batch_size=batch_size)\r\nfor step, batch in enumerate(dataloader):\r\n ...\r\n if step % save_steps == 0:\r\n dataset_state_dict = ds.state_dict()\r\n```",
"Hi @lhoestq - can you provide more information and how to implement on saving and restoring vanilla DataLoader states with map-style datasets?\r\n\r\n",
"For now the easiest is probably to use the vanilla DataLoader only for batching and multiprocessing, and implement the resuming logic using a `Dataset` (it has `.select()` to skip examples) and a `dataset_state_dict`:\r\n\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data import DataLoader\r\n\r\nds = load_dataset(...)\r\n# ds = ds.map(tokenize)\r\n# ds = ds.shuffle(seed=42)\r\n\r\n# Init the dataset state_dict, or load it from a checkpoint\r\ndataset_state_dict = {\"step\": 0} \r\n\r\n# Resumable training loop\r\nstart_step = dataset_state_dict[\"step\"]\r\ndataloader = DataLoader(ds.select(range(start_step * batch_size, len(ds))), batch_size=batch_size)\r\nfor step, batch in enumerate(dataloader, start=start_step):\r\n ...\r\n if step % save_steps == 0:\r\n dataset_state_dict = {\"step\": step}\r\n```",
"Hello, I found a similar implementation online that seems to solve your problem. https://github.com/facebookresearch/vissl/blob/main/vissl/data/data_helper.py#L93\r\nit looks like we can set_start_iter in StatefulDistributedSampler to implement the stateful resume requirement we want.\r\n\r\n",
"Hi y'all, @lhoestq I wanted to flag that we currently have a StatefulDataLoader in `pytorch/data/torchdata` that has state_dict/load_state_dict methods, which will call a dataset's state_dict/load_state_dict methods but also handle multiprocessing under the hood. Any chance we can collaborate on this and try to get them to work well together? Please have a look here for some basic examples: https://github.com/pytorch/data/tree/main/torchdata/stateful_dataloader#saving-and-loading-state ",
"Fantastic ! This will help pushing our IterableDataset state_dict implementation at https://github.com/huggingface/datasets/pull/6658 :) I'll check if there is anything missing to maker them work together, and add tests and some docs referring to the StatefulDataLoader :)",
"Ah I just saw this disclaimer in the torchdata README and it feels like people should not rely on it. Should the StatefulDataLoader live elsewhere @andrewkho ?\r\n\r\n> ⚠️ As of July 2023, we have paused active development on TorchData and have paused new releases. We have learnt a lot from building it and hearing from users, but also believe we need to re-evaluate the technical design and approach given how much the industry has changed since we began the project. During the rest of 2023 we will be re-evaluating our plans in this space. Please reach out if you suggestions or comments (please use https://github.com/pytorch/data/issues/1196 for feedback).",
"@lhoestq Good find, we are in the midst of updating this disclaimer as we're re-starting development and regular releases, though our approach will be to iterate on DL V1 (ie StatefulDataLoader) instead of continuing development on datapipes+DLV2. Let's discuss on a call at some point to figure out the best path forward! ",
"As a heads up, `IterableDataset` state_dict has been added in https://github.com/huggingface/datasets/pull/6658\r\n\r\n...and it works out of the box with the `torchdata` `StatefulDataLoader` :)\r\n\r\nSee the docs at https://huggingface.co/docs/datasets/main/en/use_with_pytorch#checkpoint-and-resume"
] | 2023-01-23T10:58:54 | 2024-07-22T11:14:18 | null | MEMBER | null | null | null | It would be nice, when using `datasets` with a PyTorch DataLoader, to be able to resume a training run from a DataLoader state (e.g. to resume a run that crashed).
What I have in mind (but lmk if you have other ideas or comments):
For map-style datasets, this requires a PyTorch Sampler whose state can be saved and reloaded per node and worker (a minimal sketch of such a sampler is shown below).
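For illustration, such a sampler might look roughly like this (a single-process sketch; all names here are assumptions, not an existing API, and it ignores the per-node/per-worker sharding mentioned above):

```python
import torch
from torch.utils.data import Sampler


class ResumableSampler(Sampler):
    """Sketch of a sampler whose position can be checkpointed and restored."""

    def __init__(self, data_len: int, seed: int = 0):
        self.data_len = data_len
        self.seed = seed
        self.epoch = 0
        self.start_index = 0  # position to resume from within the current epoch

    def __iter__(self):
        g = torch.Generator()
        g.manual_seed(self.seed + self.epoch)  # deterministic per-epoch shuffle
        indices = torch.randperm(self.data_len, generator=g).tolist()
        for i in range(self.start_index, self.data_len):
            self.start_index = i + 1
            yield indices[i]
        self.start_index = 0
        self.epoch += 1

    def __len__(self):
        return self.data_len

    def state_dict(self):
        return {"epoch": self.epoch, "start_index": self.start_index, "seed": self.seed}

    def load_state_dict(self, state):
        self.epoch = state["epoch"]
        self.start_index = state["start_index"]
        self.seed = state["seed"]
```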
For iterable datasets, this requires saving the state of the dataset iterator, which includes:
- the current shard idx and row position in the current shard
- the epoch number
- the rng state
- the shuffle buffer
Right now you can already resume the data loading of an iterable dataset by using `IterableDataset.skip`, but it takes a lot of time because it re-iterates over all the past data until it reaches the resuming point.
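For example (the dataset name and `num_seen` below are placeholders, not from this issue):

```python
from datasets import load_dataset

ds = load_dataset("allenai/c4", "en", split="train", streaming=True)
num_seen = 1_000_000  # e.g. restored from a training checkpoint
for example in ds.skip(num_seen):  # re-reads and discards the first `num_seen` examples, hence the slowness
    ...
```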
cc @stas00 @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5454/reactions",
"total_count": 10,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 7,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5454/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5453 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5453/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5453/comments | https://api.github.com/repos/huggingface/datasets/issues/5453/events | https://github.com/huggingface/datasets/pull/5453 | 1,552,727,425 | PR_kwDODunzps5ITraa | 5,453 | Fix base directory while extracting insecure TAR files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008215 / 0.011353 (-0.003138) | 0.004510 / 0.011008 (-0.006498) | 0.099270 / 0.038508 (0.060761) | 0.028682 / 0.023109 (0.005573) | 0.332726 / 0.275898 (0.056827) | 0.371025 / 0.323480 (0.047545) | 0.006665 / 0.007986 (-0.001320) | 0.003329 / 0.004328 (-0.001000) | 0.078509 / 0.004250 (0.074259) | 0.032388 / 0.037052 (-0.004664) | 0.348540 / 0.258489 (0.090051) | 0.382212 / 0.293841 (0.088371) | 0.033307 / 0.128546 (-0.095239) | 0.011642 / 0.075646 (-0.064004) | 0.322573 / 0.419271 (-0.096699) | 0.041297 / 0.043533 (-0.002236) | 0.322710 / 0.255139 (0.067571) | 0.361593 / 0.283200 (0.078394) | 0.082276 / 0.141683 (-0.059407) | 1.481932 / 1.452155 (0.029777) | 1.531677 / 1.492716 (0.038961) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194964 / 0.018006 (0.176958) | 0.406002 / 0.000490 (0.405512) | 0.001015 / 0.000200 (0.000815) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023317 / 0.037411 (-0.014095) | 0.097231 / 0.014526 (0.082705) | 0.103898 / 0.176557 (-0.072659) | 0.139864 / 0.737135 (-0.597271) | 0.106785 / 0.296338 (-0.189554) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419036 / 0.215209 (0.203827) | 4.193985 / 2.077655 (2.116330) | 1.879069 / 1.504120 (0.374949) | 1.675384 / 1.541195 (0.134190) | 1.696225 / 1.468490 
(0.227735) | 0.695257 / 4.584777 (-3.889520) | 3.437971 / 3.745712 (-0.307741) | 2.656037 / 5.269862 (-2.613824) | 1.463320 / 4.565676 (-3.102356) | 0.082575 / 0.424275 (-0.341700) | 0.012593 / 0.007607 (0.004986) | 0.526643 / 0.226044 (0.300599) | 5.278366 / 2.268929 (3.009437) | 2.288106 / 55.444624 (-53.156518) | 1.954875 / 6.876477 (-4.921602) | 1.950641 / 2.142072 (-0.191431) | 0.808289 / 4.805227 (-3.996938) | 0.148790 / 6.500664 (-6.351875) | 0.064775 / 0.075469 (-0.010694) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.215219 / 1.841788 (-0.626569) | 13.551467 / 8.074308 (5.477159) | 13.841547 / 10.191392 (3.650155) | 0.153610 / 0.680424 (-0.526814) | 0.028308 / 0.534201 (-0.505893) | 0.397087 / 0.579283 (-0.182196) | 0.401724 / 0.434364 (-0.032640) | 0.458042 / 0.540337 (-0.082296) | 0.544955 / 1.386936 (-0.841981) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006321 / 0.011353 (-0.005032) | 0.004336 / 0.011008 (-0.006673) | 0.097196 / 0.038508 (0.058688) | 0.026933 / 0.023109 (0.003824) | 0.416520 / 0.275898 (0.140622) | 0.450703 / 0.323480 (0.127223) | 0.004831 / 0.007986 (-0.003155) | 0.003252 / 0.004328 (-0.001076) | 0.074981 / 0.004250 (0.070730) | 0.036136 / 0.037052 (-0.000917) | 0.423166 / 0.258489 (0.164677) | 0.460936 / 0.293841 (0.167095) | 0.031859 / 0.128546 (-0.096687) | 0.011500 / 0.075646 (-0.064146) | 0.318197 / 0.419271 (-0.101074) | 0.041472 / 0.043533 (-0.002061) | 0.419227 / 0.255139 (0.164088) | 0.444712 / 0.283200 (0.161512) | 0.088841 / 0.141683 (-0.052841) | 1.497237 / 1.452155 (0.045083) | 1.572111 / 1.492716 (0.079395) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239261 / 0.018006 (0.221255) | 0.400358 / 0.000490 (0.399868) | 0.003460 / 0.000200 (0.003261) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024016 / 0.037411 (-0.013395) | 0.098414 / 0.014526 (0.083888) | 0.107220 / 0.176557 (-0.069337) | 0.143538 / 0.737135 (-0.593598) | 0.108607 / 0.296338 (-0.187731) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.473896 / 0.215209 (0.258687) | 4.740386 / 2.077655 (2.662731) | 2.458046 / 1.504120 (0.953926) | 2.260895 / 1.541195 (0.719700) | 2.280218 / 1.468490 (0.811728) | 0.694843 / 4.584777 (-3.889934) | 3.349795 / 3.745712 (-0.395917) | 1.846970 / 5.269862 (-3.422892) | 1.151481 / 4.565676 (-3.414195) | 0.082054 / 0.424275 (-0.342221) | 0.012664 / 0.007607 (0.005057) | 0.573400 / 0.226044 (0.347355) | 5.750648 / 2.268929 (3.481720) | 2.904257 / 55.444624 (-52.540367) | 2.555181 / 6.876477 (-4.321295) | 2.595830 / 2.142072 (0.453758) | 0.799580 / 4.805227 (-4.005647) | 0.151088 / 6.500664 (-6.349576) | 0.066639 / 0.075469 (-0.008831) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251413 / 1.841788 (-0.590375) | 13.743368 / 8.074308 (5.669060) | 13.808729 / 10.191392 (3.617337) | 0.144765 / 0.680424 (-0.535659) | 0.016606 / 0.534201 (-0.517594) | 0.376503 / 0.579283 (-0.202780) | 0.381510 / 0.434364 (-0.052854) | 0.440295 / 0.540337 (-0.100043) | 0.524248 / 1.386936 (-0.862688) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#eea1226779993687845da5ecd264cf047e46a128 \"CML watermark\")\n",
"Thanks a lot, @albertvillanova - I validated that your fix solves the original problem!"
] | 2023-01-23T08:57:40 | 2023-01-24T01:34:20 | 2023-01-23T10:10:42 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5453",
"html_url": "https://github.com/huggingface/datasets/pull/5453",
"diff_url": "https://github.com/huggingface/datasets/pull/5453.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5453.patch",
"merged_at": "2023-01-23T10:10:42"
} | This PR fixes the extraction of insecure TAR files by changing the base path against which TAR members are compared:
- from: "."
- to: `output_path`
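Concretely, the general pattern looks roughly like this (an illustrative sketch, not the exact code in this PR):

```python
import os
import tarfile


def is_within_directory(directory: str, target: str) -> bool:
    # Resolve both paths so ".." components and absolute member names cannot escape `directory`.
    abs_directory = os.path.realpath(directory)
    abs_target = os.path.realpath(target)
    return os.path.commonpath([abs_directory, abs_target]) == abs_directory


def safe_extract(tar: tarfile.TarFile, output_path: str) -> None:
    for member in tar.getmembers():
        member_path = os.path.join(output_path, member.name)
        # Compare against `output_path` (the extraction directory), not the cwd (".").
        if not is_within_directory(output_path, member_path):
            raise ValueError(f"Blocked path traversal in TAR member: {member.name}")
    tar.extractall(output_path)
```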
This PR also adds tests for extracting insecure TAR files.
Related to:
- #5441
- #5452
@stas00 please note this PR addresses just one of the issues you pointed out: the use of the cwd by the extractor. The other issues (actionable error messages, raise instead of log error) should be addressed in other PRs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5453/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5452 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5452/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5452/comments | https://api.github.com/repos/huggingface/datasets/issues/5452/events | https://github.com/huggingface/datasets/pull/5452 | 1,552,655,939 | PR_kwDODunzps5ITcA3 | 5,452 | Swap log messages for symbolic/hard links in tar extractor | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011848 / 0.011353 (0.000495) | 0.006988 / 0.011008 (-0.004020) | 0.138078 / 0.038508 (0.099570) | 0.040310 / 0.023109 (0.017201) | 0.411857 / 0.275898 (0.135959) | 0.509496 / 0.323480 (0.186016) | 0.010695 / 0.007986 (0.002709) | 0.005275 / 0.004328 (0.000946) | 0.107157 / 0.004250 (0.102907) | 0.050987 / 0.037052 (0.013935) | 0.432387 / 0.258489 (0.173898) | 0.495136 / 0.293841 (0.201295) | 0.055273 / 0.128546 (-0.073273) | 0.019573 / 0.075646 (-0.056074) | 0.460356 / 0.419271 (0.041084) | 0.060916 / 0.043533 (0.017383) | 0.426140 / 0.255139 (0.171002) | 0.430461 / 0.283200 (0.147261) | 0.124569 / 0.141683 (-0.017114) | 1.989404 / 1.452155 (0.537250) | 1.942052 / 1.492716 (0.449335) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287233 / 0.018006 (0.269227) | 0.606056 / 0.000490 (0.605566) | 0.004435 / 0.000200 (0.004235) | 0.000144 / 0.000054 (0.000090) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032353 / 0.037411 (-0.005058) | 0.124237 / 0.014526 (0.109711) | 0.143280 / 0.176557 (-0.033276) | 0.182081 / 0.737135 (-0.555055) | 0.148085 / 0.296338 (-0.148253) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.613550 / 0.215209 (0.398341) | 6.172421 / 2.077655 (4.094766) | 2.466018 / 1.504120 (0.961898) | 2.166433 / 1.541195 (0.625238) | 2.192511 / 1.468490 
(0.724021) | 1.248777 / 4.584777 (-3.336000) | 5.746150 / 3.745712 (2.000438) | 3.097184 / 5.269862 (-2.172678) | 2.078176 / 4.565676 (-2.487501) | 0.144351 / 0.424275 (-0.279924) | 0.014830 / 0.007607 (0.007223) | 0.761699 / 0.226044 (0.535655) | 7.713201 / 2.268929 (5.444272) | 3.359647 / 55.444624 (-52.084977) | 2.652595 / 6.876477 (-4.223882) | 2.721952 / 2.142072 (0.579880) | 1.493036 / 4.805227 (-3.312192) | 0.252336 / 6.500664 (-6.248328) | 0.082906 / 0.075469 (0.007436) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.643887 / 1.841788 (-0.197901) | 18.762775 / 8.074308 (10.688466) | 22.003583 / 10.191392 (11.812191) | 0.256361 / 0.680424 (-0.424062) | 0.048048 / 0.534201 (-0.486153) | 0.601971 / 0.579283 (0.022688) | 0.712801 / 0.434364 (0.278438) | 0.684473 / 0.540337 (0.144136) | 0.802566 / 1.386936 (-0.584370) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010410 / 0.011353 (-0.000943) | 0.006719 / 0.011008 (-0.004289) | 0.132862 / 0.038508 (0.094354) | 0.036973 / 0.023109 (0.013863) | 0.470925 / 0.275898 (0.195027) | 0.502864 / 0.323480 (0.179384) | 0.007447 / 0.007986 (-0.000539) | 0.005629 / 0.004328 (0.001301) | 0.091985 / 0.004250 (0.087734) | 0.057537 / 0.037052 (0.020485) | 0.458362 / 0.258489 (0.199873) | 0.518324 / 0.293841 (0.224483) | 0.056540 / 0.128546 (-0.072007) | 0.021266 / 0.075646 (-0.054380) | 0.448289 / 0.419271 (0.029018) | 0.064211 / 0.043533 (0.020678) | 0.492596 / 0.255139 (0.237457) | 0.495030 / 0.283200 (0.211830) | 0.121858 / 0.141683 (-0.019825) | 1.823821 / 1.452155 (0.371667) | 2.012165 / 1.492716 (0.519449) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296252 / 0.018006 (0.278245) | 0.601688 / 0.000490 (0.601198) | 0.006369 / 0.000200 (0.006169) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035821 / 0.037411 (-0.001590) | 0.132722 / 0.014526 (0.118196) | 0.141819 / 0.176557 (-0.034738) | 0.205115 / 0.737135 (-0.532020) | 0.148917 / 0.296338 (-0.147422) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.678207 / 0.215209 (0.462998) | 6.969918 / 2.077655 (4.892263) | 3.077831 / 1.504120 (1.573711) | 2.689296 / 1.541195 (1.148102) | 2.706462 / 1.468490 (1.237972) | 1.249125 / 4.584777 (-3.335652) | 5.793917 / 3.745712 (2.048205) | 3.137565 / 5.269862 (-2.132297) | 2.056880 / 4.565676 (-2.508796) | 0.151918 / 0.424275 (-0.272357) | 0.015029 / 0.007607 (0.007422) | 0.833975 / 0.226044 (0.607930) | 8.575649 / 2.268929 (6.306720) | 3.812115 / 55.444624 (-51.632509) | 3.124219 / 6.876477 (-3.752258) | 3.178645 / 2.142072 (1.036572) | 1.488260 / 4.805227 (-3.316967) | 0.268239 / 6.500664 (-6.232425) | 0.089463 / 0.075469 (0.013993) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.645461 / 1.841788 (-0.196327) | 19.074412 / 8.074308 (11.000104) | 21.626726 / 10.191392 (11.435334) | 0.210525 / 0.680424 (-0.469899) | 0.032166 / 0.534201 (-0.502035) | 0.555572 / 0.579283 (-0.023711) | 0.654667 / 0.434364 (0.220303) | 0.632471 / 0.540337 (0.092133) | 0.756510 / 1.386936 (-0.630426) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6681c36bbaae9b8b1daa3dbbd4a96b35aaae271b \"CML watermark\")\n"
] | 2023-01-23T07:53:38 | 2023-01-23T09:40:55 | 2023-01-23T08:31:17 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5452",
"html_url": "https://github.com/huggingface/datasets/pull/5452",
"diff_url": "https://github.com/huggingface/datasets/pull/5452.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5452.patch",
"merged_at": "2023-01-23T08:31:17"
} | The log messages do not match their `if` conditions: the messages logged for symbolic links and for hard links are swapped. This PR swaps them back so that each message matches the member type being checked.
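For illustration, the shape of the bug was roughly the following (a sketch, not the repository's exact code):

```python
import logging
import tarfile

logger = logging.getLogger(__name__)


def log_skipped_link(member: tarfile.TarInfo) -> None:
    # Before the fix, each branch logged the message meant for the other link type.
    if member.issym():
        logger.error(f"Hard link {member.name} skipped.")      # wrong: this member is a symlink
    elif member.islnk():
        logger.error(f"Symbolic link {member.name} skipped.")  # wrong: this member is a hard link
```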
Found while investigating:
- #5441
CC: @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5452/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5452/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5451 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5451/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5451/comments | https://api.github.com/repos/huggingface/datasets/issues/5451/events | https://github.com/huggingface/datasets/issues/5451 | 1,552,336,300 | I_kwDODunzps5chsWs | 5,451 | ImageFolder BadZipFile: Bad offset for central directory | {
"login": "hmartiro",
"id": 1524208,
"node_id": "MDQ6VXNlcjE1MjQyMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1524208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hmartiro",
"html_url": "https://github.com/hmartiro",
"followers_url": "https://api.github.com/users/hmartiro/followers",
"following_url": "https://api.github.com/users/hmartiro/following{/other_user}",
"gists_url": "https://api.github.com/users/hmartiro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hmartiro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hmartiro/subscriptions",
"organizations_url": "https://api.github.com/users/hmartiro/orgs",
"repos_url": "https://api.github.com/users/hmartiro/repos",
"events_url": "https://api.github.com/users/hmartiro/events{/privacy}",
"received_events_url": "https://api.github.com/users/hmartiro/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Could you share the full stack trace ? Which dataset did you try to load ?\r\n\r\nit may be related to https://github.com/huggingface/datasets/pull/5640",
"The `BadZipFile` error means the ZIP file is corrupted, so I'm closing this issue as it's not directly related to `datasets`.",
"For others that find this issue following a `BadZipFile` error, I had the same problem because I had a file in a folder dataset `my-image.target` and the datasets library was incorrectly determining that the (PNG) file was a zip archive. When it tried to extract the file, this error occurred. \r\n\r\nUpdating to `datasets==2.12.0` fixed the problem for me."
] | 2023-01-22T23:50:12 | 2023-05-23T10:35:48 | 2023-02-10T16:31:36 | NONE | null | null | null | ### Describe the bug
I'm getting the following exception:
```
lib/python3.10/zipfile.py:1353 in _RealGetContents │
│ │
│ 1350 │ │ # self.start_dir: Position of start of central directory │
│ 1351 │ │ self.start_dir = offset_cd + concat │
│ 1352 │ │ if self.start_dir < 0: │
│ ❱ 1353 │ │ │ raise BadZipFile("Bad offset for central directory") │
│ 1354 │ │ fp.seek(self.start_dir, 0) │
│ 1355 │ │ data = fp.read(size_cd) │
│ 1356 │ │ fp = io.BytesIO(data) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
BadZipFile: Bad offset for central directory
Extracting data files: 35%|█████████████████▊ | 38572/110812 [00:10<00:20, 3576.26it/s]
```
### Steps to reproduce the bug
```
from datasets import load_dataset  # import was missing from the original snippet

# `args` comes from the reporter's own argparse namespace
dataset = load_dataset(
    args.dataset_name,
    args.dataset_config_name,
    cache_dir=args.cache_dir,
)
```
### Expected behavior
loads the dataset
### Environment info
datasets==2.8.0
Python 3.10.8
Linux 129-146-3-202 5.15.0-52-generic #58~20.04.1-Ubuntu SMP Thu Oct 13 13:09:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5451/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5450 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5450/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5450/comments | https://api.github.com/repos/huggingface/datasets/issues/5450/events | https://github.com/huggingface/datasets/issues/5450 | 1,551,109,365 | I_kwDODunzps5cdAz1 | 5,450 | to_tf_dataset with a TF collator causes bizarrely persistent slowdown | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"wtf",
"Couldn't find what's causing this, this will need more investigation",
"A possible hint: The function it seems to be spending a lot of time in (when iterating over the original dataset) is `_get_mp` in the PIL JPEG decoder: \r\n![image](https://user-images.githubusercontent.com/12866554/214057267-c889f05e-efaf-4036-b805-c5381fa62f4a.png)\r\n",
"If \"mp\" is multiprocessing, this might suggest some kind of negative interaction between the JPEG decoder and TF's handling of processes/threads. Note that we haven't merged the parallel `to_tf_dataset` PR yet, so it's not caused by that PR!",
"Update: MP isn't multiprocessing at all, it's an internal PIL method for loading metadata from JPEG files. No idea why that would be a bottleneck, but I'll see if a Python profiler can't figure out where the time is actually being spent.",
"After further profiling, the slowdown is in the C methods for JPEG decoding that are included as part of PIL. Because Python profilers can't inspect inside that, I don't have any further information on which lines exactly are responsible for the slowdown or why.\r\n\r\nIn the meantime, I'm going to suggest switching from `return_tensors=\"tf\"` to `return_tensors=\"np\"` in most of our `transformers` code - this generally works better for pre-processing. Two relevant PRs are [here](https://github.com/huggingface/transformers/pull/21266) and [here](https://github.com/huggingface/notebooks/pull/308).",
"Closing this issue as we've done what we can with this one! "
] | 2023-01-20T16:08:37 | 2023-02-13T14:13:34 | 2023-02-13T14:13:34 | MEMBER | null | null | null | ### Describe the bug
This will make more sense if you take a look at [a Colab notebook that reproduces this issue.](https://colab.research.google.com/drive/1rxyeciQFWJTI0WrZ5aojp4Ls1ut18fNH?usp=sharing)
Briefly, there are several datasets that, when you iterate over them with `to_tf_dataset` **and** a data collator that returns `tf` tensors, become very slow. We haven't been able to figure this one out - it can be intermittent, and we have no idea what could possibly cause it. The weirdest thing is that **the slowdown affects other attempts to access the underlying dataset**. If you try to iterate over the `tf.data.Dataset`, then interrupt execution, and then try to iterate over the original dataset, the original dataset is now also very slow! This is true even if the dataset format is not set to `tf` - the iteration is slow even though it's not calling TF at all!
There is a simple workaround for this - we can simply get our data collators to return `np` tensors. When we do this, the bug is never triggered and everything is fine. In general, `np` is preferred for this kind of preprocessing work anyway, when the preprocessing is not going to be compiled into a pure `tf.data` pipeline! However, the issue is fascinating, and the TF team were wondering if anyone in datasets (cc @lhoestq @mariosasko) might have an idea of what could cause this.
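A minimal sketch of that workaround, assuming a `transformers`-style collator (the checkpoint and dataset below are placeholders, not the ones from the Colab):
```
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")  # placeholder checkpoint
ds = load_dataset("glue", "mrpc", split="train")  # placeholder dataset
ds = ds.map(lambda batch: tokenizer(batch["sentence1"], truncation=True), batched=True)

# return_tensors="np" sidesteps the slowdown; return_tensors="tf" triggers the bug.
collator = DataCollatorWithPadding(tokenizer, return_tensors="np")

tf_ds = ds.to_tf_dataset(
    columns=["input_ids", "attention_mask"],
    batch_size=8,
    shuffle=True,
    collate_fn=collator,
)
```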
### Steps to reproduce the bug
Run the attached Colab.
### Expected behavior
The slowdown should go away, or at least not persist after we stop iterating over the `tf.data.Dataset`
### Environment info
The issue occurs on multiple versions of Python and TF, both on local machines and on Colab.
All testing was done using the latest versions of `transformers` and `datasets` from `main` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5450/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5450/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5449 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5449/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5449/comments | https://api.github.com/repos/huggingface/datasets/issues/5449/events | https://github.com/huggingface/datasets/pull/5449 | 1,550,801,453 | PR_kwDODunzps5INgD9 | 5,449 | Support fsspec 2023.1.0 in CI | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008227 / 0.011353 (-0.003126) | 0.004496 / 0.011008 (-0.006512) | 0.099319 / 0.038508 (0.060811) | 0.029929 / 0.023109 (0.006820) | 0.296686 / 0.275898 (0.020788) | 0.355372 / 0.323480 (0.031892) | 0.006864 / 0.007986 (-0.001122) | 0.003458 / 0.004328 (-0.000871) | 0.077234 / 0.004250 (0.072983) | 0.037072 / 0.037052 (0.000020) | 0.311675 / 0.258489 (0.053186) | 0.338965 / 0.293841 (0.045124) | 0.033562 / 0.128546 (-0.094985) | 0.011399 / 0.075646 (-0.064248) | 0.322406 / 0.419271 (-0.096865) | 0.043034 / 0.043533 (-0.000499) | 0.298083 / 0.255139 (0.042944) | 0.323661 / 0.283200 (0.040462) | 0.089380 / 0.141683 (-0.052303) | 1.479363 / 1.452155 (0.027208) | 1.518337 / 1.492716 (0.025620) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.177822 / 0.018006 (0.159816) | 0.400806 / 0.000490 (0.400317) | 0.002121 / 0.000200 (0.001921) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021986 / 0.037411 (-0.015426) | 0.096749 / 0.014526 (0.082223) | 0.101443 / 0.176557 (-0.075113) | 0.137519 / 0.737135 (-0.599616) | 0.105558 / 0.296338 (-0.190780) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418983 / 0.215209 (0.203774) | 4.189579 / 2.077655 (2.111924) | 1.877831 / 1.504120 (0.373711) | 1.666213 / 1.541195 (0.125019) | 1.680735 / 1.468490 
(0.212245) | 0.693033 / 4.584777 (-3.891744) | 3.420553 / 3.745712 (-0.325160) | 1.819647 / 5.269862 (-3.450214) | 1.144934 / 4.565676 (-3.420743) | 0.082209 / 0.424275 (-0.342066) | 0.012433 / 0.007607 (0.004826) | 0.526781 / 0.226044 (0.300737) | 5.273689 / 2.268929 (3.004760) | 2.323468 / 55.444624 (-53.121156) | 1.960508 / 6.876477 (-4.915969) | 2.035338 / 2.142072 (-0.106735) | 0.812789 / 4.805227 (-3.992438) | 0.148429 / 6.500664 (-6.352235) | 0.064727 / 0.075469 (-0.010742) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253218 / 1.841788 (-0.588569) | 13.303426 / 8.074308 (5.229118) | 13.651074 / 10.191392 (3.459682) | 0.135178 / 0.680424 (-0.545246) | 0.028483 / 0.534201 (-0.505717) | 0.393284 / 0.579283 (-0.185999) | 0.401957 / 0.434364 (-0.032407) | 0.457136 / 0.540337 (-0.083201) | 0.535835 / 1.386936 (-0.851101) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006335 / 0.011353 (-0.005017) | 0.004454 / 0.011008 (-0.006554) | 0.097565 / 0.038508 (0.059057) | 0.026917 / 0.023109 (0.003808) | 0.350779 / 0.275898 (0.074881) | 0.391979 / 0.323480 (0.068499) | 0.004648 / 0.007986 (-0.003337) | 0.003204 / 0.004328 (-0.001124) | 0.076987 / 0.004250 (0.072737) | 0.035257 / 0.037052 (-0.001796) | 0.347193 / 0.258489 (0.088704) | 0.391462 / 0.293841 (0.097621) | 0.031244 / 0.128546 (-0.097302) | 0.011460 / 0.075646 (-0.064186) | 0.321606 / 0.419271 (-0.097665) | 0.041218 / 0.043533 (-0.002315) | 0.341884 / 0.255139 (0.086745) | 0.374920 / 0.283200 (0.091720) | 0.086383 / 0.141683 (-0.055300) | 1.501750 / 1.452155 (0.049595) | 1.565060 / 1.492716 (0.072344) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.165447 / 0.018006 (0.147441) | 0.401885 / 0.000490 (0.401395) | 0.000975 / 0.000200 (0.000775) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024494 / 0.037411 (-0.012917) | 0.097334 / 0.014526 (0.082808) | 0.105324 / 0.176557 (-0.071232) | 0.142430 / 0.737135 (-0.594705) | 0.107249 / 0.296338 (-0.189089) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441632 / 0.215209 (0.226423) | 4.407729 / 2.077655 (2.330074) | 2.078167 / 1.504120 (0.574047) | 1.864210 / 1.541195 (0.323015) | 1.885948 / 1.468490 (0.417458) | 0.693974 / 4.584777 (-3.890803) | 3.386837 / 3.745712 (-0.358875) | 1.840291 / 5.269862 (-3.429571) | 1.150524 / 4.565676 (-3.415153) | 0.082240 / 0.424275 (-0.342035) | 0.012488 / 0.007607 (0.004881) | 0.537589 / 0.226044 (0.311545) | 5.404007 / 2.268929 (3.135078) | 2.537467 / 55.444624 (-52.907157) | 2.190775 / 6.876477 (-4.685702) | 2.224746 / 2.142072 (0.082674) | 0.799524 / 4.805227 (-4.005703) | 0.150639 / 6.500664 (-6.350025) | 0.066473 / 0.075469 (-0.008997) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.258559 / 1.841788 (-0.583228) | 13.773583 / 8.074308 (5.699275) | 13.964322 / 10.191392 (3.772930) | 0.156295 / 0.680424 (-0.524129) | 0.016824 / 0.534201 (-0.517377) | 0.377476 / 0.579283 (-0.201807) | 0.390163 / 0.434364 (-0.044201) | 0.442541 / 0.540337 (-0.097796) | 0.529404 / 1.386936 (-0.857532) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8f500a5c554b213aafe87293bd593920567742c3 \"CML watermark\")\n"
] | 2023-01-20T12:53:17 | 2023-01-20T13:32:50 | 2023-01-20T13:26:03 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5449",
"html_url": "https://github.com/huggingface/datasets/pull/5449",
"diff_url": "https://github.com/huggingface/datasets/pull/5449.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5449.patch",
"merged_at": "2023-01-20T13:26:03"
} | Support fsspec 2023.1.0 in CI.
In the 2023.1.0 fsspec release, they replaced the type of `fsspec.registry`:
- from `ReadOnlyRegistry`, with an attribute called `target`
- to `MappingProxyType`, without that attribute
Consequently, we need to change our `mock_fsspec` fixtures, which were using the `target` attribute.
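One version-agnostic way to write such a fixture is to go through fsspec's public registration API instead of mutating `registry.target` (a minimal sketch; `MockFileSystem` stands in for the real test filesystem, and the actual fix in this PR may differ):
```
import fsspec
import pytest

class MockFileSystem(fsspec.AbstractFileSystem):
    protocol = "mock"

@pytest.fixture
def mock_fsspec():
    # register_implementation works whether fsspec.registry is the old
    # ReadOnlyRegistry or the new read-only MappingProxyType.
    fsspec.register_implementation(MockFileSystem.protocol, MockFileSystem, clobber=True)
    yield MockFileSystem
```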
Fix #5448. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5449/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5448 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5448/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5448/comments | https://api.github.com/repos/huggingface/datasets/issues/5448/events | https://github.com/huggingface/datasets/issues/5448 | 1,550,618,514 | I_kwDODunzps5cbI-S | 5,448 | Support fsspec 2023.1.0 in CI | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-01-20T10:26:31 | 2023-01-20T13:26:05 | 2023-01-20T13:26:05 | MEMBER | null | null | null | Once we find out the root cause of:
- #5445
we should revert the temporary pin on fsspec introduced by:
- #5447 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5448/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5447 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5447/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5447/comments | https://api.github.com/repos/huggingface/datasets/issues/5447/events | https://github.com/huggingface/datasets/pull/5447 | 1,550,599,193 | PR_kwDODunzps5IM0Nu | 5,447 | Fix CI by temporarily pinning fsspec < 2023.1.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011875 / 0.011353 (0.000522) | 0.008188 / 0.011008 (-0.002821) | 0.131137 / 0.038508 (0.092629) | 0.038127 / 0.023109 (0.015018) | 0.383864 / 0.275898 (0.107966) | 0.458617 / 0.323480 (0.135137) | 0.010989 / 0.007986 (0.003003) | 0.004892 / 0.004328 (0.000563) | 0.101955 / 0.004250 (0.097704) | 0.045081 / 0.037052 (0.008029) | 0.409768 / 0.258489 (0.151279) | 0.446597 / 0.293841 (0.152756) | 0.058588 / 0.128546 (-0.069958) | 0.020872 / 0.075646 (-0.054774) | 0.432982 / 0.419271 (0.013711) | 0.075875 / 0.043533 (0.032342) | 0.380923 / 0.255139 (0.125784) | 0.432994 / 0.283200 (0.149795) | 0.122678 / 0.141683 (-0.019005) | 1.857865 / 1.452155 (0.405710) | 1.927801 / 1.492716 (0.435085) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212941 / 0.018006 (0.194935) | 0.527977 / 0.000490 (0.527488) | 0.002996 / 0.000200 (0.002797) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030046 / 0.037411 (-0.007366) | 0.126384 / 0.014526 (0.111858) | 0.138307 / 0.176557 (-0.038250) | 0.185338 / 0.737135 (-0.551797) | 0.144733 / 0.296338 (-0.151606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.627096 / 0.215209 (0.411887) | 6.418014 / 2.077655 (4.340360) | 2.547675 / 1.504120 (1.043555) | 2.195552 / 1.541195 (0.654357) | 2.200377 / 1.468490 
(0.731887) | 1.289935 / 4.584777 (-3.294842) | 5.670839 / 3.745712 (1.925127) | 5.252597 / 5.269862 (-0.017265) | 2.878470 / 4.565676 (-1.687207) | 0.143754 / 0.424275 (-0.280521) | 0.014814 / 0.007607 (0.007207) | 0.810073 / 0.226044 (0.584028) | 8.183757 / 2.268929 (5.914829) | 3.375525 / 55.444624 (-52.069099) | 2.594048 / 6.876477 (-4.282428) | 2.598095 / 2.142072 (0.456023) | 1.554493 / 4.805227 (-3.250734) | 0.263159 / 6.500664 (-6.237505) | 0.089822 / 0.075469 (0.014353) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.660847 / 1.841788 (-0.180941) | 18.434283 / 8.074308 (10.359975) | 21.764887 / 10.191392 (11.573495) | 0.264524 / 0.680424 (-0.415900) | 0.048519 / 0.534201 (-0.485682) | 0.587468 / 0.579283 (0.008185) | 0.634142 / 0.434364 (0.199778) | 0.675374 / 0.540337 (0.135037) | 0.777510 / 1.386936 (-0.609426) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010021 / 0.011353 (-0.001332) | 0.006207 / 0.011008 (-0.004801) | 0.130490 / 0.038508 (0.091982) | 0.037957 / 0.023109 (0.014848) | 0.489381 / 0.275898 (0.213483) | 0.536522 / 0.323480 (0.213042) | 0.008611 / 0.007986 (0.000626) | 0.004894 / 0.004328 (0.000565) | 0.101617 / 0.004250 (0.097367) | 0.052629 / 0.037052 (0.015577) | 0.509211 / 0.258489 (0.250721) | 0.545023 / 0.293841 (0.251182) | 0.057468 / 0.128546 (-0.071078) | 0.023393 / 0.075646 (-0.052253) | 0.431408 / 0.419271 (0.012137) | 0.064967 / 0.043533 (0.021434) | 0.495261 / 0.255139 (0.240122) | 0.527098 / 0.283200 (0.243898) | 0.113172 / 0.141683 (-0.028511) | 1.937072 / 1.452155 (0.484918) | 2.048413 / 1.492716 (0.555697) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245406 / 0.018006 (0.227399) | 0.526772 / 0.000490 (0.526283) | 0.004379 / 0.000200 (0.004179) | 0.000114 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031785 / 0.037411 (-0.005626) | 0.130949 / 0.014526 (0.116424) | 0.145660 / 0.176557 (-0.030896) | 0.186991 / 0.737135 (-0.550144) | 0.151000 / 0.296338 (-0.145338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.708643 / 0.215209 (0.493434) | 7.179252 / 2.077655 (5.101597) | 3.143375 / 1.504120 (1.639255) | 2.714298 / 1.541195 (1.173103) | 2.773441 / 1.468490 (1.304951) | 1.312821 / 4.584777 (-3.271956) | 5.798396 / 3.745712 (2.052684) | 3.253215 / 5.269862 (-2.016646) | 2.147260 / 4.565676 (-2.418416) | 0.154673 / 0.424275 (-0.269602) | 0.014918 / 0.007607 (0.007311) | 0.860618 / 0.226044 (0.634573) | 8.774455 / 2.268929 (6.505527) | 3.925020 / 55.444624 (-51.519604) | 3.139361 / 6.876477 (-3.737115) | 3.208883 / 2.142072 (1.066810) | 1.547305 / 4.805227 (-3.257922) | 0.268814 / 6.500664 (-6.231850) | 0.084578 / 0.075469 (0.009109) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.694990 / 1.841788 (-0.146798) | 18.619183 / 8.074308 (10.544875) | 21.929886 / 10.191392 (11.738494) | 0.265763 / 0.680424 (-0.414661) | 0.028325 / 0.534201 (-0.505876) | 0.552910 / 0.579283 (-0.026373) | 0.616864 / 0.434364 (0.182500) | 0.637858 / 0.540337 (0.097521) | 0.744508 / 1.386936 (-0.642428) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5f819ba3d0306748aaf9fd8ea040b981dd08e5e5 \"CML watermark\")\n"
] | 2023-01-20T10:11:02 | 2023-01-20T10:38:13 | 2023-01-20T10:28:43 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5447",
"html_url": "https://github.com/huggingface/datasets/pull/5447",
"diff_url": "https://github.com/huggingface/datasets/pull/5447.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5447.patch",
"merged_at": "2023-01-20T10:28:43"
} | Temporarily pin fsspec < 2023.1.0
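Presumably this translates to an upper bound in the `setup.py` requirements, along these lines (illustrative; the lower bound shown is an assumption):
```
# setup.py (excerpt, illustrative)
REQUIRED_PKGS = [
    # ...
    "fsspec[http]>=2021.11.1,<2023.1.0",  # temporary upper bound, see #5445
    # ...
]
```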
Fix #5445. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5447/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5446 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5446/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5446/comments | https://api.github.com/repos/huggingface/datasets/issues/5446/events | https://github.com/huggingface/datasets/pull/5446 | 1,550,591,588 | PR_kwDODunzps5IMyka | 5,446 | test v0.12.0.rc0 | {
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@Wauplin I was testing it in a dedicated branch without opening a PR: https://github.com/huggingface/datasets/commits/test-hfh-0.12.0rc0",
"Oops, sorry @albertvillanova. I thought for next time I'll start the CIs before pinging everyone.\r\nI'm closing this one.",
"@Wauplin in your Slack message, you asked people from every major dependent library to check that our CI work. That is why I am checking it... :)\r\n\r\nAlso, I think for this purpose it is better to test it in a dedicated branch, rather than opening and closing a PR.",
"Yes, yes I know. Completely my fault on this one"
] | 2023-01-20T10:05:19 | 2023-01-20T10:43:22 | 2023-01-20T10:13:48 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5446",
"html_url": "https://github.com/huggingface/datasets/pull/5446",
"diff_url": "https://github.com/huggingface/datasets/pull/5446.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5446.patch",
"merged_at": null
} | DO NOT MERGE.
Only to test the CI.
cc @lhoestq @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5446/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5445 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5445/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5445/comments | https://api.github.com/repos/huggingface/datasets/issues/5445/events | https://github.com/huggingface/datasets/issues/5445 | 1,550,588,703 | I_kwDODunzps5cbBsf | 5,445 | CI tests are broken: AttributeError: 'mappingproxy' object has no attribute 'target' | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-01-20T10:03:10 | 2023-01-20T10:28:44 | 2023-01-20T10:28:44 | MEMBER | null | null | null | CI tests are broken, raising `AttributeError: 'mappingproxy' object has no attribute 'target'`. See: https://github.com/huggingface/datasets/actions/runs/3966497597/jobs/6797384185
```
...
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]/*-expected_paths4] - AttributeError: 'mappingproxy' object has no attribute 'target'
===== 2076 passed, 19 skipped, 15 warnings, 47 errors in 115.54s (0:01:55) =====
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5445/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5444 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5444/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5444/comments | https://api.github.com/repos/huggingface/datasets/issues/5444/events | https://github.com/huggingface/datasets/issues/5444 | 1,550,185,071 | I_kwDODunzps5cZfJv | 5,444 | info messages logged as warnings | {
"login": "davidgilbertson",
"id": 4443482,
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidgilbertson",
"html_url": "https://github.com/davidgilbertson",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks like a duplicate of https://github.com/huggingface/datasets/issues/1948. \r\n\r\nI also think these should be logged as INFO messages, but let's see what @lhoestq thinks.",
"It can be considered unexpected to see a `map` function return instantaneously. The warning is here to explain this case by mentioning that the cache was used. I don't expect first time users (only seeing warnings) to guess that the cache works this way",
"Oh, so it's intentional? Do all Hugging Face packages use `warning` when using cache?\r\nI guess feel free to close this issue then.",
"Yes it's intentional for `map`. For `load_dataset` it's also intentional but for a different reason: it shows where in the cache the dataset is located, in case the user wants to clear the cache.",
"OK I see. It's surprising to me that these are considered \"something unexpected happened\", the concept of cache is pretty common.\r\n\r\nHas a user every actually complained that they ran their code once, and it took a minute while the data downloaded, then ran their code again and it ran really fast (and completed successfully) but they were so baffled by the fact that it ran quickly, _and_ didn't set the log level to INFO, _and_ hadn't read the docs (or thought about it) to know that datasets are cached, that they logged an issue asking that this information be output as a warning every time they run their code?\r\n\r\nThat seems like a very niche scenario to cater for, given that the side effect is to flood the console with irrelevant warnings for every other user every other time they run a bit of `datasets` code. And the real world impact is that people TURN OFF warnings, which is a pretty bad habit to get into.\r\n\r\nAnyhoo, if there's no chance I'm going to change your mind, please close the issue :)",
"I see your point and I'm not closed to switching to INFO, but I think those logs are important to make the library less opaque. I also just checked `transformers` scripts and they default to INFO which is nice. However for colab users the default is still WARNING iirc, and it counts as one of the main env where `datasets` is used.\r\n\r\nWe also use progress bars a lot in `datasets`, that are shown if the logger is at the WARNING level. But we offer a function to disable the progress bars if necessary.",
"These kinds of messages are logged as INFO in Transformers, so we should probably be consistent with them"
] | 2023-01-20T01:19:18 | 2023-07-12T17:19:31 | 2023-07-12T17:19:31 | NONE | null | null | null | ### Describe the bug
Code in `datasets` is using `logger.warning` when it should be using `logger.info`.
Some of these are probably a matter of opinion, but I think anything starting with `logger.warning(f"Loading chached` clearly falls into the info category.
Definitions from the Python docs for reference:
* INFO: Confirmation that things are working as expected.
* WARNING: An indication that something unexpected happened, or indicative of some problem in the near future (e.g. ‘disk space low’). The software is still working as expected.
In theory, a user should be able to resolve things such that there are no warnings.
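For reference, the existing knobs for this are the library-wide verbosity helpers (real `datasets` API, shown here as a workaround rather than a fix):
```
import datasets

# Surface the cache messages deliberately (where INFO-level logs would live):
datasets.logging.set_verbosity_info()

# Or suppress WARNING (and lower-severity) messages, cache messages included:
datasets.logging.set_verbosity_error()

ds = datasets.load_dataset("rotten_tomatoes")  # a second run reads from the cache
```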
### Steps to reproduce the bug
Load any dataset that's already cached.
### Expected behavior
No output when log level is at the default WARNING level.
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 9.0.0
- Pandas version: 1.5.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5444/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5443 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5443/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5443/comments | https://api.github.com/repos/huggingface/datasets/issues/5443/events | https://github.com/huggingface/datasets/pull/5443 | 1,550,178,914 | PR_kwDODunzps5ILbk8 | 5,443 | Update share tutorial | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009885 / 0.011353 (-0.001468) | 0.005338 / 0.011008 (-0.005670) | 0.099967 / 0.038508 (0.061459) | 0.036860 / 0.023109 (0.013751) | 0.295283 / 0.275898 (0.019385) | 0.369504 / 0.323480 (0.046024) | 0.008267 / 0.007986 (0.000281) | 0.004375 / 0.004328 (0.000046) | 0.076294 / 0.004250 (0.072043) | 0.047058 / 0.037052 (0.010006) | 0.314463 / 0.258489 (0.055974) | 0.348125 / 0.293841 (0.054284) | 0.038334 / 0.128546 (-0.090213) | 0.012102 / 0.075646 (-0.063544) | 0.333049 / 0.419271 (-0.086223) | 0.050727 / 0.043533 (0.007195) | 0.299244 / 0.255139 (0.044105) | 0.318210 / 0.283200 (0.035010) | 0.112609 / 0.141683 (-0.029074) | 1.450377 / 1.452155 (-0.001778) | 1.485177 / 1.492716 (-0.007539) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287083 / 0.018006 (0.269077) | 0.564268 / 0.000490 (0.563778) | 0.003578 / 0.000200 (0.003378) | 0.000093 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026755 / 0.037411 (-0.010657) | 0.105857 / 0.014526 (0.091331) | 0.118291 / 0.176557 (-0.058266) | 0.155735 / 0.737135 (-0.581401) | 0.122527 / 0.296338 (-0.173812) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396992 / 0.215209 (0.181783) | 3.958562 / 2.077655 (1.880908) | 1.781570 / 1.504120 (0.277451) | 1.617743 / 1.541195 (0.076549) | 1.753504 / 1.468490 
(0.285013) | 0.681509 / 4.584777 (-3.903268) | 3.816910 / 3.745712 (0.071198) | 2.087359 / 5.269862 (-3.182503) | 1.328380 / 4.565676 (-3.237297) | 0.083542 / 0.424275 (-0.340733) | 0.012081 / 0.007607 (0.004473) | 0.505127 / 0.226044 (0.279082) | 5.075136 / 2.268929 (2.806208) | 2.259871 / 55.444624 (-53.184753) | 1.944302 / 6.876477 (-4.932175) | 2.102624 / 2.142072 (-0.039449) | 0.819779 / 4.805227 (-3.985448) | 0.165584 / 6.500664 (-6.335080) | 0.061774 / 0.075469 (-0.013695) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.208258 / 1.841788 (-0.633530) | 14.841635 / 8.074308 (6.767327) | 14.484515 / 10.191392 (4.293123) | 0.156464 / 0.680424 (-0.523959) | 0.028839 / 0.534201 (-0.505362) | 0.440860 / 0.579283 (-0.138423) | 0.433892 / 0.434364 (-0.000472) | 0.515339 / 0.540337 (-0.024998) | 0.608838 / 1.386936 (-0.778098) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007548 / 0.011353 (-0.003804) | 0.005464 / 0.011008 (-0.005544) | 0.096987 / 0.038508 (0.058479) | 0.034472 / 0.023109 (0.011363) | 0.391249 / 0.275898 (0.115351) | 0.432779 / 0.323480 (0.109299) | 0.006170 / 0.007986 (-0.001816) | 0.004316 / 0.004328 (-0.000013) | 0.074184 / 0.004250 (0.069934) | 0.054254 / 0.037052 (0.017202) | 0.397947 / 0.258489 (0.139458) | 0.451253 / 0.293841 (0.157412) | 0.037098 / 0.128546 (-0.091449) | 0.012649 / 0.075646 (-0.062997) | 0.333533 / 0.419271 (-0.085739) | 0.050247 / 0.043533 (0.006714) | 0.390446 / 0.255139 (0.135307) | 0.410547 / 0.283200 (0.127347) | 0.110888 / 0.141683 (-0.030795) | 1.452160 / 1.452155 (0.000006) | 1.596331 / 1.492716 (0.103615) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256061 / 0.018006 (0.238055) | 0.552674 / 0.000490 (0.552184) | 0.003362 / 0.000200 (0.003162) | 0.000095 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030199 / 0.037411 (-0.007213) | 0.110288 / 0.014526 (0.095762) | 0.127412 / 0.176557 (-0.049145) | 0.165428 / 0.737135 (-0.571707) | 0.131658 / 0.296338 (-0.164680) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441946 / 0.215209 (0.226737) | 4.414209 / 2.077655 (2.336555) | 2.284530 / 1.504120 (0.780410) | 2.110752 / 1.541195 (0.569557) | 2.210751 / 1.468490 (0.742260) | 0.698829 / 4.584777 (-3.885948) | 3.819044 / 3.745712 (0.073332) | 3.274021 / 5.269862 (-1.995840) | 1.781284 / 4.565676 (-2.784393) | 0.085264 / 0.424275 (-0.339011) | 0.012360 / 0.007607 (0.004753) | 0.553519 / 0.226044 (0.327475) | 5.466395 / 2.268929 (3.197467) | 2.825839 / 55.444624 (-52.618786) | 2.439451 / 6.876477 (-4.437026) | 2.582534 / 2.142072 (0.440462) | 0.841644 / 4.805227 (-3.963583) | 0.172288 / 6.500664 (-6.328376) | 0.067215 / 0.075469 (-0.008254) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.283623 / 1.841788 (-0.558165) | 15.753163 / 8.074308 (7.678855) | 14.983263 / 10.191392 (4.791871) | 0.187584 / 0.680424 (-0.492840) | 0.017999 / 0.534201 (-0.516202) | 0.427157 / 0.579283 (-0.152126) | 0.435456 / 0.434364 (0.001092) | 0.496800 / 0.540337 (-0.043537) | 0.592557 / 1.386936 (-0.794379) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8a72676689a4a3fb466cc5077884446c7302e605 \"CML watermark\")\n"
] | 2023-01-20T01:09:14 | 2023-01-20T15:44:45 | 2023-01-20T15:37:30 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5443",
"html_url": "https://github.com/huggingface/datasets/pull/5443",
"diff_url": "https://github.com/huggingface/datasets/pull/5443.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5443.patch",
"merged_at": "2023-01-20T15:37:30"
} | Based on feedback from discussion #5423, this PR updates the sharing tutorial with a mention of writing your own dataset loading script to support more advanced dataset creation options like multiple configs.
I'll open a separate PR to update the *Create a Dataset card* with the new Hub metadata UI update 😄 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5443/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5442 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5442/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5442/comments | https://api.github.com/repos/huggingface/datasets/issues/5442/events | https://github.com/huggingface/datasets/issues/5442 | 1,550,084,450 | I_kwDODunzps5cZGli | 5,442 | OneDrive Integrations with HF Datasets | {
"login": "Mohammed20201991",
"id": 59222637,
"node_id": "MDQ6VXNlcjU5MjIyNjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/59222637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mohammed20201991",
"html_url": "https://github.com/Mohammed20201991",
"followers_url": "https://api.github.com/users/Mohammed20201991/followers",
"following_url": "https://api.github.com/users/Mohammed20201991/following{/other_user}",
"gists_url": "https://api.github.com/users/Mohammed20201991/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mohammed20201991/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mohammed20201991/subscriptions",
"organizations_url": "https://api.github.com/users/Mohammed20201991/orgs",
"repos_url": "https://api.github.com/users/Mohammed20201991/repos",
"events_url": "https://api.github.com/users/Mohammed20201991/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mohammed20201991/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! \r\n\r\nWe use [`fsspec`](https://github.com/fsspec/filesystem_spec) to integrate with storage providers. You can find more info (and the usage examples) in [our docs](https://huggingface.co/docs/datasets/v2.8.0/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage).\r\n\r\n[`gdrivefs`](https://github.com/fsspec/gdrivefs) makes it possible to use Google Drive as a storage service in Datasets, but this is not the case for OneDrive, since its[ Python SDK](https://github.com/OneDrive/onedrive-sdk-python) is not integrated with `fsspec`. Can you please request the integration with `fsspec` in their repo to address this limitation?",
"I'm closing this issue as implementing a fsspec-compliant OneDrive filesystem is not our responsibility."
] | 2023-01-19T23:12:08 | 2023-02-24T16:17:51 | 2023-02-24T16:17:51 | NONE | null | null | null | ### Feature request
First of all, I would like to thank the whole community that developed the datasets storage and made it freely available.
How can we integrate a OneDrive account, or any other cloud storage (like Google Drive, ...), with the **HF** datasets section?
For example, if I have **50GB** on my **OneDrive** account, I may want to move data between the drive and a Hugging Face repo, or vice versa.
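For context, `datasets` already talks to cloud storage through `fsspec`-compliant filesystems, following the pattern documented for S3/GCS. The sketch below shows that pattern with S3 (the bucket name is hypothetical, and `s3fs` is assumed installed); an `fsspec`-compliant OneDrive backend would slot into the same calls:

```python
# A minimal sketch, assuming the `s3fs` fsspec backend is installed and
# "my-bucket" is a hypothetical bucket; OneDrive has no fsspec backend yet.
from datasets import load_dataset_builder

# storage options are forwarded to the underlying fsspec filesystem
storage_options = {"key": "<aws_key>", "secret": "<aws_secret>"}

builder = load_dataset_builder("imdb")
builder.download_and_prepare(
    "s3://my-bucket/imdb",            # any fsspec URL works here
    storage_options=storage_options,
    file_format="parquet",
)
```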
### Motivation
Make the dataset section more flexible with other storage options,
like the existing integration between Google Colab and Google Drive for storage.
### Your contribution
This could be done using the Hugging Face CLI. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5442/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5441 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5441/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5441/comments | https://api.github.com/repos/huggingface/datasets/issues/5441/events | https://github.com/huggingface/datasets/pull/5441 | 1,548,417,594 | PR_kwDODunzps5IFeCW | 5,441 | resolving a weird tar extract issue | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011815 / 0.011353 (0.000463) | 0.006407 / 0.011008 (-0.004601) | 0.132937 / 0.038508 (0.094429) | 0.040634 / 0.023109 (0.017525) | 0.398049 / 0.275898 (0.122151) | 0.498207 / 0.323480 (0.174727) | 0.010111 / 0.007986 (0.002126) | 0.007282 / 0.004328 (0.002954) | 0.103661 / 0.004250 (0.099411) | 0.046223 / 0.037052 (0.009171) | 0.411490 / 0.258489 (0.153001) | 0.480973 / 0.293841 (0.187132) | 0.058397 / 0.128546 (-0.070149) | 0.019952 / 0.075646 (-0.055695) | 0.440734 / 0.419271 (0.021463) | 0.064585 / 0.043533 (0.021052) | 0.392556 / 0.255139 (0.137417) | 0.437842 / 0.283200 (0.154643) | 0.130684 / 0.141683 (-0.010999) | 1.910552 / 1.452155 (0.458397) | 1.984644 / 1.492716 (0.491927) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.264417 / 0.018006 (0.246411) | 0.676519 / 0.000490 (0.676030) | 0.003369 / 0.000200 (0.003169) | 0.000125 / 0.000054 (0.000071) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034558 / 0.037411 (-0.002854) | 0.126561 / 0.014526 (0.112035) | 0.134478 / 0.176557 (-0.042079) | 0.202125 / 0.737135 (-0.535010) | 0.143273 / 0.296338 (-0.153066) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.618592 / 0.215209 (0.403383) | 6.224435 / 2.077655 (4.146780) | 2.636689 / 1.504120 (1.132569) | 2.243507 / 1.541195 (0.702313) | 2.312449 / 1.468490 
(0.843959) | 1.188499 / 4.584777 (-3.396277) | 5.738347 / 3.745712 (1.992635) | 4.891933 / 5.269862 (-0.377929) | 2.697631 / 4.565676 (-1.868046) | 0.140200 / 0.424275 (-0.284076) | 0.015484 / 0.007607 (0.007877) | 0.781947 / 0.226044 (0.555903) | 7.946600 / 2.268929 (5.677671) | 3.365574 / 55.444624 (-52.079050) | 2.783443 / 6.876477 (-4.093034) | 2.738634 / 2.142072 (0.596561) | 1.487247 / 4.805227 (-3.317980) | 0.255681 / 6.500664 (-6.244983) | 0.084607 / 0.075469 (0.009138) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.717846 / 1.841788 (-0.123941) | 18.405566 / 8.074308 (10.331258) | 20.508578 / 10.191392 (10.317186) | 0.262364 / 0.680424 (-0.418060) | 0.050881 / 0.534201 (-0.483319) | 0.587516 / 0.579283 (0.008232) | 0.650900 / 0.434364 (0.216536) | 0.656168 / 0.540337 (0.115830) | 0.778876 / 1.386936 (-0.608061) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010817 / 0.011353 (-0.000536) | 0.007338 / 0.011008 (-0.003670) | 0.131949 / 0.038508 (0.093441) | 0.037244 / 0.023109 (0.014135) | 0.565994 / 0.275898 (0.290096) | 0.567434 / 0.323480 (0.243954) | 0.007733 / 0.007986 (-0.000252) | 0.005216 / 0.004328 (0.000887) | 0.096578 / 0.004250 (0.092328) | 0.056001 / 0.037052 (0.018949) | 0.538209 / 0.258489 (0.279720) | 0.580385 / 0.293841 (0.286544) | 0.053654 / 0.128546 (-0.074892) | 0.019471 / 0.075646 (-0.056176) | 0.448781 / 0.419271 (0.029509) | 0.064774 / 0.043533 (0.021241) | 0.540222 / 0.255139 (0.285083) | 0.563058 / 0.283200 (0.279858) | 0.122716 / 0.141683 (-0.018967) | 1.839402 / 1.452155 (0.387247) | 1.915523 / 1.492716 (0.422806) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.310448 / 0.018006 (0.292442) | 0.603664 / 0.000490 (0.603175) | 0.004833 / 0.000200 (0.004633) | 0.000145 / 0.000054 (0.000090) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032340 / 0.037411 (-0.005072) | 0.130115 / 0.014526 (0.115589) | 0.154192 / 0.176557 (-0.022364) | 0.200655 / 0.737135 (-0.536480) | 0.144961 / 0.296338 (-0.151377) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.671588 / 0.215209 (0.456379) | 6.691642 / 2.077655 (4.613988) | 2.915230 / 1.504120 (1.411110) | 2.573337 / 1.541195 (1.032143) | 2.578204 / 1.468490 (1.109714) | 1.249028 / 4.584777 (-3.335749) | 5.808539 / 3.745712 (2.062827) | 3.079317 / 5.269862 (-2.190545) | 2.033308 / 4.565676 (-2.532369) | 0.142411 / 0.424275 (-0.281864) | 0.015525 / 0.007607 (0.007918) | 0.800389 / 0.226044 (0.574345) | 8.228236 / 2.268929 (5.959308) | 3.660207 / 55.444624 (-51.784417) | 3.021033 / 6.876477 (-3.855444) | 3.088335 / 2.142072 (0.946263) | 1.380137 / 4.805227 (-3.425091) | 0.252065 / 6.500664 (-6.248599) | 0.084302 / 0.075469 (0.008833) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.709429 / 1.841788 (-0.132359) | 18.358770 / 8.074308 (10.284462) | 21.109844 / 10.191392 (10.918452) | 0.231549 / 0.680424 (-0.448875) | 0.029251 / 0.534201 (-0.504950) | 0.560719 / 0.579283 (-0.018564) | 0.610125 / 0.434364 (0.175761) | 0.630015 / 0.540337 (0.089678) | 0.751656 / 1.386936 (-0.635280) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#18baf4eebf71c0db1d9980f7ee164f1272ff8f26 \"CML watermark\")\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5441). All of your documentation changes will be reflected on that endpoint.",
"I think I managed to reproduce it:\r\n\r\n```\r\nrm -rf ~/.cache/huggingface/datasets/HuggingFaceM4___cm4-synthetic-testing\r\nmkdir -p /tmp/xxx/hf-data\r\nsudo ln -s /tmp/xxx /test\r\nmkdir -p /tmp/yyy\r\nln -sf /test/hf-data /tmp/yyy/data\r\ncd /tmp/yyy\r\npython -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/cm4-synthetic-testing\r\n```\r\n\r\nPlease note it includes a creation of a symlink from the `/` (so `sudo`) - may be there is a simpler way but I'm just trying to replicate the real setup. Of course please be careful - it's mostly under `/tmp` not to destroy anything if you try to run this.\r\n\r\nthis fails with:\r\n\r\n```\r\nNo config specified, defaulting to: cm4-synthetic-testing/100.unique\r\nDownloading and preparing dataset cm4-synthetic-testing/100.unique (download: 20.71 KiB, generated: 49.99 MiB, post-processed: Unknown size, total: 50.01 MiB) to /home/stas/.cache/huggingface/datasets/HuggingFaceM4___cm4-synthetic-testing/100.unique/1.1.1/2e33dcc086c7209b8ccff4b19e44f1d41b5be53262e7d793142b96c2e984602b...\r\nExtraction of data is blocked (illegal path: /tmp/yyy)\r\n[...]\r\nExtraction of data/115/texts_03.txt is blocked (illegal path: /tmp/yyy)\r\nGenerating 100.unique split: 0%| | 0/100 [00:00<?, ? examples/s]Generating 100-long unique records split\r\n\r\nTraceback (most recent call last):\r\n File \"/mnt/nvme0/code/huggingface/datasets-master/src/datasets/builder.py\", line 1571, in _prepare_split_single\r\n for key, record in generator:\r\n File \"/home/stas/.cache/huggingface/modules/datasets_modules/datasets/HuggingFaceM4--cm4-synthetic-testing/2e33dcc086c7209b8ccff4b19e44f1d41b5be53262e7d793142b96c2e984602b/cm4-synthetic-testing.py\", line 190, in _generate_examples\r\n raise ValueError(f\"can't find any data - check {data_path}\")\r\nValueError: can't find any data - check /home/stas/.cache/huggingface/datasets/downloads/extracted/134227b9b94c4eccf19b205bf3021d4492d0227b9be6c2ddb6bf517d8d55a8cb/data\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/mnt/nvme0/code/huggingface/datasets-master/src/datasets/load.py\", line 1757, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/mnt/nvme0/code/huggingface/datasets-master/src/datasets/builder.py\", line 860, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/mnt/nvme0/code/huggingface/datasets-master/src/datasets/builder.py\", line 1612, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/mnt/nvme0/code/huggingface/datasets-master/src/datasets/builder.py\", line 953, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/mnt/nvme0/code/huggingface/datasets-master/src/datasets/builder.py\", line 1450, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"/mnt/nvme0/code/huggingface/datasets-master/src/datasets/builder.py\", line 1607, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.builder.DatasetGenerationError: An error occurred while generating the dataset\r\n```\r\n\r\nnote that `illegal path: /tmp/yyy` is now with the mods of this PR.\r\n\r\n----------------------\r\n\r\nAlso I think the whole thing should have failed at the first `illegal path` and not continue running. 
But as it continued and gave:\r\n\r\n\r\n> ValueError: can't find any data - check /home/stas/.cache/huggingface/datasets/downloads/extracted/134227b9b94c4eccf19b205bf3021d4492d0227b9be6c2ddb6bf517d8d55a8cb/data\r\n\r\nwhat can a user do with that other than confirming that that dir is indeed empty, but no clue is given to why and it's far from obvious that one needs to scroll up and discover earlier issues. Most users won't do that.\r\n\r\n(my apologies for writing out so much - was trying to make the situation clear)",
"Thank you, Albert, for the explanation.\r\n\r\nTo summarize I think what's needed is:\r\n\r\n1. add a comment in the code to why this is done for someone being puzzled over the odd code\r\n2. and to use an actionable by the user error message\r\n3. perform an untrapped assert on that tar extract error and not continue, so that the user will not get a later misleading error that the folder is empty and is completely not actionable and it's is far from obvious that one needs to scroll up to find earlier errors, which were trapped.\r\n\r\nAfter reading the advisory I'm still not sure why `cwd` is used and not a designated `~/.cache/huggingface/datasets/downloads/extracted`, I can't see what difference does it make since I could `chdir` to the designated directory and it would be `cwd`. The security solution is trying to ensure that `/etc/passwd` won't get overriden. So why is the check done in `.` and not the real target base directory, since the extraction isn't done in the current working dir. By not using `.` you lower the chances that the user will have all sorts of local symlinks that could trigger the issue since `datasets` typically is the only one managing it's `~/.cache/huggingface/datasets` domain and 99.9% of the time the user won't manually create files in it.\r\n\r\nthank you!\r\n"
] | 2023-01-19T02:17:21 | 2023-01-20T16:49:22 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5441",
"html_url": "https://github.com/huggingface/datasets/pull/5441",
"diff_url": "https://github.com/huggingface/datasets/pull/5441.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5441.patch",
"merged_at": null
} | ok, every so often, I have been getting a strange failure on dataset install:
```
$ python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing
No config specified, defaulting to: general-pmd-synthetic-testing/100.unique
Downloading and preparing dataset general-pmd-synthetic-testing/100.unique (download: 3.21 KiB, generated: 16.01 MiB, post-processed: Unknown size, total: 16.02 MiB) to /home/stas/.cache/huggingface/datasets/HuggingFaceM4___general-pmd-synthetic-testing/100.unique/1.1.1/86bc445e3e48cb5ef79de109eb4e54ff85b318cd55c3835c4ee8f86eae33d9d2...
Extraction of data is blocked (illegal path)
Extraction of data/1 is blocked (illegal path)
Extraction of data/1/text.null is blocked (illegal path)
[...]
```
I had no idea what to do with that - what in the world does **illegal path** mean?
I started looking at the code in `TarExtractor` and added a debug print of `base`, which told me that there was a problem with the current directory - which was a clone of one of the hf repos.
This particular dataset extracts into a directory `data` and the current dir I was running the tests from already had `data` in it which was a symbolic link to another partition and somehow all that `badpath` code was blowing up there.
https://github.com/huggingface/datasets/blob/80eb8db74f49b7ee9c0f73a819c22177fabd61db/src/datasets/utils/extract.py#L113-L114
I tried hard to come up with a repro, but no matter what I tried, it only failed in that particular clone directory with the `data` symlink, and nowhere else.
In any case, in this PR I'm proposing to at least give a user a hint of what seems to be an issue.
I'm not at all happy with the info I got with this proposed change, but at least it gave me a hint that `TarExtractor` tries to extract into the current directory without any respect to pre-existing files. Say what?
https://github.com/huggingface/datasets/blob/80eb8db74f49b7ee9c0f73a819c22177fabd61db/src/datasets/utils/extract.py#L110
why won't it use the `datasets` designated directory for that? There would never be a problem if it were to do that.
I had to look at all those `resolved`, `badpath` calls to see what the code did and why it failed, since it was far from obvious. It appeared that it resolved a symlink and compared the result to the original path, which of course didn't match.
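To make the failure mode concrete, here is a minimal sketch (not the exact `datasets` code) of a `resolved`/`badpath`-style guard, and how a pre-existing `data` symlink in the cwd trips it:

```python
import os

def resolved(path: str) -> str:
    # expand symlinks so `../../etc/passwd`-style members can't escape
    return os.path.realpath(os.path.abspath(path))

def badpath(member_name: str, base: str) -> bool:
    # the extracted target must stay under `base` after resolving symlinks;
    # if `member_name` passes through an existing symlink (./data -> /mnt/other),
    # realpath() jumps outside of `base` and the member is flagged as illegal
    return not resolved(os.path.join(base, member_name)).startswith(base)

base = resolved(".")  # extraction is validated against the current directory
print(badpath("data/1/text.null", base))  # True if ./data symlinks elsewhere
```

That is consistent with the observation below: remove the symlink and the check passes again.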
So perhaps you have a better solution than what I proposed in this PR. I think that code line I quoted is the one that should be fixed instead.
But if you can't think of a better solution let's merge this at least so that the user will have a clue that the current dir is somehow involved.
p.s. I double-checked that if I remove the pre-existing `data` symlink from the current dir I'm running the dataset install command in, the problem goes away too.
Thanks.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5441/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5440 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5440/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5440/comments | https://api.github.com/repos/huggingface/datasets/issues/5440/events | https://github.com/huggingface/datasets/pull/5440 | 1,538,361,143 | PR_kwDODunzps5HpRbF | 5,440 | Fix documentation about batch samplers | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008874 / 0.011353 (-0.002479) | 0.004685 / 0.011008 (-0.006323) | 0.101478 / 0.038508 (0.062970) | 0.031409 / 0.023109 (0.008300) | 0.305429 / 0.275898 (0.029531) | 0.371777 / 0.323480 (0.048297) | 0.007282 / 0.007986 (-0.000704) | 0.005545 / 0.004328 (0.001217) | 0.078583 / 0.004250 (0.074333) | 0.037171 / 0.037052 (0.000118) | 0.320186 / 0.258489 (0.061696) | 0.347881 / 0.293841 (0.054040) | 0.034005 / 0.128546 (-0.094541) | 0.011534 / 0.075646 (-0.064113) | 0.326079 / 0.419271 (-0.093193) | 0.040856 / 0.043533 (-0.002677) | 0.307327 / 0.255139 (0.052188) | 0.323521 / 0.283200 (0.040321) | 0.090407 / 0.141683 (-0.051276) | 1.481994 / 1.452155 (0.029840) | 1.490372 / 1.492716 (-0.002345) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.175161 / 0.018006 (0.157155) | 0.447009 / 0.000490 (0.446519) | 0.003570 / 0.000200 (0.003370) | 0.000072 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023868 / 0.037411 (-0.013543) | 0.100791 / 0.014526 (0.086265) | 0.108131 / 0.176557 (-0.068425) | 0.147993 / 0.737135 (-0.589142) | 0.111205 / 0.296338 (-0.185133) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425369 / 0.215209 (0.210160) | 4.241694 / 2.077655 (2.164040) | 2.145403 / 1.504120 (0.641283) | 1.913517 / 1.541195 (0.372322) | 1.887307 / 1.468490 
(0.418817) | 0.691615 / 4.584777 (-3.893162) | 3.402233 / 3.745712 (-0.343480) | 1.992532 / 5.269862 (-3.277330) | 1.322292 / 4.565676 (-3.243385) | 0.082862 / 0.424275 (-0.341413) | 0.012595 / 0.007607 (0.004988) | 0.528490 / 0.226044 (0.302445) | 5.313338 / 2.268929 (3.044409) | 2.645037 / 55.444624 (-52.799587) | 2.326279 / 6.876477 (-4.550198) | 2.396955 / 2.142072 (0.254883) | 0.819354 / 4.805227 (-3.985873) | 0.150889 / 6.500664 (-6.349775) | 0.066517 / 0.075469 (-0.008952) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.233673 / 1.841788 (-0.608114) | 14.563293 / 8.074308 (6.488985) | 14.317989 / 10.191392 (4.126597) | 0.150767 / 0.680424 (-0.529657) | 0.028972 / 0.534201 (-0.505229) | 0.400547 / 0.579283 (-0.178736) | 0.402267 / 0.434364 (-0.032097) | 0.459375 / 0.540337 (-0.080962) | 0.544419 / 1.386936 (-0.842517) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006817 / 0.011353 (-0.004536) | 0.004588 / 0.011008 (-0.006421) | 0.099224 / 0.038508 (0.060716) | 0.027730 / 0.023109 (0.004621) | 0.412310 / 0.275898 (0.136412) | 0.445731 / 0.323480 (0.122252) | 0.005197 / 0.007986 (-0.002788) | 0.003601 / 0.004328 (-0.000728) | 0.076200 / 0.004250 (0.071950) | 0.041813 / 0.037052 (0.004761) | 0.415282 / 0.258489 (0.156793) | 0.457182 / 0.293841 (0.163341) | 0.031920 / 0.128546 (-0.096626) | 0.011712 / 0.075646 (-0.063934) | 0.320859 / 0.419271 (-0.098412) | 0.041466 / 0.043533 (-0.002067) | 0.418156 / 0.255139 (0.163017) | 0.435501 / 0.283200 (0.152302) | 0.090727 / 0.141683 (-0.050955) | 1.484014 / 1.452155 (0.031859) | 1.568072 / 1.492716 (0.075356) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.263356 / 0.018006 (0.245350) | 0.410768 / 0.000490 (0.410278) | 0.015983 / 0.000200 (0.015783) | 0.000301 / 0.000054 (0.000246) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024522 / 0.037411 (-0.012889) | 0.103986 / 0.014526 (0.089460) | 0.109253 / 0.176557 (-0.067303) | 0.142308 / 0.737135 (-0.594827) | 0.114037 / 0.296338 (-0.182302) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452617 / 0.215209 (0.237407) | 4.505215 / 2.077655 (2.427560) | 2.185546 / 1.504120 (0.681426) | 1.995540 / 1.541195 (0.454345) | 1.962875 / 1.468490 (0.494385) | 0.690237 / 4.584777 (-3.894540) | 3.448311 / 3.745712 (-0.297401) | 1.901572 / 5.269862 (-3.368289) | 1.170832 / 4.565676 (-3.394844) | 0.082333 / 0.424275 (-0.341942) | 0.012569 / 0.007607 (0.004962) | 0.547822 / 0.226044 (0.321778) | 5.504180 / 2.268929 (3.235251) | 2.693981 / 55.444624 (-52.750644) | 2.320710 / 6.876477 (-4.555767) | 2.270508 / 2.142072 (0.128435) | 0.803145 / 4.805227 (-4.002083) | 0.152168 / 6.500664 (-6.348496) | 0.067408 / 0.075469 (-0.008061) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260689 / 1.841788 (-0.581099) | 14.281112 / 8.074308 (6.206804) | 14.549742 / 10.191392 (4.358350) | 0.129337 / 0.680424 (-0.551087) | 0.017181 / 0.534201 (-0.517020) | 0.380473 / 0.579283 (-0.198810) | 0.387689 / 0.434364 (-0.046675) | 0.446734 / 0.540337 (-0.093603) | 0.532479 / 1.386936 (-0.854457) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7972a0b5f1ad2c36023a79686f6ef026f4ffa64f \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008953 / 0.011353 (-0.002400) | 0.004917 / 0.011008 (-0.006091) | 0.098699 / 0.038508 (0.060191) | 0.034460 / 0.023109 (0.011351) | 0.294604 / 0.275898 (0.018706) | 0.322709 / 0.323480 (-0.000770) | 0.007780 / 0.007986 (-0.000206) | 0.004061 / 0.004328 (-0.000267) | 0.076134 / 0.004250 (0.071883) | 0.043786 / 0.037052 (0.006734) | 0.302155 / 0.258489 (0.043666) | 0.339779 / 0.293841 (0.045938) | 0.038305 / 0.128546 (-0.090241) | 0.012131 / 0.075646 (-0.063515) | 0.332656 / 0.419271 (-0.086615) | 0.048029 / 0.043533 (0.004496) | 0.303859 / 0.255139 (0.048720) | 0.315861 / 0.283200 (0.032662) | 0.100758 / 0.141683 (-0.040925) | 1.468072 / 1.452155 (0.015918) | 1.521325 / 1.492716 (0.028609) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244975 / 0.018006 (0.226969) | 0.524392 / 0.000490 (0.523902) | 0.003720 / 0.000200 (0.003520) | 0.000087 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027704 / 0.037411 (-0.009707) | 0.109048 / 0.014526 (0.094522) | 0.118298 / 0.176557 (-0.058259) | 0.158748 / 0.737135 (-0.578388) | 0.125654 / 0.296338 (-0.170684) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406973 / 0.215209 (0.191764) | 4.057502 / 2.077655 (1.979847) | 1.939847 / 1.504120 (0.435727) | 1.746457 / 1.541195 (0.205262) | 1.698866 / 1.468490 
(0.230376) | 0.692884 / 4.584777 (-3.891893) | 3.736988 / 3.745712 (-0.008724) | 2.050122 / 5.269862 (-3.219740) | 1.299808 / 4.565676 (-3.265868) | 0.085285 / 0.424275 (-0.338990) | 0.012768 / 0.007607 (0.005161) | 0.510814 / 0.226044 (0.284770) | 5.105319 / 2.268929 (2.836391) | 2.304003 / 55.444624 (-53.140621) | 1.951123 / 6.876477 (-4.925354) | 1.998504 / 2.142072 (-0.143568) | 0.840235 / 4.805227 (-3.964993) | 0.164521 / 6.500664 (-6.336143) | 0.064215 / 0.075469 (-0.011254) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272520 / 1.841788 (-0.569268) | 14.648110 / 8.074308 (6.573802) | 14.573754 / 10.191392 (4.382362) | 0.170053 / 0.680424 (-0.510371) | 0.029389 / 0.534201 (-0.504811) | 0.438924 / 0.579283 (-0.140359) | 0.433572 / 0.434364 (-0.000792) | 0.517702 / 0.540337 (-0.022635) | 0.600389 / 1.386936 (-0.786547) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007362 / 0.011353 (-0.003991) | 0.005451 / 0.011008 (-0.005557) | 0.099336 / 0.038508 (0.060828) | 0.033284 / 0.023109 (0.010174) | 0.377143 / 0.275898 (0.101245) | 0.423724 / 0.323480 (0.100244) | 0.006194 / 0.007986 (-0.001792) | 0.004208 / 0.004328 (-0.000121) | 0.074473 / 0.004250 (0.070223) | 0.049874 / 0.037052 (0.012821) | 0.376012 / 0.258489 (0.117523) | 0.439942 / 0.293841 (0.146101) | 0.037860 / 0.128546 (-0.090686) | 0.012546 / 0.075646 (-0.063100) | 0.349123 / 0.419271 (-0.070148) | 0.048980 / 0.043533 (0.005447) | 0.391205 / 0.255139 (0.136066) | 0.396474 / 0.283200 (0.113274) | 0.105846 / 0.141683 (-0.035836) | 1.502475 / 1.452155 (0.050321) | 1.612303 / 1.492716 (0.119587) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300815 / 0.018006 (0.282809) | 0.542171 / 0.000490 (0.541681) | 0.005465 / 0.000200 (0.005265) | 0.000094 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028904 / 0.037411 (-0.008508) | 0.110352 / 0.014526 (0.095827) | 0.123275 / 0.176557 (-0.053282) | 0.161958 / 0.737135 (-0.575178) | 0.133595 / 0.296338 (-0.162743) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438724 / 0.215209 (0.223515) | 4.373633 / 2.077655 (2.295979) | 2.178981 / 1.504120 (0.674861) | 1.992442 / 1.541195 (0.451247) | 2.063149 / 1.468490 (0.594659) | 0.696688 / 4.584777 (-3.888089) | 3.849370 / 3.745712 (0.103658) | 3.509495 / 5.269862 (-1.760367) | 1.923320 / 4.565676 (-2.642356) | 0.085554 / 0.424275 (-0.338721) | 0.012510 / 0.007607 (0.004903) | 0.535953 / 0.226044 (0.309909) | 5.365684 / 2.268929 (3.096755) | 2.686902 / 55.444624 (-52.757723) | 2.330922 / 6.876477 (-4.545554) | 2.353445 / 2.142072 (0.211373) | 0.878336 / 4.805227 (-3.926891) | 0.167296 / 6.500664 (-6.333368) | 0.064564 / 0.075469 (-0.010905) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244696 / 1.841788 (-0.597091) | 15.027981 / 8.074308 (6.953673) | 14.545797 / 10.191392 (4.354405) | 0.147229 / 0.680424 (-0.533194) | 0.018007 / 0.534201 (-0.516194) | 0.446196 / 0.579283 (-0.133087) | 0.437418 / 0.434364 (0.003054) | 0.510732 / 0.540337 (-0.029606) | 0.594814 / 1.386936 (-0.792122) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#80eb8db74f49b7ee9c0f73a819c22177fabd61db \"CML watermark\")\n"
] | 2023-01-18T17:04:27 | 2023-01-18T17:57:29 | 2023-01-18T17:50:04 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5440",
"html_url": "https://github.com/huggingface/datasets/pull/5440",
"diff_url": "https://github.com/huggingface/datasets/pull/5440.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5440.patch",
"merged_at": "2023-01-18T17:50:04"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5440/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5439/comments | https://api.github.com/repos/huggingface/datasets/issues/5439/events | https://github.com/huggingface/datasets/issues/5439 | 1,537,973,564 | I_kwDODunzps5bq508 | 5,439 | [dataset request] Add Common Voice 12.0 | {
"login": "MohammedRakib",
"id": 31034499,
"node_id": "MDQ6VXNlcjMxMDM0NDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/31034499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MohammedRakib",
"html_url": "https://github.com/MohammedRakib",
"followers_url": "https://api.github.com/users/MohammedRakib/followers",
"following_url": "https://api.github.com/users/MohammedRakib/following{/other_user}",
"gists_url": "https://api.github.com/users/MohammedRakib/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MohammedRakib/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MohammedRakib/subscriptions",
"organizations_url": "https://api.github.com/users/MohammedRakib/orgs",
"repos_url": "https://api.github.com/users/MohammedRakib/repos",
"events_url": "https://api.github.com/users/MohammedRakib/events{/privacy}",
"received_events_url": "https://api.github.com/users/MohammedRakib/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@polinaeterna any tentative date on when the Common Voice 12.0 dataset will be added ?",
"This dataset is now hosted on the Hub here: https://huggingface.co/datasets/mozilla-foundation/common_voice_12_0"
] | 2023-01-18T13:07:05 | 2023-07-21T14:26:10 | 2023-07-21T14:26:09 | NONE | null | null | null | ### Feature request
Please add the Common Voice 12.0 datasets. Apart from English, a significant amount of audio data has been added to the other minor-language datasets.
### Motivation
The dataset link:
https://commonvoice.mozilla.org/en/datasets
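For reference, once hosted on the Hub (the comments above indicate it now lives at `mozilla-foundation/common_voice_12_0`), loading should follow the pattern of earlier Common Voice releases. A sketch, assuming the same gated-access setup as Common Voice 11.0:

```python
from datasets import load_dataset

# gated dataset: accept the terms on its Hub page and authenticate first
cv12_bn = load_dataset(
    "mozilla-foundation/common_voice_12_0",
    "bn",                 # any language config, e.g. Bengali
    split="train",
    use_auth_token=True,  # forwards your HF token for gated datasets
)
```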
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5439/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5439/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5438 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5438/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5438/comments | https://api.github.com/repos/huggingface/datasets/issues/5438/events | https://github.com/huggingface/datasets/pull/5438 | 1,537,489,730 | PR_kwDODunzps5HmWA8 | 5,438 | Update actions/checkout in CD Conda release | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008470 / 0.011353 (-0.002883) | 0.004721 / 0.011008 (-0.006287) | 0.099024 / 0.038508 (0.060516) | 0.029831 / 0.023109 (0.006722) | 0.325887 / 0.275898 (0.049989) | 0.380753 / 0.323480 (0.057273) | 0.007101 / 0.007986 (-0.000885) | 0.004734 / 0.004328 (0.000406) | 0.077576 / 0.004250 (0.073326) | 0.037207 / 0.037052 (0.000154) | 0.320463 / 0.258489 (0.061974) | 0.369284 / 0.293841 (0.075443) | 0.033411 / 0.128546 (-0.095135) | 0.011610 / 0.075646 (-0.064037) | 0.321460 / 0.419271 (-0.097811) | 0.041315 / 0.043533 (-0.002217) | 0.349186 / 0.255139 (0.094047) | 0.384546 / 0.283200 (0.101347) | 0.088045 / 0.141683 (-0.053637) | 1.536341 / 1.452155 (0.084186) | 1.527806 / 1.492716 (0.035089) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193435 / 0.018006 (0.175429) | 0.451732 / 0.000490 (0.451243) | 0.003165 / 0.000200 (0.002965) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023203 / 0.037411 (-0.014208) | 0.096211 / 0.014526 (0.081685) | 0.105665 / 0.176557 (-0.070891) | 0.141074 / 0.737135 (-0.596061) | 0.108584 / 0.296338 (-0.187755) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419041 / 0.215209 (0.203832) | 4.187915 / 2.077655 (2.110261) | 1.855336 / 1.504120 (0.351216) | 1.660046 / 1.541195 (0.118851) | 1.674646 / 1.468490 
(0.206156) | 0.692257 / 4.584777 (-3.892520) | 3.466853 / 3.745712 (-0.278860) | 1.900925 / 5.269862 (-3.368936) | 1.294696 / 4.565676 (-3.270980) | 0.082792 / 0.424275 (-0.341483) | 0.012808 / 0.007607 (0.005201) | 0.529622 / 0.226044 (0.303578) | 5.337025 / 2.268929 (3.068096) | 2.326558 / 55.444624 (-53.118066) | 1.956256 / 6.876477 (-4.920221) | 2.035911 / 2.142072 (-0.106161) | 0.815824 / 4.805227 (-3.989403) | 0.148720 / 6.500664 (-6.351944) | 0.064226 / 0.075469 (-0.011243) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.231347 / 1.841788 (-0.610440) | 13.724596 / 8.074308 (5.650288) | 13.933878 / 10.191392 (3.742486) | 0.150913 / 0.680424 (-0.529511) | 0.028460 / 0.534201 (-0.505741) | 0.393564 / 0.579283 (-0.185719) | 0.407185 / 0.434364 (-0.027179) | 0.458250 / 0.540337 (-0.082087) | 0.547993 / 1.386936 (-0.838943) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006653 / 0.011353 (-0.004699) | 0.004615 / 0.011008 (-0.006393) | 0.098062 / 0.038508 (0.059554) | 0.027849 / 0.023109 (0.004740) | 0.409116 / 0.275898 (0.133218) | 0.448770 / 0.323480 (0.125290) | 0.004856 / 0.007986 (-0.003130) | 0.003427 / 0.004328 (-0.000901) | 0.075748 / 0.004250 (0.071498) | 0.037942 / 0.037052 (0.000889) | 0.410232 / 0.258489 (0.151743) | 0.457394 / 0.293841 (0.163553) | 0.031927 / 0.128546 (-0.096620) | 0.011618 / 0.075646 (-0.064028) | 0.321231 / 0.419271 (-0.098040) | 0.041416 / 0.043533 (-0.002117) | 0.413535 / 0.255139 (0.158396) | 0.438196 / 0.283200 (0.154997) | 0.089551 / 0.141683 (-0.052132) | 1.459298 / 1.452155 (0.007143) | 1.552594 / 1.492716 (0.059878) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228186 / 0.018006 (0.210180) | 0.404393 / 0.000490 (0.403904) | 0.006944 / 0.000200 (0.006744) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025167 / 0.037411 (-0.012244) | 0.101282 / 0.014526 (0.086756) | 0.107282 / 0.176557 (-0.069275) | 0.139797 / 0.737135 (-0.597339) | 0.110477 / 0.296338 (-0.185861) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.479121 / 0.215209 (0.263912) | 4.778210 / 2.077655 (2.700555) | 2.464687 / 1.504120 (0.960567) | 2.255312 / 1.541195 (0.714118) | 2.287348 / 1.468490 (0.818858) | 0.694769 / 4.584777 (-3.890008) | 3.460860 / 3.745712 (-0.284852) | 3.078881 / 5.269862 (-2.190980) | 1.297726 / 4.565676 (-3.267950) | 0.082699 / 0.424275 (-0.341576) | 0.012652 / 0.007607 (0.005045) | 0.583308 / 0.226044 (0.357263) | 5.839199 / 2.268929 (3.570271) | 2.893724 / 55.444624 (-52.550900) | 2.546503 / 6.876477 (-4.329974) | 2.559570 / 2.142072 (0.417498) | 0.802357 / 4.805227 (-4.002870) | 0.151890 / 6.500664 (-6.348774) | 0.068593 / 0.075469 (-0.006876) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262421 / 1.841788 (-0.579367) | 13.771848 / 8.074308 (5.697540) | 14.046017 / 10.191392 (3.854625) | 0.140950 / 0.680424 (-0.539474) | 0.016839 / 0.534201 (-0.517362) | 0.378870 / 0.579283 (-0.200413) | 0.385908 / 0.434364 (-0.048456) | 0.438539 / 0.540337 (-0.101799) | 0.522761 / 1.386936 (-0.864175) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8145ebfd4fc3508d0be0de9a0f9c58877f2b32f8 \"CML watermark\")\n"
] | 2023-01-18T06:53:15 | 2023-01-18T13:49:51 | 2023-01-18T13:42:49 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5438",
"html_url": "https://github.com/huggingface/datasets/pull/5438",
"diff_url": "https://github.com/huggingface/datasets/pull/5438.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5438.patch",
"merged_at": "2023-01-18T13:42:48"
} | This PR updates the "checkout" GitHub Action to its latest version, as previous ones are deprecated: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5438/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5437 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5437/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5437/comments | https://api.github.com/repos/huggingface/datasets/issues/5437/events | https://github.com/huggingface/datasets/issues/5437 | 1,536,837,144 | I_kwDODunzps5bmkYY | 5,437 | Can't load png dataset with 4 channel (RGBA) | {
"login": "WiNE-iNEFF",
"id": 41611046,
"node_id": "MDQ6VXNlcjQxNjExMDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/41611046?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WiNE-iNEFF",
"html_url": "https://github.com/WiNE-iNEFF",
"followers_url": "https://api.github.com/users/WiNE-iNEFF/followers",
"following_url": "https://api.github.com/users/WiNE-iNEFF/following{/other_user}",
"gists_url": "https://api.github.com/users/WiNE-iNEFF/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WiNE-iNEFF/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WiNE-iNEFF/subscriptions",
"organizations_url": "https://api.github.com/users/WiNE-iNEFF/orgs",
"repos_url": "https://api.github.com/users/WiNE-iNEFF/repos",
"events_url": "https://api.github.com/users/WiNE-iNEFF/events{/privacy}",
"received_events_url": "https://api.github.com/users/WiNE-iNEFF/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! Can you please share the directory structure of your image folder and the `load_dataset` call? We decode images with Pillow, and Pillow supports RGBA PNGs, so this shouldn't be a problem.\r\n\r\n",
"> Hi! Can you please share the directory structure of your image folder and the `load_dataset` call? We decode images with Pillow, and Pillow supports RGBA PNGs, so this shouldn't be a problem.\n> \n> \n\nI have only 1 folder that I use in the load_dataset function with the name \"IMGDATA\" and all my 9000 images are located in this folder.\n`\nfrom datasets import load_dataset\n\ndataset = load_dataset(\"IMGDATA\")\n`\nAt the same time, using another data set with images consisting of 3 RGB channels, everything works",
"Okay, I figured out what was wrong. When uploading my dataset via Google Drive, the images broke and Pillow couldn't open them. As a result, I solved the problem by downloading the ZIP archive"
] | 2023-01-17T18:22:27 | 2023-01-18T20:20:15 | 2023-01-18T20:20:15 | NONE | null | null | null | I try to create a dataset which contains about 9000 PNG images, 64x64 in size, all of them 4-channel (RGBA). When I try to use load_dataset(), a dataset is created from only 2 images, and I cannot understand what exactly is interfering.![Screenshot_20230117_212213.jpg](https://user-images.githubusercontent.com/41611046/212980147-9aa68e30-76e9-4b61-a937-c2fdabd56564.jpg) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5437/timeline | null | completed | false |
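As context for the RGBA report just above, here is a minimal sketch of loading a local image folder with the generic `imagefolder` builder and dropping the alpha channel per example. The `IMGDATA` directory name comes from the thread; everything else is standard Datasets/Pillow usage, not code taken from the reporter:

```python
from datasets import load_dataset

# load every image under ./IMGDATA with the generic imagefolder builder
dataset = load_dataset("imagefolder", data_dir="IMGDATA", split="train")

# Pillow decodes RGBA PNGs fine; if a downstream model expects 3 channels,
# drop the alpha channel explicitly instead of relying on the loader
def to_rgb(example):
    example["image"] = example["image"].convert("RGB")
    return example

dataset = dataset.map(to_rgb)
print(dataset[0]["image"].mode)  # "RGB"
```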
https://api.github.com/repos/huggingface/datasets/issues/5436 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5436/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5436/comments | https://api.github.com/repos/huggingface/datasets/issues/5436/events | https://github.com/huggingface/datasets/pull/5436 | 1,536,633,173 | PR_kwDODunzps5Hjh4v | 5,436 | Revert container image pin in CI benchmarks | {
"login": "0x2b3bfa0",
"id": 11387611,
"node_id": "MDQ6VXNlcjExMzg3NjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/11387611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/0x2b3bfa0",
"html_url": "https://github.com/0x2b3bfa0",
"followers_url": "https://api.github.com/users/0x2b3bfa0/followers",
"following_url": "https://api.github.com/users/0x2b3bfa0/following{/other_user}",
"gists_url": "https://api.github.com/users/0x2b3bfa0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/0x2b3bfa0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/0x2b3bfa0/subscriptions",
"organizations_url": "https://api.github.com/users/0x2b3bfa0/orgs",
"repos_url": "https://api.github.com/users/0x2b3bfa0/repos",
"events_url": "https://api.github.com/users/0x2b3bfa0/events{/privacy}",
"received_events_url": "https://api.github.com/users/0x2b3bfa0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.013736 / 0.011353 (0.002383) | 0.006253 / 0.011008 (-0.004755) | 0.127076 / 0.038508 (0.088568) | 0.040997 / 0.023109 (0.017888) | 0.394744 / 0.275898 (0.118846) | 0.454285 / 0.323480 (0.130805) | 0.009864 / 0.007986 (0.001878) | 0.005093 / 0.004328 (0.000765) | 0.098714 / 0.004250 (0.094464) | 0.044308 / 0.037052 (0.007255) | 0.421951 / 0.258489 (0.163462) | 0.462280 / 0.293841 (0.168439) | 0.059979 / 0.128546 (-0.068567) | 0.020607 / 0.075646 (-0.055039) | 0.443593 / 0.419271 (0.024321) | 0.062332 / 0.043533 (0.018799) | 0.411335 / 0.255139 (0.156196) | 0.426524 / 0.283200 (0.143324) | 0.118233 / 0.141683 (-0.023450) | 1.877681 / 1.452155 (0.425527) | 1.865271 / 1.492716 (0.372555) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234791 / 0.018006 (0.216784) | 0.557322 / 0.000490 (0.556833) | 0.000528 / 0.000200 (0.000328) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030260 / 0.037411 (-0.007151) | 0.122594 / 0.014526 (0.108068) | 0.142142 / 0.176557 (-0.034414) | 0.197098 / 0.737135 (-0.540037) | 0.150978 / 0.296338 (-0.145360) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.622644 / 0.215209 (0.407435) | 6.320078 / 2.077655 (4.242423) | 2.552755 / 1.504120 (1.048635) | 2.188647 / 1.541195 (0.647453) | 2.226602 / 1.468490 
(0.758112) | 1.288083 / 4.584777 (-3.296694) | 5.624143 / 3.745712 (1.878431) | 3.208382 / 5.269862 (-2.061480) | 2.115222 / 4.565676 (-2.450455) | 0.146420 / 0.424275 (-0.277856) | 0.014464 / 0.007607 (0.006857) | 0.816470 / 0.226044 (0.590425) | 7.984049 / 2.268929 (5.715120) | 3.364942 / 55.444624 (-52.079682) | 2.552306 / 6.876477 (-4.324171) | 2.664575 / 2.142072 (0.522503) | 1.556177 / 4.805227 (-3.249050) | 0.263389 / 6.500664 (-6.237275) | 0.076861 / 0.075469 (0.001391) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.553734 / 1.841788 (-0.288054) | 18.365029 / 8.074308 (10.290721) | 20.993993 / 10.191392 (10.802601) | 0.235642 / 0.680424 (-0.444782) | 0.047084 / 0.534201 (-0.487117) | 0.555610 / 0.579283 (-0.023673) | 0.659413 / 0.434364 (0.225049) | 0.639284 / 0.540337 (0.098947) | 0.756317 / 1.386936 (-0.630620) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.014709 / 0.011353 (0.003356) | 0.006673 / 0.011008 (-0.004335) | 0.133718 / 0.038508 (0.095210) | 0.035699 / 0.023109 (0.012590) | 0.459089 / 0.275898 (0.183191) | 0.538071 / 0.323480 (0.214591) | 0.007376 / 0.007986 (-0.000610) | 0.004688 / 0.004328 (0.000360) | 0.104909 / 0.004250 (0.100659) | 0.064942 / 0.037052 (0.027890) | 0.466158 / 0.258489 (0.207669) | 0.566100 / 0.293841 (0.272259) | 0.057368 / 0.128546 (-0.071178) | 0.021572 / 0.075646 (-0.054075) | 0.413826 / 0.419271 (-0.005446) | 0.079543 / 0.043533 (0.036010) | 0.493313 / 0.255139 (0.238174) | 0.517787 / 0.283200 (0.234587) | 0.119836 / 0.141683 (-0.021847) | 1.833956 / 1.452155 (0.381801) | 2.003288 / 1.492716 (0.510572) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.276013 / 0.018006 (0.258007) | 0.549194 / 0.000490 (0.548704) | 0.010939 / 0.000200 (0.010739) | 0.000129 / 0.000054 (0.000075) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034983 / 0.037411 (-0.002428) | 0.131576 / 0.014526 (0.117050) | 0.140651 / 0.176557 (-0.035906) | 0.186455 / 0.737135 (-0.550681) | 0.146309 / 0.296338 (-0.150029) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.675973 / 0.215209 (0.460763) | 6.821862 / 2.077655 (4.744208) | 3.090307 / 1.504120 (1.586187) | 2.710679 / 1.541195 (1.169484) | 2.891577 / 1.468490 (1.423087) | 1.306160 / 4.584777 (-3.278617) | 5.629763 / 3.745712 (1.884051) | 4.662578 / 5.269862 (-0.607283) | 2.670195 / 4.565676 (-1.895482) | 0.153867 / 0.424275 (-0.270408) | 0.016028 / 0.007607 (0.008421) | 0.878702 / 0.226044 (0.652658) | 8.801612 / 2.268929 (6.532683) | 4.005520 / 55.444624 (-51.439104) | 3.124755 / 6.876477 (-3.751721) | 3.382132 / 2.142072 (1.240060) | 1.525951 / 4.805227 (-3.279277) | 0.263350 / 6.500664 (-6.237315) | 0.079285 / 0.075469 (0.003815) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.647591 / 1.841788 (-0.194197) | 18.281646 / 8.074308 (10.207338) | 21.072142 / 10.191392 (10.880750) | 0.232236 / 0.680424 (-0.448188) | 0.026126 / 0.534201 (-0.508075) | 0.546926 / 0.579283 (-0.032357) | 0.634496 / 0.434364 (0.200132) | 0.604345 / 0.540337 (0.064007) | 0.730159 / 1.386936 (-0.656777) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cfe8a6aa4cd2d3d0d7067f390152d1a4aeb4c710 \"CML watermark\")\n"
] | 2023-01-17T15:59:50 | 2023-01-18T09:05:49 | 2023-01-18T06:29:06 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5436",
"html_url": "https://github.com/huggingface/datasets/pull/5436",
"diff_url": "https://github.com/huggingface/datasets/pull/5436.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5436.patch",
"merged_at": "2023-01-18T06:29:06"
} | Closes #5433, reverts #5432, and also:
* Uses [ghcr.io container images](https://cml.dev/doc/self-hosted-runners/#docker-images) for extra speed
* Updates `actions/checkout` to `v3` (note that `v2` is [deprecated](https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/))
* Follows the new naming convention for environment variables introduced with [iterative/cml#1272](https://github.com/iterative/cml/pull/1272) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5436/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5436/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5435 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5435/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5435/comments | https://api.github.com/repos/huggingface/datasets/issues/5435/events | https://github.com/huggingface/datasets/issues/5435 | 1,536,099,300 | I_kwDODunzps5bjwPk | 5,435 | Wrong statement in "Load a Dataset in Streaming mode" leads to data leakage | {
"login": "DanielYang59",
"id": 80093591,
"node_id": "MDQ6VXNlcjgwMDkzNTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/80093591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DanielYang59",
"html_url": "https://github.com/DanielYang59",
"followers_url": "https://api.github.com/users/DanielYang59/followers",
"following_url": "https://api.github.com/users/DanielYang59/following{/other_user}",
"gists_url": "https://api.github.com/users/DanielYang59/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DanielYang59/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DanielYang59/subscriptions",
"organizations_url": "https://api.github.com/users/DanielYang59/orgs",
"repos_url": "https://api.github.com/users/DanielYang59/repos",
"events_url": "https://api.github.com/users/DanielYang59/events{/privacy}",
"received_events_url": "https://api.github.com/users/DanielYang59/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Just for your information, Tensorflow confirmed this issue [here.](https://github.com/tensorflow/tensorflow/issues/59279)",
"Thanks for reporting, @HaoyuYang59.\r\n\r\nPlease note that these are different \"dataset\" objects: our docs refer to Hugging Face `datasets.Dataset` and not to TensorFlow `tf.data.Dataset`.\r\n\r\nOur `datasets.Dataset.shuffle` method does not have a `reshuffle_each_iteration` argument. Therefore, I would say the statement in our docs is True because they refer to `datasets.Dataset.shuffle`, `datasets.Dataset.skip` and `datasets.Dataset.take`.\r\n\r\nI think this issue is restricted to TensorFlow dataset, and this would be addressed by them in the issue you opened in their repo: https://github.com/tensorflow/tensorflow/issues/59279",
"Also note that you are referring to an outdated documentation page: datasets 1.10.2 version\r\n\r\nCurrent datasets version is 2.8.0 and the corresponding documentation page is: https://huggingface.co/docs/datasets/stream#split-dataset",
"Hi @albertvillanova thanks for your reply and your explaination here. \r\n\r\nSorry for the confusion as I'm not actually a user of your repo and I just happen to find the thread by Google (and didn't read carefully).\r\n\r\nGreat to know that and you made everything very clear now.\r\n\r\nThanks for your time and sorry for the consusion.\r\n\r\nWishing you a wonderful time. \r\n\r\nRegards"
] | 2023-01-17T10:04:16 | 2023-01-19T09:56:03 | 2023-01-19T09:56:03 | NONE | null | null | null | ### Describe the bug
In the [Split your dataset with take and skip](https://huggingface.co/docs/datasets/v1.10.2/dataset_streaming.html#split-your-dataset-with-take-and-skip), it states:
> Using take (or skip) prevents future calls to shuffle from shuffling the dataset shards order, otherwise the taken examples could come from other shards. In this case it only uses the shuffle buffer. Therefore it is advised to shuffle the dataset before splitting using take or skip. See more details in the [Shuffling the dataset: shuffle](https://huggingface.co/docs/datasets/v1.10.2/dataset_streaming.html#iterable-dataset-shuffling) section.`
>> \# You can also create splits from a shuffled dataset
>> train_dataset = shuffled_dataset.skip(1000)
>> eval_dataset = shuffled_dataset.take(1000)
Where the shuffled dataset comes from:
`shuffled_dataset = dataset.shuffle(buffer_size=10_000, seed=42)`
At least in TensorFlow 2.9/2.10/2.11, the [docs](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) state that the `reshuffle_each_iteration` argument is `True` by default. This means the dataset would be reshuffled after each epoch, and as a result **the validation data would leak into the training set**.
### Steps to reproduce the bug
N/A
### Expected behavior
The `reshuffle_each_iteration` argument should be set to `False`.
### Environment info
Tensorflow 2.9/2.10/2.11 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5435/timeline | null | completed | false |
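To make the distinction drawn in the replies above concrete, the sketch below shows both APIs side by side: the Hugging Face streaming pattern quoted from the docs, and the TensorFlow fix the reporter asked for. The dataset name `some_dataset` is a placeholder; only the `reshuffle_each_iteration=False` line addresses the leakage described in the issue, and it applies to `tf.data` only:

```python
import tensorflow as tf
from datasets import load_dataset

# Hugging Face streaming dataset: shuffle once, then split with skip/take;
# datasets.IterableDataset.shuffle has no reshuffle_each_iteration argument
streamed = load_dataset("some_dataset", split="train", streaming=True)
shuffled = streamed.shuffle(buffer_size=10_000, seed=42)
train_dataset = shuffled.skip(1000)
eval_dataset = shuffled.take(1000)

# tf.data: reshuffle_each_iteration defaults to True, so pin it to False
# before splitting, otherwise each epoch remixes the two splits
tf_ds = tf.data.Dataset.range(10_000).shuffle(
    buffer_size=10_000, seed=42, reshuffle_each_iteration=False
)
tf_train = tf_ds.skip(1000)
tf_eval = tf_ds.take(1000)
```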
https://api.github.com/repos/huggingface/datasets/issues/5434 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5434/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5434/comments | https://api.github.com/repos/huggingface/datasets/issues/5434/events | https://github.com/huggingface/datasets/issues/5434 | 1,536,090,042 | I_kwDODunzps5bjt-6 | 5,434 | sample_dataset module not found | {
"login": "nickums",
"id": 15816213,
"node_id": "MDQ6VXNlcjE1ODE2MjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/15816213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nickums",
"html_url": "https://github.com/nickums",
"followers_url": "https://api.github.com/users/nickums/followers",
"following_url": "https://api.github.com/users/nickums/following{/other_user}",
"gists_url": "https://api.github.com/users/nickums/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nickums/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nickums/subscriptions",
"organizations_url": "https://api.github.com/users/nickums/orgs",
"repos_url": "https://api.github.com/users/nickums/repos",
"events_url": "https://api.github.com/users/nickums/events{/privacy}",
"received_events_url": "https://api.github.com/users/nickums/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! Can you describe what the actual error is?",
"working on the setfit example script\r\n\r\n from setfit import SetFitModel, SetFitTrainer, sample_dataset\r\n\r\nImportError: cannot import name 'sample_dataset' from 'setfit' (C:\\Python\\Python38\\lib\\site-packages\\setfit\\__init__.py)\r\n\r\n apart from that, I also had to hack these loads to import thses modules:\r\n from datasets.load import load_dataset \r\n from datasets.arrow_dataset import Dataset\r\n from datasets.dataset_dict import DatasetDict",
"Hi! This issue is related to the [SetFit](https://github.com/huggingface/setfit) project, so can you please open it there?"
] | 2023-01-17T09:57:54 | 2023-01-19T13:52:12 | 2023-01-19T07:55:11 | NONE | null | null | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5434/timeline | null | completed | false |
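For reference on the import workaround quoted in the comments above: the submodule paths are unnecessary, since the same names are re-exported at the top level of `datasets`. A minimal sketch (the `sample_dataset` helper itself lives in the `setfit` package, so it is out of scope here, and `imdb` is used purely as an example dataset):

```python
# canonical top-level imports, equivalent to the submodule hack in the thread
from datasets import Dataset, DatasetDict, load_dataset

dataset = load_dataset("imdb", split="train")
assert isinstance(dataset, Dataset)
```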
https://api.github.com/repos/huggingface/datasets/issues/5433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5433/comments | https://api.github.com/repos/huggingface/datasets/issues/5433/events | https://github.com/huggingface/datasets/issues/5433 | 1,536,017,901 | I_kwDODunzps5bjcXt | 5,433 | Support latest Docker image in CI benchmarks | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Sorry, it was us:[^1] https://github.com/iterative/cml/pull/1317 & https://github.com/iterative/cml/issues/1319#issuecomment-1385599559; should be fixed with [v0.18.17](https://github.com/iterative/cml/releases/tag/v0.18.17).\r\n\r\n[^1]: More or less, see https://github.com/yargs/yargs/issues/873.",
"Opened https://github.com/huggingface/datasets/pull/5436 unpinning again the container image.",
"Hi @0x2b3bfa0, thanks a lot for the investigation, the context about the the root cause and for fixing it!!\r\n\r\nWe are reviewing your PR to unpin the container image."
] | 2023-01-17T09:06:08 | 2023-01-18T06:29:08 | 2023-01-18T06:29:08 | MEMBER | null | null | null | Once we find out the root cause of:
- #5431
we should revert the temporary pin on the Docker image version introduced by:
- #5432 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5433/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5433/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5432 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5432/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5432/comments | https://api.github.com/repos/huggingface/datasets/issues/5432/events | https://github.com/huggingface/datasets/pull/5432 | 1,535,893,019 | PR_kwDODunzps5HhEA8 | 5,432 | Fix CI benchmarks by temporarily pinning Docker image version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008519 / 0.011353 (-0.002834) | 0.004451 / 0.011008 (-0.006558) | 0.102401 / 0.038508 (0.063893) | 0.029779 / 0.023109 (0.006669) | 0.302654 / 0.275898 (0.026756) | 0.366002 / 0.323480 (0.042522) | 0.007044 / 0.007986 (-0.000942) | 0.003350 / 0.004328 (-0.000978) | 0.078213 / 0.004250 (0.073963) | 0.035208 / 0.037052 (-0.001844) | 0.312980 / 0.258489 (0.054491) | 0.344217 / 0.293841 (0.050376) | 0.033089 / 0.128546 (-0.095457) | 0.011443 / 0.075646 (-0.064203) | 0.353143 / 0.419271 (-0.066128) | 0.040851 / 0.043533 (-0.002682) | 0.304501 / 0.255139 (0.049362) | 0.329118 / 0.283200 (0.045918) | 0.087399 / 0.141683 (-0.054284) | 1.500200 / 1.452155 (0.048046) | 1.536176 / 1.492716 (0.043459) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209626 / 0.018006 (0.191619) | 0.425551 / 0.000490 (0.425061) | 0.001168 / 0.000200 (0.000968) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023664 / 0.037411 (-0.013748) | 0.096792 / 0.014526 (0.082266) | 0.105652 / 0.176557 (-0.070905) | 0.140796 / 0.737135 (-0.596340) | 0.109319 / 0.296338 (-0.187019) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414802 / 0.215209 (0.199593) | 4.152619 / 2.077655 (2.074964) | 1.814403 / 1.504120 (0.310283) | 1.611392 / 1.541195 (0.070198) | 1.667350 / 1.468490 
(0.198860) | 0.691855 / 4.584777 (-3.892922) | 3.406584 / 3.745712 (-0.339128) | 1.940332 / 5.269862 (-3.329530) | 1.279061 / 4.565676 (-3.286615) | 0.082938 / 0.424275 (-0.341337) | 0.012388 / 0.007607 (0.004781) | 0.521738 / 0.226044 (0.295693) | 5.233764 / 2.268929 (2.964835) | 2.306573 / 55.444624 (-53.138051) | 1.954631 / 6.876477 (-4.921845) | 2.048315 / 2.142072 (-0.093757) | 0.816921 / 4.805227 (-3.988306) | 0.150983 / 6.500664 (-6.349681) | 0.066628 / 0.075469 (-0.008842) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.235939 / 1.841788 (-0.605849) | 14.047114 / 8.074308 (5.972806) | 14.149842 / 10.191392 (3.958450) | 0.152836 / 0.680424 (-0.527588) | 0.028837 / 0.534201 (-0.505364) | 0.396232 / 0.579283 (-0.183051) | 0.409950 / 0.434364 (-0.024414) | 0.460296 / 0.540337 (-0.080041) | 0.556787 / 1.386936 (-0.830149) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006582 / 0.011353 (-0.004771) | 0.004491 / 0.011008 (-0.006518) | 0.100093 / 0.038508 (0.061585) | 0.026826 / 0.023109 (0.003717) | 0.413971 / 0.275898 (0.138073) | 0.445625 / 0.323480 (0.122145) | 0.004892 / 0.007986 (-0.003094) | 0.003295 / 0.004328 (-0.001034) | 0.077879 / 0.004250 (0.073628) | 0.039177 / 0.037052 (0.002125) | 0.353299 / 0.258489 (0.094810) | 0.406566 / 0.293841 (0.112725) | 0.031633 / 0.128546 (-0.096913) | 0.011517 / 0.075646 (-0.064130) | 0.320939 / 0.419271 (-0.098332) | 0.041487 / 0.043533 (-0.002046) | 0.353735 / 0.255139 (0.098596) | 0.434786 / 0.283200 (0.151586) | 0.087722 / 0.141683 (-0.053961) | 1.515134 / 1.452155 (0.062979) | 1.588908 / 1.492716 (0.096191) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225312 / 0.018006 (0.207305) | 0.398324 / 0.000490 (0.397834) | 0.000453 / 0.000200 (0.000253) | 0.000064 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024645 / 0.037411 (-0.012766) | 0.099399 / 0.014526 (0.084873) | 0.107006 / 0.176557 (-0.069550) | 0.145090 / 0.737135 (-0.592045) | 0.110046 / 0.296338 (-0.186292) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.450573 / 0.215209 (0.235364) | 4.498030 / 2.077655 (2.420375) | 2.193164 / 1.504120 (0.689044) | 1.940103 / 1.541195 (0.398908) | 1.957137 / 1.468490 (0.488647) | 0.697599 / 4.584777 (-3.887178) | 3.465146 / 3.745712 (-0.280566) | 1.918209 / 5.269862 (-3.351653) | 1.183921 / 4.565676 (-3.381756) | 0.082540 / 0.424275 (-0.341735) | 0.012495 / 0.007607 (0.004888) | 0.549702 / 0.226044 (0.323658) | 5.526841 / 2.268929 (3.257912) | 2.658611 / 55.444624 (-52.786014) | 2.259542 / 6.876477 (-4.616935) | 2.310139 / 2.142072 (0.168066) | 0.810550 / 4.805227 (-3.994677) | 0.152369 / 6.500664 (-6.348295) | 0.066295 / 0.075469 (-0.009174) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.289240 / 1.841788 (-0.552547) | 14.032143 / 8.074308 (5.957834) | 13.973492 / 10.191392 (3.782100) | 0.140082 / 0.680424 (-0.540342) | 0.017113 / 0.534201 (-0.517088) | 0.386534 / 0.579283 (-0.192749) | 0.393723 / 0.434364 (-0.040641) | 0.448891 / 0.540337 (-0.091446) | 0.533085 / 1.386936 (-0.853851) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n"
] | 2023-01-17T07:15:31 | 2023-01-17T08:58:22 | 2023-01-17T08:51:17 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5432",
"html_url": "https://github.com/huggingface/datasets/pull/5432",
"diff_url": "https://github.com/huggingface/datasets/pull/5432.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5432.patch",
"merged_at": "2023-01-17T08:51:17"
} | This PR fixes CI benchmarks, by temporarily pinning Docker image version, instead of "latest" tag.
It also replaces the deprecated `cml-send-comment` command with `cml comment create`.
Fix #5431. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5432/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5432/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5431 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5431/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5431/comments | https://api.github.com/repos/huggingface/datasets/issues/5431/events | https://github.com/huggingface/datasets/issues/5431 | 1,535,862,621 | I_kwDODunzps5bi2dd | 5,431 | CI benchmarks are broken: Unknown arguments: runnerPath, path | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4296013012,
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance",
"name": "maintenance",
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-01-17T06:49:57 | 2023-01-18T06:33:24 | 2023-01-17T08:51:18 | MEMBER | null | null | null | Our CI benchmarks are broken, raising `Unknown arguments` error: https://github.com/huggingface/datasets/actions/runs/3932397079/jobs/6724905161
```
Unknown arguments: runnerPath, path
```
Stack trace:
```
100%|██████████| 500/500 [00:01<00:00, 338.98ba/s]
Updating lock file 'dvc.lock'
To track the changes with git, run:
git add dvc.lock
To enable auto staging, run:
dvc config core.autostage true
Use `dvc push` to send your updates to remote storage.
cml send-comment <markdown file>
Global Options:
--log Logging verbosity
[string] [choices: "error", "warn", "info", "debug"] [default: "info"]
--driver Git provider where the repository is hosted
[string] [choices: "github", "gitlab", "bitbucket"] [default: infer from the
environment]
--repo Repository URL or slug
[string] [default: infer from the environment]
--driver-token, --token CI driver personal/project access token (PAT)
[string] [default: infer from the environment]
--help Show help [boolean]
Options:
--target Comment type (`commit`, `pr`, `commit/f00bar`,
`pr/42`, `issue/1337`),default is automatic (`pr`
but fallback to `commit`). [string]
--watch Watch for changes and automatically update the
comment [boolean]
--publish Upload any local images found in the Markdown
report [boolean] [default: true]
--publish-url Self-hosted image server URL
[string] [default: "https://asset.cml.dev/"]
--publish-native, --native Uses driver's native capabilities to upload assets
instead of CML's storage; not available on GitHub
[boolean]
--watermark-title Hidden comment marker (used for targeting in
subsequent `cml comment update`); "{workflow}" &
"{run}" are auto-replaced [string] [default: ""]
Unknown arguments: runnerPath, path
Error: Process completed with exit code 1.
```
Issue reported to iterative/cml:
- iterative/cml#1319 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5431/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5431/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5430 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5430/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5430/comments | https://api.github.com/repos/huggingface/datasets/issues/5430/events | https://github.com/huggingface/datasets/issues/5430 | 1,535,856,503 | I_kwDODunzps5bi093 | 5,430 | Support Apache Beam >= 2.44.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Some of the shard files now have 0 number of rows.\r\n\r\nWe have opened an issue in the Apache Beam repo:\r\n- https://github.com/apache/beam/issues/25041"
] | 2023-01-17T06:42:12 | 2024-02-06T19:24:21 | 2024-02-06T19:24:21 | MEMBER | null | null | null | Once we find out the root cause of:
- #5426
we should revert the temporary pin on apache-beam introduced by:
- #5429 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5430/timeline | null | completed | false |
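For reference, the temporary pin this issue proposes to revert amounts to a single version constraint in the package requirements. The snippet below is an illustrative sketch, not the actual `setup.py` contents:

```python
# illustrative requirements entry, assuming the pin described in #5429
BEAM_REQUIRE = [
    "apache-beam<2.44.0",  # temporary upper bound until apache/beam#25041 is resolved
]
```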
https://api.github.com/repos/huggingface/datasets/issues/5429 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5429/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5429/comments | https://api.github.com/repos/huggingface/datasets/issues/5429/events | https://github.com/huggingface/datasets/pull/5429 | 1,535,192,687 | PR_kwDODunzps5HeuyT | 5,429 | Fix CI by temporarily pinning apache-beam < 2.44.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2023-01-16T16:20:09 | 2023-01-16T16:51:42 | 2023-01-16T16:49:03 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5429",
"html_url": "https://github.com/huggingface/datasets/pull/5429",
"diff_url": "https://github.com/huggingface/datasets/pull/5429.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5429.patch",
"merged_at": "2023-01-16T16:49:03"
} | Temporarily pin apache-beam < 2.44.0
Fix #5426. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5429/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5428 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5428/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5428/comments | https://api.github.com/repos/huggingface/datasets/issues/5428/events | https://github.com/huggingface/datasets/issues/5428 | 1,535,166,139 | I_kwDODunzps5bgMa7 | 5,428 | Load/Save FAISS index using fsspec | {
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! Sure, feel free to submit a PR. Maybe if we want to be consistent with the existing API, it would be cleaner to directly add support for `fsspec` paths in `Dataset.load_faiss_index`/`Dataset.save_faiss_index` in the same manner as it was done in `Dataset.load_from_disk`/`Dataset.save_to_disk`.",
"That's a great idea! I'll do that instead. "
] | 2023-01-16T16:08:12 | 2023-03-27T15:18:22 | 2023-03-27T15:18:22 | CONTRIBUTOR | null | null | null | ### Feature request
From what I understand, `faiss` already supports this: [link](https://github.com/facebookresearch/faiss/wiki/Index-IO,-cloning-and-hyper-parameter-tuning#generic-io-support)
I would like to use a stream as input to `Dataset.load_faiss_index` and `Dataset.save_faiss_index`.
### Motivation
In my case, I'm saving FAISS indexes in cloud storage and using `fsspec` to load them. It would be ideal if I could pass the stream directly instead of copying the file locally (or mounting the bucket) and then loading the index.
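For illustration, here is a minimal sketch of the round trip I have in mind, built on faiss's `serialize_index`/`deserialize_index` helpers and an `fsspec` stream (the bucket path and the index dimensions are placeholder assumptions):
```python
import faiss
import fsspec
import numpy as np

# Toy index; the 8 dimensions and the s3:// path below are placeholder assumptions
index = faiss.IndexFlatL2(8)
index.add(np.random.rand(16, 8).astype("float32"))

# Save: serialize the index to bytes and stream them straight to cloud storage
with fsspec.open("s3://my-bucket/my.index", "wb") as f:
    f.write(faiss.serialize_index(index).tobytes())

# Load: read the bytes back and deserialize, with no local copy or bucket mount
with fsspec.open("s3://my-bucket/my.index", "rb") as f:
    index = faiss.deserialize_index(np.frombuffer(f.read(), dtype="uint8"))
```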
### Your contribution
I can submit the PR | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5428/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5427 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5427/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5427/comments | https://api.github.com/repos/huggingface/datasets/issues/5427/events | https://github.com/huggingface/datasets/issues/5427 | 1,535,162,889 | I_kwDODunzps5bgLoJ | 5,427 | Unable to download dataset id_clickbait | {
"login": "ilos-vigil",
"id": 45941585,
"node_id": "MDQ6VXNlcjQ1OTQxNTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/45941585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ilos-vigil",
"html_url": "https://github.com/ilos-vigil",
"followers_url": "https://api.github.com/users/ilos-vigil/followers",
"following_url": "https://api.github.com/users/ilos-vigil/following{/other_user}",
"gists_url": "https://api.github.com/users/ilos-vigil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ilos-vigil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ilos-vigil/subscriptions",
"organizations_url": "https://api.github.com/users/ilos-vigil/orgs",
"repos_url": "https://api.github.com/users/ilos-vigil/repos",
"events_url": "https://api.github.com/users/ilos-vigil/events{/privacy}",
"received_events_url": "https://api.github.com/users/ilos-vigil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @ilos-vigil.\r\n\r\nWe have transferred this issue to the corresponding dataset on the Hugging Face Hub: https://huggingface.co/datasets/id_clickbait/discussions/1 "
] | 2023-01-16T16:05:36 | 2023-01-18T09:51:28 | 2023-01-18T09:25:19 | NONE | null | null | null | ### Describe the bug
I tried to download the dataset `id_clickbait`, but received this error message.
```
FileNotFoundError: Couldn't find file at https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/k42j7x2kpn-1.zip
```
When I open the link in a browser, I get this XML data.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><BucketName>md-datasets-cache-zipfiles-prod</BucketName><RequestId>NVRM6VEEQD69SD00</RequestId><HostId>W/SPDxLGvlCGi0OD6d7mSDvfOAUqLAfvs9nTX50BkJrjMny+X9Jnqp/Li2lG9eTUuT4MUkAA2jjTfCrCiUmu7A==</HostId></Error>
```
### Steps to reproduce the bug
Code snippet:
```python
from datasets import load_dataset
load_dataset('id_clickbait', 'annotated')
load_dataset('id_clickbait', 'raw')
```
Link to Kaggle notebook: https://www.kaggle.com/code/ilosvigil/bug-check-on-id-clickbait-dataset
### Expected behavior
Successfully download and load the `id_clickbait` dataset.
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyArrow version: 8.0.0
- Pandas version: 1.3.5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5427/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5426 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5426/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5426/comments | https://api.github.com/repos/huggingface/datasets/issues/5426/events | https://github.com/huggingface/datasets/issues/5426 | 1,535,158,555 | I_kwDODunzps5bgKkb | 5,426 | CI tests are broken: SchemaInferenceError | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-01-16T16:02:07 | 2023-06-02T06:40:32 | 2023-01-16T16:49:04 | MEMBER | null | null | null | CI test (unit, ubuntu-latest, deps-minimum) is broken, raising a `SchemaInferenceError`: see https://github.com/huggingface/datasets/actions/runs/3930901593/jobs/6721492004
```
FAILED tests/test_beam.py::BeamBuilderTest::test_download_and_prepare_sharded - datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data
```
Stack trace:
```
______________ BeamBuilderTest.test_download_and_prepare_sharded _______________
[gw1] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python
self = <tests.test_beam.BeamBuilderTest testMethod=test_download_and_prepare_sharded>
    @require_beam
    def test_download_and_prepare_sharded(self):
        import apache_beam as beam
        original_write_parquet = beam.io.parquetio.WriteToParquet
        expected_num_examples = len(get_test_dummy_examples())
        with tempfile.TemporaryDirectory() as tmp_cache_dir:
            builder = DummyBeamDataset(cache_dir=tmp_cache_dir, beam_runner="DirectRunner")
            with patch("apache_beam.io.parquetio.WriteToParquet") as write_parquet_mock:
                write_parquet_mock.side_effect = partial(original_write_parquet, num_shards=2)
>               builder.download_and_prepare()
tests/test_beam.py:97:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/builder.py:864: in download_and_prepare
    **download_and_prepare_kwargs,
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/builder.py:1976: in _download_and_prepare
    num_examples, num_bytes = beam_writer.finalize(metrics.query(m_filter))
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:694: in finalize
    shard_num_bytes, _ = parquet_to_arrow(source, destination)
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:740: in parquet_to_arrow
    num_bytes, num_examples = writer.finalize()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <datasets.arrow_writer.ArrowWriter object at 0x7f6dcbb3e810>
close_stream = True
    def finalize(self, close_stream=True):
        self.write_rows_on_file()
        # In case current_examples < writer_batch_size, but user uses finalize()
        if self._check_duplicates:
            self.check_duplicate_keys()
            # Re-intializing to empty list for next batch
            self.hkey_record = []
        self.write_examples_on_file()
        # If schema is known, infer features even if no examples were written
        if self.pa_writer is None and self.schema:
            self._build_writer(self.schema)
        if self.pa_writer is not None:
            self.pa_writer.close()
            self.pa_writer = None
            if close_stream:
                self.stream.close()
        else:
            if close_stream:
                self.stream.close()
>           raise SchemaInferenceError("Please pass `features` or at least one example when writing data")
E           datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:593: SchemaInferenceError
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5426/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5425 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5425/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5425/comments | https://api.github.com/repos/huggingface/datasets/issues/5425/events | https://github.com/huggingface/datasets/issues/5425 | 1,534,581,850 | I_kwDODunzps5bd9xa | 5,425 | Sort on multiple keys with datasets.Dataset.sort() | {
"login": "rocco-fortuna",
"id": 101344863,
"node_id": "U_kgDOBgpmXw",
"avatar_url": "https://avatars.githubusercontent.com/u/101344863?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rocco-fortuna",
"html_url": "https://github.com/rocco-fortuna",
"followers_url": "https://api.github.com/users/rocco-fortuna/followers",
"following_url": "https://api.github.com/users/rocco-fortuna/following{/other_user}",
"gists_url": "https://api.github.com/users/rocco-fortuna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rocco-fortuna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rocco-fortuna/subscriptions",
"organizations_url": "https://api.github.com/users/rocco-fortuna/orgs",
"repos_url": "https://api.github.com/users/rocco-fortuna/repos",
"events_url": "https://api.github.com/users/rocco-fortuna/events{/privacy}",
"received_events_url": "https://api.github.com/users/rocco-fortuna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Hi! \r\n\r\n`Dataset.sort` calls `df.sort_values` internally, and `df.sort_values` brings all the \"sort\" columns in memory, so sorting on multiple keys could be very expensive. This makes me think that maybe we can replace `df.sort_values` with `pyarrow.compute.sort_indices` - the latter can also sort on multiple keys and currently loads the data into memory; however, there is a plan to eventually implement \"memory-map\" friendly kernels for the Arrow compute ops (using the Acero execution engine). \r\n\r\nSo to address this issue, you should replace `df.sort_values` with `pyarrow.compute.sort_indices` in `Dataset.sort` and adjust the signature of this function (deprecate the `kind` parameter, etc.).\r\n\r\nPS: Feel free to ping us if you need some additional help/pointers",
"@mariosasko If I understand the code right, using `pyarrow.compute.sort_indices` would also require changes to the `select` method if it is meant to sort multiple keys. That's because `select` only accepts 1D input for `indices`, not an iterable or similar which would be required for multiple keys unless you want some looping over selects. Doesn't seem that straight-forward but I might be missing something here... ",
"@MichlF No, it doesn't require modifying select because sorting on multiple keys also returns a 1D array.\r\n\r\nIt's easier to understand with an example:\r\n```python\r\n>>> import pyarrow as pa\r\n>>> import pyarrow.compute as pc\r\n>>> table = pa.table({\r\n... \"name\": [\"John\", \"Eve\", \"Peter\", \"John\"],\r\n... \"surname\": [\"Johnson\", \"Smith\", \"Smith\", \"Doe\"],\r\n... \"age\": [20, 40, 30, 50],\r\n... })\r\n>>> indices = pc.sort_indices(table, sort_keys=[(\"name\", \"ascending\"), (\"surname\", \"ascending\")])\r\n>>> print(indices)\r\n[\r\n 1,\r\n 3,\r\n 0,\r\n 2\r\n]\r\n```\r\n\r\n",
"Thanks for clarifying.\r\nI can prepare a PR to address this issue. This would be my first PR here so I have a few maybe silly questions but:\r\n- What is the preferred input type of `sort_keys` for the sort method? A sequence with name, order tuples like pyarrow's `sort_indices` requires?\r\n- What about backwards compatability: is it supposed to also accept the old way of calling sort() or should both `column` and `kind` be deprecated?\r\n- If `sort_keys` is provided in the same format as for pyarrow's `sort_indices` - i.e. along with order for each column -, `reverse` doesn't make much sense either and should be deprecated as well I assume.",
"I think we can have the following signature:\r\n```python\r\ndef sort(\r\n self,\r\n column_names: Union[str, Sequence[str]],\r\n reverse: Union[bool, Sequence[bool]] = False,\r\n kind=\"deprecated\",\r\n null_placement: str = \"last\",\r\n keep_in_memory: bool = False,\r\n load_from_cache_file: bool = True,\r\n indices_cache_file_name: Optional[str] = None,\r\n writer_batch_size: Optional[int] = 1000,\r\n new_fingerprint: Optional[str] = None,\r\n ) -> \"Dataset\":\r\n``` \r\n\r\nSo we should:\r\n* rename`column` to `column_names`. `column` is a positional argument, so it's OK to rename it (not marked as positional-only with \"/\", but still should be fine)\r\n* deprecate `kind`\r\n* keep `reverse` instead of introducing `sort_keys`, but we should allow passing a list of booleans that defines the sort order of each column from `column_names` to it (`reverse = False` would be equal to `[False] * len(column_names)` and `reverse = True` to `[True] * len(column_names)`)",
"I am pretty much done with the PR. Just one clarification: `Sequence` in `arrow_dataset.py` is a custom dataclass from `features.py` instead of the `type.hinting` class `Sequence` from Python. Do you suggest using that custom `Sequence` class somehow ? Otherwise signature currently reads instead:\r\n```Python\r\n def sort(\r\n self,\r\n column_names: Union[str, List[str]],\r\n reverse: Union[bool, List[bool]] = False,\r\n kind = \"deprecated\",\r\n null_placement: str = \"last\",\r\n keep_in_memory: bool = False,\r\n load_from_cache_file: bool = True,\r\n indices_cache_file_name: Optional[str] = None,\r\n writer_batch_size: Optional[int] = 1000,\r\n new_fingerprint: Optional[str] = None,\r\n )\r\n```\r\n\r\nAlso, to maintain backwards compatibility, I added conditionals for `null_placement`, because pyarrow's `null_placement` only accepts `at_start` and `at_end`, and not `last` and `first`.\r\nIf that is all good, I think I can open the PR.",
"I meant `typing.Sequence` (`datasets.Sequence` is a feature type). \r\n\r\nRegarding `null_placement`, I think we can support both `at_start` and `at_end`, and `last` and `first` (for backward compatibility; convert internally to `at_end` and `at_start` respectively).",
"> I meant typing.Sequence (datasets.Sequence is a feature type).\r\n\r\nSorry, I actually meant `typing.Sequence` and not `type.hinting`. However, the issue is still that `dataset.Sequence` is imported in `arrow_dataset.py` so I cannot import and use `typing.Sequence` for the `sort`'s signature without overwriting the `dataset.Sequence` import. The latter is used in the `align_labels_with_mapping` method so it's a necessary import for `arrow_dataset.py`. \r\nTo import `typing.Sequence` as something else than `Sequence` to avoid overwriting may only be confusing and doesn't seem good practice!? The other solution is to keep `List` type hinting as in the signature I posted in my previous post but this excludes other Sequence types and may cause problems further down the line.\r\nPlease advise,\r\nThanks for all the clarifications!",
"You can avoid the name collision by renaming `typing.Sequence` to `Sequence_` when importing:\r\n```python\r\nfrom typing import Sequence as Sequence_\r\n```",
"Resolved via #5502 "
] | 2023-01-16T09:22:26 | 2023-02-24T16:15:11 | 2023-02-24T16:15:11 | NONE | null | null | null | ### Feature request
From discussion on forum: https://discuss.huggingface.co/t/datasets-dataset-sort-does-not-preserve-ordering/29065/1
`sort()` is not stable (it does not preserve the relative order of rows with equal keys), it does not support sorting on multiple columns, and it does not accept a key function.
The suggested solution:
> ... having something similar to pandas and be able to specify multiple columns for sorting. We’re already using pandas under the hood to do the sorting in datasets.
The suggested workaround:
> convert your dataset to pandas and use `df.sort_values()`
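To make the workaround concrete, a minimal sketch of the pandas round trip (the column names `A` and `B` are made up for the example):
```python
from datasets import Dataset

# Toy dataset with two sort keys
ds = Dataset.from_dict({"A": [2, 1, 1], "B": [0, 2, 1]})

# Sort on A first, then on B within equal values of A
df = ds.to_pandas().sort_values(by=["A", "B"])
ds_sorted = Dataset.from_pandas(df, preserve_index=False)
print(ds_sorted["A"], ds_sorted["B"])  # [1, 1, 2] [1, 2, 0]
```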
### Motivation
A stable sort is very handy when one needs to sort on multiple columns, A and B, so that, e.g., whenever A is equal for two or more rows, those rows remain sorted by B.
Having a parameter to do this in 🤗datasets would be cleaner than going through pandas and back, and it wouldn't add much complexity to the library.
Alternatives:
- the possibility to specify multiple keys to sort by with decreasing priority (suggested solution),
- the ability to provide a key function for sorting, so that one can manually specify the sorting criteria.
### Your contribution
I'll be happy to contribute by submitting a PR, following the process documented in `CONTRIBUTING.MD`.
Would love to get thoughts on this, if anyone has anything to add. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5425/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5424 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5424/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5424/comments | https://api.github.com/repos/huggingface/datasets/issues/5424/events | https://github.com/huggingface/datasets/issues/5424 | 1,534,394,756 | I_kwDODunzps5bdQGE | 5,424 | When applying `ReadInstruction` to custom load it's not DatasetDict but list of Dataset? | {
"login": "macabdul9",
"id": 25720695,
"node_id": "MDQ6VXNlcjI1NzIwNjk1",
"avatar_url": "https://avatars.githubusercontent.com/u/25720695?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/macabdul9",
"html_url": "https://github.com/macabdul9",
"followers_url": "https://api.github.com/users/macabdul9/followers",
"following_url": "https://api.github.com/users/macabdul9/following{/other_user}",
"gists_url": "https://api.github.com/users/macabdul9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/macabdul9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/macabdul9/subscriptions",
"organizations_url": "https://api.github.com/users/macabdul9/orgs",
"repos_url": "https://api.github.com/users/macabdul9/repos",
"events_url": "https://api.github.com/users/macabdul9/events{/privacy}",
"received_events_url": "https://api.github.com/users/macabdul9/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! You can get a `DatasetDict` if you pass a dictionary with read instructions as follows:\r\n```python\r\ninstructions = [\r\n ReadInstruction(split_name=\"train\", from_=0, to=10, unit='%', rounding='closest'),\r\n ReadInstruction(split_name=\"dev\", from_=0, to=10, unit='%', rounding='closest'),\r\n ReadInstruction(split_name=\"test\", from_=0, to=5, unit='%', rounding='closest')\r\n]\r\n\r\ndataset = load_dataset('csv', data_dir=\"data/\", data_files={\"train\":\"train.tsv\", \"dev\":\"dev.tsv\", \"test\":\"test.tsv\"}, delimiter=\"\\t\", split={inst.split_name: inst for inst in instructions})\r\n```\r\n"
] | 2023-01-16T06:54:28 | 2023-02-24T16:19:00 | 2023-02-24T16:19:00 | NONE | null | null | null | ### Describe the bug
I am loading datasets from custom `tsv` files stored locally and applying split instructions for each split. The `ReadInstruction` is applied correctly, but I was expecting a `DatasetDict`; instead, the result is a list of `Dataset` objects.
### Steps to reproduce the bug
Steps to reproduce the behaviour:
1. Import
`from datasets import load_dataset, ReadInstruction`
2. Instruction to load the dataset
```python
instructions = [
ReadInstruction(split_name="train", from_=0, to=10, unit='%', rounding='closest'),
ReadInstruction(split_name="dev", from_=0, to=10, unit='%', rounding='closest'),
ReadInstruction(split_name="test", from_=0, to=5, unit='%', rounding='closest')
]
```
3. Load
`dataset = load_dataset('csv', data_dir="data/", data_files={"train":"train.tsv", "dev":"dev.tsv", "test":"test.tsv"}, delimiter="\t", split=instructions)`
### Expected behavior
**Current behaviour**
![Screenshot from 2023-01-16 10-45-27](https://user-images.githubusercontent.com/25720695/212614754-306898d8-8c27-4475-9bb8-0321bd939561.png)
:
**Expected behaviour**
![Screenshot from 2023-01-16 10-45-42](https://user-images.githubusercontent.com/25720695/212614813-0d336bf7-5266-482e-bb96-ef51f64de204.png)
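As noted in the comments, passing the instructions as a dictionary keyed by split name returns a `DatasetDict`; here is that fix as a runnable sketch (reusing the file names from this report):
```python
from datasets import load_dataset, ReadInstruction

instructions = [
    ReadInstruction(split_name="train", from_=0, to=10, unit="%", rounding="closest"),
    ReadInstruction(split_name="dev", from_=0, to=10, unit="%", rounding="closest"),
    ReadInstruction(split_name="test", from_=0, to=5, unit="%", rounding="closest"),
]

# A dict mapping split name -> instruction yields a DatasetDict instead of a list
dataset = load_dataset(
    "csv",
    data_dir="data/",
    data_files={"train": "train.tsv", "dev": "dev.tsv", "test": "test.tsv"},
    delimiter="\t",
    split={inst.split_name: inst for inst in instructions},
)
```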
### Environment info
- `datasets` version: 2.8.0
- Python version: 3.8.5
- Platform: Ubuntu 20.04.4 LTS | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5424/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5422 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5422/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5422/comments | https://api.github.com/repos/huggingface/datasets/issues/5422/events | https://github.com/huggingface/datasets/issues/5422 | 1,533,385,239 | I_kwDODunzps5bZZoX | 5,422 | Datasets load error for saved github issues | {
"login": "folterj",
"id": 7360564,
"node_id": "MDQ6VXNlcjczNjA1NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7360564?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/folterj",
"html_url": "https://github.com/folterj",
"followers_url": "https://api.github.com/users/folterj/followers",
"following_url": "https://api.github.com/users/folterj/following{/other_user}",
"gists_url": "https://api.github.com/users/folterj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/folterj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/folterj/subscriptions",
"organizations_url": "https://api.github.com/users/folterj/orgs",
"repos_url": "https://api.github.com/users/folterj/repos",
"events_url": "https://api.github.com/users/folterj/events{/privacy}",
"received_events_url": "https://api.github.com/users/folterj/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I can confirm that the error exists!\r\nI'm trying to read 3 parquet files locally:\r\n```python\r\nfrom datasets import load_dataset, Features, Value, ClassLabel\r\n\r\nreview_dataset = load_dataset(\r\n \"parquet\",\r\n data_files={\r\n \"train\": os.path.join(sentiment_analysis_data_path, \"train.parquet\"),\r\n \"validation\": os.path.join(sentiment_analysis_data_path, \"validation.parquet\"),\r\n \"test\": os.path.join(sentiment_analysis_data_path, \"test.parquet\"),\r\n },\r\n)\r\n```\r\n\r\nBut you can fix it, by specifying `features` for `load_dataset()` function like this:\r\n```python\r\nfrom datasets import load_dataset, Features, Value, ClassLabel\r\n\r\nfeatures = Features(\r\n {\r\n \"label\": ClassLabel(\r\n num_classes=3,\r\n names=[\"negative\", \"neutral\", \"positive\"],\r\n ),\r\n \"text\": Value(dtype=\"string\"),\r\n }\r\n)\r\n\r\nreview_dataset = load_dataset(\r\n \"parquet\",\r\n data_files={\r\n \"train\": os.path.join(sentiment_analysis_data_path, \"train.parquet\"),\r\n \"validation\": os.path.join(sentiment_analysis_data_path, \"validation.parquet\"),\r\n \"test\": os.path.join(sentiment_analysis_data_path, \"test.parquet\"),\r\n },\r\n features=features,\r\n)\r\n\r\nprint(review_dataset)\r\n```",
"@Extremesarova I think this is a different issue, but understand using features could be a work-around.\r\nIt seems the field `closed_at` is `null` in many cases.\r\n\r\nI've not found a way to specify only a single feature without (succesfully) specifiying the full and quite detailed set of expected features. Using this features set gives an error the column names don't match.\r\n`features = Features({'closed_at': Value(dtype='timestamp[s]', id=None)})`\r\n\r\n",
"Found this when searching for the same error, looks like based on #3965 it's just an issue with the data. I found that changing `df = pd.DataFrame.from_records(all_issues)` to `df = pd.DataFrame.from_records(all_issues).dropna(axis=1, how='all').drop(['milestone'], axis=1)` from the fetch_issues function fixed the issue. \r\nThe \"milestone\" column seemed to be problematic (only ~50 non null rows) and dropped any columns that were all null as well just in case.",
"I have this same issue. I saved a dataset to disk and now I can't load it.",
"Ok the solution was to use load_from_disk instead of load_dataset.",
"Hi @folterj , I faced same issue while creating `issues_dataset` (https://huggingface.co/learn/nlp-course/chapter5/5?fw=pt). The fix which worked for me was loading the *.jsonl file as pd.read_json and then converting it into a Dataset using datasets API.\r\n```\r\nimport pandas as pd\r\ndf=pd.read_json(\"datasets-issues.jsonl\", lines=True)\r\ndf.head()\r\n\r\nfrom datasets import Dataset\r\nissues_dataset = Dataset.from_pandas(df)\r\nissues_dataset\r\nsample = issues_dataset.shuffle(seed=666).select(range(3))\r\nsample[0]\r\n```",
"I understand some work-around suggestions would be to not use load_dataset(), and instead using a different API function. Another alternative would be using the same function using streaming, as I had already suggested in my original post.\r\nHowever, the fact remains that there is an issue in this function as reported."
] | 2023-01-14T17:29:38 | 2023-09-14T11:39:57 | null | NONE | null | null | null | ### Describe the bug
Loading a previously downloaded & saved dataset as described in the HuggingFace course:
`issues_dataset = load_dataset("json", data_files="issues/datasets-issues.jsonl", split="train")`
Gives this error:
`datasets.builder.DatasetGenerationError: An error occurred while generating the dataset`
A work-around I found was to use streaming.
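For reference, a minimal sketch of that streaming workaround (same file path as above):
```python
from datasets import load_dataset

# streaming=True yields examples lazily, so the eager Arrow cast that fails
# on the all-null timestamp column is not performed at load time
issues_dataset = load_dataset(
    "json",
    data_files="issues/datasets-issues.jsonl",
    split="train",
    streaming=True,
)
```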
### Steps to reproduce the bug
Reproduce by executing the code provided:
https://huggingface.co/course/chapter5/5?fw=pt
From the heading:
'let’s create a function that can download all the issues from a GitHub repository'
### Expected behavior
No error
### Environment info
Datasets version 2.8.0. Note that version 2.6.1 gives the same error (related to null timestamp).
**[EDIT]**
This is the complete error trace confirming the issue is related to the timestamp (`Couldn't cast array of type timestamp[s] to null`)
```
Using custom data configuration default-950028611d2860c8
Downloading and preparing dataset json/default to [...]/.cache/huggingface/datasets/json/default-950028611d2860c8/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Downloading data files: 100%|██████████| 1/1 [00:00<?, ?it/s]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 500.63it/s]
Generating train split: 2619 examples [00:00, 7155.72 examples/s]Traceback (most recent call last):
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 1831, in _prepare_split_single
writer.write_table(table)
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\arrow_writer.py", line 567, in write_table
pa_table = table_cast(pa_table, self._schema)
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2282, in table_cast
return cast_table_to_schema(table, schema)
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2241, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2241, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1807, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1807, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2035, in cast_array_to_feature
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2035, in <listcomp>
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1809, in wrapper
return func(array, *args, **kwargs)
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2101, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1809, in wrapper
return func(array, *args, **kwargs)
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1990, in array_cast
raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
TypeError: Couldn't cast array of type timestamp[s] to null
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2022.1.3\plugins\python\helpers\pydev\pydevconsole.py", line 364, in runcode
coro = func()
File "<input>", line 1, in <module>
File "C:\Program Files\JetBrains\PyCharm 2022.1.3\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2022.1.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "[...]\PycharmProjects\TransformersTesting\dataset_issues.py", line 20, in <module>
issues_dataset = load_dataset("json", data_files="issues/datasets-issues.jsonl", split="train")
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\load.py", line 1757, in load_dataset
builder_instance.download_and_prepare(
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 860, in download_and_prepare
self._download_and_prepare(
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 953, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 1706, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 1849, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
Generating train split: 2619 examples [00:19, 7155.72 examples/s]
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5422/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5421 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5421/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5421/comments | https://api.github.com/repos/huggingface/datasets/issues/5421/events | https://github.com/huggingface/datasets/issues/5421 | 1,532,278,307 | I_kwDODunzps5bVLYj | 5,421 | Support case-insensitive Hub dataset name in load_dataset | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Closing as case-insensitivity should be only for URL redirection on the Hub. In the APIs, we will only support the canonical name (https://github.com/huggingface/moon-landing/pull/2399#issuecomment-1382085611)"
] | 2023-01-13T13:07:07 | 2023-01-13T20:12:32 | 2023-01-13T20:12:32 | CONTRIBUTOR | null | null | null | ### Feature request
The dataset name on the Hub is case-insensitive (see https://github.com/huggingface/moon-landing/pull/2399, internal issue), i.e., https://huggingface.co/datasets/GLUE redirects to https://huggingface.co/datasets/glue.
Ideally, we could load the glue dataset using the following:
```python
from datasets import load_dataset
load_dataset('GLUE', 'cola')
```
It breaks because the loading script `GLUE.py` does not exist (`glue.py` should be selected instead).
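Until this is supported, a trivial workaround (my suggestion, not library behavior) is to normalize the name before loading; it works here because the canonical name happens to be lowercase:
```python
from datasets import load_dataset

# "GLUE".lower() == "glue", the canonical Hub name, so the right script is found
load_dataset("GLUE".lower(), "cola")
```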
Minor additional comment: in other cases without a loading script, we can load the dataset, but the automatically generated config name depends on the casing:
- `load_dataset('severo/danish-wit')` generates the config name `severo--danish-wit-e6fda5b070deb133`, while
- `load_dataset('severo/danish-WIT')` generates the config name `severo--danish-WIT-e6fda5b070deb133`
### Motivation
To follow the same UX on the Hub and in the datasets library.
### Your contribution
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5421/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5421/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5420 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5420/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5420/comments | https://api.github.com/repos/huggingface/datasets/issues/5420/events | https://github.com/huggingface/datasets/pull/5420 | 1,532,265,742 | PR_kwDODunzps5HVAhL | 5,420 | ci: 🎡 remove two obsolete issue templates | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008450 / 0.011353 (-0.002902) | 0.004478 / 0.011008 (-0.006530) | 0.100440 / 0.038508 (0.061931) | 0.029568 / 0.023109 (0.006459) | 0.296705 / 0.275898 (0.020807) | 0.354565 / 0.323480 (0.031085) | 0.006887 / 0.007986 (-0.001098) | 0.003415 / 0.004328 (-0.000914) | 0.078876 / 0.004250 (0.074626) | 0.034927 / 0.037052 (-0.002125) | 0.307695 / 0.258489 (0.049206) | 0.340917 / 0.293841 (0.047076) | 0.033630 / 0.128546 (-0.094916) | 0.011626 / 0.075646 (-0.064020) | 0.322644 / 0.419271 (-0.096627) | 0.040254 / 0.043533 (-0.003279) | 0.297419 / 0.255139 (0.042280) | 0.321584 / 0.283200 (0.038384) | 0.086202 / 0.141683 (-0.055481) | 1.465579 / 1.452155 (0.013425) | 1.521456 / 1.492716 (0.028740) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200890 / 0.018006 (0.182884) | 0.410300 / 0.000490 (0.409811) | 0.001647 / 0.000200 (0.001447) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022569 / 0.037411 (-0.014843) | 0.096062 / 0.014526 (0.081536) | 0.102474 / 0.176557 (-0.074082) | 0.138596 / 0.737135 (-0.598539) | 0.106262 / 0.296338 (-0.190077) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415976 / 0.215209 (0.200766) | 4.144322 / 2.077655 (2.066667) | 1.871783 / 1.504120 (0.367663) | 1.669478 / 1.541195 (0.128283) | 1.718214 / 1.468490 
(0.249724) | 0.687870 / 4.584777 (-3.896907) | 3.362084 / 3.745712 (-0.383628) | 1.844127 / 5.269862 (-3.425735) | 1.149611 / 4.565676 (-3.416066) | 0.081410 / 0.424275 (-0.342865) | 0.012278 / 0.007607 (0.004671) | 0.518245 / 0.226044 (0.292200) | 5.185164 / 2.268929 (2.916236) | 2.299029 / 55.444624 (-53.145595) | 1.960021 / 6.876477 (-4.916456) | 2.009751 / 2.142072 (-0.132322) | 0.803759 / 4.805227 (-4.001468) | 0.147340 / 6.500664 (-6.353324) | 0.063896 / 0.075469 (-0.011573) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254142 / 1.841788 (-0.587646) | 13.799683 / 8.074308 (5.725375) | 13.940387 / 10.191392 (3.748995) | 0.151246 / 0.680424 (-0.529178) | 0.028709 / 0.534201 (-0.505491) | 0.391600 / 0.579283 (-0.187683) | 0.405750 / 0.434364 (-0.028614) | 0.455479 / 0.540337 (-0.084858) | 0.541022 / 1.386936 (-0.845914) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006462 / 0.011353 (-0.004891) | 0.004462 / 0.011008 (-0.006547) | 0.096588 / 0.038508 (0.058080) | 0.026931 / 0.023109 (0.003822) | 0.344595 / 0.275898 (0.068697) | 0.378743 / 0.323480 (0.055264) | 0.005672 / 0.007986 (-0.002314) | 0.003345 / 0.004328 (-0.000984) | 0.074363 / 0.004250 (0.070112) | 0.037300 / 0.037052 (0.000248) | 0.346895 / 0.258489 (0.088406) | 0.388585 / 0.293841 (0.094744) | 0.031459 / 0.128546 (-0.097088) | 0.011522 / 0.075646 (-0.064124) | 0.318507 / 0.419271 (-0.100764) | 0.041145 / 0.043533 (-0.002388) | 0.343866 / 0.255139 (0.088727) | 0.366490 / 0.283200 (0.083291) | 0.086793 / 0.141683 (-0.054890) | 1.483859 / 1.452155 (0.031704) | 1.574006 / 1.492716 (0.081290) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220436 / 0.018006 (0.202430) | 0.402988 / 0.000490 (0.402498) | 0.000435 / 0.000200 (0.000235) | 0.000063 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024573 / 0.037411 (-0.012838) | 0.099190 / 0.014526 (0.084664) | 0.106796 / 0.176557 (-0.069761) | 0.142387 / 0.737135 (-0.594748) | 0.109991 / 0.296338 (-0.186347) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.473452 / 0.215209 (0.258243) | 4.749554 / 2.077655 (2.671899) | 2.433482 / 1.504120 (0.929362) | 2.224276 / 1.541195 (0.683082) | 2.261579 / 1.468490 (0.793088) | 0.699876 / 4.584777 (-3.884901) | 3.378366 / 3.745712 (-0.367346) | 1.835062 / 5.269862 (-3.434799) | 1.161249 / 4.565676 (-3.404427) | 0.082967 / 0.424275 (-0.341308) | 0.012745 / 0.007607 (0.005138) | 0.580006 / 0.226044 (0.353962) | 5.789868 / 2.268929 (3.520939) | 2.909496 / 55.444624 (-52.535128) | 2.539196 / 6.876477 (-4.337280) | 2.617737 / 2.142072 (0.475665) | 0.810320 / 4.805227 (-3.994907) | 0.152501 / 6.500664 (-6.348163) | 0.067201 / 0.075469 (-0.008268) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257844 / 1.841788 (-0.583943) | 13.865295 / 8.074308 (5.790987) | 14.169073 / 10.191392 (3.977680) | 0.135655 / 0.680424 (-0.544769) | 0.016597 / 0.534201 (-0.517604) | 0.374915 / 0.579283 (-0.204368) | 0.382771 / 0.434364 (-0.051593) | 0.431934 / 0.540337 (-0.108403) | 0.524617 / 1.386936 (-0.862319) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008748 / 0.011353 (-0.002605) | 0.004489 / 0.011008 (-0.006519) | 0.100923 / 0.038508 (0.062415) | 0.031436 / 0.023109 (0.008326) | 0.306508 / 0.275898 (0.030610) | 0.365110 / 0.323480 (0.041630) | 0.007161 / 0.007986 (-0.000824) | 0.005489 / 0.004328 (0.001160) | 0.078909 / 0.004250 (0.074658) | 0.036097 / 0.037052 (-0.000955) | 0.307907 / 0.258489 (0.049418) | 0.370277 / 0.293841 (0.076436) | 0.034184 / 0.128546 (-0.094362) | 0.011613 / 0.075646 (-0.064033) | 0.322896 / 0.419271 (-0.096375) | 0.041829 / 0.043533 (-0.001704) | 0.299669 / 0.255139 (0.044530) | 0.322217 / 0.283200 (0.039017) | 0.087751 / 0.141683 (-0.053932) | 1.476277 / 1.452155 (0.024122) | 1.548196 / 1.492716 (0.055480) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183002 / 0.018006 (0.164995) | 0.415627 / 0.000490 (0.415138) | 0.003272 / 0.000200 (0.003072) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024881 / 0.037411 (-0.012531) | 0.103424 / 0.014526 (0.088898) | 0.106446 / 0.176557 (-0.070110) | 0.142806 / 0.737135 (-0.594330) | 0.110938 / 0.296338 (-0.185401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421669 / 0.215209 (0.206460) | 4.207457 / 2.077655 (2.129802) | 1.882176 / 1.504120 (0.378056) | 1.677609 / 1.541195 (0.136415) | 1.734065 / 1.468490 
(0.265575) | 0.695915 / 4.584777 (-3.888862) | 3.416731 / 3.745712 (-0.328981) | 1.872575 / 5.269862 (-3.397286) | 1.163612 / 4.565676 (-3.402064) | 0.082710 / 0.424275 (-0.341565) | 0.012659 / 0.007607 (0.005052) | 0.528785 / 0.226044 (0.302741) | 5.305328 / 2.268929 (3.036399) | 2.299850 / 55.444624 (-53.144774) | 1.968137 / 6.876477 (-4.908339) | 2.028326 / 2.142072 (-0.113746) | 0.813157 / 4.805227 (-3.992070) | 0.149997 / 6.500664 (-6.350668) | 0.066739 / 0.075469 (-0.008730) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206332 / 1.841788 (-0.635456) | 13.795510 / 8.074308 (5.721202) | 14.367695 / 10.191392 (4.176303) | 0.138106 / 0.680424 (-0.542318) | 0.028760 / 0.534201 (-0.505441) | 0.394822 / 0.579283 (-0.184461) | 0.403291 / 0.434364 (-0.031073) | 0.463273 / 0.540337 (-0.077065) | 0.540881 / 1.386936 (-0.846055) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006830 / 0.011353 (-0.004523) | 0.004606 / 0.011008 (-0.006402) | 0.097763 / 0.038508 (0.059255) | 0.027832 / 0.023109 (0.004723) | 0.422970 / 0.275898 (0.147072) | 0.460313 / 0.323480 (0.136833) | 0.005110 / 0.007986 (-0.002876) | 0.003428 / 0.004328 (-0.000901) | 0.075047 / 0.004250 (0.070797) | 0.038374 / 0.037052 (0.001322) | 0.422762 / 0.258489 (0.164273) | 0.469886 / 0.293841 (0.176045) | 0.032391 / 0.128546 (-0.096155) | 0.011804 / 0.075646 (-0.063843) | 0.320439 / 0.419271 (-0.098832) | 0.041939 / 0.043533 (-0.001594) | 0.422521 / 0.255139 (0.167382) | 0.446420 / 0.283200 (0.163220) | 0.090715 / 0.141683 (-0.050968) | 1.484578 / 1.452155 (0.032423) | 1.556154 / 1.492716 (0.063438) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260735 / 0.018006 (0.242728) | 0.415586 / 0.000490 (0.415096) | 0.026960 / 0.000200 (0.026760) | 0.000296 / 0.000054 (0.000241) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024926 / 0.037411 (-0.012486) | 0.099651 / 0.014526 (0.085125) | 0.107810 / 0.176557 (-0.068747) | 0.148685 / 0.737135 (-0.588451) | 0.112725 / 0.296338 (-0.183614) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.472669 / 0.215209 (0.257460) | 4.718827 / 2.077655 (2.641172) | 2.475583 / 1.504120 (0.971463) | 2.260862 / 1.541195 (0.719667) | 2.307820 / 1.468490 (0.839330) | 0.699464 / 4.584777 (-3.885313) | 3.376282 / 3.745712 (-0.369431) | 1.872650 / 5.269862 (-3.397211) | 1.176399 / 4.565676 (-3.389277) | 0.082854 / 0.424275 (-0.341421) | 0.012845 / 0.007607 (0.005237) | 0.582088 / 0.226044 (0.356044) | 5.861609 / 2.268929 (3.592681) | 2.930728 / 55.444624 (-52.513896) | 2.624310 / 6.876477 (-4.252167) | 2.762130 / 2.142072 (0.620058) | 0.811902 / 4.805227 (-3.993325) | 0.152516 / 6.500664 (-6.348149) | 0.067670 / 0.075469 (-0.007799) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.289790 / 1.841788 (-0.551997) | 14.267607 / 8.074308 (6.193299) | 14.120655 / 10.191392 (3.929263) | 0.128442 / 0.680424 (-0.551982) | 0.017079 / 0.534201 (-0.517121) | 0.381807 / 0.579283 (-0.197476) | 0.400546 / 0.434364 (-0.033818) | 0.447629 / 0.540337 (-0.092709) | 0.532006 / 1.386936 (-0.854930) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n"
] | 2023-01-13T12:58:43 | 2023-01-13T13:36:00 | 2023-01-13T13:29:01 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5420",
"html_url": "https://github.com/huggingface/datasets/pull/5420",
"diff_url": "https://github.com/huggingface/datasets/pull/5420.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5420.patch",
"merged_at": "2023-01-13T13:29:01"
} | The `add-dataset` issue template is not needed anymore, since the "canonical" datasets are now on the Hub, and `dataset-viewer` is managed within the datasets-server project.
See https://github.com/huggingface/datasets/issues/new/choose
<img width="1245" alt="Screenshot 2023-01-13 at 13 59 58" src="https://user-images.githubusercontent.com/1676121/212325813-2d4c30e2-343e-4aa2-8cce-b2b77f45628e.png">
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5420/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5420/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5419 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5419/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5419/comments | https://api.github.com/repos/huggingface/datasets/issues/5419/events | https://github.com/huggingface/datasets/issues/5419 | 1,531,999,850 | I_kwDODunzps5bUHZq | 5,419 | label_column='labels' in datasets.TextClassification and 'label' or 'label_ids' in transformers.DataCollator | {
"login": "CreatixEA",
"id": 172385,
"node_id": "MDQ6VXNlcjE3MjM4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/172385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CreatixEA",
"html_url": "https://github.com/CreatixEA",
"followers_url": "https://api.github.com/users/CreatixEA/followers",
"following_url": "https://api.github.com/users/CreatixEA/following{/other_user}",
"gists_url": "https://api.github.com/users/CreatixEA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CreatixEA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CreatixEA/subscriptions",
"organizations_url": "https://api.github.com/users/CreatixEA/orgs",
"repos_url": "https://api.github.com/users/CreatixEA/repos",
"events_url": "https://api.github.com/users/CreatixEA/events{/privacy}",
"received_events_url": "https://api.github.com/users/CreatixEA/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! Thanks for pointing out this inconsistency. Changing the default value at this point is probably not worth it, considering we've started discussing the state of the task API internally - we will most likely deprecate the current one and replace it with a more robust solution that relies on the `train_eval_index` field stored in the YAML section of the dataset cards.",
"The task templates API has been deprecated (will be removed in version 3.0), so I'm closing this issue."
] | 2023-01-13T09:40:07 | 2023-07-21T14:27:08 | 2023-07-21T14:27:08 | NONE | null | null | null | ### Describe the bug
When preparing a dataset for a task with `datasets.TextClassification`, the output feature is named `labels`. When preparing the trainer with a `transformers.DataCollator`, however, the expected column name is `label` if the problem is binary, or `label_ids` if it is multi-class.
The column therefore has to be renamed to the expected name: `label` or `label_ids`.
### Steps to reproduce the bug
```python
# TextClassification lives in datasets.tasks; the tokenizer and collator come from transformers
from datasets.tasks import TextClassification
from transformers import AutoTokenizer, DataCollatorWithPadding, TFAutoModelForSequenceClassification

# `my_dataset` is assumed to be a datasets.Dataset with "TEXT" and "MY_LABEL_COLUMN_1_OR_0" columns
ds_prepared = my_dataset.prepare_for_task(TextClassification(text_column='TEXT', label_column='MY_LABEL_COLUMN_1_OR_0'))
print(ds_prepared)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
ds_tokenized = ds_prepared.map(lambda x: tokenizer(x['text'], truncation=True), batched=True)
print(ds_tokenized)

model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
tf_data = model.prepare_tf_dataset(ds_tokenized, shuffle=True, batch_size=16, collate_fn=data_collator)
print(tf_data)
```
### Expected behavior
Without renaming the column, the target column is missing from the final `tf_data`, since its name does not match the column name expected by the data_collator.
To correct this, we have to rename the column:
```python
ds_prepared = my_dataset.prepare_for_task(TextClassification(text_column='TEXT', label_column='MY_LABEL_COLUMN_1_OR_0')).rename_column('labels', 'label')
```
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.15.79.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.6
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.15.79.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in> | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5419/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5419/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5418 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5418/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5418/comments | https://api.github.com/repos/huggingface/datasets/issues/5418/events | https://github.com/huggingface/datasets/issues/5418 | 1,530,111,184 | I_kwDODunzps5bM6TQ | 5,418 | Add ProgressBar for `to_parquet` | {
"login": "zanussbaum",
"id": 33707069,
"node_id": "MDQ6VXNlcjMzNzA3MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zanussbaum",
"html_url": "https://github.com/zanussbaum",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions",
"organizations_url": "https://api.github.com/users/zanussbaum/orgs",
"repos_url": "https://api.github.com/users/zanussbaum/repos",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"received_events_url": "https://api.github.com/users/zanussbaum/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "zanussbaum",
"id": 33707069,
"node_id": "MDQ6VXNlcjMzNzA3MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zanussbaum",
"html_url": "https://github.com/zanussbaum",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions",
"organizations_url": "https://api.github.com/users/zanussbaum/orgs",
"repos_url": "https://api.github.com/users/zanussbaum/repos",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"received_events_url": "https://api.github.com/users/zanussbaum/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "zanussbaum",
"id": 33707069,
"node_id": "MDQ6VXNlcjMzNzA3MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zanussbaum",
"html_url": "https://github.com/zanussbaum",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions",
"organizations_url": "https://api.github.com/users/zanussbaum/orgs",
"repos_url": "https://api.github.com/users/zanussbaum/repos",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"received_events_url": "https://api.github.com/users/zanussbaum/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for your proposal, @zanussbaum. Yes, I agree that would definitely be a nice feature to have!",
"@albertvillanova I’m happy to make a quick PR for the feature! let me know ",
"That would be awesome ! You can comment `#self-assign` to assign you to this issue and open a PR :) Will be happy to review",
"Closing as this has been merged @lhoestq "
] | 2023-01-12T05:06:20 | 2023-01-24T18:18:24 | 2023-01-24T18:18:24 | CONTRIBUTOR | null | null | null | ### Feature request
Add a progress bar for `Dataset.to_parquet`, similar to how `to_json` works.
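For illustration, a minimal sketch of what this could look like (this is not the `datasets` implementation; the helper name, the use of the internal `Dataset.data` Arrow table, and the default `batch_size` are assumptions): write the backing table to Parquet batch by batch so `tqdm` can report progress, similar in spirit to how `to_json` batches its writes.

```python
import pyarrow.parquet as pq
from tqdm.auto import tqdm
from datasets import Dataset

def to_parquet_with_progress(dset: Dataset, path: str, batch_size: int = 1000) -> None:
    """Hypothetical helper: write `dset` to Parquet batch by batch with a tqdm bar."""
    table = dset.data.table  # the pyarrow.Table backing the Dataset (internal attribute)
    with pq.ParquetWriter(path, schema=table.schema) as writer:
        for offset in tqdm(range(0, table.num_rows, batch_size), unit="ba", desc="Writing parquet"):
            writer.write_table(table.slice(offset, batch_size))
```

Usage would be `to_parquet_with_progress(my_dataset, "out.parquet")`; a built-in version would presumably thread the same bar through the library's internal Parquet writer instead.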
### Motivation
Without a progress bar, it's frustrating not to know how long a dataset will take to write to file, or whether the write is stuck.
### Your contribution
Sure I can help if needed | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5418/timeline | null | completed | false |