Schema:

| Column | Type |
|---|---|
| url | string |
| repository_url | string |
| labels_url | string |
| comments_url | string |
| events_url | string |
| html_url | string |
| id | int64 |
| node_id | string |
| number | int64 |
| title | string |
| user | dict |
| labels | list |
| state | string |
| locked | bool |
| assignee | dict |
| assignees | list |
| milestone | dict |
| comments | list |
| created_at | timestamp[ns, tz=UTC] |
| updated_at | timestamp[ns, tz=UTC] |
| closed_at | timestamp[ns, tz=UTC] |
| author_association | string |
| type | float64 |
| active_lock_reason | float64 |
| sub_issues_summary | dict |
| body | string |
| closed_by | dict |
| reactions | dict |
| timeline_url | string |
| performed_via_github_app | float64 |
| state_reason | string |
| draft | float64 |
| pull_request | dict |
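The dtypes above (int64, float64, timestamp[ns, tz=UTC], plus dict and list objects) are pandas-style, which suggests the row was produced by loading the GitHub REST issues payload into a pandas DataFrame. The sketch below shows how a row with this shape could be reproduced; it is illustrative, not the pipeline that generated this dump. The endpoint comes from the record's `url` field. Note that in this dump the `comments` column holds the comment bodies as a list, whereas the raw API field `comments` is only an integer count, with the bodies behind `comments_url`.

```python
import pandas as pd
import requests

# Endpoint taken from the record's `url` field; unauthenticated
# requests work but are rate-limited by GitHub.
ISSUE_URL = "https://api.github.com/repos/huggingface/datasets/issues/6458"

issue = requests.get(ISSUE_URL, timeout=30).json()

# Assumption: replace the API's integer comment count with the list
# of comment bodies, matching the `comments: list` column seen here.
issue["comments"] = [
    c["body"] for c in requests.get(issue["comments_url"], timeout=30).json()
]

# One JSON object -> one-row DataFrame; nested objects such as
# `user` and `reactions` stay as Python dicts (dtype `object`).
df = pd.DataFrame([issue])

# Parsing the ISO-8601 date strings with utc=True yields the
# timestamp[ns, tz=UTC] dtype shown in the schema above.
for col in ("created_at", "updated_at", "closed_at"):
    df[col] = pd.to_datetime(df[col], utc=True)

print(df[["number", "title", "state", "closed_at"]])
```

Columns like `active_lock_reason` or `draft` come out as float64 rather than bool or string because they are null for most rows, and pandas widens all-NaN columns to float64.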
Record (huggingface/datasets pull request #6458):

| Field | Value |
|---|---|
| url | https://api.github.com/repos/huggingface/datasets/issues/6458 |
| repository_url | https://api.github.com/repos/huggingface/datasets |
| labels_url | https://api.github.com/repos/huggingface/datasets/issues/6458/labels{/name} |
| comments_url | https://api.github.com/repos/huggingface/datasets/issues/6458/comments |
| events_url | https://api.github.com/repos/huggingface/datasets/issues/6458/events |
| html_url | https://github.com/huggingface/datasets/pull/6458 |
| id | 2016577761 |
| node_id | PR_kwDODunzps5gqy4M |
| number | 6458 |
| title | Lazy data files resolution |

user:
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
| Field | Value |
|---|---|
| labels | [] |
| state | closed |
| locked | false |
| assignee | null |
| assignees | [] |
| milestone | null |

comments:

[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005097 / 0.011353 (-0.006256) | 0.003523 / 0.011008 (-0.007485) | 0.062827 / 0.038508 (0.024319) | 0.051677 / 0.023109 (0.028568) | 0.248919 / 0.275898 (-0.026980) | 0.275892 / 0.323480 (-0.047588) | 0.003908 / 0.007986 (-0.004077) | 0.002622 / 0.004328 (-0.001706) | 0.048634 / 0.004250 (0.044383) | 0.037903 / 0.037052 (0.000850) | 0.255754 / 0.258489 (-0.002735) | 0.283343 / 0.293841 (-0.010498) | 0.027886 / 0.128546 (-0.100660) | 0.010849 / 0.075646 (-0.064797) | 0.208255 / 0.419271 (-0.211017) | 0.035664 / 0.043533 (-0.007869) | 0.254661 / 0.255139 (-0.000478) | 0.274366 / 0.283200 (-0.008834) | 0.017240 / 0.141683 (-0.124443) | 1.092952 / 1.452155 (-0.359203) | 1.148373 / 1.492716 (-0.344344) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091592 / 0.018006 (0.073586) | 0.301926 / 0.000490 (0.301436) | 0.000207 / 0.000200 (0.000007) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018525 / 0.037411 (-0.018887) | 0.060539 / 0.014526 (0.046014) | 0.073812 / 0.176557 (-0.102745) | 0.120655 / 0.737135 (-0.616480) | 0.076931 / 0.296338 (-0.219407) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282797 / 0.215209 (0.067588) | 2.746573 / 2.077655 (0.668918) | 1.477652 / 1.504120 (-0.026468) | 1.349922 / 1.541195 (-0.191273) | 1.374347 / 
1.468490 (-0.094143) | 0.574096 / 4.584777 (-4.010681) | 2.383317 / 3.745712 (-1.362395) | 2.809320 / 5.269862 (-2.460541) | 1.758947 / 4.565676 (-2.806729) | 0.064029 / 0.424275 (-0.360246) | 0.004936 / 0.007607 (-0.002672) | 0.331403 / 0.226044 (0.105358) | 3.260908 / 2.268929 (0.991980) | 1.817670 / 55.444624 (-53.626954) | 1.525863 / 6.876477 (-5.350613) | 1.542017 / 2.142072 (-0.600055) | 0.638900 / 4.805227 (-4.166327) | 0.119485 / 6.500664 (-6.381179) | 0.042588 / 0.075469 (-0.032881) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.951583 / 1.841788 (-0.890205) | 11.621917 / 8.074308 (3.547609) | 10.511062 / 10.191392 (0.319670) | 0.130137 / 0.680424 (-0.550287) | 0.014048 / 0.534201 (-0.520153) | 0.290621 / 0.579283 (-0.288662) | 0.271665 / 0.434364 (-0.162699) | 0.331260 / 0.540337 (-0.209077) | 0.441621 / 1.386936 (-0.945316) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005272 / 0.011353 (-0.006081) | 0.003656 / 0.011008 (-0.007352) | 0.049245 / 0.038508 (0.010737) | 0.054130 / 0.023109 (0.031021) | 0.274775 / 0.275898 (-0.001123) | 0.296664 / 0.323480 (-0.026816) | 0.004870 / 0.007986 (-0.003115) | 0.002728 / 0.004328 (-0.001601) | 0.048087 / 0.004250 (0.043837) | 0.041448 / 0.037052 (0.004396) | 0.279110 / 0.258489 (0.020621) | 0.303660 / 0.293841 (0.009819) | 0.029767 / 0.128546 (-0.098779) | 0.010799 / 0.075646 (-0.064848) | 0.058650 / 0.419271 (-0.360622) | 0.033088 / 0.043533 (-0.010445) | 0.274456 / 0.255139 (0.019317) | 0.290206 / 0.283200 (0.007007) | 0.017259 / 0.141683 (-0.124424) | 1.176501 / 1.452155 (-0.275654) | 1.197552 / 1.492716 (-0.295165) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092865 / 0.018006 (0.074859) | 0.302437 / 0.000490 (0.301947) | 0.000209 / 0.000200 (0.000009) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021211 / 0.037411 (-0.016200) | 0.068858 / 0.014526 (0.054332) | 0.081783 / 0.176557 (-0.094773) | 0.120472 / 0.737135 (-0.616663) | 0.083900 / 0.296338 (-0.212438) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295157 / 0.215209 (0.079948) | 2.910979 / 2.077655 (0.833324) | 1.575772 / 1.504120 (0.071652) | 1.456955 / 1.541195 (-0.084239) | 1.468982 / 1.468490 (0.000492) | 0.560309 / 4.584777 (-4.024468) | 2.460171 / 3.745712 (-1.285541) | 2.805713 / 5.269862 (-2.464149) | 1.754074 / 4.565676 (-2.811603) | 0.063333 / 0.424275 (-0.360942) | 0.004940 / 0.007607 (-0.002667) | 0.346141 / 0.226044 (0.120097) | 3.463431 / 2.268929 (1.194502) | 1.929135 / 55.444624 (-53.515490) | 1.660191 / 6.876477 (-5.216286) | 1.668327 / 2.142072 (-0.473746) | 0.644183 / 4.805227 (-4.161044) | 0.115738 / 6.500664 (-6.384926) | 0.041347 / 0.075469 (-0.034122) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.961565 / 1.841788 (-0.880222) | 12.232589 / 8.074308 (4.158281) | 10.778774 / 10.191392 (0.587382) | 0.132709 / 0.680424 (-0.547715) | 0.015964 / 0.534201 (-0.518237) | 0.286944 / 0.579283 (-0.292340) | 0.279740 / 0.434364 (-0.154624) | 0.333024 / 0.540337 (-0.207314) | 0.438819 / 1.386936 (-0.948117) |\n\n</details>\n</details>\n\n\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6458). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005317 / 0.011353 (-0.006036) | 0.003936 / 0.011008 (-0.007072) | 0.063122 / 0.038508 (0.024614) | 0.061274 / 0.023109 (0.038165) | 0.251764 / 0.275898 (-0.024134) | 0.274849 / 0.323480 (-0.048631) | 0.004059 / 0.007986 (-0.003927) | 0.002874 / 0.004328 (-0.001455) | 0.048716 / 0.004250 (0.044465) | 0.038281 / 0.037052 (0.001228) | 0.265224 / 0.258489 (0.006735) | 0.285962 / 0.293841 (-0.007878) | 0.028522 / 0.128546 (-0.100024) | 0.011150 / 0.075646 (-0.064496) | 0.208362 / 0.419271 (-0.210910) | 0.038900 / 0.043533 (-0.004633) | 0.254113 / 0.255139 (-0.001026) | 0.276721 / 0.283200 (-0.006478) | 0.018372 / 0.141683 (-0.123311) | 1.121336 / 1.452155 (-0.330818) | 1.189548 / 1.492716 (-0.303168) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097633 / 0.018006 (0.079627) | 0.304443 / 0.000490 (0.303953) | 0.000218 / 0.000200 (0.000018) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021757 / 0.037411 (-0.015654) | 0.061978 / 0.014526 (0.047453) | 0.076296 / 0.176557 (-0.100260) | 0.122320 / 0.737135 (-0.614816) | 0.076738 / 0.296338 (-0.219601) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284328 / 0.215209 (0.069119) | 2.793071 / 2.077655 (0.715417) | 1.504768 / 1.504120 (0.000648) | 1.386083 / 1.541195 (-0.155111) | 1.457593 / 
1.468490 (-0.010897) | 0.575887 / 4.584777 (-4.008890) | 2.419396 / 3.745712 (-1.326316) | 2.931305 / 5.269862 (-2.338556) | 1.840759 / 4.565676 (-2.724917) | 0.063801 / 0.424275 (-0.360474) | 0.004966 / 0.007607 (-0.002641) | 0.341612 / 0.226044 (0.115568) | 3.402842 / 2.268929 (1.133913) | 1.860521 / 55.444624 (-53.584103) | 1.603156 / 6.876477 (-5.273321) | 1.665835 / 2.142072 (-0.476237) | 0.655299 / 4.805227 (-4.149929) | 0.124527 / 6.500664 (-6.376137) | 0.044021 / 0.075469 (-0.031449) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972068 / 1.841788 (-0.869720) | 12.393202 / 8.074308 (4.318894) | 10.420876 / 10.191392 (0.229484) | 0.140684 / 0.680424 (-0.539740) | 0.014442 / 0.534201 (-0.519759) | 0.288182 / 0.579283 (-0.291101) | 0.265029 / 0.434364 (-0.169334) | 0.327133 / 0.540337 (-0.213204) | 0.443403 / 1.386936 (-0.943533) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005559 / 0.011353 (-0.005794) | 0.004046 / 0.011008 (-0.006962) | 0.048991 / 0.038508 (0.010483) | 0.059576 / 0.023109 (0.036467) | 0.273596 / 0.275898 (-0.002302) | 0.296658 / 0.323480 (-0.026822) | 0.004089 / 0.007986 (-0.003897) | 0.002777 / 0.004328 (-0.001551) | 0.048216 / 0.004250 (0.043966) | 0.043200 / 0.037052 (0.006148) | 0.276815 / 0.258489 (0.018326) | 0.300570 / 0.293841 (0.006729) | 0.030250 / 0.128546 (-0.098296) | 0.011322 / 0.075646 (-0.064324) | 0.057843 / 0.419271 (-0.361429) | 0.033366 / 0.043533 (-0.010167) | 0.275636 / 0.255139 (0.020497) | 0.293750 / 0.283200 (0.010550) | 0.018551 / 0.141683 (-0.123132) | 1.160919 / 1.452155 (-0.291236) | 1.214519 / 1.492716 (-0.278197) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100074 / 0.018006 (0.082068) | 0.308434 / 0.000490 (0.307944) | 0.000232 / 0.000200 (0.000032) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022600 / 0.037411 (-0.014811) | 0.070506 / 0.014526 (0.055980) | 0.081185 / 0.176557 (-0.095371) | 0.120688 / 0.737135 (-0.616448) | 0.082897 / 0.296338 (-0.213441) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.306661 / 0.215209 (0.091452) | 2.989656 / 2.077655 (0.912001) | 1.618868 / 1.504120 (0.114749) | 1.485045 / 1.541195 (-0.056149) | 1.549359 / 1.468490 (0.080869) | 0.593596 / 4.584777 (-3.991181) | 2.466215 / 3.745712 (-1.279497) | 2.956570 / 5.269862 (-2.313292) | 1.823160 / 4.565676 (-2.742516) | 0.063442 / 0.424275 (-0.360833) | 0.004928 / 0.007607 (-0.002679) | 0.358464 / 0.226044 (0.132419) | 3.566345 / 2.268929 (1.297417) | 2.006784 / 55.444624 (-53.437840) | 1.687091 / 6.876477 (-5.189386) | 1.729464 / 2.142072 (-0.412609) | 0.655656 / 4.805227 (-4.149572) | 0.119044 / 6.500664 (-6.381620) | 0.042782 / 0.075469 (-0.032687) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974937 / 1.841788 (-0.866850) | 12.992888 / 8.074308 (4.918580) | 10.893713 / 10.191392 (0.702321) | 0.133853 / 0.680424 (-0.546570) | 0.016055 / 0.534201 (-0.518145) | 0.289342 / 0.579283 (-0.289941) | 0.286094 / 0.434364 (-0.148270) | 0.328670 / 0.540337 (-0.211667) | 0.444605 / 1.386936 (-0.942331) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005705 / 0.011353 (-0.005648) | 0.003519 / 0.011008 (-0.007489) | 0.062009 / 0.038508 (0.023501) | 0.053481 / 0.023109 (0.030372) | 0.262669 / 0.275898 (-0.013229) | 0.280290 / 0.323480 (-0.043189) | 0.002957 / 0.007986 (-0.005029) | 0.002587 / 0.004328 (-0.001741) | 0.047876 / 0.004250 (0.043626) | 0.038868 / 0.037052 (0.001815) | 0.267854 / 0.258489 (0.009365) | 0.290430 / 0.293841 (-0.003411) | 0.028120 / 0.128546 (-0.100427) | 0.011042 / 0.075646 (-0.064605) | 0.206113 / 0.419271 (-0.213158) | 0.036039 / 0.043533 (-0.007494) | 0.257715 / 0.255139 (0.002576) | 0.281279 / 0.283200 (-0.001921) | 0.019790 / 0.141683 (-0.121893) | 1.114472 / 1.452155 (-0.337683) | 1.192219 / 1.492716 (-0.300497) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091049 / 0.018006 (0.073043) | 0.300846 / 0.000490 (0.300356) | 0.000208 / 0.000200 (0.000008) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018569 / 0.037411 (-0.018843) | 0.060075 / 0.014526 (0.045549) | 0.073877 / 0.176557 (-0.102680) | 0.120337 / 0.737135 (-0.616799) | 0.075454 / 0.296338 (-0.220884) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290084 / 0.215209 (0.074875) | 2.805712 / 2.077655 (0.728057) | 1.459393 / 1.504120 (-0.044727) | 1.327356 / 1.541195 (-0.213838) | 1.384734 / 
1.468490 (-0.083756) | 0.574532 / 4.584777 (-4.010245) | 2.419696 / 3.745712 (-1.326016) | 2.805449 / 5.269862 (-2.464412) | 1.764127 / 4.565676 (-2.801549) | 0.063256 / 0.424275 (-0.361020) | 0.004954 / 0.007607 (-0.002653) | 0.344246 / 0.226044 (0.118202) | 3.396050 / 2.268929 (1.127121) | 1.807621 / 55.444624 (-53.637004) | 1.536627 / 6.876477 (-5.339850) | 1.552450 / 2.142072 (-0.589623) | 0.651156 / 4.805227 (-4.154071) | 0.119358 / 6.500664 (-6.381306) | 0.042810 / 0.075469 (-0.032660) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.930646 / 1.841788 (-0.911142) | 11.830454 / 8.074308 (3.756146) | 10.615315 / 10.191392 (0.423923) | 0.130617 / 0.680424 (-0.549807) | 0.014081 / 0.534201 (-0.520120) | 0.285027 / 0.579283 (-0.294256) | 0.267296 / 0.434364 (-0.167068) | 0.331478 / 0.540337 (-0.208859) | 0.442676 / 1.386936 (-0.944260) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005340 / 0.011353 (-0.006013) | 0.003745 / 0.011008 (-0.007264) | 0.049011 / 0.038508 (0.010503) | 0.051342 / 0.023109 (0.028233) | 0.272482 / 0.275898 (-0.003416) | 0.292816 / 0.323480 (-0.030663) | 0.003977 / 0.007986 (-0.004008) | 0.002642 / 0.004328 (-0.001687) | 0.048213 / 0.004250 (0.043963) | 0.040341 / 0.037052 (0.003289) | 0.275176 / 0.258489 (0.016687) | 0.301098 / 0.293841 (0.007257) | 0.029052 / 0.128546 (-0.099495) | 0.010796 / 0.075646 (-0.064850) | 0.057654 / 0.419271 (-0.361618) | 0.032914 / 0.043533 (-0.010619) | 0.271235 / 0.255139 (0.016096) | 0.289883 / 0.283200 (0.006684) | 0.018548 / 0.141683 (-0.123135) | 1.134072 / 1.452155 (-0.318083) | 1.208228 / 1.492716 (-0.284488) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094524 / 0.018006 (0.076518) | 0.310162 / 0.000490 (0.309672) | 0.000237 / 0.000200 (0.000037) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021090 / 0.037411 (-0.016321) | 0.068351 / 0.014526 (0.053825) | 0.082370 / 0.176557 (-0.094186) | 0.121648 / 0.737135 (-0.615487) | 0.083433 / 0.296338 (-0.212906) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294616 / 0.215209 (0.079407) | 2.894194 / 2.077655 (0.816539) | 1.619739 / 1.504120 (0.115619) | 1.492466 / 1.541195 (-0.048729) | 1.511662 / 1.468490 (0.043172) | 0.557179 / 4.584777 (-4.027597) | 2.400669 / 3.745712 (-1.345043) | 2.781363 / 5.269862 (-2.488499) | 1.769144 / 4.565676 (-2.796533) | 0.063996 / 0.424275 (-0.360279) | 0.004922 / 0.007607 (-0.002685) | 0.354483 / 0.226044 (0.128438) | 3.474795 / 2.268929 (1.205867) | 1.985743 / 55.444624 (-53.458881) | 1.693173 / 6.876477 (-5.183303) | 1.695857 / 2.142072 (-0.446216) | 0.654800 / 4.805227 (-4.150427) | 0.117316 / 6.500664 (-6.383348) | 0.040708 / 0.075469 (-0.034761) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977678 / 1.841788 (-0.864109) | 12.214098 / 8.074308 (4.139790) | 10.741857 / 10.191392 (0.550465) | 0.130308 / 0.680424 (-0.550116) | 0.015053 / 0.534201 (-0.519148) | 0.295496 / 0.579283 (-0.283787) | 0.276348 / 0.434364 (-0.158015) | 0.326568 / 0.540337 (-0.213769) | 0.441902 / 1.386936 (-0.945034) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005218 / 0.011353 (-0.006135) | 0.003270 / 0.011008 (-0.007738) | 0.062380 / 0.038508 (0.023872) | 0.052896 / 0.023109 (0.029787) | 0.233060 / 0.275898 (-0.042838) | 0.259194 / 0.323480 (-0.064286) | 0.002880 / 0.007986 (-0.005106) | 0.002643 / 0.004328 (-0.001686) | 0.048084 / 0.004250 (0.043833) | 0.038807 / 0.037052 (0.001755) | 0.244925 / 0.258489 (-0.013564) | 0.269619 / 0.293841 (-0.024222) | 0.026901 / 0.128546 (-0.101646) | 0.010150 / 0.075646 (-0.065497) | 0.206854 / 0.419271 (-0.212417) | 0.035618 / 0.043533 (-0.007915) | 0.239577 / 0.255139 (-0.015562) | 0.259684 / 0.283200 (-0.023516) | 0.019823 / 0.141683 (-0.121860) | 1.074472 / 1.452155 (-0.377682) | 1.142911 / 1.492716 (-0.349805) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092616 / 0.018006 (0.074610) | 0.301974 / 0.000490 (0.301485) | 0.000201 / 0.000200 (0.000002) | 0.000048 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018864 / 0.037411 (-0.018548) | 0.061007 / 0.014526 (0.046481) | 0.073228 / 0.176557 (-0.103328) | 0.120719 / 0.737135 (-0.616416) | 0.075686 / 0.296338 (-0.220653) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281404 / 0.215209 (0.066195) | 2.777671 / 2.077655 (0.700017) | 1.464689 / 1.504120 (-0.039431) | 1.345357 / 1.541195 (-0.195838) | 1.384273 / 
1.468490 (-0.084217) | 0.560298 / 4.584777 (-4.024479) | 2.389877 / 3.745712 (-1.355835) | 2.755564 / 5.269862 (-2.514297) | 1.737754 / 4.565676 (-2.827922) | 0.063025 / 0.424275 (-0.361251) | 0.004975 / 0.007607 (-0.002632) | 0.346741 / 0.226044 (0.120697) | 3.321918 / 2.268929 (1.052989) | 1.815700 / 55.444624 (-53.628924) | 1.547333 / 6.876477 (-5.329144) | 1.564809 / 2.142072 (-0.577263) | 0.638645 / 4.805227 (-4.166582) | 0.118157 / 6.500664 (-6.382507) | 0.041605 / 0.075469 (-0.033864) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.942515 / 1.841788 (-0.899273) | 11.400386 / 8.074308 (3.326078) | 10.208763 / 10.191392 (0.017370) | 0.138144 / 0.680424 (-0.542280) | 0.014354 / 0.534201 (-0.519847) | 0.288289 / 0.579283 (-0.290994) | 0.265973 / 0.434364 (-0.168391) | 0.327703 / 0.540337 (-0.212634) | 0.435474 / 1.386936 (-0.951462) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005163 / 0.011353 (-0.006190) | 0.003307 / 0.011008 (-0.007701) | 0.048885 / 0.038508 (0.010377) | 0.049044 / 0.023109 (0.025935) | 0.261408 / 0.275898 (-0.014490) | 0.284625 / 0.323480 (-0.038855) | 0.003970 / 0.007986 (-0.004015) | 0.002754 / 0.004328 (-0.001575) | 0.048271 / 0.004250 (0.044021) | 0.039849 / 0.037052 (0.002797) | 0.266898 / 0.258489 (0.008409) | 0.291445 / 0.293841 (-0.002396) | 0.028477 / 0.128546 (-0.100069) | 0.010656 / 0.075646 (-0.064990) | 0.057732 / 0.419271 (-0.361539) | 0.033298 / 0.043533 (-0.010235) | 0.297773 / 0.255139 (0.042634) | 0.281894 / 0.283200 (-0.001305) | 0.018595 / 0.141683 (-0.123088) | 1.168849 / 1.452155 (-0.283306) | 1.183493 / 1.492716 (-0.309224) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092683 / 0.018006 (0.074677) | 0.300387 / 0.000490 (0.299897) | 0.000221 / 0.000200 (0.000021) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021356 / 0.037411 (-0.016055) | 0.068095 / 0.014526 (0.053569) | 0.079806 / 0.176557 (-0.096750) | 0.118965 / 0.737135 (-0.618170) | 0.082066 / 0.296338 (-0.214273) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293105 / 0.215209 (0.077896) | 2.842800 / 2.077655 (0.765146) | 1.572052 / 1.504120 (0.067932) | 1.450156 / 1.541195 (-0.091038) | 1.464227 / 1.468490 (-0.004263) | 0.561215 / 4.584777 (-4.023562) | 2.456117 / 3.745712 (-1.289596) | 2.739766 / 5.269862 (-2.530095) | 1.730354 / 4.565676 (-2.835323) | 0.062636 / 0.424275 (-0.361639) | 0.004933 / 0.007607 (-0.002674) | 0.345800 / 0.226044 (0.119756) | 3.415858 / 2.268929 (1.146929) | 1.937288 / 55.444624 (-53.507336) | 1.661975 / 6.876477 (-5.214502) | 1.660347 / 2.142072 (-0.481726) | 0.642780 / 4.805227 (-4.162448) | 0.116643 / 6.500664 (-6.384021) | 0.041282 / 0.075469 (-0.034187) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976629 / 1.841788 (-0.865159) | 11.900319 / 8.074308 (3.826011) | 10.574198 / 10.191392 (0.382806) | 0.129689 / 0.680424 (-0.550735) | 0.015390 / 0.534201 (-0.518811) | 0.286543 / 0.579283 (-0.292741) | 0.277676 / 0.434364 (-0.156688) | 0.325053 / 0.540337 (-0.215284) | 0.439663 / 1.386936 (-0.947274) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005382 / 0.011353 (-0.005971) | 0.003606 / 0.011008 (-0.007402) | 0.063234 / 0.038508 (0.024726) | 0.053738 / 0.023109 (0.030629) | 0.250405 / 0.275898 (-0.025493) | 0.272244 / 0.323480 (-0.051236) | 0.002896 / 0.007986 (-0.005090) | 0.002684 / 0.004328 (-0.001644) | 0.048394 / 0.004250 (0.044143) | 0.039017 / 0.037052 (0.001964) | 0.259554 / 0.258489 (0.001065) | 0.287215 / 0.293841 (-0.006626) | 0.028290 / 0.128546 (-0.100257) | 0.011482 / 0.075646 (-0.064164) | 0.214264 / 0.419271 (-0.205007) | 0.036257 / 0.043533 (-0.007276) | 0.252873 / 0.255139 (-0.002266) | 0.271269 / 0.283200 (-0.011931) | 0.017173 / 0.141683 (-0.124510) | 1.137474 / 1.452155 (-0.314681) | 1.161499 / 1.492716 (-0.331217) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092424 / 0.018006 (0.074418) | 0.283703 / 0.000490 (0.283213) | 0.000209 / 0.000200 (0.000009) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018307 / 0.037411 (-0.019105) | 0.060780 / 0.014526 (0.046254) | 0.073984 / 0.176557 (-0.102573) | 0.120824 / 0.737135 (-0.616311) | 0.074724 / 0.296338 (-0.221615) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297682 / 0.215209 (0.082473) | 2.853267 / 2.077655 (0.775612) | 1.567643 / 1.504120 (0.063523) | 1.437218 / 1.541195 (-0.103976) | 1.467187 / 
1.468490 (-0.001304) | 0.560552 / 4.584777 (-4.024225) | 2.387848 / 3.745712 (-1.357864) | 2.718946 / 5.269862 (-2.550916) | 1.724107 / 4.565676 (-2.841570) | 0.061923 / 0.424275 (-0.362352) | 0.004828 / 0.007607 (-0.002779) | 0.353916 / 0.226044 (0.127871) | 3.404477 / 2.268929 (1.135548) | 1.906078 / 55.444624 (-53.538546) | 1.629686 / 6.876477 (-5.246791) | 1.640839 / 2.142072 (-0.501233) | 0.641082 / 4.805227 (-4.164145) | 0.118078 / 6.500664 (-6.382586) | 0.041881 / 0.075469 (-0.033588) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.936062 / 1.841788 (-0.905726) | 11.397678 / 8.074308 (3.323370) | 10.385159 / 10.191392 (0.193766) | 0.127337 / 0.680424 (-0.553087) | 0.013562 / 0.534201 (-0.520639) | 0.290817 / 0.579283 (-0.288466) | 0.259377 / 0.434364 (-0.174987) | 0.324829 / 0.540337 (-0.215508) | 0.434344 / 1.386936 (-0.952592) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005134 / 0.011353 (-0.006219) | 0.003404 / 0.011008 (-0.007604) | 0.048281 / 0.038508 (0.009772) | 0.050952 / 0.023109 (0.027842) | 0.277553 / 0.275898 (0.001655) | 0.298855 / 0.323480 (-0.024625) | 0.003928 / 0.007986 (-0.004058) | 0.002642 / 0.004328 (-0.001687) | 0.047374 / 0.004250 (0.043123) | 0.039883 / 0.037052 (0.002831) | 0.279808 / 0.258489 (0.021318) | 0.301604 / 0.293841 (0.007763) | 0.028708 / 0.128546 (-0.099838) | 0.010949 / 0.075646 (-0.064697) | 0.057090 / 0.419271 (-0.362181) | 0.032438 / 0.043533 (-0.011095) | 0.274690 / 0.255139 (0.019551) | 0.290912 / 0.283200 (0.007712) | 0.017556 / 0.141683 (-0.124127) | 1.111091 / 1.452155 (-0.341064) | 1.166063 / 1.492716 (-0.326653) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090557 / 0.018006 (0.072551) | 0.298661 / 0.000490 (0.298171) | 0.000228 / 0.000200 (0.000028) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021712 / 0.037411 (-0.015699) | 0.068682 / 0.014526 (0.054156) | 0.080108 / 0.176557 (-0.096449) | 0.119480 / 0.737135 (-0.617655) | 0.082703 / 0.296338 (-0.213636) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294095 / 0.215209 (0.078886) | 2.884758 / 2.077655 (0.807103) | 1.598312 / 1.504120 (0.094192) | 1.480050 / 1.541195 (-0.061145) | 1.488611 / 1.468490 (0.020121) | 0.556052 / 4.584777 (-4.028724) | 2.435484 / 3.745712 (-1.310228) | 2.741592 / 5.269862 (-2.528270) | 1.706223 / 4.565676 (-2.859454) | 0.062214 / 0.424275 (-0.362061) | 0.004901 / 0.007607 (-0.002706) | 0.346301 / 0.226044 (0.120257) | 3.474516 / 2.268929 (1.205587) | 1.995205 / 55.444624 (-53.449419) | 1.726349 / 6.876477 (-5.150128) | 1.659600 / 2.142072 (-0.482472) | 0.643560 / 4.805227 (-4.161667) | 0.115222 / 6.500664 (-6.385442) | 0.041137 / 0.075469 (-0.034332) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974566 / 1.841788 (-0.867221) | 11.872479 / 8.074308 (3.798171) | 10.496919 / 10.191392 (0.305527) | 0.129087 / 0.680424 (-0.551337) | 0.014627 / 0.534201 (-0.519574) | 0.289070 / 0.579283 (-0.290213) | 0.269609 / 0.434364 (-0.164755) | 0.327785 / 0.540337 (-0.212553) | 0.444634 / 1.386936 (-0.942302) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005080 / 0.011353 (-0.006273) | 0.003782 / 0.011008 (-0.007226) | 0.062816 / 0.038508 (0.024308) | 0.056338 / 0.023109 (0.033229) | 0.251317 / 0.275898 (-0.024581) | 0.269414 / 0.323480 (-0.054066) | 0.003984 / 0.007986 (-0.004001) | 0.002749 / 0.004328 (-0.001580) | 0.048126 / 0.004250 (0.043876) | 0.038516 / 0.037052 (0.001464) | 0.253809 / 0.258489 (-0.004680) | 0.283309 / 0.293841 (-0.010532) | 0.027015 / 0.128546 (-0.101531) | 0.010610 / 0.075646 (-0.065037) | 0.213024 / 0.419271 (-0.206247) | 0.035734 / 0.043533 (-0.007799) | 0.247909 / 0.255139 (-0.007230) | 0.263539 / 0.283200 (-0.019660) | 0.018408 / 0.141683 (-0.123275) | 1.104366 / 1.452155 (-0.347789) | 1.169668 / 1.492716 (-0.323048) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.114366 / 0.018006 (0.096360) | 0.317674 / 0.000490 (0.317184) | 0.000227 / 0.000200 (0.000027) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018955 / 0.037411 (-0.018457) | 0.060716 / 0.014526 (0.046190) | 0.072963 / 0.176557 (-0.103593) | 0.121671 / 0.737135 (-0.615464) | 0.073785 / 0.296338 (-0.222554) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292349 / 0.215209 (0.077140) | 2.832049 / 2.077655 (0.754394) | 1.504488 / 1.504120 (0.000368) | 1.403418 / 1.541195 (-0.137777) | 1.449223 / 
1.468490 (-0.019267) | 0.563846 / 4.584777 (-4.020931) | 2.376726 / 3.745712 (-1.368986) | 2.823304 / 5.269862 (-2.446558) | 1.774858 / 4.565676 (-2.790818) | 0.063229 / 0.424275 (-0.361046) | 0.004923 / 0.007607 (-0.002684) | 0.347240 / 0.226044 (0.121195) | 3.486563 / 2.268929 (1.217634) | 1.890516 / 55.444624 (-53.554109) | 1.570620 / 6.876477 (-5.305857) | 1.600842 / 2.142072 (-0.541231) | 0.644287 / 4.805227 (-4.160940) | 0.116931 / 6.500664 (-6.383733) | 0.042068 / 0.075469 (-0.033401) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.935662 / 1.841788 (-0.906126) | 11.950247 / 8.074308 (3.875939) | 10.636225 / 10.191392 (0.444833) | 0.139137 / 0.680424 (-0.541287) | 0.014473 / 0.534201 (-0.519728) | 0.294213 / 0.579283 (-0.285070) | 0.273413 / 0.434364 (-0.160951) | 0.325930 / 0.540337 (-0.214407) | 0.444265 / 1.386936 (-0.942671) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005448 / 0.011353 (-0.005904) | 0.003155 / 0.011008 (-0.007853) | 0.048626 / 0.038508 (0.010117) | 0.057427 / 0.023109 (0.034318) | 0.270412 / 0.275898 (-0.005486) | 0.290816 / 0.323480 (-0.032664) | 0.004744 / 0.007986 (-0.003241) | 0.002776 / 0.004328 (-0.001552) | 0.047953 / 0.004250 (0.043703) | 0.041126 / 0.037052 (0.004073) | 0.276046 / 0.258489 (0.017557) | 0.297548 / 0.293841 (0.003707) | 0.029308 / 0.128546 (-0.099238) | 0.010516 / 0.075646 (-0.065131) | 0.056982 / 0.419271 (-0.362290) | 0.032922 / 0.043533 (-0.010611) | 0.271342 / 0.255139 (0.016203) | 0.288963 / 0.283200 (0.005763) | 0.019048 / 0.141683 (-0.122635) | 1.130453 / 1.452155 (-0.321702) | 1.206462 / 1.492716 (-0.286254) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099249 / 0.018006 (0.081242) | 0.312409 / 0.000490 (0.311919) | 0.000224 / 0.000200 (0.000024) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021992 / 0.037411 (-0.015419) | 0.068377 / 0.014526 (0.053851) | 0.080749 / 0.176557 (-0.095807) | 0.120534 / 0.737135 (-0.616602) | 0.082549 / 0.296338 (-0.213790) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299634 / 0.215209 (0.084425) | 2.943496 / 2.077655 (0.865841) | 1.602842 / 1.504120 (0.098722) | 1.462140 / 1.541195 (-0.079055) | 1.511082 / 1.468490 (0.042592) | 0.574148 / 4.584777 (-4.010629) | 2.492158 / 3.745712 (-1.253554) | 2.921695 / 5.269862 (-2.348166) | 1.812416 / 4.565676 (-2.753260) | 0.064145 / 0.424275 (-0.360130) | 0.005133 / 0.007607 (-0.002475) | 0.357935 / 0.226044 (0.131891) | 3.543728 / 2.268929 (1.274800) | 1.948676 / 55.444624 (-53.495948) | 1.664960 / 6.876477 (-5.211517) | 1.678703 / 2.142072 (-0.463370) | 0.645867 / 4.805227 (-4.159360) | 0.117671 / 6.500664 (-6.382993) | 0.040887 / 0.075469 (-0.034582) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.979127 / 1.841788 (-0.862661) | 12.363904 / 8.074308 (4.289596) | 10.673725 / 10.191392 (0.482333) | 0.143358 / 0.680424 (-0.537066) | 0.015375 / 0.534201 (-0.518825) | 0.287590 / 0.579283 (-0.291694) | 0.284742 / 0.434364 (-0.149622) | 0.326901 / 0.540337 (-0.213437) | 0.443962 / 1.386936 (-0.942974) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004994 / 0.011353 (-0.006359) | 0.003368 / 0.011008 (-0.007640) | 0.062803 / 0.038508 (0.024295) | 0.050778 / 0.023109 (0.027669) | 0.255955 / 0.275898 (-0.019943) | 0.278215 / 0.323480 (-0.045265) | 0.003801 / 0.007986 (-0.004184) | 0.002703 / 0.004328 (-0.001626) | 0.048369 / 0.004250 (0.044119) | 0.037795 / 0.037052 (0.000743) | 0.255634 / 0.258489 (-0.002855) | 0.284226 / 0.293841 (-0.009615) | 0.027252 / 0.128546 (-0.101294) | 0.010686 / 0.075646 (-0.064961) | 0.206139 / 0.419271 (-0.213133) | 0.035543 / 0.043533 (-0.007990) | 0.257167 / 0.255139 (0.002028) | 0.277784 / 0.283200 (-0.005416) | 0.016938 / 0.141683 (-0.124745) | 1.108595 / 1.452155 (-0.343560) | 1.188542 / 1.492716 (-0.304175) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090938 / 0.018006 (0.072932) | 0.298463 / 0.000490 (0.297973) | 0.000203 / 0.000200 (0.000003) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027762 / 0.037411 (-0.009649) | 0.060539 / 0.014526 (0.046014) | 0.075986 / 0.176557 (-0.100570) | 0.133851 / 0.737135 (-0.603285) | 0.074669 / 0.296338 (-0.221670) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285614 / 0.215209 (0.070405) | 2.810529 / 2.077655 (0.732874) | 1.537092 / 1.504120 (0.032973) | 1.412211 / 1.541195 (-0.128983) | 1.446395 / 
1.468490 (-0.022095) | 0.559008 / 4.584777 (-4.025769) | 2.343445 / 3.745712 (-1.402267) | 2.748113 / 5.269862 (-2.521748) | 1.733593 / 4.565676 (-2.832083) | 0.061720 / 0.424275 (-0.362555) | 0.004930 / 0.007607 (-0.002677) | 0.330646 / 0.226044 (0.104602) | 3.314999 / 2.268929 (1.046071) | 1.854527 / 55.444624 (-53.590098) | 1.605819 / 6.876477 (-5.270657) | 1.591406 / 2.142072 (-0.550667) | 0.624239 / 4.805227 (-4.180988) | 0.115352 / 6.500664 (-6.385312) | 0.041600 / 0.075469 (-0.033869) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.933179 / 1.841788 (-0.908608) | 11.456372 / 8.074308 (3.382064) | 10.578042 / 10.191392 (0.386650) | 0.128045 / 0.680424 (-0.552379) | 0.014212 / 0.534201 (-0.519989) | 0.284795 / 0.579283 (-0.294488) | 0.266210 / 0.434364 (-0.168153) | 0.344468 / 0.540337 (-0.195869) | 0.434414 / 1.386936 (-0.952522) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005142 / 0.011353 (-0.006211) | 0.003607 / 0.011008 (-0.007401) | 0.048770 / 0.038508 (0.010262) | 0.051147 / 0.023109 (0.028038) | 0.277329 / 0.275898 (0.001430) | 0.300863 / 0.323480 (-0.022617) | 0.004005 / 0.007986 (-0.003980) | 0.002624 / 0.004328 (-0.001705) | 0.047740 / 0.004250 (0.043489) | 0.040811 / 0.037052 (0.003759) | 0.280020 / 0.258489 (0.021531) | 0.303758 / 0.293841 (0.009918) | 0.028273 / 0.128546 (-0.100274) | 0.010379 / 0.075646 (-0.065267) | 0.057503 / 0.419271 (-0.361768) | 0.032717 / 0.043533 (-0.010816) | 0.277560 / 0.255139 (0.022421) | 0.300622 / 0.283200 (0.017422) | 0.018142 / 0.141683 (-0.123541) | 1.121890 / 1.452155 (-0.330265) | 1.251481 / 1.492716 (-0.241235) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091523 / 0.018006 (0.073517) | 0.300173 / 0.000490 (0.299683) | 0.000216 / 0.000200 (0.000016) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026386 / 0.037411 (-0.011025) | 0.078710 / 0.014526 (0.064184) | 0.090594 / 0.176557 (-0.085962) | 0.130623 / 0.737135 (-0.606512) | 0.092637 / 0.296338 (-0.203701) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299427 / 0.215209 (0.084218) | 2.929463 / 2.077655 (0.851808) | 1.608905 / 1.504120 (0.104785) | 1.490863 / 1.541195 (-0.050331) | 1.484286 / 1.468490 (0.015796) | 0.568208 / 4.584777 (-4.016569) | 2.447081 / 3.745712 (-1.298632) | 2.801287 / 5.269862 (-2.468574) | 1.744449 / 4.565676 (-2.821227) | 0.064222 / 0.424275 (-0.360053) | 0.004959 / 0.007607 (-0.002648) | 0.350207 / 0.226044 (0.124162) | 3.471944 / 2.268929 (1.203016) | 1.951715 / 55.444624 (-53.492909) | 1.668764 / 6.876477 (-5.207713) | 1.675322 / 2.142072 (-0.466751) | 0.642217 / 4.805227 (-4.163011) | 0.116776 / 6.500664 (-6.383888) | 0.040812 / 0.075469 (-0.034658) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996478 / 1.841788 (-0.845310) | 12.090647 / 8.074308 (4.016339) | 10.723688 / 10.191392 (0.532296) | 0.141770 / 0.680424 (-0.538653) | 0.015578 / 0.534201 (-0.518623) | 0.288236 / 0.579283 (-0.291047) | 0.278542 / 0.434364 (-0.155822) | 0.327411 / 0.540337 (-0.212927) | 0.450309 / 1.386936 (-0.936627) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004967 / 0.011353 (-0.006385) | 0.003382 / 0.011008 (-0.007627) | 0.063436 / 0.038508 (0.024928) | 0.050769 / 0.023109 (0.027659) | 0.254214 / 0.275898 (-0.021684) | 0.272076 / 0.323480 (-0.051404) | 0.003815 / 0.007986 (-0.004170) | 0.002618 / 0.004328 (-0.001711) | 0.049021 / 0.004250 (0.044771) | 0.037329 / 0.037052 (0.000277) | 0.261112 / 0.258489 (0.002623) | 0.284133 / 0.293841 (-0.009708) | 0.026828 / 0.128546 (-0.101719) | 0.010757 / 0.075646 (-0.064889) | 0.208047 / 0.419271 (-0.211225) | 0.035061 / 0.043533 (-0.008472) | 0.250896 / 0.255139 (-0.004243) | 0.273038 / 0.283200 (-0.010162) | 0.016559 / 0.141683 (-0.125124) | 1.128899 / 1.452155 (-0.323255) | 1.188857 / 1.492716 (-0.303860) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100121 / 0.018006 (0.082114) | 0.298427 / 0.000490 (0.297937) | 0.000218 / 0.000200 (0.000018) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018369 / 0.037411 (-0.019042) | 0.060425 / 0.014526 (0.045899) | 0.073501 / 0.176557 (-0.103055) | 0.120254 / 0.737135 (-0.616881) | 0.074889 / 0.296338 (-0.221450) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287153 / 0.215209 (0.071944) | 2.797036 / 2.077655 (0.719382) | 1.446216 / 1.504120 (-0.057904) | 1.336015 / 1.541195 (-0.205179) | 1.369841 / 
1.468490 (-0.098650) | 0.559424 / 4.584777 (-4.025353) | 2.361344 / 3.745712 (-1.384368) | 2.766619 / 5.269862 (-2.503243) | 1.747235 / 4.565676 (-2.818441) | 0.066243 / 0.424275 (-0.358032) | 0.004974 / 0.007607 (-0.002633) | 0.333565 / 0.226044 (0.107520) | 3.319877 / 2.268929 (1.050948) | 1.798024 / 55.444624 (-53.646601) | 1.495896 / 6.876477 (-5.380580) | 1.529243 / 2.142072 (-0.612830) | 0.636609 / 4.805227 (-4.168618) | 0.116151 / 6.500664 (-6.384514) | 0.041779 / 0.075469 (-0.033690) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.952176 / 1.841788 (-0.889611) | 11.559160 / 8.074308 (3.484852) | 10.556771 / 10.191392 (0.365379) | 0.127118 / 0.680424 (-0.553306) | 0.014142 / 0.534201 (-0.520059) | 0.286585 / 0.579283 (-0.292698) | 0.260233 / 0.434364 (-0.174131) | 0.324012 / 0.540337 (-0.216326) | 0.435131 / 1.386936 (-0.951805) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005171 / 0.011353 (-0.006182) | 0.003402 / 0.011008 (-0.007607) | 0.048826 / 0.038508 (0.010318) | 0.050455 / 0.023109 (0.027346) | 0.272120 / 0.275898 (-0.003778) | 0.290404 / 0.323480 (-0.033076) | 0.003986 / 0.007986 (-0.003999) | 0.002569 / 0.004328 (-0.001760) | 0.047845 / 0.004250 (0.043595) | 0.040203 / 0.037052 (0.003150) | 0.278263 / 0.258489 (0.019774) | 0.299255 / 0.293841 (0.005414) | 0.028643 / 0.128546 (-0.099903) | 0.010584 / 0.075646 (-0.065062) | 0.056921 / 0.419271 (-0.362351) | 0.032362 / 0.043533 (-0.011171) | 0.274010 / 0.255139 (0.018871) | 0.288601 / 0.283200 (0.005401) | 0.017856 / 0.141683 (-0.123827) | 1.154112 / 1.452155 (-0.298043) | 1.216288 / 1.492716 (-0.276428) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091399 / 0.018006 (0.073392) | 0.299966 / 0.000490 (0.299477) | 0.000218 / 0.000200 (0.000018) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021728 / 0.037411 (-0.015683) | 0.068285 / 0.014526 (0.053759) | 0.081767 / 0.176557 (-0.094789) | 0.120000 / 0.737135 (-0.617135) | 0.082149 / 0.296338 (-0.214189) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289625 / 0.215209 (0.074416) | 2.835114 / 2.077655 (0.757460) | 1.583207 / 1.504120 (0.079087) | 1.465251 / 1.541195 (-0.075944) | 1.480691 / 1.468490 (0.012200) | 0.569103 / 4.584777 (-4.015674) | 2.416981 / 3.745712 (-1.328731) | 2.761746 / 5.269862 (-2.508115) | 1.720055 / 4.565676 (-2.845621) | 0.063349 / 0.424275 (-0.360926) | 0.004931 / 0.007607 (-0.002676) | 0.343658 / 0.226044 (0.117614) | 3.362996 / 2.268929 (1.094068) | 1.948088 / 55.444624 (-53.496536) | 1.659504 / 6.876477 (-5.216973) | 1.660359 / 2.142072 (-0.481713) | 0.647871 / 4.805227 (-4.157356) | 0.117395 / 6.500664 (-6.383269) | 0.041049 / 0.075469 (-0.034420) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.953971 / 1.841788 (-0.887817) | 12.076998 / 8.074308 (4.002690) | 10.549021 / 10.191392 (0.357629) | 0.130026 / 0.680424 (-0.550398) | 0.015697 / 0.534201 (-0.518504) | 0.287125 / 0.579283 (-0.292158) | 0.298402 / 0.434364 (-0.135962) | 0.326005 / 0.540337 (-0.214332) | 0.444065 / 1.386936 (-0.942871) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005053 / 0.011353 (-0.006300) | 0.003537 / 0.011008 (-0.007472) | 0.062923 / 0.038508 (0.024415) | 0.053796 / 0.023109 (0.030687) | 0.242523 / 0.275898 (-0.033375) | 0.264014 / 0.323480 (-0.059466) | 0.002879 / 0.007986 (-0.005106) | 0.003273 / 0.004328 (-0.001055) | 0.048735 / 0.004250 (0.044484) | 0.037541 / 0.037052 (0.000488) | 0.248587 / 0.258489 (-0.009902) | 0.275531 / 0.293841 (-0.018310) | 0.027215 / 0.128546 (-0.101331) | 0.010466 / 0.075646 (-0.065180) | 0.206508 / 0.419271 (-0.212763) | 0.035606 / 0.043533 (-0.007927) | 0.251044 / 0.255139 (-0.004095) | 0.267183 / 0.283200 (-0.016016) | 0.018357 / 0.141683 (-0.123326) | 1.083513 / 1.452155 (-0.368642) | 1.152988 / 1.492716 (-0.339728) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091749 / 0.018006 (0.073742) | 0.299946 / 0.000490 (0.299456) | 0.000212 / 0.000200 (0.000013) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018300 / 0.037411 (-0.019111) | 0.060691 / 0.014526 (0.046166) | 0.072998 / 0.176557 (-0.103559) | 0.120581 / 0.737135 (-0.616554) | 0.073912 / 0.296338 (-0.222427) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277602 / 0.215209 (0.062393) | 2.719181 / 2.077655 (0.641526) | 1.450894 / 1.504120 (-0.053226) | 1.314344 / 1.541195 (-0.226851) | 1.351996 / 
1.468490 (-0.116494) | 0.586231 / 4.584777 (-3.998546) | 2.349746 / 3.745712 (-1.395967) | 2.810060 / 5.269862 (-2.459802) | 1.761362 / 4.565676 (-2.804314) | 0.062535 / 0.424275 (-0.361740) | 0.004918 / 0.007607 (-0.002689) | 0.336091 / 0.226044 (0.110047) | 3.238139 / 2.268929 (0.969211) | 1.769734 / 55.444624 (-53.674890) | 1.505332 / 6.876477 (-5.371145) | 1.527875 / 2.142072 (-0.614198) | 0.640194 / 4.805227 (-4.165033) | 0.116567 / 6.500664 (-6.384097) | 0.042464 / 0.075469 (-0.033005) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.930919 / 1.841788 (-0.910869) | 11.462498 / 8.074308 (3.388190) | 10.575359 / 10.191392 (0.383967) | 0.130567 / 0.680424 (-0.549857) | 0.014203 / 0.534201 (-0.519998) | 0.286944 / 0.579283 (-0.292339) | 0.264706 / 0.434364 (-0.169658) | 0.324820 / 0.540337 (-0.215517) | 0.434579 / 1.386936 (-0.952357) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005164 / 0.011353 (-0.006189) | 0.003442 / 0.011008 (-0.007567) | 0.050146 / 0.038508 (0.011638) | 0.050800 / 0.023109 (0.027691) | 0.263405 / 0.275898 (-0.012493) | 0.284876 / 0.323480 (-0.038604) | 0.004011 / 0.007986 (-0.003975) | 0.002602 / 0.004328 (-0.001726) | 0.046742 / 0.004250 (0.042491) | 0.040393 / 0.037052 (0.003341) | 0.265052 / 0.258489 (0.006563) | 0.294217 / 0.293841 (0.000377) | 0.028429 / 0.128546 (-0.100118) | 0.010418 / 0.075646 (-0.065228) | 0.057285 / 0.419271 (-0.361987) | 0.032137 / 0.043533 (-0.011396) | 0.265867 / 0.255139 (0.010728) | 0.284764 / 0.283200 (0.001564) | 0.017448 / 0.141683 (-0.124235) | 1.172830 / 1.452155 (-0.279325) | 1.223982 / 1.492716 (-0.268735) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091859 / 0.018006 (0.073853) | 0.285421 / 0.000490 (0.284931) | 0.000220 / 0.000200 (0.000020) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021620 / 0.037411 (-0.015792) | 0.069058 / 0.014526 (0.054532) | 0.082560 / 0.176557 (-0.093997) | 0.119511 / 0.737135 (-0.617624) | 0.082318 / 0.296338 (-0.214021) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291499 / 0.215209 (0.076290) | 2.863352 / 2.077655 (0.785698) | 1.557242 / 1.504120 (0.053122) | 1.430170 / 1.541195 (-0.111024) | 1.432850 / 1.468490 (-0.035640) | 0.559716 / 4.584777 (-4.025061) | 2.385405 / 3.745712 (-1.360307) | 2.748938 / 5.269862 (-2.520924) | 1.740802 / 4.565676 (-2.824874) | 0.061811 / 0.424275 (-0.362465) | 0.005174 / 0.007607 (-0.002433) | 0.348687 / 0.226044 (0.122642) | 3.420120 / 2.268929 (1.151191) | 1.918278 / 55.444624 (-53.526346) | 1.631559 / 6.876477 (-5.244918) | 1.635850 / 2.142072 (-0.506222) | 0.644144 / 4.805227 (-4.161083) | 0.115823 / 6.500664 (-6.384841) | 0.041255 / 0.075469 (-0.034214) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.960066 / 1.841788 (-0.881722) | 12.011372 / 8.074308 (3.937064) | 10.580532 / 10.191392 (0.389140) | 0.134763 / 0.680424 (-0.545661) | 0.017027 / 0.534201 (-0.517174) | 0.290484 / 0.579283 (-0.288799) | 0.285171 / 0.434364 (-0.149193) | 0.322453 / 0.540337 (-0.217884) | 0.438088 / 1.386936 (-0.948848) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005212 / 0.011353 (-0.006141) | 0.003440 / 0.011008 (-0.007568) | 0.063612 / 0.038508 (0.025104) | 0.049070 / 0.023109 (0.025961) | 0.269748 / 0.275898 (-0.006150) | 0.283270 / 0.323480 (-0.040210) | 0.002892 / 0.007986 (-0.005094) | 0.002693 / 0.004328 (-0.001635) | 0.049710 / 0.004250 (0.045459) | 0.036707 / 0.037052 (-0.000345) | 0.299035 / 0.258489 (0.040546) | 0.296443 / 0.293841 (0.002602) | 0.028095 / 0.128546 (-0.100451) | 0.010682 / 0.075646 (-0.064964) | 0.213914 / 0.419271 (-0.205358) | 0.036210 / 0.043533 (-0.007323) | 0.235720 / 0.255139 (-0.019419) | 0.252687 / 0.283200 (-0.030512) | 0.016985 / 0.141683 (-0.124698) | 1.099024 / 1.452155 (-0.353130) | 1.162970 / 1.492716 (-0.329746) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093114 / 0.018006 (0.075108) | 0.305168 / 0.000490 (0.304678) | 0.000216 / 0.000200 (0.000016) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018370 / 0.037411 (-0.019041) | 0.060534 / 0.014526 (0.046008) | 0.073960 / 0.176557 (-0.102596) | 0.120325 / 0.737135 (-0.616810) | 0.073754 / 0.296338 (-0.222585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284244 / 0.215209 (0.069035) | 2.756854 / 2.077655 (0.679199) | 1.477304 / 1.504120 (-0.026816) | 1.374635 / 1.541195 (-0.166560) | 1.383284 / 
1.468490 (-0.085206) | 0.564656 / 4.584777 (-4.020121) | 2.361719 / 3.745712 (-1.383993) | 2.794822 / 5.269862 (-2.475039) | 1.742981 / 4.565676 (-2.822696) | 0.063443 / 0.424275 (-0.360832) | 0.004952 / 0.007607 (-0.002655) | 0.342058 / 0.226044 (0.116014) | 3.351093 / 2.268929 (1.082164) | 1.857375 / 55.444624 (-53.587250) | 1.541680 / 6.876477 (-5.334797) | 1.580147 / 2.142072 (-0.561926) | 0.645216 / 4.805227 (-4.160012) | 0.118768 / 6.500664 (-6.381896) | 0.042115 / 0.075469 (-0.033354) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.925845 / 1.841788 (-0.915943) | 11.444147 / 8.074308 (3.369839) | 10.291297 / 10.191392 (0.099905) | 0.128129 / 0.680424 (-0.552295) | 0.013774 / 0.534201 (-0.520427) | 0.289278 / 0.579283 (-0.290005) | 0.262353 / 0.434364 (-0.172011) | 0.328517 / 0.540337 (-0.211820) | 0.436050 / 1.386936 (-0.950886) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005666 / 0.011353 (-0.005687) | 0.003691 / 0.011008 (-0.007318) | 0.049361 / 0.038508 (0.010853) | 0.054245 / 0.023109 (0.031136) | 0.274433 / 0.275898 (-0.001465) | 0.285648 / 0.323480 (-0.037832) | 0.004080 / 0.007986 (-0.003906) | 0.002666 / 0.004328 (-0.001663) | 0.047539 / 0.004250 (0.043288) | 0.041001 / 0.037052 (0.003948) | 0.296018 / 0.258489 (0.037529) | 0.294542 / 0.293841 (0.000701) | 0.030546 / 0.128546 (-0.098001) | 0.010556 / 0.075646 (-0.065090) | 0.058146 / 0.419271 (-0.361126) | 0.033407 / 0.043533 (-0.010126) | 0.263977 / 0.255139 (0.008838) | 0.286228 / 0.283200 (0.003028) | 0.018088 / 0.141683 (-0.123595) | 1.121295 / 1.452155 (-0.330860) | 1.182183 / 1.492716 (-0.310533) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.104540 / 0.018006 (0.086534) | 0.303494 / 0.000490 (0.303004) | 0.000222 / 0.000200 (0.000022) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021274 / 0.037411 (-0.016137) | 0.070146 / 0.014526 (0.055621) | 0.080343 / 0.176557 (-0.096213) | 0.120017 / 0.737135 (-0.617119) | 0.081303 / 0.296338 (-0.215036) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294390 / 0.215209 (0.079181) | 2.883366 / 2.077655 (0.805711) | 1.564629 / 1.504120 (0.060509) | 1.432633 / 1.541195 (-0.108562) | 1.438786 / 1.468490 (-0.029704) | 0.569663 / 4.584777 (-4.015114) | 2.448691 / 3.745712 (-1.297021) | 2.817010 / 5.269862 (-2.452851) | 1.757274 / 4.565676 (-2.808402) | 0.064147 / 0.424275 (-0.360129) | 0.004910 / 0.007607 (-0.002697) | 0.344062 / 0.226044 (0.118018) | 3.394223 / 2.268929 (1.125294) | 1.927139 / 55.444624 (-53.517485) | 1.624983 / 6.876477 (-5.251494) | 1.629076 / 2.142072 (-0.512996) | 0.654239 / 4.805227 (-4.150988) | 0.117309 / 6.500664 (-6.383355) | 0.041067 / 0.075469 (-0.034402) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993184 / 1.841788 (-0.848604) | 11.969985 / 8.074308 (3.895677) | 10.363356 / 10.191392 (0.171964) | 0.130708 / 0.680424 (-0.549716) | 0.015577 / 0.534201 (-0.518624) | 0.289579 / 0.579283 (-0.289704) | 0.274875 / 0.434364 (-0.159488) | 0.326736 / 0.540337 (-0.213601) | 0.442770 / 1.386936 (-0.944166) |\n\n</details>\n</details>\n\n\n",
"Getting the same windows error as in my other PR. I couldn't reproduce on my windows machine though 🧐 ",
"`DataFilesList` is a list so we expect to be able to get its length with zero cost, which wouldn't be the case if we make it lazy no ? ",
"But we don't call `len` on it, do we? And I couldn't find an instance of `DataFilesList` being used in GitHub's public repos.",
"`DataFilesDict` is used in some repositories in dataset scripts when people want to list files from a repo using glob patterns",
"Also making DataFilesList lazy would require to make the pickling more complex, since we don't want to resolve the data files when pickling. At the same time we want to get different hashes if the data files and origin metadata are different so revolving the patterns is needed in that case (we hash the data files when creating the config_id, used in the cache)",
"> `DataFilesDict` is used in some repositories in dataset scripts when people want to list files from a repo using glob patterns\r\n\r\nWould be interesting to know how often these scripts call `len` or do random access on `DataFilesList`.\r\n\r\nStill, I think we should opt for a solution that makes more sense for us. To avoid the breaking change, we can define a `BuilderConfig.data_files` property that resolves this iterable. \r\n\r\n> Also making DataFilesList lazy would require to make the pickling more complex, since we don't want to resolve the data files when pickling. At the same time we want to get different hashes if the data files and origin metadata are different so revolving the patterns is needed in that case (we hash the data files when creating the config_id, used in the cache)\r\n\r\nThe `BuilderConfig.data_files` property suggested above should address this, no? \r\n\r\nI think we should be more careful not to make our API needlessly complex because of the YAML README feature. And if this can't be avoided, we should probably refactor the builder API.",
"> The BuilderConfig.data_files property suggested above should address this, no?\r\n\r\nThat works indeed ! let me try something",
"Implementing lazy DataFilesList and .data_files brings more complexity (less readable, more bad side effects) so I think the current solution is the best one",
"I opened https://github.com/huggingface/datasets/pull/6493 to continue this and fix conflicts with https://github.com/huggingface/datasets/pull/6459"
] | 2023-11-29T13:18:44Z
| 2024-02-08T14:41:35Z
| 2024-02-08T14:41:35Z
|
MEMBER
| null | null | null |
Related to discussion at https://github.com/huggingface/datasets/pull/6255
This makes this code run in ~2 sec instead of >10 sec:
```python
from datasets import load_dataset
ds = load_dataset("glue", "sst2", streaming=True, trust_remote_code=False)
```
For some datasets with many configs and files it can be up to 100x faster.
This is particularly important now that some datasets will be loaded from the Parquet export instead of the scripts.
The data files are now only resolved in the builder's `__init__`. To do so, I added `DataFilesPatternsList` and `DataFilesPatternsDict`, which have a `.resolve()` method that returns the resolved `DataFilesList` and `DataFilesDict`.
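A minimal sketch of the idea, for illustration. The class and method names follow the description above, but the bodies are assumptions: the real implementation resolves patterns against the Hub or a local repo, while plain `glob` stands in here:

```python
import glob
import os
from typing import List


class DataFilesList(list):
    """Resolved file paths (stand-in for the eager class of the same name)."""


class DataFilesDict(dict):
    """Split name -> DataFilesList (stand-in for the eager class)."""


class DataFilesPatternsList(list):
    """Unresolved glob patterns; the expensive listing is deferred to .resolve()."""

    def resolve(self, base_path: str) -> DataFilesList:
        # Only here do we touch the filesystem, not at config-creation time
        resolved: List[str] = []
        for pattern in self:
            resolved.extend(sorted(glob.glob(os.path.join(base_path, pattern))))
        return DataFilesList(resolved)


class DataFilesPatternsDict(dict):
    """Split name -> DataFilesPatternsList, resolved lazily per split."""

    def resolve(self, base_path: str) -> DataFilesDict:
        return DataFilesDict({split: patterns.resolve(base_path) for split, patterns in self.items()})
```

This way a builder can keep only the cheap patterns around and call `.resolve()` once in its `__init__`, instead of resolving every config's files up front.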
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6458/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6458/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6458.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6458",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6458.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6458"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7485
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7485/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7485/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7485/events
|
https://github.com/huggingface/datasets/pull/7485
| 2,953,696,519
|
PR_kwDODunzps6QbjFJ
| 7,485
|
set dev version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7485). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-03-27T16:39:34Z
| 2025-03-27T16:41:59Z
| 2025-03-27T16:39:42Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7485/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7485/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7485.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7485",
"merged_at": "2025-03-27T16:39:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7485.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7485"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4760
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4760/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4760/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4760/events
|
https://github.com/huggingface/datasets/issues/4760
| 1,320,878,223
|
I_kwDODunzps5OuwCP
| 4,760
|
Issue with offline mode
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SaulLu",
"id": 55560583,
"login": "SaulLu",
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SaulLu",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Hi @SaulLu, thanks for reporting.\r\n\r\nI think offline mode is not supported for datasets containing only data files (without any loading script). I'm having a look into this...",
"Thanks for your feedback! \r\n\r\nTo give you a little more info, if you don't set the offline mode flag, the script will load the cache. I first noticed this behavior with the `evaluate` library, and while trying to understand the downloading flow I realized that I had a similar error with datasets.",
"This is an issue we have to fix.",
"This is related to https://github.com/huggingface/datasets/issues/3547",
"Still not fixed? ......",
"#5331 will be helpful to fix this, as it updates the cache directory template to be aligned with the other datasets",
"Any updates ?",
"I'm facing the same problem",
"This issue has been fixed in `datasets` 2.16 by https://github.com/huggingface/datasets/pull/6493. The cache is now working properly :)\r\n\r\nYou just have to update `datasets`:\r\n\r\n```\r\npip install -U datasets\r\n```",
"I'm on version 2.17.0, and this exact problem is still persisting.",
"Can you share some code to reproduce your issue ?\r\n\r\nAlso make sure your cache was populated with recent versions of `datasets`. Datasets cached with old versions may not be reloadable in offline mode, though we did our best to keep as much backward compatibility as possible.",
"I'm not sure if this is related @lhoestq but I am experiencing a similar issue when using offline mode:\r\n\r\n```bash\r\n$ python -c \"from datasets import load_dataset; load_dataset('openai_humaneval', split='test')\"\r\n$ HF_DATASETS_OFFLINE=1 python -c \"from datasets import load_dataset; load_dataset('openai_humaneval', split='test')\"\r\nUsing the latest cached version of the dataset since openai_humaneval couldn't be found on the Hugging Face Hub (offline mode is enabled).\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/dodrio/scratch/projects/2023_071/alignment-handbook/.venv/lib/python3.10/site-packages/datasets/load.py\", line 2556, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/dodrio/scratch/projects/2023_071/alignment-handbook/.venv/lib/python3.10/site-packages/datasets/load.py\", line 2265, in load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\n File \"/dodrio/scratch/projects/2023_071/alignment-handbook/.venv/lib/python3.10/site-packages/datasets/packaged_modules/cache/cache.py\", line 122, in __init__\r\n config_name, version, hash = _find_hash_in_cache(\r\n File \"/dodrio/scratch/projects/2023_071/alignment-handbook/.venv/lib/python3.10/site-packages/datasets/packaged_modules/cache/cache.py\", line 48, in _find_hash_in_cache\r\n raise ValueError(\r\nValueError: Couldn't find cache for openai_humaneval for config 'default'\r\nAvailable configs in the cache: ['openai_humaneval']\r\n```",
"Thanks for reporting @BramVanroy, I managed to reproduce and I opened a fix here: https://github.com/huggingface/datasets/pull/6741",
"Awesome, thanks for the quick fix @lhoestq! Looking forward to update my dependency version list.",
"> Thanks for reporting @BramVanroy, I managed to reproduce and I opened a fix here: #6741\r\n\r\nThanks a lot! I have faced the same problem. Can I use your fix code to directly replace the existing version code? I noticed that this fix has not been merged yet. Will it affect other functionalities?\r\n",
"I just merged the fix, you can install `datasets` from source or wait for the patch release which will be out in the coming days"
] | 2022-07-28T12:45:14Z
| 2024-03-25T16:24:45Z
| 2024-01-23T10:58:22Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Describe the bug
I can't retrieve a cached dataset with offline mode enabled
## Steps to reproduce the bug
To reproduce my issue, you'll first need to run a script that caches the dataset:
```python
import os
os.environ["HF_DATASETS_OFFLINE"] = "0"
import datasets
datasets.logging.set_verbosity_info()
ds_name = "SaulLu/toy_struc_dataset"
ds = datasets.load_dataset(ds_name)
print(ds)
```
Then, you can try to reload it in offline mode:
```python
import os
os.environ["HF_DATASETS_OFFLINE"] = "1"
import datasets
datasets.logging.set_verbosity_info()
ds_name = "SaulLu/toy_struc_dataset"
ds = datasets.load_dataset(ds_name)
print(ds)
```
## Expected results
I would have expected the 2nd snippet not to return any errors
## Actual results
The 2nd snippet returns:
```
Traceback (most recent call last):
File "/home/lucile_huggingface_co/sandbox/evaluate/test_cache_datasets.py", line 8, in <module>
ds = datasets.load_dataset(ds_name)
File "/home/lucile_huggingface_co/anaconda3/envs/evaluate-dev/lib/python3.8/site-packages/datasets/load.py", line 1723, in load_dataset
builder_instance = load_dataset_builder(
File "/home/lucile_huggingface_co/anaconda3/envs/evaluate-dev/lib/python3.8/site-packages/datasets/load.py", line 1500, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/lucile_huggingface_co/anaconda3/envs/evaluate-dev/lib/python3.8/site-packages/datasets/load.py", line 1241, in dataset_module_factory
raise ConnectionError(f"Couln't reach the Hugging Face Hub for dataset '{path}': {e1}") from None
ConnectionError: Couln't reach the Hugging Face Hub for dataset 'SaulLu/toy_struc_dataset': Offline mode is enabled.
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
Maybe I'm misunderstanding something about the use of offline mode (see the [docs](https://huggingface.co/docs/datasets/v2.4.0/en/loading#offline)); is that the case?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4760/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4760/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5336
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5336/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5336/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5336/events
|
https://github.com/huggingface/datasets/pull/5336
| 1,479,649,900
|
PR_kwDODunzps5Egzed
| 5,336
|
Set `IterableDataset.map` param `batch_size` typing as optional
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5336). All of your documentation changes will be reflected on that endpoint.",
"Hi @mariosasko, @lhoestq I was wondering whether we should include `batched` as a `pytest.mark` param for the functions testing `IterableDataset.map` so as to ensure that the changes done in this PR work fine without breaking anything of the actual functionality.\r\n\r\nI've pushed updated tests just for one of the unit testing functions to be run as `pytest tests/test_iterable_dataset.py::test_mapped_examples_iterable -s --durations 0`, but some are still missing `batched` param, it was just to ask you whether we're supposed to do this for the rest of the functions or not, if it's a yes I'll push the commit as it's ready, but didn't want to push extra stuff that may be discarded later!\r\n\r\nThanks :hugs:",
"Thanks for the feedback @lhoestq, I agree with keeping `Optional` instead of `Union[type, None]` for now 👍🏻"
] | 2022-12-06T17:08:10Z
| 2022-12-07T14:14:56Z
| 2022-12-07T14:06:27Z
|
MEMBER
| null | null | null |
This PR solves #5325
~Indeed we're using the typing for optional values as `Union[type, None]` as it's similar to how Python 3.10 handles optional values as `type | None`, instead of using `Optional[type]`.~
~Do we want to start using `Union[type, None]` for type-hinting optional values or just keep on using `Optional`?~ -> Keeping `Optional` still for consistency with the rest of the code in `datasets`
Also, we now allow `batch_size` to be `None` for `IterableDataset.map` and `IterableDataset.filter`, and likewise for the iterables they instantiate internally: `map` propagates the `batch_size` param to `MappedExamplesIterable`, so if it can be `None` for `map` it should also be `None`-able for `MappedExamplesIterable`, and the same holds for `FilteredExamplesIterable` when calling `IterableDataset.filter`.
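A minimal usage sketch of what this enables, assuming a recent `datasets` version and the post-PR behavior where `batched=True` with `batch_size=None` provides the full dataset as a single batch:
```python
from datasets import Dataset

ids = Dataset.from_dict({"x": [0, 1, 2, 3]}).to_iterable_dataset()
# batch_size=None is now a valid value when batched=True
mapped = ids.map(lambda batch: {"x": [v + 1 for v in batch["x"]]}, batched=True, batch_size=None)
print(list(mapped))  # [{'x': 1}, {'x': 2}, {'x': 3}, {'x': 4}]
```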
## TODOs
- [x] Add integration tests
- [x] Handle scenario where `batched=True` and `batch_size=None` or `batch_size<=0`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5336/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5336/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5336.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5336",
"merged_at": "2022-12-07T14:06:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5336.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5336"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7423
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7423/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7423/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7423/events
|
https://github.com/huggingface/datasets/issues/7423
| 2,879,271,409
|
I_kwDODunzps6rnjHx
| 7,423
|
Row indexing a dataset with numpy integers
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35470740?v=4",
"events_url": "https://api.github.com/users/DavidRConnell/events{/privacy}",
"followers_url": "https://api.github.com/users/DavidRConnell/followers",
"following_url": "https://api.github.com/users/DavidRConnell/following{/other_user}",
"gists_url": "https://api.github.com/users/DavidRConnell/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DavidRConnell",
"id": 35470740,
"login": "DavidRConnell",
"node_id": "MDQ6VXNlcjM1NDcwNzQw",
"organizations_url": "https://api.github.com/users/DavidRConnell/orgs",
"received_events_url": "https://api.github.com/users/DavidRConnell/received_events",
"repos_url": "https://api.github.com/users/DavidRConnell/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DavidRConnell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DavidRConnell/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DavidRConnell",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Would be cool to be consistent when it comes to indexing with numpy objects, if we do accept numpy arrays we should indeed accept numpy integers. Your idea sounds reasonable, I'd also be in favor of adding a simple test as well"
] | 2025-02-25T18:44:45Z
| 2025-03-03T17:55:24Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Allow indexing datasets with a scalar numpy integer type.
### Motivation
Indexing a dataset with a scalar numpy.int* object raises a TypeError. This is due to the test in `datasets/formatting/formatting.py:key_to_query_type`
``` python
def key_to_query_type(key: Union[int, slice, range, str, Iterable]) -> str:
if isinstance(key, int):
return "row"
elif isinstance(key, str):
return "column"
elif isinstance(key, (slice, range, Iterable)):
return "batch"
_raise_bad_key_type(key)
```
In the row case, it checks whether the key is an `int`, which returns false when the key is integer-like but not a built-in Python integer type. This is counterintuitive because a numpy array of `np.int64`s can be used for the batch case.
For example:
``` python
import numpy as np
import datasets
dataset = datasets.Dataset.from_dict({"a": [1, 2, 3, 4], "b": [5, 6, 7, 8]})
# Regular indexing
dataset[0]
dataset[:2]
# Indexing with numpy data types (expect same results)
idx = np.asarray([0, 1])
dataset[idx] # Succeeds when using an array of np.int64 values
dataset[idx[0]] # Fails with TypeError when using scalar np.int64
```
For the user, this can be solved by wrapping `idx[0]` in `int`, but the test in `key_to_query_type` could also be changed to accept a less strict definition of int.
``` diff
+import numbers
+
def key_to_query_type(key: Union[int, slice, range, str, Iterable]) -> str:
+ if isinstance(key, numbers.Integral):
- if isinstance(key, int):
return "row"
elif isinstance(key, str):
return "column"
elif isinstance(key, (slice, range, Iterable)):
return "batch"
_raise_bad_key_type(key)
```
Looking at how others do it, pandas has an `is_integer` check that uses `is_integer_object`, defined in `pandas/_libs/utils.pxd`:
``` cython
cdef inline bint is_integer_object(object obj) noexcept:
"""
Cython equivalent of
`isinstance(val, (int, np.integer)) and not isinstance(val, (bool, np.timedelta64))`
Parameters
----------
val : object
Returns
-------
is_integer : bool
Notes
-----
This counts np.timedelta64 objects as integers.
"""
return (not PyBool_Check(obj) and isinstance(obj, (int, cnp.integer))
and not is_timedelta64_object(obj))
```
This would be less flexible, as it explicitly checks for numpy integers, but it's worth noting that they needed to ensure the key is not a bool.
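For comparison, a rough pure-Python analogue of that check could look like this (a hypothetical helper, not part of `datasets`):
```python
import numbers

import numpy as np

def is_integer_key(key) -> bool:
    # Accept builtin and numpy integers, but reject bools: `bool` is a
    # subclass of `int` in Python, so it would otherwise pass as a row key.
    return isinstance(key, numbers.Integral) and not isinstance(key, (bool, np.bool_))

assert is_integer_key(3) and is_integer_key(np.int64(0)) and not is_integer_key(True)
```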
### Your contribution
I can submit a pull request with the above changes after checking that indexing succeeds with the numpy integer type. Or, if a different integer check would be preferred, I could add that instead.
If there is a reason not to want this behavior, that is fine too.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7423/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7423/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6809
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6809/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6809/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6809/events
|
https://github.com/huggingface/datasets/pull/6809
| 2,242,956,297
|
PR_kwDODunzps5so0e2
| 6,809
|
Make convert_to_parquet CLI command create script branch
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6809). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@huggingface/datasets once this PR is merged, I would suggest making a release. Do you agree?\r\n- This PR is a follow-up of #6795",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004963 / 0.011353 (-0.006390) | 0.003121 / 0.011008 (-0.007888) | 0.063421 / 0.038508 (0.024913) | 0.030727 / 0.023109 (0.007618) | 0.237698 / 0.275898 (-0.038200) | 0.266613 / 0.323480 (-0.056867) | 0.004237 / 0.007986 (-0.003749) | 0.002715 / 0.004328 (-0.001614) | 0.049503 / 0.004250 (0.045253) | 0.043705 / 0.037052 (0.006653) | 0.247818 / 0.258489 (-0.010671) | 0.287545 / 0.293841 (-0.006296) | 0.027232 / 0.128546 (-0.101314) | 0.009952 / 0.075646 (-0.065695) | 0.208678 / 0.419271 (-0.210593) | 0.035494 / 0.043533 (-0.008039) | 0.260900 / 0.255139 (0.005761) | 0.264738 / 0.283200 (-0.018461) | 0.018093 / 0.141683 (-0.123590) | 1.130924 / 1.452155 (-0.321231) | 1.178982 / 1.492716 (-0.313734) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094610 / 0.018006 (0.076604) | 0.304674 / 0.000490 (0.304184) | 0.000215 / 0.000200 (0.000015) | 0.000048 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018168 / 0.037411 (-0.019243) | 0.062040 / 0.014526 (0.047514) | 0.075634 / 0.176557 (-0.100922) | 0.119488 / 0.737135 (-0.617647) | 0.074790 / 0.296338 (-0.221548) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282449 / 0.215209 (0.067240) | 2.773231 / 2.077655 (0.695576) | 1.455156 / 1.504120 (-0.048964) | 1.332652 / 1.541195 (-0.208543) | 1.340795 / 
1.468490 (-0.127695) | 0.576588 / 4.584777 (-4.008189) | 2.415513 / 3.745712 (-1.330199) | 2.801569 / 5.269862 (-2.468292) | 1.741039 / 4.565676 (-2.824637) | 0.064386 / 0.424275 (-0.359890) | 0.005293 / 0.007607 (-0.002314) | 0.329732 / 0.226044 (0.103688) | 3.227275 / 2.268929 (0.958347) | 1.793121 / 55.444624 (-53.651503) | 1.515115 / 6.876477 (-5.361362) | 1.518738 / 2.142072 (-0.623335) | 0.664465 / 4.805227 (-4.140762) | 0.118813 / 6.500664 (-6.381851) | 0.041715 / 0.075469 (-0.033754) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974371 / 1.841788 (-0.867416) | 11.432869 / 8.074308 (3.358561) | 9.607939 / 10.191392 (-0.583453) | 0.143996 / 0.680424 (-0.536427) | 0.014624 / 0.534201 (-0.519577) | 0.286899 / 0.579283 (-0.292384) | 0.265965 / 0.434364 (-0.168399) | 0.324727 / 0.540337 (-0.215611) | 0.420917 / 1.386936 (-0.966019) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005145 / 0.011353 (-0.006207) | 0.003723 / 0.011008 (-0.007286) | 0.050387 / 0.038508 (0.011879) | 0.030734 / 0.023109 (0.007625) | 0.274331 / 0.275898 (-0.001567) | 0.295045 / 0.323480 (-0.028435) | 0.004187 / 0.007986 (-0.003799) | 0.002781 / 0.004328 (-0.001547) | 0.049698 / 0.004250 (0.045448) | 0.040049 / 0.037052 (0.002996) | 0.284016 / 0.258489 (0.025527) | 0.309908 / 0.293841 (0.016067) | 0.028994 / 0.128546 (-0.099552) | 0.010625 / 0.075646 (-0.065021) | 0.059305 / 0.419271 (-0.359967) | 0.032982 / 0.043533 (-0.010551) | 0.273342 / 0.255139 (0.018203) | 0.291726 / 0.283200 (0.008527) | 0.018084 / 0.141683 (-0.123599) | 1.136864 / 1.452155 (-0.315290) | 1.163656 / 1.492716 (-0.329061) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094868 / 0.018006 (0.076862) | 0.302900 / 0.000490 (0.302410) | 0.000226 / 0.000200 (0.000026) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022142 / 0.037411 (-0.015269) | 0.077457 / 0.014526 (0.062932) | 0.087989 / 0.176557 (-0.088568) | 0.127354 / 0.737135 (-0.609781) | 0.092027 / 0.296338 (-0.204312) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291196 / 0.215209 (0.075987) | 2.840386 / 2.077655 (0.762731) | 1.571201 / 1.504120 (0.067081) | 1.449429 / 1.541195 (-0.091765) | 1.467189 / 1.468490 (-0.001301) | 0.580991 / 4.584777 (-4.003786) | 2.422566 / 3.745712 (-1.323146) | 2.839621 / 5.269862 (-2.430240) | 1.782987 / 4.565676 (-2.782689) | 0.064765 / 0.424275 (-0.359510) | 0.005338 / 0.007607 (-0.002269) | 0.349148 / 0.226044 (0.123104) | 3.421283 / 2.268929 (1.152355) | 1.943503 / 55.444624 (-53.501122) | 1.653881 / 6.876477 (-5.222596) | 1.698141 / 2.142072 (-0.443931) | 0.667628 / 4.805227 (-4.137599) | 0.118469 / 6.500664 (-6.382195) | 0.041693 / 0.075469 (-0.033776) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.026385 / 1.841788 (-0.815403) | 12.225049 / 8.074308 (4.150741) | 10.363072 / 10.191392 (0.171680) | 0.142682 / 0.680424 (-0.537742) | 0.015698 / 0.534201 (-0.518502) | 0.288148 / 0.579283 (-0.291135) | 0.272639 / 0.434364 (-0.161724) | 0.325305 / 0.540337 (-0.215032) | 0.421395 / 1.386936 (-0.965541) |\n\n</details>\n</details>\n\n\n"
] | 2024-04-15T07:47:26Z
| 2024-04-17T08:44:26Z
| 2024-04-17T08:38:18Z
|
MEMBER
| null | null | null |
Make the `convert_to_parquet` CLI command create a "script" branch and keep the script file on it.
This PR proposes the simplest UX approach: whenever `--revision` is not explicitly passed (i.e., when the script is in the main branch), try to create a "script" branch from the "main" branch; if the "script" branch exists already, then do nothing.
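For reference, the command in question is invoked as `datasets-cli convert_to_parquet <dataset_id>`, optionally with the `--revision` option described above; other options aren't covered by this description.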
Follow-up of:
- #6795
Close #6808.
CC: @severo
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6809/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6809/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6809.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6809",
"merged_at": "2024-04-17T08:38:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6809.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6809"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6853
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6853/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6853/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6853/events
|
https://github.com/huggingface/datasets/issues/6853
| 2,272,570,000
|
I_kwDODunzps6HdKqQ
| 6,853
|
Support soft links for load_datasets imagefolder
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10386511?v=4",
"events_url": "https://api.github.com/users/billytcl/events{/privacy}",
"followers_url": "https://api.github.com/users/billytcl/followers",
"following_url": "https://api.github.com/users/billytcl/following{/other_user}",
"gists_url": "https://api.github.com/users/billytcl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/billytcl",
"id": 10386511,
"login": "billytcl",
"node_id": "MDQ6VXNlcjEwMzg2NTEx",
"organizations_url": "https://api.github.com/users/billytcl/orgs",
"received_events_url": "https://api.github.com/users/billytcl/received_events",
"repos_url": "https://api.github.com/users/billytcl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/billytcl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/billytcl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/billytcl",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2024-04-30T22:14:29Z
| 2024-04-30T22:14:29Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
`load_dataset` from a folder of images doesn't seem to support soft links. It would be nice if it did, especially during methods development where image folders are being curated.
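For illustration, the desired usage might look like this (a hypothetical sketch; the paths are made up, and per this report the symlinked files are currently not picked up):
```python
import os

from datasets import load_dataset

# Curate an image folder out of soft links instead of copies.
os.makedirs("data/train", exist_ok=True)
os.symlink("/originals/cat_0001.png", "data/train/cat_0001.png")

# The request: the imagefolder loader should follow the symlink here.
ds = load_dataset("imagefolder", data_dir="data")
```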
### Motivation
Images come from a variety of sources, and we'd like to soft-link directly from the originating folders instead of copying. Keeping copies risks image-versioning issues and doubles the required disk space.
### Your contribution
N/A
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6853/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6853/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6796
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6796/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6796/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6796/events
|
https://github.com/huggingface/datasets/issues/6796
| 2,234,887,618
|
I_kwDODunzps6FNa3C
| 6,796
|
CI is broken due to hf-internal-testing/dataset_with_script
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Finally:\r\n- the initial issue seems it was temporary\r\n- there is a different issue now: https://github.com/huggingface/datasets/actions/runs/8627153993/job/23646584590?pr=6797\r\n```\r\nFAILED tests/test_load.py::ModuleFactoryTest::test_HubDatasetModuleFactoryWithParquetExport - datasets.utils._dataset_viewer.DatasetViewerError: No exported Parquet files available.\r\nFAILED tests/test_load.py::ModuleFactoryTest::test_HubDatasetModuleFactoryWithParquetExport_errors_on_wrong_sha - datasets.utils._dataset_viewer.DatasetViewerError: No exported Parquet files available.\r\nFAILED tests/test_load.py::test_load_dataset_builder_for_community_dataset_with_script - AssertionError: assert 'dataset_with_script' == 'parquet'\r\n \r\n - parquet\r\n + dataset_with_script\r\n```\r\n\r\nMaybe related to `hf-internal-testing/dataset_with_script` dataset: https://huggingface.co/datasets/hf-internal-testing/dataset_with_script",
"This URL: https://datasets-server.huggingface.co/parquet?dataset=hf-internal-testing/dataset_with_script\r\nraises:\r\n> {\"error\":\"The dataset viewer doesn't support this dataset because it runs arbitrary python code. Please open a discussion in the discussion tab if you think this is an error and tag @lhoestq and @severo.\"}\r\n\r\nWas there a recent change on the Hub enforcing this behavior?",
"OK, I just saw this PR:\r\n- https://github.com/huggingface/dataset-viewer/pull/2689\r\n\r\nOnce merged and deployed, it should fix the issue.",
"Once the script-dataset has been allowed in the dataset-viewer, we should fix our test to make the CI pass.\r\n\r\nI am addressing this."
] | 2024-04-10T06:56:02Z
| 2024-04-12T09:02:13Z
| 2024-04-12T09:02:13Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
CI is broken for test_load_dataset_distributed_with_script. See: https://github.com/huggingface/datasets/actions/runs/8614926216/job/23609378127
```
FAILED tests/test_load.py::test_load_dataset_distributed_with_script[None] - assert False
+ where False = all(<generator object test_load_dataset_distributed_with_script.<locals>.<genexpr> at 0x7f0c741de3b0>)
FAILED tests/test_load.py::test_load_dataset_distributed_with_script[force_redownload] - assert False
+ where False = all(<generator object test_load_dataset_distributed_with_script.<locals>.<genexpr> at 0x7f0be45f6ea0>)
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6796/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6796/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4543
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4543/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4543/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4543/events
|
https://github.com/huggingface/datasets/pull/4543
| 1,280,379,781
|
PR_kwDODunzps46IiEp
| 4,543
|
[CI] Fix upstream hub test url
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Remaining CI failures are unrelated to this fix, merging"
] | 2022-06-22T15:34:27Z
| 2022-06-22T16:37:40Z
| 2022-06-22T16:27:37Z
|
MEMBER
| null | null | null |
Some tests were still using moon-staging instead of hub-ci.
I also updated the token to use one dedicated to `datasets`.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4543/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4543/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4543.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4543",
"merged_at": "2022-06-22T16:27:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4543.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4543"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6277
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6277/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6277/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6277/events
|
https://github.com/huggingface/datasets/issues/6277
| 1,927,044,546
|
I_kwDODunzps5y3F3C
| 6,277
|
FileNotFoundError: Couldn't find a module script at /content/paws-x/paws-x.py. Module 'paws-x' doesn't exist on the Hugging Face Hub either.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/66733346?v=4",
"events_url": "https://api.github.com/users/diegogonzalezc/events{/privacy}",
"followers_url": "https://api.github.com/users/diegogonzalezc/followers",
"following_url": "https://api.github.com/users/diegogonzalezc/following{/other_user}",
"gists_url": "https://api.github.com/users/diegogonzalezc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/diegogonzalezc",
"id": 66733346,
"login": "diegogonzalezc",
"node_id": "MDQ6VXNlcjY2NzMzMzQ2",
"organizations_url": "https://api.github.com/users/diegogonzalezc/orgs",
"received_events_url": "https://api.github.com/users/diegogonzalezc/received_events",
"repos_url": "https://api.github.com/users/diegogonzalezc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/diegogonzalezc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/diegogonzalezc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/diegogonzalezc",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"`evaluate.load(\"paws-x\", \"es\")` throws the error because there is no such metric in the `evaluate` lib.\r\n\r\nSo, this is unrelated to our lib."
] | 2023-10-04T22:01:25Z
| 2023-10-08T17:05:46Z
| 2023-10-08T17:05:46Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I'm encountering a "FileNotFoundError" while attempting to use the "paws-x" dataset to retrain the DistilRoBERTa-base model. The error message is as follows:
FileNotFoundError: Couldn't find a module script at /content/paws-x/paws-x.py. Module 'paws-x' doesn't exist on the Hugging Face Hub either.
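As the maintainer's comment above points out, the error comes from `evaluate`, not `datasets`: "paws-x" exists as a dataset but not as a metric. An illustrative sketch (versions as in this report):
```python
import datasets
import evaluate

ds = datasets.load_dataset("paws-x", "es")  # works: "paws-x" is a dataset
# evaluate.load("paws-x", "es")             # fails: there is no such metric
metric = evaluate.load("accuracy")          # load an actual metric instead
```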
### Steps to reproduce the bug
https://colab.research.google.com/drive/11xUUFxloClpmqLvDy_Xxfmo3oUzjY5nx#scrollTo=kUn74FigzhHm
### Expected behavior
The model trains successfully.
### Environment info
colab, "paws-x" dataset , DistilRoBERTa-base model
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6277/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6277/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5679
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5679/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5679/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5679/events
|
https://github.com/huggingface/datasets/issues/5679
| 1,645,184,622
|
I_kwDODunzps5iD4Zu
| 5,679
|
Allow load_dataset to take a working dir for intermediate data
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38018689?v=4",
"events_url": "https://api.github.com/users/lu-wang-dl/events{/privacy}",
"followers_url": "https://api.github.com/users/lu-wang-dl/followers",
"following_url": "https://api.github.com/users/lu-wang-dl/following{/other_user}",
"gists_url": "https://api.github.com/users/lu-wang-dl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lu-wang-dl",
"id": 38018689,
"login": "lu-wang-dl",
"node_id": "MDQ6VXNlcjM4MDE4Njg5",
"organizations_url": "https://api.github.com/users/lu-wang-dl/orgs",
"received_events_url": "https://api.github.com/users/lu-wang-dl/received_events",
"repos_url": "https://api.github.com/users/lu-wang-dl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lu-wang-dl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lu-wang-dl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lu-wang-dl",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Hi ! AFAIK a dataset must be present on a local disk to be able to efficiently memory map the datasets Arrow files. What makes you think that it is possible to load from a cloud storage and have good performance ?\r\n\r\nAnyway it's already possible to download_and_prepare a dataset as Arrow files in a cloud storage with:\r\n```python\r\nbuilder = load_dataset_builder(..., cache_dir=\"/temp/dir\")\r\nbuilder.download_and_prepare(\"/cloud_dir\")\r\n```\r\n\r\nbut then \r\n```python\r\nds = builder.as_dataset()\r\n```\r\nwould fail if \"/cloud_dir\" is not a local directory.",
"In my use case, I am trying to mount the S3 bucket as local system with S3FS-FUSE / [goofys](https://github.com/kahing/goofys). I want to use S3 to save the download data and save checkpoint for training for persistent. Setting the s3 location as cache directory is not fast enough. That is why I want to set a work directory for temp data for memory map and only save the final result to s3 cache. ",
"You can try setting `HF_DATASETS_DOWNLOADED_DATASETS_PATH` and `HF_DATASETS_EXTRACTED_DATASETS_PATH` to S3, and `HF_DATASETS_CACHE` to your local disk.\r\n\r\nThis way all your downloaded and extracted data are on your mounted S3, but the datasets Arrow files are on your local disk",
"If we hope to also persist the Arrow files on the mounted S3 but work with the efficiency of local disk, is there any recommended way to do this, other than copying the Arrow files from local disk to S3?"
] | 2023-03-29T07:21:09Z
| 2023-04-12T22:30:25Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
As a user, I can set a working dir for intermediate data creation. The processed files will be moved to the cache dir, like
```
load_dataset(..., working_dir="/temp/dir", cache_dir="/cloud_dir")
```
### Motivation
This will help the use case of using cloud storage as the datasets cache, and will help boost performance.
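For reference, the workaround suggested in the discussion above can be sketched like this (the paths are made up; the env-var names come from the maintainers' comment):
```python
import os

# Keep downloaded/extracted files on the mounted S3 path, but keep the Arrow
# cache on fast local disk. Set these before importing `datasets` so its
# config picks them up.
os.environ["HF_DATASETS_DOWNLOADED_DATASETS_PATH"] = "/mnt/s3/downloads"
os.environ["HF_DATASETS_EXTRACTED_DATASETS_PATH"] = "/mnt/s3/extracted"
os.environ["HF_DATASETS_CACHE"] = "/local/hf_cache"

import datasets

ds = datasets.load_dataset("imdb")
```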
### Your contribution
I can provide a PR to fix this if the proposal seems reasonable.
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5679/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5679/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7033
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7033/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7033/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7033/events
|
https://github.com/huggingface/datasets/issues/7033
| 2,397,419,768
|
I_kwDODunzps6O5bj4
| 7,033
|
`from_generator` does not allow to specify the split name
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4",
"events_url": "https://api.github.com/users/pminervini/events{/privacy}",
"followers_url": "https://api.github.com/users/pminervini/followers",
"following_url": "https://api.github.com/users/pminervini/following{/other_user}",
"gists_url": "https://api.github.com/users/pminervini/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pminervini",
"id": 227357,
"login": "pminervini",
"node_id": "MDQ6VXNlcjIyNzM1Nw==",
"organizations_url": "https://api.github.com/users/pminervini/orgs",
"received_events_url": "https://api.github.com/users/pminervini/received_events",
"repos_url": "https://api.github.com/users/pminervini/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pminervini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pminervini/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pminervini",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting, @pminervini.\r\n\r\nI agree we should give the option to define the split name.\r\n\r\nIndeed, there is a PR that addresses precisely this issue:\r\n- #7015\r\n\r\nI am reviewing it.",
"Booom! thank you guys :)"
] | 2024-07-09T07:47:58Z
| 2024-07-26T12:56:16Z
| 2024-07-26T09:31:56Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I'm building train, dev, and test using `from_generator`; however, in all three cases, the logger prints `Generating train split:`
It's not possible to change the split name since it seems to be hardcoded: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/generator/generator.py
### Steps to reproduce the bug
```
In [1]: from datasets import Dataset
In [2]: def gen():
...: yield {"pokemon": "bulbasaur", "type": "grass"}
...:
In [3]: ds = Dataset.from_generator(gen)
Generating train split: 1 examples [00:00, 133.89 examples/s]
```
### Expected behavior
It should be possible to specify any split name
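For reference, the behavior requested here was later enabled by the PR mentioned in the comments above; a sketch of the resulting usage (the exact signature may differ across versions):
```python
from datasets import Dataset, NamedSplit

def gen():
    yield {"pokemon": "bulbasaur", "type": "grass"}

# Logs "Generating validation split: ..." instead of the hardcoded "train"
ds = Dataset.from_generator(gen, split=NamedSplit("validation"))
```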
### Environment info
- `datasets` version: 2.19.2
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.5
- `huggingface_hub` version: 0.23.3
- PyArrow version: 15.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2023.10.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7033/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7033/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5886
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5886/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5886/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5886/events
|
https://github.com/huggingface/datasets/issues/5886
| 1,721,070,225
|
I_kwDODunzps5mlXKR
| 5,886
|
Use work-stealing algorithm when parallel computing
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/46060451?v=4",
"events_url": "https://api.github.com/users/1014661165/events{/privacy}",
"followers_url": "https://api.github.com/users/1014661165/followers",
"following_url": "https://api.github.com/users/1014661165/following{/other_user}",
"gists_url": "https://api.github.com/users/1014661165/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/1014661165",
"id": 46060451,
"login": "1014661165",
"node_id": "MDQ6VXNlcjQ2MDYwNDUx",
"organizations_url": "https://api.github.com/users/1014661165/orgs",
"received_events_url": "https://api.github.com/users/1014661165/received_events",
"repos_url": "https://api.github.com/users/1014661165/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/1014661165/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1014661165/subscriptions",
"type": "User",
"url": "https://api.github.com/users/1014661165",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Alternatively we could set the number of shards to be a factor than the number of processes (current they're equal) - this way it will be less likely to end up with a shard that is significantly slower than all the other ones."
] | 2023-05-23T03:08:44Z
| 2023-05-24T15:30:09Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
When I used the `Dataset.map` API to process data concurrently, I found that
it gets slower and slower as it approaches completion. Then I read the source code of `arrow_dataset.py` and found that it shards the dataset and uses a multiprocessing pool to execute each shard. This can cause the slowest task to drag out the entire program's execution time, especially when processing a huge dataset.
### Motivation
Use a work-stealing algorithm instead of static sharding for parallel computing, to optimize performance (a rough sketch of the idea is shown below).
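A minimal sketch of the idea, outside of the `datasets` internals: create more shards than worker processes and let the pool schedule them dynamically, so one slow shard no longer stalls the whole job. The `process_shard` helper, shard counts, and workload here are all hypothetical.
```python
from multiprocessing import Pool

from datasets import Dataset, concatenate_datasets

def process_shard(args):
    ds, num_shards, index = args
    shard = ds.shard(num_shards=num_shards, index=index, contiguous=True)
    return shard.map(lambda ex: ex)  # stand-in for the real per-example work

if __name__ == "__main__":
    ds = Dataset.from_dict({"a": list(range(10_000))})
    num_procs, num_shards = 4, 16  # oversharding: 4x more shards than workers
    with Pool(num_procs) as pool:
        # chunksize=1 gives dynamic scheduling: idle workers pick up new shards
        shards = list(pool.imap(process_shard, [(ds, num_shards, i) for i in range(num_shards)], chunksize=1))
    result = concatenate_datasets(shards)
```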
### Your contribution
just an idea.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5886/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5886/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5897
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5897/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5897/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5897/events
|
https://github.com/huggingface/datasets/pull/5897
| 1,726,135,494
|
PR_kwDODunzps5RXJaY
| 5,897
|
Fix `FixedSizeListArray` casting
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006213 / 0.011353 (-0.005140) | 0.004230 / 0.011008 (-0.006778) | 0.098014 / 0.038508 (0.059506) | 0.028659 / 0.023109 (0.005550) | 0.303272 / 0.275898 (0.027374) | 0.337186 / 0.323480 (0.013706) | 0.005126 / 0.007986 (-0.002860) | 0.003563 / 0.004328 (-0.000765) | 0.075295 / 0.004250 (0.071045) | 0.036836 / 0.037052 (-0.000216) | 0.309612 / 0.258489 (0.051123) | 0.346484 / 0.293841 (0.052643) | 0.025714 / 0.128546 (-0.102832) | 0.008562 / 0.075646 (-0.067085) | 0.323475 / 0.419271 (-0.095796) | 0.044072 / 0.043533 (0.000539) | 0.308261 / 0.255139 (0.053122) | 0.330903 / 0.283200 (0.047703) | 0.091805 / 0.141683 (-0.049878) | 1.517011 / 1.452155 (0.064856) | 1.570815 / 1.492716 (0.078099) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211265 / 0.018006 (0.193259) | 0.438860 / 0.000490 (0.438370) | 0.001127 / 0.000200 (0.000927) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023337 / 0.037411 (-0.014074) | 0.096243 / 0.014526 (0.081717) | 0.103529 / 0.176557 (-0.073028) | 0.161171 / 0.737135 (-0.575964) | 0.105904 / 0.296338 (-0.190435) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417042 / 0.215209 (0.201833) | 4.155067 / 2.077655 (2.077412) | 1.879657 / 1.504120 (0.375537) | 1.669341 / 1.541195 (0.128146) | 1.717623 / 1.468490 
(0.249133) | 0.556246 / 4.584777 (-4.028531) | 3.484535 / 3.745712 (-0.261177) | 1.728845 / 5.269862 (-3.541017) | 0.997477 / 4.565676 (-3.568199) | 0.068355 / 0.424275 (-0.355920) | 0.012445 / 0.007607 (0.004837) | 0.519023 / 0.226044 (0.292978) | 5.173506 / 2.268929 (2.904577) | 2.332435 / 55.444624 (-53.112190) | 1.986348 / 6.876477 (-4.890129) | 2.076885 / 2.142072 (-0.065187) | 0.656738 / 4.805227 (-4.148489) | 0.135308 / 6.500664 (-6.365356) | 0.065486 / 0.075469 (-0.009984) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.208874 / 1.841788 (-0.632914) | 13.994200 / 8.074308 (5.919892) | 14.160978 / 10.191392 (3.969586) | 0.146009 / 0.680424 (-0.534415) | 0.016573 / 0.534201 (-0.517628) | 0.356082 / 0.579283 (-0.223202) | 0.387766 / 0.434364 (-0.046598) | 0.419130 / 0.540337 (-0.121208) | 0.508634 / 1.386936 (-0.878302) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006238 / 0.011353 (-0.005115) | 0.004221 / 0.011008 (-0.006788) | 0.075155 / 0.038508 (0.036646) | 0.028491 / 0.023109 (0.005382) | 0.355606 / 0.275898 (0.079708) | 0.388986 / 0.323480 (0.065506) | 0.005941 / 0.007986 (-0.002044) | 0.003510 / 0.004328 (-0.000819) | 0.074905 / 0.004250 (0.070655) | 0.039111 / 0.037052 (0.002059) | 0.358492 / 0.258489 (0.100003) | 0.398763 / 0.293841 (0.104922) | 0.025535 / 0.128546 (-0.103012) | 0.008580 / 0.075646 (-0.067067) | 0.080461 / 0.419271 (-0.338811) | 0.041381 / 0.043533 (-0.002152) | 0.355498 / 0.255139 (0.100359) | 0.379163 / 0.283200 (0.095963) | 0.096450 / 0.141683 (-0.045233) | 1.503248 / 1.452155 (0.051093) | 1.595616 / 1.492716 (0.102900) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238065 / 0.018006 (0.220058) | 0.422800 / 0.000490 (0.422311) | 0.002274 / 0.000200 (0.002074) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025746 / 0.037411 (-0.011665) | 0.103319 / 0.014526 (0.088793) | 0.112155 / 0.176557 (-0.064401) | 0.163034 / 0.737135 (-0.574101) | 0.113377 / 0.296338 (-0.182962) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440522 / 0.215209 (0.225313) | 4.398123 / 2.077655 (2.320468) | 2.143538 / 1.504120 (0.639418) | 1.946084 / 1.541195 (0.404890) | 1.996556 / 1.468490 (0.528066) | 0.550108 / 4.584777 (-4.034669) | 3.455774 / 3.745712 (-0.289938) | 2.862474 / 5.269862 (-2.407387) | 1.213446 / 4.565676 (-3.352230) | 0.067987 / 0.424275 (-0.356288) | 0.012413 / 0.007607 (0.004806) | 0.543990 / 0.226044 (0.317945) | 5.454807 / 2.268929 (3.185879) | 2.669195 / 55.444624 (-52.775429) | 2.332948 / 6.876477 (-4.543528) | 2.383870 / 2.142072 (0.241797) | 0.652017 / 4.805227 (-4.153210) | 0.135508 / 6.500664 (-6.365156) | 0.068238 / 0.075469 (-0.007231) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.322669 / 1.841788 (-0.519118) | 14.368136 / 8.074308 (6.293828) | 14.167431 / 10.191392 (3.976039) | 0.159371 / 0.680424 (-0.521052) | 0.016638 / 0.534201 (-0.517563) | 0.357106 / 0.579283 (-0.222177) | 0.392491 / 0.434364 (-0.041873) | 0.419458 / 0.540337 (-0.120880) | 0.504662 / 1.386936 (-0.882274) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006296 / 0.011353 (-0.005057) | 0.004185 / 0.011008 (-0.006823) | 0.096170 / 0.038508 (0.057662) | 0.029212 / 0.023109 (0.006102) | 0.315356 / 0.275898 (0.039458) | 0.335214 / 0.323480 (0.011734) | 0.005108 / 0.007986 (-0.002877) | 0.003634 / 0.004328 (-0.000694) | 0.074186 / 0.004250 (0.069936) | 0.038716 / 0.037052 (0.001663) | 0.311041 / 0.258489 (0.052551) | 0.341202 / 0.293841 (0.047361) | 0.025584 / 0.128546 (-0.102962) | 0.008499 / 0.075646 (-0.067148) | 0.318660 / 0.419271 (-0.100611) | 0.043745 / 0.043533 (0.000212) | 0.314824 / 0.255139 (0.059685) | 0.328117 / 0.283200 (0.044917) | 0.093425 / 0.141683 (-0.048258) | 1.478732 / 1.452155 (0.026578) | 1.531743 / 1.492716 (0.039027) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203484 / 0.018006 (0.185478) | 0.416131 / 0.000490 (0.415641) | 0.007352 / 0.000200 (0.007152) | 0.000211 / 0.000054 (0.000156) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022908 / 0.037411 (-0.014503) | 0.098641 / 0.014526 (0.084115) | 0.103426 / 0.176557 (-0.073131) | 0.161658 / 0.737135 (-0.575477) | 0.106506 / 0.296338 (-0.189832) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430781 / 0.215209 (0.215572) | 4.315677 / 2.077655 (2.238022) | 2.022302 / 1.504120 (0.518182) | 1.832043 / 1.541195 (0.290849) | 1.789302 / 1.468490 
(0.320812) | 0.560484 / 4.584777 (-4.024293) | 3.448204 / 3.745712 (-0.297508) | 1.725016 / 5.269862 (-3.544846) | 1.002649 / 4.565676 (-3.563027) | 0.068480 / 0.424275 (-0.355795) | 0.012617 / 0.007607 (0.005010) | 0.532291 / 0.226044 (0.306246) | 5.319352 / 2.268929 (3.050423) | 2.520730 / 55.444624 (-52.923894) | 2.213881 / 6.876477 (-4.662596) | 2.352477 / 2.142072 (0.210404) | 0.662516 / 4.805227 (-4.142711) | 0.136481 / 6.500664 (-6.364183) | 0.066597 / 0.075469 (-0.008872) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224537 / 1.841788 (-0.617251) | 13.849920 / 8.074308 (5.775612) | 14.026358 / 10.191392 (3.834966) | 0.131018 / 0.680424 (-0.549405) | 0.016756 / 0.534201 (-0.517445) | 0.358091 / 0.579283 (-0.221192) | 0.397709 / 0.434364 (-0.036655) | 0.450024 / 0.540337 (-0.090314) | 0.542609 / 1.386936 (-0.844327) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006179 / 0.011353 (-0.005174) | 0.004145 / 0.011008 (-0.006863) | 0.077482 / 0.038508 (0.038974) | 0.028005 / 0.023109 (0.004896) | 0.400010 / 0.275898 (0.124112) | 0.408206 / 0.323480 (0.084726) | 0.005049 / 0.007986 (-0.002937) | 0.003608 / 0.004328 (-0.000721) | 0.076841 / 0.004250 (0.072590) | 0.036714 / 0.037052 (-0.000338) | 0.406020 / 0.258489 (0.147531) | 0.412392 / 0.293841 (0.118551) | 0.025626 / 0.128546 (-0.102920) | 0.008560 / 0.075646 (-0.067087) | 0.084088 / 0.419271 (-0.335183) | 0.039707 / 0.043533 (-0.003826) | 0.396909 / 0.255139 (0.141770) | 0.403623 / 0.283200 (0.120424) | 0.095137 / 0.141683 (-0.046546) | 1.515670 / 1.452155 (0.063515) | 1.568379 / 1.492716 (0.075662) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.181802 / 0.018006 (0.163795) | 0.408778 / 0.000490 (0.408289) | 0.000393 / 0.000200 (0.000193) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025940 / 0.037411 (-0.011471) | 0.099992 / 0.014526 (0.085466) | 0.106280 / 0.176557 (-0.070276) | 0.161729 / 0.737135 (-0.575406) | 0.108625 / 0.296338 (-0.187713) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.459802 / 0.215209 (0.244593) | 4.603002 / 2.077655 (2.525347) | 2.406851 / 1.504120 (0.902732) | 2.265422 / 1.541195 (0.724227) | 2.306305 / 1.468490 (0.837815) | 0.553903 / 4.584777 (-4.030874) | 3.482052 / 3.745712 (-0.263660) | 2.969855 / 5.269862 (-2.300007) | 1.309285 / 4.565676 (-3.256391) | 0.068130 / 0.424275 (-0.356145) | 0.012189 / 0.007607 (0.004582) | 0.571299 / 0.226044 (0.345254) | 5.711420 / 2.268929 (3.442492) | 2.716748 / 55.444624 (-52.727876) | 2.369869 / 6.876477 (-4.506608) | 2.544240 / 2.142072 (0.402167) | 0.659955 / 4.805227 (-4.145272) | 0.136684 / 6.500664 (-6.363980) | 0.068962 / 0.075469 (-0.006507) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.297659 / 1.841788 (-0.544129) | 14.012758 / 8.074308 (5.938449) | 14.324644 / 10.191392 (4.133252) | 0.144894 / 0.680424 (-0.535530) | 0.016751 / 0.534201 (-0.517450) | 0.361547 / 0.579283 (-0.217736) | 0.396595 / 0.434364 (-0.037769) | 0.422375 / 0.540337 (-0.117962) | 0.508209 / 1.386936 (-0.878727) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006303 / 0.011353 (-0.005050) | 0.004043 / 0.011008 (-0.006965) | 0.096239 / 0.038508 (0.057731) | 0.029608 / 0.023109 (0.006498) | 0.321058 / 0.275898 (0.045160) | 0.367066 / 0.323480 (0.043587) | 0.005236 / 0.007986 (-0.002749) | 0.003342 / 0.004328 (-0.000987) | 0.074407 / 0.004250 (0.070157) | 0.038810 / 0.037052 (0.001757) | 0.332597 / 0.258489 (0.074108) | 0.363562 / 0.293841 (0.069721) | 0.025460 / 0.128546 (-0.103086) | 0.008426 / 0.075646 (-0.067221) | 0.316998 / 0.419271 (-0.102273) | 0.043621 / 0.043533 (0.000088) | 0.338043 / 0.255139 (0.082904) | 0.366441 / 0.283200 (0.083241) | 0.092061 / 0.141683 (-0.049622) | 1.461531 / 1.452155 (0.009376) | 1.538047 / 1.492716 (0.045331) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206796 / 0.018006 (0.188790) | 0.517959 / 0.000490 (0.517469) | 0.002745 / 0.000200 (0.002545) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022902 / 0.037411 (-0.014510) | 0.097901 / 0.014526 (0.083375) | 0.103664 / 0.176557 (-0.072893) | 0.163516 / 0.737135 (-0.573619) | 0.108561 / 0.296338 (-0.187778) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418964 / 0.215209 (0.203755) | 4.159113 / 2.077655 (2.081458) | 1.843946 / 1.504120 (0.339827) | 1.641083 / 1.541195 (0.099888) | 1.686848 / 1.468490 
(0.218358) | 0.554583 / 4.584777 (-4.030194) | 3.409862 / 3.745712 (-0.335850) | 2.647904 / 5.269862 (-2.621958) | 1.355424 / 4.565676 (-3.210253) | 0.068229 / 0.424275 (-0.356046) | 0.012217 / 0.007607 (0.004610) | 0.515895 / 0.226044 (0.289851) | 5.144920 / 2.268929 (2.875991) | 2.298046 / 55.444624 (-53.146579) | 1.964735 / 6.876477 (-4.911741) | 2.075580 / 2.142072 (-0.066492) | 0.657104 / 4.805227 (-4.148123) | 0.134759 / 6.500664 (-6.365905) | 0.067545 / 0.075469 (-0.007924) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.233075 / 1.841788 (-0.608713) | 13.896762 / 8.074308 (5.822454) | 14.055143 / 10.191392 (3.863751) | 0.145507 / 0.680424 (-0.534917) | 0.016702 / 0.534201 (-0.517499) | 0.365157 / 0.579283 (-0.214126) | 0.385842 / 0.434364 (-0.048522) | 0.459993 / 0.540337 (-0.080344) | 0.547115 / 1.386936 (-0.839821) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006174 / 0.011353 (-0.005179) | 0.004191 / 0.011008 (-0.006817) | 0.078311 / 0.038508 (0.039803) | 0.028038 / 0.023109 (0.004928) | 0.360056 / 0.275898 (0.084158) | 0.398081 / 0.323480 (0.074602) | 0.005069 / 0.007986 (-0.002916) | 0.003464 / 0.004328 (-0.000864) | 0.077858 / 0.004250 (0.073608) | 0.039420 / 0.037052 (0.002367) | 0.361743 / 0.258489 (0.103254) | 0.404829 / 0.293841 (0.110988) | 0.025604 / 0.128546 (-0.102943) | 0.008573 / 0.075646 (-0.067074) | 0.084944 / 0.419271 (-0.334328) | 0.042652 / 0.043533 (-0.000881) | 0.368549 / 0.255139 (0.113410) | 0.385682 / 0.283200 (0.102482) | 0.099085 / 0.141683 (-0.042598) | 1.495815 / 1.452155 (0.043661) | 1.548168 / 1.492716 (0.055452) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193737 / 0.018006 (0.175730) | 0.421871 / 0.000490 (0.421381) | 0.002306 / 0.000200 (0.002106) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025928 / 0.037411 (-0.011483) | 0.103410 / 0.014526 (0.088885) | 0.107931 / 0.176557 (-0.068626) | 0.157127 / 0.737135 (-0.580008) | 0.111892 / 0.296338 (-0.184446) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.477562 / 0.215209 (0.262353) | 4.772711 / 2.077655 (2.695056) | 2.458725 / 1.504120 (0.954605) | 2.269871 / 1.541195 (0.728676) | 2.365502 / 1.468490 (0.897012) | 0.556182 / 4.584777 (-4.028595) | 3.408016 / 3.745712 (-0.337697) | 1.730639 / 5.269862 (-3.539222) | 1.000973 / 4.565676 (-3.564704) | 0.068293 / 0.424275 (-0.355982) | 0.012119 / 0.007607 (0.004512) | 0.581281 / 0.226044 (0.355236) | 5.811930 / 2.268929 (3.543001) | 2.890337 / 55.444624 (-52.554288) | 2.592156 / 6.876477 (-4.284321) | 2.687764 / 2.142072 (0.545691) | 0.664282 / 4.805227 (-4.140946) | 0.136029 / 6.500664 (-6.364635) | 0.067493 / 0.075469 (-0.007976) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330723 / 1.841788 (-0.511064) | 14.379172 / 8.074308 (6.304864) | 14.153286 / 10.191392 (3.961894) | 0.142942 / 0.680424 (-0.537482) | 0.016698 / 0.534201 (-0.517503) | 0.361044 / 0.579283 (-0.218239) | 0.393174 / 0.434364 (-0.041190) | 0.423107 / 0.540337 (-0.117231) | 0.514299 / 1.386936 (-0.872637) |\n\n</details>\n</details>\n\n\n"
] | 2023-05-25T16:26:33Z
| 2023-05-26T12:22:04Z
| 2023-05-26T11:57:16Z
|
COLLABORATOR
| null | null | null |
Fix cast on sliced `FixedSizeListArray`s.
Fix #5866
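
For context, here is a minimal sketch of the offset pitfall this kind of fix addresses (illustrative only, not the PR's actual diff): in the PyArrow versions targeted here, slicing a `FixedSizeListArray` does not slice the child `values` array, so offset-unaware casting reads the wrong elements.

```python
import pyarrow as pa

# A FixedSizeListArray of 2-element lists: [[0, 1], [2, 3], [4, 5]]
arr = pa.array([[0, 1], [2, 3], [4, 5]], type=pa.list_(pa.int32(), 2))
sliced = arr.slice(1)  # logically [[2, 3], [4, 5]]

# In affected PyArrow versions, `.values` still exposes the full child array,
# starting at element 0, regardless of the slice offset:
print(sliced.values)  # -> [0, 1, 2, 3, 4, 5]

# Offset-aware code therefore has to slice `values` manually before casting:
list_size = sliced.type.list_size
values = sliced.values.slice(sliced.offset * list_size, len(sliced) * list_size)
print(values)  # -> [2, 3, 4, 5]
```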
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5897/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5897/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5897.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5897",
"merged_at": "2023-05-26T11:57:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5897.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5897"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5978
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5978/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5978/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5978/events
|
https://github.com/huggingface/datasets/pull/5978
| 1,770,187,053
|
PR_kwDODunzps5Tru2_
| 5,978
|
Release: 2.13.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006173 / 0.011353 (-0.005180) | 0.003773 / 0.011008 (-0.007235) | 0.099499 / 0.038508 (0.060991) | 0.037918 / 0.023109 (0.014809) | 0.321329 / 0.275898 (0.045431) | 0.379739 / 0.323480 (0.056259) | 0.004664 / 0.007986 (-0.003322) | 0.002943 / 0.004328 (-0.001385) | 0.077759 / 0.004250 (0.073509) | 0.055271 / 0.037052 (0.018219) | 0.329428 / 0.258489 (0.070939) | 0.378731 / 0.293841 (0.084890) | 0.027737 / 0.128546 (-0.100810) | 0.008566 / 0.075646 (-0.067081) | 0.313220 / 0.419271 (-0.106052) | 0.047101 / 0.043533 (0.003568) | 0.316211 / 0.255139 (0.061072) | 0.341826 / 0.283200 (0.058626) | 0.020838 / 0.141683 (-0.120845) | 1.550064 / 1.452155 (0.097909) | 1.706518 / 1.492716 (0.213801) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203093 / 0.018006 (0.185087) | 0.425345 / 0.000490 (0.424856) | 0.004800 / 0.000200 (0.004600) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024590 / 0.037411 (-0.012821) | 0.098115 / 0.014526 (0.083589) | 0.108274 / 0.176557 (-0.068282) | 0.170804 / 0.737135 (-0.566332) | 0.110560 / 0.296338 (-0.185778) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425251 / 0.215209 (0.210042) | 4.239075 / 2.077655 (2.161421) | 1.955601 / 1.504120 (0.451481) | 1.774796 / 1.541195 (0.233602) | 1.826641 / 1.468490 
(0.358150) | 0.558777 / 4.584777 (-4.026000) | 3.361697 / 3.745712 (-0.384015) | 1.764468 / 5.269862 (-3.505394) | 1.032280 / 4.565676 (-3.533396) | 0.067872 / 0.424275 (-0.356403) | 0.010998 / 0.007607 (0.003391) | 0.525682 / 0.226044 (0.299637) | 5.254356 / 2.268929 (2.985427) | 2.384332 / 55.444624 (-53.060292) | 2.045578 / 6.876477 (-4.830898) | 2.170914 / 2.142072 (0.028841) | 0.674782 / 4.805227 (-4.130445) | 0.135351 / 6.500664 (-6.365314) | 0.066591 / 0.075469 (-0.008878) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.209181 / 1.841788 (-0.632606) | 14.044518 / 8.074308 (5.970210) | 13.184705 / 10.191392 (2.993313) | 0.130836 / 0.680424 (-0.549588) | 0.016582 / 0.534201 (-0.517619) | 0.360005 / 0.579283 (-0.219279) | 0.379519 / 0.434364 (-0.054845) | 0.422174 / 0.540337 (-0.118164) | 0.515546 / 1.386936 (-0.871390) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006293 / 0.011353 (-0.005060) | 0.003784 / 0.011008 (-0.007224) | 0.079248 / 0.038508 (0.040739) | 0.038452 / 0.023109 (0.015343) | 0.444727 / 0.275898 (0.168829) | 0.500535 / 0.323480 (0.177055) | 0.003455 / 0.007986 (-0.004531) | 0.002873 / 0.004328 (-0.001455) | 0.077439 / 0.004250 (0.073189) | 0.047855 / 0.037052 (0.010803) | 0.448049 / 0.258489 (0.189560) | 0.509517 / 0.293841 (0.215676) | 0.028359 / 0.128546 (-0.100188) | 0.008503 / 0.075646 (-0.067143) | 0.084961 / 0.419271 (-0.334310) | 0.042880 / 0.043533 (-0.000653) | 0.436628 / 0.255139 (0.181489) | 0.456574 / 0.283200 (0.173375) | 0.019539 / 0.141683 (-0.122144) | 1.561273 / 1.452155 (0.109118) | 1.572018 / 1.492716 (0.079301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230250 / 0.018006 (0.212244) | 0.415189 / 0.000490 (0.414700) | 0.003213 / 0.000200 (0.003013) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025541 / 0.037411 (-0.011871) | 0.102326 / 0.014526 (0.087800) | 0.110258 / 0.176557 (-0.066298) | 0.162488 / 0.737135 (-0.574647) | 0.112782 / 0.296338 (-0.183556) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457936 / 0.215209 (0.242727) | 4.581503 / 2.077655 (2.503848) | 2.237659 / 1.504120 (0.733540) | 2.029960 / 1.541195 (0.488765) | 2.082911 / 1.468490 (0.614421) | 0.556485 / 4.584777 (-4.028292) | 3.384418 / 3.745712 (-0.361295) | 1.748809 / 5.269862 (-3.521053) | 1.034759 / 4.565676 (-3.530917) | 0.067500 / 0.424275 (-0.356776) | 0.011425 / 0.007607 (0.003818) | 0.561340 / 0.226044 (0.335295) | 5.623629 / 2.268929 (3.354701) | 2.733587 / 55.444624 (-52.711038) | 2.401578 / 6.876477 (-4.474899) | 2.524569 / 2.142072 (0.382496) | 0.673170 / 4.805227 (-4.132057) | 0.136681 / 6.500664 (-6.363983) | 0.068060 / 0.075469 (-0.007409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.318651 / 1.841788 (-0.523137) | 14.362123 / 8.074308 (6.287815) | 14.385964 / 10.191392 (4.194572) | 0.149914 / 0.680424 (-0.530510) | 0.016877 / 0.534201 (-0.517324) | 0.358406 / 0.579283 (-0.220877) | 0.394349 / 0.434364 (-0.040015) | 0.422471 / 0.540337 (-0.117866) | 0.513807 / 1.386936 (-0.873129) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006272 / 0.011353 (-0.005080) | 0.003903 / 0.011008 (-0.007105) | 0.100180 / 0.038508 (0.061672) | 0.037799 / 0.023109 (0.014690) | 0.385627 / 0.275898 (0.109729) | 0.446518 / 0.323480 (0.123038) | 0.004811 / 0.007986 (-0.003175) | 0.003032 / 0.004328 (-0.001296) | 0.077063 / 0.004250 (0.072812) | 0.055564 / 0.037052 (0.018512) | 0.397346 / 0.258489 (0.138857) | 0.443242 / 0.293841 (0.149401) | 0.027904 / 0.128546 (-0.100642) | 0.008386 / 0.075646 (-0.067260) | 0.315013 / 0.419271 (-0.104259) | 0.047943 / 0.043533 (0.004410) | 0.378443 / 0.255139 (0.123304) | 0.411472 / 0.283200 (0.128272) | 0.020465 / 0.141683 (-0.121218) | 1.526594 / 1.452155 (0.074439) | 1.547018 / 1.492716 (0.054301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219377 / 0.018006 (0.201370) | 0.430254 / 0.000490 (0.429764) | 0.003218 / 0.000200 (0.003018) | 0.000072 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023667 / 0.037411 (-0.013744) | 0.099143 / 0.014526 (0.084617) | 0.106044 / 0.176557 (-0.070513) | 0.166186 / 0.737135 (-0.570949) | 0.108736 / 0.296338 (-0.187603) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437971 / 0.215209 (0.222762) | 4.363675 / 2.077655 (2.286021) | 2.011993 / 1.504120 (0.507873) | 1.845189 / 1.541195 (0.303994) | 1.831848 / 1.468490 
(0.363358) | 0.562402 / 4.584777 (-4.022375) | 3.365259 / 3.745712 (-0.380453) | 1.781491 / 5.269862 (-3.488371) | 1.023454 / 4.565676 (-3.542223) | 0.067857 / 0.424275 (-0.356418) | 0.011076 / 0.007607 (0.003469) | 0.532267 / 0.226044 (0.306223) | 5.340344 / 2.268929 (3.071415) | 2.388649 / 55.444624 (-53.055976) | 2.055373 / 6.876477 (-4.821104) | 2.205047 / 2.142072 (0.062975) | 0.672909 / 4.805227 (-4.132318) | 0.135244 / 6.500664 (-6.365420) | 0.066184 / 0.075469 (-0.009285) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206838 / 1.841788 (-0.634950) | 13.967075 / 8.074308 (5.892767) | 13.143971 / 10.191392 (2.952579) | 0.143991 / 0.680424 (-0.536433) | 0.016673 / 0.534201 (-0.517527) | 0.376180 / 0.579283 (-0.203103) | 0.386550 / 0.434364 (-0.047814) | 0.440590 / 0.540337 (-0.099747) | 0.529974 / 1.386936 (-0.856962) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006299 / 0.011353 (-0.005054) | 0.003784 / 0.011008 (-0.007224) | 0.077875 / 0.038508 (0.039367) | 0.038689 / 0.023109 (0.015580) | 0.421684 / 0.275898 (0.145786) | 0.472649 / 0.323480 (0.149169) | 0.003570 / 0.007986 (-0.004415) | 0.004448 / 0.004328 (0.000120) | 0.077867 / 0.004250 (0.073616) | 0.049514 / 0.037052 (0.012462) | 0.375983 / 0.258489 (0.117494) | 0.470632 / 0.293841 (0.176791) | 0.028238 / 0.128546 (-0.100308) | 0.008462 / 0.075646 (-0.067185) | 0.082452 / 0.419271 (-0.336819) | 0.043617 / 0.043533 (0.000084) | 0.400874 / 0.255139 (0.145735) | 0.426191 / 0.283200 (0.142992) | 0.020602 / 0.141683 (-0.121081) | 1.567658 / 1.452155 (0.115504) | 1.572610 / 1.492716 (0.079893) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246144 / 0.018006 (0.228138) | 0.419402 / 0.000490 (0.418913) | 0.001691 / 0.000200 (0.001491) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026105 / 0.037411 (-0.011306) | 0.104734 / 0.014526 (0.090208) | 0.110257 / 0.176557 (-0.066300) | 0.161429 / 0.737135 (-0.575706) | 0.114367 / 0.296338 (-0.181972) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.453352 / 0.215209 (0.238143) | 4.537924 / 2.077655 (2.460269) | 2.196193 / 1.504120 (0.692073) | 2.002087 / 1.541195 (0.460892) | 2.041722 / 1.468490 (0.573231) | 0.561643 / 4.584777 (-4.023134) | 3.449108 / 3.745712 (-0.296605) | 2.862800 / 5.269862 (-2.407062) | 1.387895 / 4.565676 (-3.177782) | 0.068076 / 0.424275 (-0.356199) | 0.011568 / 0.007607 (0.003961) | 0.559279 / 0.226044 (0.333235) | 5.598738 / 2.268929 (3.329809) | 2.676649 / 55.444624 (-52.767975) | 2.334588 / 6.876477 (-4.541889) | 2.376215 / 2.142072 (0.234142) | 0.673109 / 4.805227 (-4.132118) | 0.137587 / 6.500664 (-6.363077) | 0.069131 / 0.075469 (-0.006338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.307332 / 1.841788 (-0.534456) | 14.536036 / 8.074308 (6.461728) | 14.173734 / 10.191392 (3.982342) | 0.145143 / 0.680424 (-0.535281) | 0.016662 / 0.534201 (-0.517539) | 0.366901 / 0.579283 (-0.212383) | 0.394498 / 0.434364 (-0.039866) | 0.430546 / 0.540337 (-0.109792) | 0.518950 / 1.386936 (-0.867986) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008122 / 0.011353 (-0.003231) | 0.005585 / 0.011008 (-0.005424) | 0.121219 / 0.038508 (0.082711) | 0.047616 / 0.023109 (0.024507) | 0.440576 / 0.275898 (0.164678) | 0.491053 / 0.323480 (0.167573) | 0.004774 / 0.007986 (-0.003211) | 0.006758 / 0.004328 (0.002430) | 0.103852 / 0.004250 (0.099602) | 0.071560 / 0.037052 (0.034508) | 0.463107 / 0.258489 (0.204618) | 0.516904 / 0.293841 (0.223063) | 0.048052 / 0.128546 (-0.080494) | 0.013679 / 0.075646 (-0.061968) | 0.428383 / 0.419271 (0.009112) | 0.069468 / 0.043533 (0.025936) | 0.432593 / 0.255139 (0.177454) | 0.471810 / 0.283200 (0.188611) | 0.037541 / 0.141683 (-0.104142) | 1.823490 / 1.452155 (0.371335) | 1.922558 / 1.492716 (0.429842) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252315 / 0.018006 (0.234309) | 0.541757 / 0.000490 (0.541267) | 0.000373 / 0.000200 (0.000173) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030361 / 0.037411 (-0.007050) | 0.125928 / 0.014526 (0.111402) | 0.145102 / 0.176557 (-0.031455) | 0.209798 / 0.737135 (-0.527337) | 0.147349 / 0.296338 (-0.148990) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.627554 / 0.215209 (0.412345) | 5.917422 / 2.077655 (3.839767) | 2.491083 / 1.504120 (0.986963) | 2.147078 / 1.541195 (0.605883) | 2.167511 / 1.468490 
(0.699021) | 0.903061 / 4.584777 (-3.681716) | 5.518537 / 3.745712 (1.772825) | 2.654348 / 5.269862 (-2.615514) | 1.645121 / 4.565676 (-2.920556) | 0.103782 / 0.424275 (-0.320493) | 0.013048 / 0.007607 (0.005441) | 0.756732 / 0.226044 (0.530687) | 7.622873 / 2.268929 (5.353945) | 3.122689 / 55.444624 (-52.321936) | 2.537735 / 6.876477 (-4.338742) | 2.640090 / 2.142072 (0.498018) | 1.128635 / 4.805227 (-3.676593) | 0.228089 / 6.500664 (-6.272575) | 0.086207 / 0.075469 (0.010738) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.561591 / 1.841788 (-0.280197) | 18.110299 / 8.074308 (10.035991) | 20.718017 / 10.191392 (10.526625) | 0.225741 / 0.680424 (-0.454682) | 0.031738 / 0.534201 (-0.502463) | 0.530789 / 0.579283 (-0.048495) | 0.607364 / 0.434364 (0.173000) | 0.581593 / 0.540337 (0.041256) | 0.726033 / 1.386936 (-0.660903) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009323 / 0.011353 (-0.002030) | 0.005360 / 0.011008 (-0.005649) | 0.103608 / 0.038508 (0.065100) | 0.050158 / 0.023109 (0.027049) | 0.499906 / 0.275898 (0.224008) | 0.561005 / 0.323480 (0.237525) | 0.005093 / 0.007986 (-0.002892) | 0.008285 / 0.004328 (0.003956) | 0.103446 / 0.004250 (0.099196) | 0.061478 / 0.037052 (0.024426) | 0.494016 / 0.258489 (0.235527) | 0.537550 / 0.293841 (0.243709) | 0.048829 / 0.128546 (-0.079717) | 0.017032 / 0.075646 (-0.058614) | 0.107748 / 0.419271 (-0.311524) | 0.065607 / 0.043533 (0.022074) | 0.488709 / 0.255139 (0.233570) | 0.512023 / 0.283200 (0.228823) | 0.032067 / 0.141683 (-0.109616) | 1.907585 / 1.452155 (0.455431) | 1.960994 / 1.492716 (0.468278) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278378 / 0.018006 (0.260371) | 0.551474 / 0.000490 (0.550985) | 0.006886 / 0.000200 (0.006686) | 0.000106 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030674 / 0.037411 (-0.006737) | 0.135179 / 0.014526 (0.120654) | 0.133703 / 0.176557 (-0.042853) | 0.198923 / 0.737135 (-0.538212) | 0.155108 / 0.296338 (-0.141231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.690566 / 0.215209 (0.475357) | 6.789594 / 2.077655 (4.711940) | 2.940668 / 1.504120 (1.436549) | 2.562431 / 1.541195 (1.021236) | 2.554232 / 1.468490 (1.085742) | 0.888470 / 4.584777 (-3.696307) | 5.672318 / 3.745712 (1.926606) | 2.741626 / 5.269862 (-2.528236) | 1.818336 / 4.565676 (-2.747340) | 0.110434 / 0.424275 (-0.313841) | 0.014114 / 0.007607 (0.006507) | 0.830632 / 0.226044 (0.604588) | 8.270787 / 2.268929 (6.001859) | 3.723486 / 55.444624 (-51.721139) | 2.993671 / 6.876477 (-3.882806) | 2.918273 / 2.142072 (0.776201) | 1.105337 / 4.805227 (-3.699891) | 0.222976 / 6.500664 (-6.277688) | 0.085290 / 0.075469 (0.009820) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.816027 / 1.841788 (-0.025760) | 18.496850 / 8.074308 (10.422541) | 20.457032 / 10.191392 (10.265640) | 0.243533 / 0.680424 (-0.436891) | 0.027044 / 0.534201 (-0.507157) | 0.500752 / 0.579283 (-0.078531) | 0.620963 / 0.434364 (0.186599) | 0.607995 / 0.540337 (0.067658) | 0.722915 / 1.386936 (-0.664021) |\n\n</details>\n</details>\n\n\n"
] | 2023-06-22T18:23:11Z
| 2023-06-22T18:40:24Z
| 2023-06-22T18:30:16Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5978/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5978/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5978.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5978",
"merged_at": "2023-06-22T18:30:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5978.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5978"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7321
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7321/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7321/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7321/events
|
https://github.com/huggingface/datasets/issues/7321
| 2,731,626,760
|
I_kwDODunzps6i0VEI
| 7,321
|
ImportError: cannot import name 'set_caching_enabled' from 'datasets'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33318353?v=4",
"events_url": "https://api.github.com/users/sankexin/events{/privacy}",
"followers_url": "https://api.github.com/users/sankexin/followers",
"following_url": "https://api.github.com/users/sankexin/following{/other_user}",
"gists_url": "https://api.github.com/users/sankexin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sankexin",
"id": 33318353,
"login": "sankexin",
"node_id": "MDQ6VXNlcjMzMzE4MzUz",
"organizations_url": "https://api.github.com/users/sankexin/orgs",
"received_events_url": "https://api.github.com/users/sankexin/received_events",
"repos_url": "https://api.github.com/users/sankexin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sankexin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sankexin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sankexin",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"pip install datasets==2.18.0",
"Hi ! I think you need to update axolotl"
] | 2024-12-11T01:58:46Z
| 2024-12-11T13:32:15Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/runpy.py", line 187, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/usr/local/lib/python3.10/runpy.py", line 110, in _get_module_details
    __import__(pkg_name)
  File "/home/Medusa/axolotl/src/axolotl/cli/__init__.py", line 23, in <module>
    from axolotl.train import TrainDatasetMeta
  File "/home/Medusa/axolotl/src/axolotl/train.py", line 23, in <module>
    from axolotl.utils.trainer import setup_trainer
  File "/home/Medusa/axolotl/src/axolotl/utils/trainer.py", line 13, in <module>
    from datasets import set_caching_enabled
ImportError: cannot import name 'set_caching_enabled' from 'datasets' (/usr/local/lib/python3.10/site-packages/datasets/__init__.py)
```
### Steps to reproduce the bug
1. Install axolotl
2. accelerate launch -m axolotl.cli.train examples/medusa/qwen_lora_stage1.yml
### Expected behavior
`from datasets import set_caching_enabled` succeeds so training can start.
### Environment info
python3.10
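For anyone hitting this, a minimal migration sketch (assuming a recent `datasets` release, where `set_caching_enabled` was removed in favor of `enable_caching`/`disable_caching`; check your installed version before relying on it):
```python
# Compatibility shim -- a sketch, assuming newer datasets releases removed
# set_caching_enabled in favor of enable_caching / disable_caching.
try:
    from datasets import set_caching_enabled  # present in older datasets
except ImportError:
    from datasets import disable_caching, enable_caching

    def set_caching_enabled(boolean: bool) -> None:
        # Reproduce the old boolean toggle on top of the new API.
        if boolean:
            enable_caching()
        else:
            disable_caching()

set_caching_enabled(False)  # downstream code (e.g. axolotl) keeps working
```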
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7321/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7321/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6296
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6296/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6296/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6296/events
|
https://github.com/huggingface/datasets/pull/6296
| 1,938,453,845
|
PR_kwDODunzps5cjUs1
| 6,296
|
Move `exceptions.py` to `utils/exceptions.py`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006695 / 0.011353 (-0.004658) | 0.004321 / 0.011008 (-0.006687) | 0.084558 / 0.038508 (0.046050) | 0.076290 / 0.023109 (0.053181) | 0.312331 / 0.275898 (0.036433) | 0.349854 / 0.323480 (0.026374) | 0.004267 / 0.007986 (-0.003719) | 0.003595 / 0.004328 (-0.000733) | 0.065077 / 0.004250 (0.060826) | 0.057461 / 0.037052 (0.020409) | 0.314989 / 0.258489 (0.056500) | 0.364767 / 0.293841 (0.070926) | 0.031726 / 0.128546 (-0.096820) | 0.008674 / 0.075646 (-0.066972) | 0.288282 / 0.419271 (-0.130990) | 0.052845 / 0.043533 (0.009312) | 0.317501 / 0.255139 (0.062362) | 0.333241 / 0.283200 (0.050041) | 0.026412 / 0.141683 (-0.115271) | 1.475648 / 1.452155 (0.023493) | 1.551656 / 1.492716 (0.058939) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.276512 / 0.018006 (0.258506) | 0.576350 / 0.000490 (0.575861) | 0.009518 / 0.000200 (0.009318) | 0.000280 / 0.000054 (0.000226) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029332 / 0.037411 (-0.008079) | 0.082904 / 0.014526 (0.068379) | 0.102516 / 0.176557 (-0.074041) | 0.159355 / 0.737135 (-0.577780) | 0.104112 / 0.296338 (-0.192226) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.379144 / 0.215209 (0.163935) | 3.785283 / 2.077655 (1.707629) | 1.833753 / 1.504120 (0.329633) | 1.667906 / 1.541195 (0.126711) | 1.751551 / 1.468490 
(0.283061) | 0.480998 / 4.584777 (-4.103779) | 3.533433 / 3.745712 (-0.212279) | 3.343363 / 5.269862 (-1.926498) | 2.094169 / 4.565676 (-2.471508) | 0.056613 / 0.424275 (-0.367662) | 0.007410 / 0.007607 (-0.000197) | 0.455077 / 0.226044 (0.229033) | 4.541380 / 2.268929 (2.272452) | 2.269151 / 55.444624 (-53.175473) | 1.955663 / 6.876477 (-4.920814) | 2.227663 / 2.142072 (0.085591) | 0.580597 / 4.805227 (-4.224630) | 0.135034 / 6.500664 (-6.365630) | 0.062091 / 0.075469 (-0.013378) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276295 / 1.841788 (-0.565492) | 20.072827 / 8.074308 (11.998519) | 14.296462 / 10.191392 (4.105070) | 0.164936 / 0.680424 (-0.515488) | 0.018415 / 0.534201 (-0.515786) | 0.390894 / 0.579283 (-0.188389) | 0.415515 / 0.434364 (-0.018849) | 0.462798 / 0.540337 (-0.077540) | 0.650099 / 1.386936 (-0.736837) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007218 / 0.011353 (-0.004135) | 0.004246 / 0.011008 (-0.006763) | 0.065818 / 0.038508 (0.027310) | 0.087315 / 0.023109 (0.064206) | 0.406449 / 0.275898 (0.130551) | 0.442008 / 0.323480 (0.118528) | 0.005752 / 0.007986 (-0.002233) | 0.003624 / 0.004328 (-0.000704) | 0.065349 / 0.004250 (0.061099) | 0.062423 / 0.037052 (0.025371) | 0.410099 / 0.258489 (0.151610) | 0.448929 / 0.293841 (0.155088) | 0.032498 / 0.128546 (-0.096048) | 0.008877 / 0.075646 (-0.066770) | 0.071611 / 0.419271 (-0.347661) | 0.048038 / 0.043533 (0.004506) | 0.407957 / 0.255139 (0.152818) | 0.424045 / 0.283200 (0.140846) | 0.025222 / 0.141683 (-0.116461) | 1.496191 / 1.452155 (0.044037) | 1.580765 / 1.492716 (0.088048) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274798 / 0.018006 (0.256792) | 0.581410 / 0.000490 (0.580920) | 0.007302 / 0.000200 (0.007102) | 0.000160 / 0.000054 (0.000106) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034068 / 0.037411 (-0.003343) | 0.096116 / 0.014526 (0.081590) | 0.110234 / 0.176557 (-0.066323) | 0.163246 / 0.737135 (-0.573889) | 0.110250 / 0.296338 (-0.186089) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442381 / 0.215209 (0.227172) | 4.427061 / 2.077655 (2.349406) | 2.361013 / 1.504120 (0.856893) | 2.185048 / 1.541195 (0.643853) | 2.312544 / 1.468490 (0.844054) | 0.498347 / 4.584777 (-4.086430) | 3.640839 / 3.745712 (-0.104873) | 3.353405 / 5.269862 (-1.916457) | 2.082038 / 4.565676 (-2.483638) | 0.058786 / 0.424275 (-0.365489) | 0.007403 / 0.007607 (-0.000205) | 0.517894 / 0.226044 (0.291850) | 5.184257 / 2.268929 (2.915329) | 2.838467 / 55.444624 (-52.606157) | 2.511116 / 6.876477 (-4.365361) | 2.757816 / 2.142072 (0.615743) | 0.644050 / 4.805227 (-4.161177) | 0.136446 / 6.500664 (-6.364218) | 0.062219 / 0.075469 (-0.013250) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.350916 / 1.841788 (-0.490872) | 20.549280 / 8.074308 (12.474972) | 14.697569 / 10.191392 (4.506177) | 0.149818 / 0.680424 (-0.530606) | 0.020187 / 0.534201 (-0.514014) | 0.396008 / 0.579283 (-0.183275) | 0.427535 / 0.434364 (-0.006829) | 0.484544 / 0.540337 (-0.055794) | 0.687076 / 1.386936 (-0.699860) |\n\n</details>\n</details>\n\n\n",
"I'd rather be consistent with `huggingface_hub` and have this module in `utils/` with the exceptions exposed in `utils/__init__.py` ...",
"Ok, I'll close this PR.\r\n\r\n> Maybe we could ask huggingface_hub to align with the rest of open-source libraries and expose the errors/exceptions at the root of the library...\r\n\r\ncc @Wauplin \r\n\r\nIt would be nice to have an HF style guide to ensure consistency across our libraries 🙂. ",
"I can expose exceptions at root level yes.\r\n\r\nAbout having guidelines and consistency, let's try to do our best but it's not really in the essence of HF to formalize stuff in libraries :unamused: ",
"Better late than never, we now have all the exceptions defined in `huggingface_hub.errors`! See https://github.com/huggingface/huggingface_hub/issues/2069.",
"Thanks for taking care, @Wauplin. I am closing this PR."
] | 2023-10-11T18:28:00Z
| 2024-09-03T16:00:04Z
| 2024-09-03T16:00:03Z
|
COLLABORATOR
| null | null | null |
I didn't notice the path while reviewing the PR yesterday :(
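For context, a hypothetical sketch of the layout discussed in the comments (illustrative names only, not the actual `datasets` module tree): the exception classes live in a `utils/exceptions.py`-style submodule and are re-exported so users can import them from one stable place.
```python
# Hypothetical layout sketch -- illustrative only, not the real module tree.
# utils/exceptions.py would define the errors:
class DatasetsError(Exception):
    """Base class for exceptions in this library."""

class DatasetGenerationError(DatasetsError):
    """Raised when generating a dataset fails."""

# and utils/__init__.py (or the package root) would re-export them:
# from .exceptions import DatasetsError, DatasetGenerationError
```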
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6296/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6296/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6296.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6296",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6296.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6296"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7251
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7251/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7251/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7251/events
|
https://github.com/huggingface/datasets/pull/7251
| 2,612,097,435
|
PR_kwDODunzps5_zPTt
| 7,251
|
Missing video docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7251). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-10-24T16:45:12Z
| 2024-10-24T16:48:29Z
| 2024-10-24T16:48:27Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7251/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7251/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7251.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7251",
"merged_at": "2024-10-24T16:48:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7251.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7251"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5989
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5989/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5989/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5989/events
|
https://github.com/huggingface/datasets/issues/5989
| 1,774,134,091
|
I_kwDODunzps5pvyNL
| 5,989
|
Set a rule on the config and split names
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"in this case we need to decide what to do with the existing datasets with white space characters (there shouldn't be a lot of them I think)",
"I imagine that we should stop supporting them, and help the user fix them?",
"See a report where the datasets server fails: https://huggingface.co/datasets/poloclub/diffusiondb/discussions/2#6374ff55b93cbdf65675f564\r\n\r\nThe config name is `random_10k [2m]`!"
] | 2023-06-26T07:34:14Z
| 2023-07-19T14:22:54Z
| null |
COLLABORATOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
> should we actually allow characters like spaces? maybe it's better to add validation for whitespace symbols directly in datasets and raise an error
https://github.com/huggingface/datasets-server/issues/853
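A hypothetical validation sketch (the exact allowed character set is what this issue is meant to decide; the regex below is illustrative only):
```python
# Hypothetical name rule -- illustrative only: letters, digits, dots,
# dashes and underscores; no whitespace or brackets.
import re

_NAME_RE = re.compile(r"[A-Za-z0-9][A-Za-z0-9._-]*")

def check_config_or_split_name(name: str) -> None:
    if not _NAME_RE.fullmatch(name):
        raise ValueError(f"Bad config/split name: {name!r}")

for name in ("train", "random_10k [2m]"):
    try:
        check_config_or_split_name(name)
        print(f"{name!r}: ok")
    except ValueError as err:
        print(err)  # the second name fails: it contains a space and brackets
```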
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5989/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5989/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7236
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7236/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7236/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7236/events
|
https://github.com/huggingface/datasets/pull/7236
| 2,597,358,525
|
PR_kwDODunzps5_GIvw
| 7,236
|
[MINOR:TYPO] Update arrow_dataset.py
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cakiki",
"id": 3664563,
"login": "cakiki",
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"repos_url": "https://api.github.com/users/cakiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cakiki",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2024-10-18T12:10:03Z
| 2024-10-24T15:06:43Z
| 2024-10-24T15:06:43Z
|
CONTRIBUTOR
| null | null | null |
Fix a wrong link.
The CSV kwargs docstring link was pointing to the pandas JSON docs.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7236/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7236/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7236.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7236",
"merged_at": "2024-10-24T15:06:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7236.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7236"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6880
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6880/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6880/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6880/events
|
https://github.com/huggingface/datasets/issues/6880
| 2,283,278,337
|
I_kwDODunzps6IGBAB
| 6,880
|
Webdataset: KeyError: 'png' on some datasets when streaming
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"The error is caused by malformed basenames of the files within the TARs:\r\n- `15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b.png` becomes `15_Cohen_1-s2` as the grouping `__key__`, and `0-S0929664620300449-gr3_lrg-b.png` as the additional key to be added to the example\r\n- whereas the intended behavior was to use `15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b` as the grouping `__key__`, and `png` as the additional key to be added to the example\r\n\r\nTo get the expected behavior, the basenames of the files within the TARs should be fixed so that they only contain a single dot, the one separating the file extension.",
"I reopen it because I think we should try to give a clearer error message with a specific error code.\r\n\r\nFor now, it's hard for the user to understand where the error comes from (not everybody knows the subtleties of the webdataset filename structure).\r\n\r\n(we can transfer it to https://github.com/huggingface/dataset-viewer if it fits better there)",
"same with .jpg -> https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions\r\n\r\n```\r\nError code: DatasetGenerationError\r\nException: DatasetGenerationError\r\nMessage: An error occurred while generating the dataset\r\nTraceback: Traceback (most recent call last):\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1748, in _prepare_split_single\r\n for key, record in generator:\r\n File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 818, in wrapped\r\n for item in generator(*args, **kwargs):\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py\", line 109, in _generate_examples\r\n example[field_name] = {\"path\": example[\"__key__\"] + \".\" + field_name, \"bytes\": example[field_name]}\r\n KeyError: 'jpg'\r\n \r\n The above exception was the direct cause of the following exception:\r\n \r\n Traceback (most recent call last):\r\n File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 1316, in compute_config_parquet_and_info_response\r\n parquet_operations, partial = stream_convert_to_parquet(\r\n File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 909, in stream_convert_to_parquet\r\n builder._prepare_split(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1627, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1784, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\n datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset\r\n```\r\n",
"More details in the spec (https://docs.google.com/document/d/18OdLjruFNX74ILmgrdiCI9J1fQZuhzzRBCHV9URWto0/edit#heading=h.hkptaq2kct2s)\r\n\r\n> The prefix of a file is all directory components of the file plus the file name component up to the first “.” in the file name.\r\n> The last extension (i.e., the portion after the last “.”) in a file name determines the file type.\r\n\r\n> Example:\r\n\timages17/image194.left.jpg\r\n\timages17/image194.right.jpg\r\n\timages17/image194.json\r\n\timages17/image12.left.jpg\r\n\timages17/image12.json\r\n\timages17/image12.right.jpg\r\n\timages3/image1459.left.jpg\r\n> \t…\r\n> When reading this with a WebDataset library, you would get the following two dictionaries back in sequence:\r\n\r\n { “__key__”: “images17/image194”, “left.jpg”: b”...”, “right.jpg”: b”...”, “json”: b”...”}\r\n { “__key__”: “images17/image12”, “left.jpg”: b”...”, “right.jpg”: b”...”, “json”: b”...”}\r\n",
"OK, the issue is different in the latter case: some files are suffixed as `.jpeg`, and others as `.jpg` :)\r\n\r\nIs it a limitation of the webdataset format, or of the datasets library @lhoestq? And could we be able to give a clearer error?"
] | 2024-05-07T13:09:02Z
| 2024-05-14T20:34:05Z
| null |
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
reported at https://huggingface.co/datasets/tbone5563/tar_images/discussions/1
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("tbone5563/tar_images")
Downloading data: 100%
1.41G/1.41G [00:48<00:00, 17.2MB/s]
Downloading data: 100%
619M/619M [00:11<00:00, 57.4MB/s]
Generating train split:
970/0 [00:02<00:00, 534.94 examples/s]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1747 _time = time.time()
-> 1748 for key, record in generator:
1749 if max_shard_size is not None and writer._num_bytes > max_shard_size:
7 frames
/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/webdataset/webdataset.py in _generate_examples(self, tar_paths, tar_iterators)
108 for field_name in image_field_names + audio_field_names:
--> 109 example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]}
110 yield f"{tar_idx}_{example_idx}", example
KeyError: 'png'
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
<ipython-input-2-8e0fbb7badc9> in <cell line: 3>()
1 from datasets import load_dataset
2
----> 3 ds = load_dataset("tbone5563/tar_images")
/usr/local/lib/python3.10/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2607
2608 # Download and prepare data
-> 2609 builder_instance.download_and_prepare(
2610 download_config=download_config,
2611 download_mode=download_mode,
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
1025 if num_proc is not None:
1026 prepare_split_kwargs["num_proc"] = num_proc
-> 1027 self._download_and_prepare(
1028 dl_manager=dl_manager,
1029 verification_mode=verification_mode,
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)
1787
1788 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):
-> 1789 super()._download_and_prepare(
1790 dl_manager,
1791 verification_mode,
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1120 try:
1121 # Prepare split will record examples associated to the split
-> 1122 self._prepare_split(split_generator, **prepare_split_kwargs)
1123 except OSError as e:
1124 raise OSError(
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size)
1625 job_id = 0
1626 with pbar:
-> 1627 for job_id, done, content in self._prepare_split_single(
1628 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1629 ):
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1782 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1783 e = e.__context__
-> 1784 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1785
1786 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
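To make the failure mode concrete, here is a minimal sketch (not the `datasets` implementation) of the WebDataset grouping rule described in the comments: everything before the first dot of the basename becomes the `__key__`, and the remainder becomes the field name, so an extra dot in the filename shifts `png` out of the expected place.
```python
# Minimal sketch of WebDataset key grouping (not the datasets implementation):
# the __key__ is everything before the FIRST dot of the basename, and the
# rest of the name becomes the field name for that example.
import os

def split_key(path: str):
    dirname, basename = os.path.split(path)
    key, _, field = basename.partition(".")
    return (os.path.join(dirname, key) if dirname else key), field

print(split_key("images17/image194.left.jpg"))
# ('images17/image194', 'left.jpg') -- as the spec intends

print(split_key("15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b.png"))
# ('15_Cohen_1-s2', '0-S0929664620300449-gr3_lrg-b.png')
# -> no 'png' field is ever produced, hence KeyError: 'png'
```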
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6880/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6880/timeline
| null |
reopened
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6244
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6244/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6244/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6244/events
|
https://github.com/huggingface/datasets/pull/6244
| 1,898,861,422
|
PR_kwDODunzps5adtD3
| 6,244
|
Add support for `fsspec>=2023.9.0`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006410 / 0.011353 (-0.004943) | 0.003995 / 0.011008 (-0.007013) | 0.083585 / 0.038508 (0.045076) | 0.074285 / 0.023109 (0.051176) | 0.307163 / 0.275898 (0.031265) | 0.344691 / 0.323480 (0.021212) | 0.004277 / 0.007986 (-0.003708) | 0.004192 / 0.004328 (-0.000136) | 0.065156 / 0.004250 (0.060905) | 0.056774 / 0.037052 (0.019721) | 0.315483 / 0.258489 (0.056994) | 0.361911 / 0.293841 (0.068070) | 0.030454 / 0.128546 (-0.098092) | 0.008600 / 0.075646 (-0.067047) | 0.286692 / 0.419271 (-0.132579) | 0.052354 / 0.043533 (0.008821) | 0.308997 / 0.255139 (0.053858) | 0.337847 / 0.283200 (0.054647) | 0.022459 / 0.141683 (-0.119224) | 1.482758 / 1.452155 (0.030604) | 1.572853 / 1.492716 (0.080137) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288603 / 0.018006 (0.270597) | 0.632903 / 0.000490 (0.632413) | 0.013702 / 0.000200 (0.013502) | 0.000284 / 0.000054 (0.000230) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028448 / 0.037411 (-0.008964) | 0.082441 / 0.014526 (0.067916) | 0.099048 / 0.176557 (-0.077508) | 0.154370 / 0.737135 (-0.582765) | 0.146143 / 0.296338 (-0.150195) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399250 / 0.215209 (0.184040) | 3.986683 / 2.077655 (1.909028) | 1.962606 / 1.504120 (0.458486) | 1.782653 / 1.541195 (0.241459) | 1.830251 / 1.468490 
(0.361761) | 0.492498 / 4.584777 (-4.092278) | 3.549581 / 3.745712 (-0.196131) | 3.200056 / 5.269862 (-2.069806) | 2.028109 / 4.565676 (-2.537568) | 0.058222 / 0.424275 (-0.366053) | 0.007629 / 0.007607 (0.000022) | 0.482083 / 0.226044 (0.256039) | 4.824728 / 2.268929 (2.555800) | 2.448772 / 55.444624 (-52.995852) | 2.079629 / 6.876477 (-4.796848) | 2.267739 / 2.142072 (0.125667) | 0.586712 / 4.805227 (-4.218515) | 0.134073 / 6.500664 (-6.366591) | 0.060565 / 0.075469 (-0.014904) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263244 / 1.841788 (-0.578544) | 18.964498 / 8.074308 (10.890190) | 14.125062 / 10.191392 (3.933670) | 0.167635 / 0.680424 (-0.512789) | 0.018469 / 0.534201 (-0.515732) | 0.390395 / 0.579283 (-0.188888) | 0.406055 / 0.434364 (-0.028309) | 0.460717 / 0.540337 (-0.079620) | 0.642746 / 1.386936 (-0.744190) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006637 / 0.011353 (-0.004716) | 0.003972 / 0.011008 (-0.007036) | 0.064569 / 0.038508 (0.026061) | 0.075450 / 0.023109 (0.052341) | 0.405250 / 0.275898 (0.129352) | 0.433530 / 0.323480 (0.110050) | 0.005625 / 0.007986 (-0.002361) | 0.004118 / 0.004328 (-0.000211) | 0.065092 / 0.004250 (0.060842) | 0.057979 / 0.037052 (0.020927) | 0.413732 / 0.258489 (0.155243) | 0.451983 / 0.293841 (0.158142) | 0.032170 / 0.128546 (-0.096377) | 0.008690 / 0.075646 (-0.066957) | 0.071792 / 0.419271 (-0.347479) | 0.048560 / 0.043533 (0.005027) | 0.410312 / 0.255139 (0.155173) | 0.427294 / 0.283200 (0.144095) | 0.023006 / 0.141683 (-0.118677) | 1.496319 / 1.452155 (0.044164) | 1.566744 / 1.492716 (0.074027) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266812 / 0.018006 (0.248805) | 0.540277 / 0.000490 (0.539788) | 0.008998 / 0.000200 (0.008799) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032496 / 0.037411 (-0.004915) | 0.091387 / 0.014526 (0.076861) | 0.107516 / 0.176557 (-0.069041) | 0.160019 / 0.737135 (-0.577116) | 0.107686 / 0.296338 (-0.188652) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433321 / 0.215209 (0.218111) | 4.330221 / 2.077655 (2.252566) | 2.367215 / 1.504120 (0.863095) | 2.192464 / 1.541195 (0.651269) | 2.200204 / 1.468490 (0.731714) | 0.488057 / 4.584777 (-4.096720) | 3.625429 / 3.745712 (-0.120283) | 3.282859 / 5.269862 (-1.987003) | 2.038716 / 4.565676 (-2.526960) | 0.057968 / 0.424275 (-0.366307) | 0.007753 / 0.007607 (0.000146) | 0.509133 / 0.226044 (0.283089) | 5.086445 / 2.268929 (2.817516) | 2.846017 / 55.444624 (-52.598607) | 2.469546 / 6.876477 (-4.406931) | 2.673218 / 2.142072 (0.531145) | 0.591228 / 4.805227 (-4.213999) | 0.131920 / 6.500664 (-6.368744) | 0.059967 / 0.075469 (-0.015502) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.375634 / 1.841788 (-0.466153) | 19.506752 / 8.074308 (11.432444) | 14.677876 / 10.191392 (4.486484) | 0.165071 / 0.680424 (-0.515353) | 0.020614 / 0.534201 (-0.513587) | 0.395967 / 0.579283 (-0.183316) | 0.424358 / 0.434364 (-0.010006) | 0.469954 / 0.540337 (-0.070384) | 0.643169 / 1.386936 (-0.743767) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006072 / 0.011353 (-0.005281) | 0.003691 / 0.011008 (-0.007318) | 0.081683 / 0.038508 (0.043175) | 0.059114 / 0.023109 (0.036005) | 0.317053 / 0.275898 (0.041155) | 0.357672 / 0.323480 (0.034192) | 0.003577 / 0.007986 (-0.004408) | 0.003890 / 0.004328 (-0.000438) | 0.063667 / 0.004250 (0.059417) | 0.048233 / 0.037052 (0.011181) | 0.322854 / 0.258489 (0.064365) | 0.368014 / 0.293841 (0.074173) | 0.027750 / 0.128546 (-0.100796) | 0.008137 / 0.075646 (-0.067509) | 0.263906 / 0.419271 (-0.155366) | 0.045402 / 0.043533 (0.001870) | 0.315414 / 0.255139 (0.060275) | 0.340906 / 0.283200 (0.057707) | 0.023475 / 0.141683 (-0.118208) | 1.443922 / 1.452155 (-0.008233) | 1.550332 / 1.492716 (0.057616) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211914 / 0.018006 (0.193908) | 0.423577 / 0.000490 (0.423088) | 0.003436 / 0.000200 (0.003236) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024675 / 0.037411 (-0.012737) | 0.072550 / 0.014526 (0.058024) | 0.084533 / 0.176557 (-0.092024) | 0.146106 / 0.737135 (-0.591029) | 0.085523 / 0.296338 (-0.210816) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403498 / 0.215209 (0.188289) | 4.019000 / 2.077655 (1.941345) | 1.984821 / 1.504120 (0.480701) | 1.805071 / 1.541195 (0.263876) | 1.860906 / 1.468490 
(0.392416) | 0.499570 / 4.584777 (-4.085207) | 3.088424 / 3.745712 (-0.657288) | 2.833693 / 5.269862 (-2.436169) | 1.869731 / 4.565676 (-2.695945) | 0.057606 / 0.424275 (-0.366669) | 0.006960 / 0.007607 (-0.000647) | 0.476085 / 0.226044 (0.250040) | 4.774063 / 2.268929 (2.505134) | 2.458079 / 55.444624 (-52.986545) | 2.106075 / 6.876477 (-4.770402) | 2.248373 / 2.142072 (0.106301) | 0.589767 / 4.805227 (-4.215460) | 0.124382 / 6.500664 (-6.376282) | 0.060705 / 0.075469 (-0.014764) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.287031 / 1.841788 (-0.554756) | 17.662455 / 8.074308 (9.588147) | 14.288812 / 10.191392 (4.097420) | 0.156168 / 0.680424 (-0.524256) | 0.016795 / 0.534201 (-0.517406) | 0.333726 / 0.579283 (-0.245557) | 0.362327 / 0.434364 (-0.072037) | 0.387773 / 0.540337 (-0.152564) | 0.547232 / 1.386936 (-0.839704) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006494 / 0.011353 (-0.004859) | 0.003762 / 0.011008 (-0.007247) | 0.062373 / 0.038508 (0.023864) | 0.066357 / 0.023109 (0.043247) | 0.448687 / 0.275898 (0.172789) | 0.482445 / 0.323480 (0.158965) | 0.004990 / 0.007986 (-0.002996) | 0.002945 / 0.004328 (-0.001384) | 0.062444 / 0.004250 (0.058194) | 0.051381 / 0.037052 (0.014329) | 0.449310 / 0.258489 (0.190821) | 0.483188 / 0.293841 (0.189347) | 0.029078 / 0.128546 (-0.099468) | 0.008146 / 0.075646 (-0.067501) | 0.067369 / 0.419271 (-0.351903) | 0.041732 / 0.043533 (-0.001801) | 0.451675 / 0.255139 (0.196536) | 0.470445 / 0.283200 (0.187246) | 0.021053 / 0.141683 (-0.120630) | 1.483627 / 1.452155 (0.031472) | 1.541594 / 1.492716 (0.048878) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210247 / 0.018006 (0.192240) | 0.424663 / 0.000490 (0.424173) | 0.005394 / 0.000200 (0.005194) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026894 / 0.037411 (-0.010517) | 0.081324 / 0.014526 (0.066798) | 0.091362 / 0.176557 (-0.085195) | 0.145602 / 0.737135 (-0.591533) | 0.091896 / 0.296338 (-0.204443) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.469662 / 0.215209 (0.254453) | 4.689495 / 2.077655 (2.611840) | 2.596462 / 1.504120 (1.092342) | 2.422584 / 1.541195 (0.881389) | 2.476710 / 1.468490 (1.008220) | 0.507049 / 4.584777 (-4.077728) | 3.185519 / 3.745712 (-0.560193) | 2.879842 / 5.269862 (-2.390019) | 1.882643 / 4.565676 (-2.683034) | 0.058046 / 0.424275 (-0.366229) | 0.006797 / 0.007607 (-0.000811) | 0.545245 / 0.226044 (0.319201) | 5.449248 / 2.268929 (3.180319) | 3.057341 / 55.444624 (-52.387283) | 2.728385 / 6.876477 (-4.148092) | 2.898945 / 2.142072 (0.756873) | 0.600035 / 4.805227 (-4.205192) | 0.126337 / 6.500664 (-6.374327) | 0.061333 / 0.075469 (-0.014136) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.332966 / 1.841788 (-0.508821) | 17.960805 / 8.074308 (9.886497) | 14.978838 / 10.191392 (4.787446) | 0.148852 / 0.680424 (-0.531572) | 0.018307 / 0.534201 (-0.515894) | 0.335234 / 0.579283 (-0.244050) | 0.389659 / 0.434364 (-0.044704) | 0.393259 / 0.540337 (-0.147078) | 0.549237 / 1.386936 (-0.837699) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008808 / 0.011353 (-0.002545) | 0.005001 / 0.011008 (-0.006008) | 0.110022 / 0.038508 (0.071514) | 0.078015 / 0.023109 (0.054906) | 0.384724 / 0.275898 (0.108826) | 0.441354 / 0.323480 (0.117874) | 0.005116 / 0.007986 (-0.002870) | 0.004308 / 0.004328 (-0.000020) | 0.081679 / 0.004250 (0.077429) | 0.061386 / 0.037052 (0.024333) | 0.398149 / 0.258489 (0.139660) | 0.464859 / 0.293841 (0.171018) | 0.047443 / 0.128546 (-0.081104) | 0.014693 / 0.075646 (-0.060954) | 0.365438 / 0.419271 (-0.053833) | 0.081689 / 0.043533 (0.038156) | 0.400458 / 0.255139 (0.145319) | 0.449958 / 0.283200 (0.166758) | 0.038266 / 0.141683 (-0.103417) | 1.795043 / 1.452155 (0.342888) | 1.908819 / 1.492716 (0.416102) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.297911 / 0.018006 (0.279905) | 0.601640 / 0.000490 (0.601150) | 0.015406 / 0.000200 (0.015206) | 0.000163 / 0.000054 (0.000108) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034520 / 0.037411 (-0.002891) | 0.092657 / 0.014526 (0.078131) | 0.113992 / 0.176557 (-0.062564) | 0.189075 / 0.737135 (-0.548061) | 0.106602 / 0.296338 (-0.189736) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.585838 / 0.215209 (0.370629) | 5.719281 / 2.077655 (3.641627) | 2.525914 / 1.504120 (1.021794) | 2.231908 / 1.541195 (0.690713) | 2.215272 / 1.468490 
(0.746782) | 0.814425 / 4.584777 (-3.770352) | 5.243406 / 3.745712 (1.497694) | 4.476642 / 5.269862 (-0.793220) | 2.929438 / 4.565676 (-1.636239) | 0.092070 / 0.424275 (-0.332205) | 0.009358 / 0.007607 (0.001751) | 0.713975 / 0.226044 (0.487931) | 6.948846 / 2.268929 (4.679918) | 3.361125 / 55.444624 (-52.083500) | 2.575224 / 6.876477 (-4.301253) | 2.783082 / 2.142072 (0.641009) | 1.016205 / 4.805227 (-3.789022) | 0.202578 / 6.500664 (-6.298086) | 0.076696 / 0.075469 (0.001227) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.650889 / 1.841788 (-0.190898) | 23.358273 / 8.074308 (15.283965) | 19.882450 / 10.191392 (9.691058) | 0.228971 / 0.680424 (-0.451453) | 0.027736 / 0.534201 (-0.506465) | 0.472405 / 0.579283 (-0.106878) | 0.581799 / 0.434364 (0.147435) | 0.533000 / 0.540337 (-0.007338) | 0.815588 / 1.386936 (-0.571348) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009151 / 0.011353 (-0.002202) | 0.005074 / 0.011008 (-0.005934) | 0.078709 / 0.038508 (0.040201) | 0.077696 / 0.023109 (0.054586) | 0.522356 / 0.275898 (0.246458) | 0.562345 / 0.323480 (0.238865) | 0.006411 / 0.007986 (-0.001575) | 0.004379 / 0.004328 (0.000051) | 0.082402 / 0.004250 (0.078151) | 0.064223 / 0.037052 (0.027170) | 0.518184 / 0.258489 (0.259695) | 0.566221 / 0.293841 (0.272380) | 0.046796 / 0.128546 (-0.081750) | 0.013987 / 0.075646 (-0.061659) | 0.094925 / 0.419271 (-0.324346) | 0.058810 / 0.043533 (0.015277) | 0.520252 / 0.255139 (0.265113) | 0.566403 / 0.283200 (0.283203) | 0.034720 / 0.141683 (-0.106963) | 1.796809 / 1.452155 (0.344654) | 1.913787 / 1.492716 (0.421070) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.317449 / 0.018006 (0.299443) | 0.620154 / 0.000490 (0.619664) | 0.007066 / 0.000200 (0.006866) | 0.000126 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035252 / 0.037411 (-0.002160) | 0.111648 / 0.014526 (0.097122) | 0.120692 / 0.176557 (-0.055864) | 0.193202 / 0.737135 (-0.543933) | 0.127905 / 0.296338 (-0.168434) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.661012 / 0.215209 (0.445803) | 6.626680 / 2.077655 (4.549026) | 3.243065 / 1.504120 (1.738945) | 2.904053 / 1.541195 (1.362858) | 2.880516 / 1.468490 (1.412026) | 0.875650 / 4.584777 (-3.709127) | 5.381993 / 3.745712 (1.636281) | 4.743997 / 5.269862 (-0.525864) | 3.020736 / 4.565676 (-1.544940) | 0.106573 / 0.424275 (-0.317702) | 0.011151 / 0.007607 (0.003544) | 0.821990 / 0.226044 (0.595946) | 8.225383 / 2.268929 (5.956454) | 3.963232 / 55.444624 (-51.481392) | 3.288916 / 6.876477 (-3.587561) | 3.579435 / 2.142072 (1.437363) | 1.043379 / 4.805227 (-3.761848) | 0.207508 / 6.500664 (-6.293156) | 0.085109 / 0.075469 (0.009640) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.723798 / 1.841788 (-0.117990) | 24.709848 / 8.074308 (16.635540) | 22.484674 / 10.191392 (12.293282) | 0.260357 / 0.680424 (-0.420067) | 0.033539 / 0.534201 (-0.500662) | 0.487814 / 0.579283 (-0.091469) | 0.610171 / 0.434364 (0.175807) | 0.585012 / 0.540337 (0.044674) | 0.803764 / 1.386936 (-0.583172) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006661 / 0.011353 (-0.004692) | 0.004022 / 0.011008 (-0.006987) | 0.084269 / 0.038508 (0.045760) | 0.070707 / 0.023109 (0.047598) | 0.315035 / 0.275898 (0.039137) | 0.339830 / 0.323480 (0.016350) | 0.003994 / 0.007986 (-0.003991) | 0.004129 / 0.004328 (-0.000199) | 0.065383 / 0.004250 (0.061133) | 0.055493 / 0.037052 (0.018441) | 0.320521 / 0.258489 (0.062032) | 0.354301 / 0.293841 (0.060460) | 0.031177 / 0.128546 (-0.097370) | 0.008724 / 0.075646 (-0.066922) | 0.288298 / 0.419271 (-0.130974) | 0.052418 / 0.043533 (0.008885) | 0.319122 / 0.255139 (0.063983) | 0.335859 / 0.283200 (0.052659) | 0.026260 / 0.141683 (-0.115423) | 1.450039 / 1.452155 (-0.002115) | 1.545172 / 1.492716 (0.052455) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234232 / 0.018006 (0.216226) | 0.454983 / 0.000490 (0.454493) | 0.007590 / 0.000200 (0.007390) | 0.000550 / 0.000054 (0.000495) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028714 / 0.037411 (-0.008698) | 0.083686 / 0.014526 (0.069160) | 0.162986 / 0.176557 (-0.013570) | 0.167574 / 0.737135 (-0.569561) | 0.273158 / 0.296338 (-0.023180) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.388275 / 0.215209 (0.173066) | 3.862034 / 2.077655 (1.784379) | 1.843561 / 1.504120 (0.339441) | 1.675224 / 1.541195 (0.134029) | 1.730394 / 1.468490 
(0.261904) | 0.495259 / 4.584777 (-4.089518) | 3.627155 / 3.745712 (-0.118557) | 3.290590 / 5.269862 (-1.979272) | 2.032432 / 4.565676 (-2.533245) | 0.058212 / 0.424275 (-0.366063) | 0.007815 / 0.007607 (0.000208) | 0.460625 / 0.226044 (0.234580) | 4.616845 / 2.268929 (2.347916) | 2.339280 / 55.444624 (-53.105344) | 1.957216 / 6.876477 (-4.919261) | 2.129511 / 2.142072 (-0.012562) | 0.591782 / 4.805227 (-4.213446) | 0.136391 / 6.500664 (-6.364273) | 0.059627 / 0.075469 (-0.015842) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.278998 / 1.841788 (-0.562789) | 18.485496 / 8.074308 (10.411188) | 14.161273 / 10.191392 (3.969881) | 0.164346 / 0.680424 (-0.516078) | 0.018144 / 0.534201 (-0.516057) | 0.391601 / 0.579283 (-0.187682) | 0.424391 / 0.434364 (-0.009973) | 0.458209 / 0.540337 (-0.082129) | 0.645124 / 1.386936 (-0.741812) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006799 / 0.011353 (-0.004554) | 0.004023 / 0.011008 (-0.006985) | 0.065206 / 0.038508 (0.026698) | 0.074386 / 0.023109 (0.051277) | 0.437399 / 0.275898 (0.161501) | 0.467382 / 0.323480 (0.143903) | 0.005467 / 0.007986 (-0.002519) | 0.003324 / 0.004328 (-0.001005) | 0.064289 / 0.004250 (0.060039) | 0.057257 / 0.037052 (0.020205) | 0.440035 / 0.258489 (0.181546) | 0.477138 / 0.293841 (0.183298) | 0.032171 / 0.128546 (-0.096375) | 0.008400 / 0.075646 (-0.067247) | 0.070877 / 0.419271 (-0.348395) | 0.048180 / 0.043533 (0.004648) | 0.441274 / 0.255139 (0.186135) | 0.461386 / 0.283200 (0.178187) | 0.022576 / 0.141683 (-0.119106) | 1.520914 / 1.452155 (0.068759) | 1.575593 / 1.492716 (0.082877) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221551 / 0.018006 (0.203545) | 0.447213 / 0.000490 (0.446723) | 0.004435 / 0.000200 (0.004235) | 0.000099 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032123 / 0.037411 (-0.005288) | 0.091809 / 0.014526 (0.077283) | 0.103938 / 0.176557 (-0.072618) | 0.156878 / 0.737135 (-0.580258) | 0.105071 / 0.296338 (-0.191268) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430389 / 0.215209 (0.215180) | 4.293496 / 2.077655 (2.215841) | 2.292801 / 1.504120 (0.788681) | 2.135320 / 1.541195 (0.594126) | 2.195720 / 1.468490 (0.727229) | 0.493277 / 4.584777 (-4.091500) | 3.685617 / 3.745712 (-0.060096) | 3.278897 / 5.269862 (-1.990965) | 2.036939 / 4.565676 (-2.528737) | 0.058766 / 0.424275 (-0.365509) | 0.007783 / 0.007607 (0.000176) | 0.511165 / 0.226044 (0.285120) | 5.126757 / 2.268929 (2.857829) | 2.756690 / 55.444624 (-52.687935) | 2.421745 / 6.876477 (-4.454732) | 2.597249 / 2.142072 (0.455177) | 0.647206 / 4.805227 (-4.158021) | 0.143392 / 6.500664 (-6.357273) | 0.060110 / 0.075469 (-0.015359) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.340289 / 1.841788 (-0.501499) | 19.057620 / 8.074308 (10.983312) | 14.832892 / 10.191392 (4.641500) | 0.167730 / 0.680424 (-0.512694) | 0.020178 / 0.534201 (-0.514023) | 0.394060 / 0.579283 (-0.185223) | 0.433976 / 0.434364 (-0.000388) | 0.474417 / 0.540337 (-0.065921) | 0.640653 / 1.386936 (-0.746283) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007661 / 0.011353 (-0.003692) | 0.004541 / 0.011008 (-0.006467) | 0.100547 / 0.038508 (0.062039) | 0.084257 / 0.023109 (0.061148) | 0.377627 / 0.275898 (0.101729) | 0.433764 / 0.323480 (0.110284) | 0.005995 / 0.007986 (-0.001990) | 0.003810 / 0.004328 (-0.000518) | 0.076409 / 0.004250 (0.072158) | 0.063411 / 0.037052 (0.026359) | 0.382504 / 0.258489 (0.124015) | 0.449721 / 0.293841 (0.155880) | 0.036499 / 0.128546 (-0.092047) | 0.009942 / 0.075646 (-0.065705) | 0.343839 / 0.419271 (-0.075433) | 0.062147 / 0.043533 (0.018614) | 0.383244 / 0.255139 (0.128105) | 0.415606 / 0.283200 (0.132406) | 0.027475 / 0.141683 (-0.114207) | 1.740413 / 1.452155 (0.288258) | 1.862210 / 1.492716 (0.369493) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260064 / 0.018006 (0.242058) | 0.499001 / 0.000490 (0.498511) | 0.015811 / 0.000200 (0.015611) | 0.000119 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033599 / 0.037411 (-0.003812) | 0.099354 / 0.014526 (0.084828) | 0.114693 / 0.176557 (-0.061864) | 0.180231 / 0.737135 (-0.556904) | 0.114715 / 0.296338 (-0.181623) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.459884 / 0.215209 (0.244675) | 4.580806 / 2.077655 (2.503151) | 2.270770 / 1.504120 (0.766650) | 2.077127 / 1.541195 (0.535932) | 2.167175 / 1.468490 
(0.698685) | 0.570593 / 4.584777 (-4.014184) | 4.120926 / 3.745712 (0.375214) | 3.817595 / 5.269862 (-1.452267) | 2.404782 / 4.565676 (-2.160894) | 0.067972 / 0.424275 (-0.356304) | 0.009378 / 0.007607 (0.001771) | 0.549642 / 0.226044 (0.323597) | 5.490369 / 2.268929 (3.221440) | 2.905264 / 55.444624 (-52.539361) | 2.452935 / 6.876477 (-4.423542) | 2.700760 / 2.142072 (0.558688) | 0.700407 / 4.805227 (-4.104820) | 0.159349 / 6.500664 (-6.341315) | 0.074605 / 0.075469 (-0.000864) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.517803 / 1.841788 (-0.323985) | 22.343700 / 8.074308 (14.269392) | 16.411639 / 10.191392 (6.220247) | 0.169816 / 0.680424 (-0.510608) | 0.021532 / 0.534201 (-0.512668) | 0.470161 / 0.579283 (-0.109122) | 0.473412 / 0.434364 (0.039048) | 0.539690 / 0.540337 (-0.000647) | 0.774011 / 1.386936 (-0.612925) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007629 / 0.011353 (-0.003724) | 0.004651 / 0.011008 (-0.006357) | 0.075162 / 0.038508 (0.036654) | 0.085365 / 0.023109 (0.062256) | 0.493272 / 0.275898 (0.217374) | 0.535776 / 0.323480 (0.212296) | 0.006323 / 0.007986 (-0.001663) | 0.003785 / 0.004328 (-0.000544) | 0.076161 / 0.004250 (0.071911) | 0.065982 / 0.037052 (0.028929) | 0.513355 / 0.258489 (0.254866) | 0.549219 / 0.293841 (0.255378) | 0.038052 / 0.128546 (-0.090494) | 0.010055 / 0.075646 (-0.065592) | 0.083744 / 0.419271 (-0.335527) | 0.056708 / 0.043533 (0.013175) | 0.496273 / 0.255139 (0.241135) | 0.523709 / 0.283200 (0.240509) | 0.026502 / 0.141683 (-0.115181) | 1.793032 / 1.452155 (0.340877) | 1.870534 / 1.492716 (0.377817) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252288 / 0.018006 (0.234281) | 0.490380 / 0.000490 (0.489890) | 0.005884 / 0.000200 (0.005684) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038238 / 0.037411 (0.000827) | 0.110010 / 0.014526 (0.095485) | 0.125497 / 0.176557 (-0.051059) | 0.188154 / 0.737135 (-0.548981) | 0.126112 / 0.296338 (-0.170227) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.515837 / 0.215209 (0.300628) | 5.135153 / 2.077655 (3.057498) | 2.761740 / 1.504120 (1.257620) | 2.552718 / 1.541195 (1.011523) | 2.636425 / 1.468490 (1.167935) | 0.588442 / 4.584777 (-3.996335) | 4.220833 / 3.745712 (0.475120) | 3.874637 / 5.269862 (-1.395225) | 2.424668 / 4.565676 (-2.141009) | 0.069979 / 0.424275 (-0.354296) | 0.009349 / 0.007607 (0.001742) | 0.608936 / 0.226044 (0.382891) | 6.081209 / 2.268929 (3.812280) | 3.348067 / 55.444624 (-52.096557) | 2.919130 / 6.876477 (-3.957347) | 3.159093 / 2.142072 (1.017020) | 0.704059 / 4.805227 (-4.101169) | 0.158417 / 6.500664 (-6.342247) | 0.071321 / 0.075469 (-0.004148) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.595287 / 1.841788 (-0.246501) | 23.096619 / 8.074308 (15.022311) | 17.258041 / 10.191392 (7.066649) | 0.186197 / 0.680424 (-0.494227) | 0.023633 / 0.534201 (-0.510567) | 0.472181 / 0.579283 (-0.107102) | 0.493817 / 0.434364 (0.059453) | 0.567657 / 0.540337 (0.027320) | 0.793789 / 1.386936 (-0.593147) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007084 / 0.011353 (-0.004268) | 0.004093 / 0.011008 (-0.006915) | 0.086395 / 0.038508 (0.047887) | 0.087734 / 0.023109 (0.064625) | 0.356936 / 0.275898 (0.081038) | 0.386413 / 0.323480 (0.062933) | 0.005531 / 0.007986 (-0.002454) | 0.003462 / 0.004328 (-0.000866) | 0.065503 / 0.004250 (0.061252) | 0.058973 / 0.037052 (0.021920) | 0.354151 / 0.258489 (0.095662) | 0.398812 / 0.293841 (0.104971) | 0.031508 / 0.128546 (-0.097038) | 0.008537 / 0.075646 (-0.067109) | 0.290942 / 0.419271 (-0.128329) | 0.053537 / 0.043533 (0.010004) | 0.352067 / 0.255139 (0.096928) | 0.375142 / 0.283200 (0.091943) | 0.025658 / 0.141683 (-0.116025) | 1.468496 / 1.452155 (0.016341) | 1.556926 / 1.492716 (0.064210) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238858 / 0.018006 (0.220852) | 0.460018 / 0.000490 (0.459528) | 0.009613 / 0.000200 (0.009414) | 0.000326 / 0.000054 (0.000272) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030333 / 0.037411 (-0.007078) | 0.088431 / 0.014526 (0.073905) | 0.098130 / 0.176557 (-0.078427) | 0.155160 / 0.737135 (-0.581975) | 0.099963 / 0.296338 (-0.196375) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.385769 / 0.215209 (0.170560) | 3.836723 / 2.077655 (1.759069) | 1.861065 / 1.504120 (0.356945) | 1.685159 / 1.541195 (0.143965) | 1.780679 / 1.468490 
(0.312189) | 0.491865 / 4.584777 (-4.092912) | 3.581139 / 3.745712 (-0.164573) | 3.366278 / 5.269862 (-1.903584) | 2.093094 / 4.565676 (-2.472583) | 0.058063 / 0.424275 (-0.366212) | 0.007903 / 0.007607 (0.000296) | 0.464866 / 0.226044 (0.238821) | 4.647754 / 2.268929 (2.378825) | 2.316466 / 55.444624 (-53.128158) | 1.984079 / 6.876477 (-4.892398) | 2.235020 / 2.142072 (0.092948) | 0.592591 / 4.805227 (-4.212636) | 0.135586 / 6.500664 (-6.365078) | 0.061434 / 0.075469 (-0.014035) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.282940 / 1.841788 (-0.558848) | 19.635975 / 8.074308 (11.561667) | 14.426135 / 10.191392 (4.234743) | 0.166732 / 0.680424 (-0.513692) | 0.018438 / 0.534201 (-0.515763) | 0.393173 / 0.579283 (-0.186110) | 0.417291 / 0.434364 (-0.017073) | 0.459188 / 0.540337 (-0.081149) | 0.632568 / 1.386936 (-0.754368) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007166 / 0.011353 (-0.004187) | 0.004254 / 0.011008 (-0.006754) | 0.064667 / 0.038508 (0.026159) | 0.085142 / 0.023109 (0.062033) | 0.410081 / 0.275898 (0.134183) | 0.445803 / 0.323480 (0.122323) | 0.005600 / 0.007986 (-0.002385) | 0.003520 / 0.004328 (-0.000809) | 0.064148 / 0.004250 (0.059897) | 0.059869 / 0.037052 (0.022817) | 0.407439 / 0.258489 (0.148950) | 0.451169 / 0.293841 (0.157329) | 0.032619 / 0.128546 (-0.095927) | 0.008706 / 0.075646 (-0.066940) | 0.071230 / 0.419271 (-0.348041) | 0.048499 / 0.043533 (0.004966) | 0.416401 / 0.255139 (0.161262) | 0.430737 / 0.283200 (0.147537) | 0.022511 / 0.141683 (-0.119172) | 1.517296 / 1.452155 (0.065141) | 1.581704 / 1.492716 (0.088988) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220738 / 0.018006 (0.202732) | 0.454026 / 0.000490 (0.453536) | 0.004695 / 0.000200 (0.004495) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033202 / 0.037411 (-0.004209) | 0.097506 / 0.014526 (0.082980) | 0.106661 / 0.176557 (-0.069896) | 0.160554 / 0.737135 (-0.576581) | 0.109203 / 0.296338 (-0.187135) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432013 / 0.215209 (0.216804) | 4.310399 / 2.077655 (2.232744) | 2.296529 / 1.504120 (0.792409) | 2.139929 / 1.541195 (0.598734) | 2.227432 / 1.468490 (0.758942) | 0.493697 / 4.584777 (-4.091080) | 3.639877 / 3.745712 (-0.105835) | 3.323165 / 5.269862 (-1.946697) | 2.084527 / 4.565676 (-2.481150) | 0.058812 / 0.424275 (-0.365463) | 0.007813 / 0.007607 (0.000206) | 0.512366 / 0.226044 (0.286321) | 5.119660 / 2.268929 (2.850732) | 2.783819 / 55.444624 (-52.660806) | 2.490669 / 6.876477 (-4.385808) | 2.696653 / 2.142072 (0.554581) | 0.627161 / 4.805227 (-4.178066) | 0.137032 / 6.500664 (-6.363632) | 0.064040 / 0.075469 (-0.011429) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369578 / 1.841788 (-0.472210) | 20.421182 / 8.074308 (12.346873) | 15.719347 / 10.191392 (5.527955) | 0.166150 / 0.680424 (-0.514274) | 0.020262 / 0.534201 (-0.513939) | 0.395645 / 0.579283 (-0.183638) | 0.430363 / 0.434364 (-0.004001) | 0.477843 / 0.540337 (-0.062494) | 0.638501 / 1.386936 (-0.748435) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006141 / 0.011353 (-0.005211) | 0.003683 / 0.011008 (-0.007325) | 0.081127 / 0.038508 (0.042618) | 0.064285 / 0.023109 (0.041176) | 0.323038 / 0.275898 (0.047140) | 0.347280 / 0.323480 (0.023800) | 0.003518 / 0.007986 (-0.004467) | 0.002958 / 0.004328 (-0.001370) | 0.063093 / 0.004250 (0.058843) | 0.050682 / 0.037052 (0.013629) | 0.321222 / 0.258489 (0.062733) | 0.359266 / 0.293841 (0.065425) | 0.027515 / 0.128546 (-0.101032) | 0.007964 / 0.075646 (-0.067682) | 0.261305 / 0.419271 (-0.157966) | 0.044897 / 0.043533 (0.001365) | 0.320684 / 0.255139 (0.065545) | 0.335722 / 0.283200 (0.052522) | 0.023378 / 0.141683 (-0.118305) | 1.418211 / 1.452155 (-0.033943) | 1.523728 / 1.492716 (0.031011) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222316 / 0.018006 (0.204310) | 0.426943 / 0.000490 (0.426454) | 0.008785 / 0.000200 (0.008585) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024716 / 0.037411 (-0.012695) | 0.075341 / 0.014526 (0.060816) | 0.089532 / 0.176557 (-0.087024) | 0.147638 / 0.737135 (-0.589498) | 0.085697 / 0.296338 (-0.210641) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396395 / 0.215209 (0.181186) | 3.947280 / 2.077655 (1.869625) | 1.894762 / 1.504120 (0.390642) | 1.712094 / 1.541195 (0.170899) | 1.779049 / 1.468490 
(0.310559) | 0.509206 / 4.584777 (-4.075571) | 3.073951 / 3.745712 (-0.671761) | 2.886826 / 5.269862 (-2.383035) | 1.894444 / 4.565676 (-2.671232) | 0.059519 / 0.424275 (-0.364756) | 0.006951 / 0.007607 (-0.000656) | 0.468213 / 0.226044 (0.242169) | 4.667134 / 2.268929 (2.398206) | 2.342516 / 55.444624 (-53.102108) | 1.992047 / 6.876477 (-4.884430) | 2.142059 / 2.142072 (-0.000014) | 0.600507 / 4.805227 (-4.204720) | 0.128982 / 6.500664 (-6.371682) | 0.062100 / 0.075469 (-0.013369) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.234500 / 1.841788 (-0.607288) | 17.951646 / 8.074308 (9.877338) | 13.862763 / 10.191392 (3.671371) | 0.143133 / 0.680424 (-0.537291) | 0.016643 / 0.534201 (-0.517558) | 0.333174 / 0.579283 (-0.246109) | 0.366956 / 0.434364 (-0.067408) | 0.384569 / 0.540337 (-0.155769) | 0.546830 / 1.386936 (-0.840106) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006146 / 0.011353 (-0.005207) | 0.003725 / 0.011008 (-0.007283) | 0.062099 / 0.038508 (0.023591) | 0.064117 / 0.023109 (0.041008) | 0.456100 / 0.275898 (0.180202) | 0.490794 / 0.323480 (0.167314) | 0.005652 / 0.007986 (-0.002334) | 0.002897 / 0.004328 (-0.001432) | 0.061909 / 0.004250 (0.057659) | 0.050634 / 0.037052 (0.013582) | 0.454422 / 0.258489 (0.195933) | 0.493208 / 0.293841 (0.199367) | 0.028822 / 0.128546 (-0.099724) | 0.008115 / 0.075646 (-0.067531) | 0.067214 / 0.419271 (-0.352058) | 0.041529 / 0.043533 (-0.002004) | 0.458016 / 0.255139 (0.202877) | 0.476059 / 0.283200 (0.192859) | 0.019926 / 0.141683 (-0.121757) | 1.465345 / 1.452155 (0.013190) | 1.533518 / 1.492716 (0.040802) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218830 / 0.018006 (0.200823) | 0.418869 / 0.000490 (0.418380) | 0.005154 / 0.000200 (0.004954) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027648 / 0.037411 (-0.009763) | 0.083842 / 0.014526 (0.069316) | 0.092300 / 0.176557 (-0.084257) | 0.146098 / 0.737135 (-0.591037) | 0.093441 / 0.296338 (-0.202898) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.464426 / 0.215209 (0.249217) | 4.632705 / 2.077655 (2.555051) | 2.642091 / 1.504120 (1.137971) | 2.461768 / 1.541195 (0.920573) | 2.535554 / 1.468490 (1.067064) | 0.507506 / 4.584777 (-4.077271) | 3.095485 / 3.745712 (-0.650227) | 2.884261 / 5.269862 (-2.385601) | 1.908943 / 4.565676 (-2.656734) | 0.058622 / 0.424275 (-0.365653) | 0.006892 / 0.007607 (-0.000715) | 0.536045 / 0.226044 (0.310001) | 5.377448 / 2.268929 (3.108519) | 3.076023 / 55.444624 (-52.368602) | 2.745586 / 6.876477 (-4.130890) | 2.939582 / 2.142072 (0.797510) | 0.595639 / 4.805227 (-4.209589) | 0.125086 / 6.500664 (-6.375578) | 0.061075 / 0.075469 (-0.014394) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.342820 / 1.841788 (-0.498968) | 18.326240 / 8.074308 (10.251932) | 15.007094 / 10.191392 (4.815702) | 0.133037 / 0.680424 (-0.547387) | 0.018702 / 0.534201 (-0.515499) | 0.330245 / 0.579283 (-0.249038) | 0.381494 / 0.434364 (-0.052870) | 0.393705 / 0.540337 (-0.146633) | 0.533676 / 1.386936 (-0.853260) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007644 / 0.011353 (-0.003709) | 0.004759 / 0.011008 (-0.006249) | 0.100569 / 0.038508 (0.062061) | 0.089645 / 0.023109 (0.066536) | 0.376679 / 0.275898 (0.100781) | 0.413214 / 0.323480 (0.089735) | 0.006087 / 0.007986 (-0.001899) | 0.003832 / 0.004328 (-0.000496) | 0.075892 / 0.004250 (0.071641) | 0.064635 / 0.037052 (0.027582) | 0.376874 / 0.258489 (0.118385) | 0.436756 / 0.293841 (0.142915) | 0.036372 / 0.128546 (-0.092174) | 0.010047 / 0.075646 (-0.065599) | 0.345073 / 0.419271 (-0.074198) | 0.062092 / 0.043533 (0.018559) | 0.380503 / 0.255139 (0.125364) | 0.414800 / 0.283200 (0.131600) | 0.028274 / 0.141683 (-0.113409) | 1.732463 / 1.452155 (0.280308) | 1.859049 / 1.492716 (0.366333) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267129 / 0.018006 (0.249123) | 0.509109 / 0.000490 (0.508619) | 0.012329 / 0.000200 (0.012130) | 0.000432 / 0.000054 (0.000377) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033773 / 0.037411 (-0.003638) | 0.102800 / 0.014526 (0.088274) | 0.114256 / 0.176557 (-0.062300) | 0.182048 / 0.737135 (-0.555087) | 0.118225 / 0.296338 (-0.178113) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457553 / 0.215209 (0.242344) | 4.588212 / 2.077655 (2.510557) | 2.184138 / 1.504120 (0.680018) | 2.003570 / 1.541195 (0.462375) | 2.093217 / 1.468490 
(0.624727) | 0.585679 / 4.584777 (-3.999098) | 4.175319 / 3.745712 (0.429607) | 3.914168 / 5.269862 (-1.355693) | 2.452992 / 4.565676 (-2.112684) | 0.068363 / 0.424275 (-0.355912) | 0.009314 / 0.007607 (0.001707) | 0.543640 / 0.226044 (0.317595) | 5.440853 / 2.268929 (3.171925) | 2.782415 / 55.444624 (-52.662210) | 2.332359 / 6.876477 (-4.544118) | 2.628520 / 2.142072 (0.486448) | 0.696838 / 4.805227 (-4.108389) | 0.160653 / 6.500664 (-6.340012) | 0.075599 / 0.075469 (0.000130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.545305 / 1.841788 (-0.296483) | 23.073174 / 8.074308 (14.998866) | 16.974977 / 10.191392 (6.783585) | 0.183719 / 0.680424 (-0.496705) | 0.021633 / 0.534201 (-0.512568) | 0.471202 / 0.579283 (-0.108081) | 0.479385 / 0.434364 (0.045021) | 0.550872 / 0.540337 (0.010535) | 0.766825 / 1.386936 (-0.620111) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007918 / 0.011353 (-0.003435) | 0.004793 / 0.011008 (-0.006215) | 0.077273 / 0.038508 (0.038765) | 0.092079 / 0.023109 (0.068969) | 0.483269 / 0.275898 (0.207371) | 0.524919 / 0.323480 (0.201439) | 0.006273 / 0.007986 (-0.001713) | 0.004018 / 0.004328 (-0.000310) | 0.077188 / 0.004250 (0.072937) | 0.067891 / 0.037052 (0.030839) | 0.478531 / 0.258489 (0.220042) | 0.526956 / 0.293841 (0.233115) | 0.038309 / 0.128546 (-0.090237) | 0.010133 / 0.075646 (-0.065513) | 0.083892 / 0.419271 (-0.335379) | 0.057369 / 0.043533 (0.013836) | 0.509427 / 0.255139 (0.254288) | 0.506574 / 0.283200 (0.223374) | 0.027987 / 0.141683 (-0.113696) | 1.897469 / 1.452155 (0.445314) | 1.893102 / 1.492716 (0.400385) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243003 / 0.018006 (0.224997) | 0.500267 / 0.000490 (0.499777) | 0.007442 / 0.000200 (0.007242) | 0.000110 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039266 / 0.037411 (0.001855) | 0.114438 / 0.014526 (0.099912) | 0.124528 / 0.176557 (-0.052029) | 0.189399 / 0.737135 (-0.547736) | 0.126703 / 0.296338 (-0.169635) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.518139 / 0.215209 (0.302930) | 5.162058 / 2.077655 (3.084403) | 2.835111 / 1.504120 (1.330991) | 2.640919 / 1.541195 (1.099724) | 2.736800 / 1.468490 (1.268310) | 0.582813 / 4.584777 (-4.001964) | 4.246269 / 3.745712 (0.500557) | 3.891161 / 5.269862 (-1.378701) | 2.445392 / 4.565676 (-2.120285) | 0.068943 / 0.424275 (-0.355332) | 0.009248 / 0.007607 (0.001641) | 0.604859 / 0.226044 (0.378815) | 6.030660 / 2.268929 (3.761731) | 3.409778 / 55.444624 (-52.034846) | 2.990488 / 6.876477 (-3.885988) | 3.281317 / 2.142072 (1.139245) | 0.697705 / 4.805227 (-4.107523) | 0.159502 / 6.500664 (-6.341162) | 0.072471 / 0.075469 (-0.002999) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.625428 / 1.841788 (-0.216360) | 23.602509 / 8.074308 (15.528201) | 18.091474 / 10.191392 (7.900082) | 0.172816 / 0.680424 (-0.507608) | 0.023708 / 0.534201 (-0.510493) | 0.473768 / 0.579283 (-0.105515) | 0.493713 / 0.434364 (0.059349) | 0.566326 / 0.540337 (0.025989) | 0.788670 / 1.386936 (-0.598266) |\n\n</details>\n</details>\n\n\n",
"> Thanks. Any comment on my comment below?\r\n> \r\n> >Maybe we should update the docstring of get_data_patterns accordingly? Currently it only gives examples of outputs with ** not in a single path segment (i.e. not with a / as prefix or suffix).\r\n\r\nYea right we need to update it indeed, the outputs are the ones from older versions of fsspec, and from older patterns that we don't use anymore.\r\n\r\nIn general in docstrings I also think we should encourage users to use `**/*` instead of `**` (which has a behavior that is unique to fsspec)",
"Also just noticed that `KEYWORDS_IN_DIR_NAME_BASE_PATTERNS` seems to include `KEYWORDS_IN_FILENAME_BASE_PATTERNS`. I guess we can try to remove the filename one in another PR to remove this redundancy \r\n\r\n(noticed this by checking that the data pattern is the same for both the dir name and filename examples in the get_data_patterns docstring)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006922 / 0.011353 (-0.004431) | 0.004459 / 0.011008 (-0.006549) | 0.084742 / 0.038508 (0.046234) | 0.089002 / 0.023109 (0.065893) | 0.310886 / 0.275898 (0.034988) | 0.340518 / 0.323480 (0.017038) | 0.007011 / 0.007986 (-0.000975) | 0.004566 / 0.004328 (0.000237) | 0.067260 / 0.004250 (0.063009) | 0.066349 / 0.037052 (0.029297) | 0.324029 / 0.258489 (0.065540) | 0.373785 / 0.293841 (0.079944) | 0.031780 / 0.128546 (-0.096766) | 0.009208 / 0.075646 (-0.066438) | 0.288871 / 0.419271 (-0.130401) | 0.054548 / 0.043533 (0.011015) | 0.313344 / 0.255139 (0.058205) | 0.336430 / 0.283200 (0.053231) | 0.029037 / 0.141683 (-0.112646) | 1.483797 / 1.452155 (0.031642) | 1.581884 / 1.492716 (0.089167) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.370520 / 0.018006 (0.352514) | 0.796720 / 0.000490 (0.796230) | 0.009329 / 0.000200 (0.009129) | 0.000109 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033002 / 0.037411 (-0.004410) | 0.083442 / 0.014526 (0.068916) | 0.106468 / 0.176557 (-0.070088) | 0.165315 / 0.737135 (-0.571820) | 0.103048 / 0.296338 (-0.193291) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.386800 / 0.215209 (0.171591) | 3.843312 / 2.077655 (1.765658) | 1.848953 / 1.504120 (0.344834) | 1.679508 / 1.541195 (0.138313) | 1.733578 / 1.468490 
(0.265088) | 0.488455 / 4.584777 (-4.096322) | 3.613594 / 3.745712 (-0.132118) | 3.533334 / 5.269862 (-1.736528) | 2.176216 / 4.565676 (-2.389460) | 0.056915 / 0.424275 (-0.367360) | 0.007349 / 0.007607 (-0.000258) | 0.465132 / 0.226044 (0.239088) | 4.638479 / 2.268929 (2.369550) | 2.354741 / 55.444624 (-53.089883) | 1.991777 / 6.876477 (-4.884700) | 2.249823 / 2.142072 (0.107751) | 0.582748 / 4.805227 (-4.222480) | 0.133829 / 6.500664 (-6.366835) | 0.060949 / 0.075469 (-0.014520) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.252027 / 1.841788 (-0.589760) | 20.660234 / 8.074308 (12.585926) | 14.328496 / 10.191392 (4.137104) | 0.164872 / 0.680424 (-0.515552) | 0.018867 / 0.534201 (-0.515334) | 0.392850 / 0.579283 (-0.186433) | 0.425684 / 0.434364 (-0.008679) | 0.461776 / 0.540337 (-0.078562) | 0.663688 / 1.386936 (-0.723248) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007010 / 0.011353 (-0.004343) | 0.004791 / 0.011008 (-0.006217) | 0.064738 / 0.038508 (0.026230) | 0.088648 / 0.023109 (0.065539) | 0.418106 / 0.275898 (0.142208) | 0.446767 / 0.323480 (0.123287) | 0.006761 / 0.007986 (-0.001224) | 0.004649 / 0.004328 (0.000320) | 0.066345 / 0.004250 (0.062094) | 0.068326 / 0.037052 (0.031274) | 0.423426 / 0.258489 (0.164937) | 0.463160 / 0.293841 (0.169319) | 0.032689 / 0.128546 (-0.095858) | 0.009299 / 0.075646 (-0.066347) | 0.071321 / 0.419271 (-0.347951) | 0.048752 / 0.043533 (0.005219) | 0.418932 / 0.255139 (0.163793) | 0.440673 / 0.283200 (0.157473) | 0.027898 / 0.141683 (-0.113785) | 1.531860 / 1.452155 (0.079705) | 1.620456 / 1.492716 (0.127739) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.354917 / 0.018006 (0.336911) | 0.792432 / 0.000490 (0.791943) | 0.006626 / 0.000200 (0.006426) | 0.000124 / 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036190 / 0.037411 (-0.001222) | 0.093052 / 0.014526 (0.078526) | 0.111927 / 0.176557 (-0.064629) | 0.165571 / 0.737135 (-0.571564) | 0.112159 / 0.296338 (-0.184180) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437798 / 0.215209 (0.222589) | 4.367166 / 2.077655 (2.289511) | 2.343292 / 1.504120 (0.839172) | 2.169298 / 1.541195 (0.628103) | 2.224471 / 1.468490 (0.755981) | 0.487317 / 4.584777 (-4.097460) | 3.627825 / 3.745712 (-0.117887) | 3.500914 / 5.269862 (-1.768947) | 2.175862 / 4.565676 (-2.389815) | 0.057975 / 0.424275 (-0.366300) | 0.007509 / 0.007607 (-0.000098) | 0.517389 / 0.226044 (0.291345) | 5.169694 / 2.268929 (2.900766) | 2.850993 / 55.444624 (-52.593631) | 2.473111 / 6.876477 (-4.403366) | 2.746731 / 2.142072 (0.604659) | 0.586597 / 4.805227 (-4.218630) | 0.134082 / 6.500664 (-6.366582) | 0.061035 / 0.075469 (-0.014434) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.375186 / 1.841788 (-0.466602) | 20.960817 / 8.074308 (12.886509) | 15.035071 / 10.191392 (4.843679) | 0.169494 / 0.680424 (-0.510930) | 0.020654 / 0.534201 (-0.513547) | 0.398047 / 0.579283 (-0.181236) | 0.438117 / 0.434364 (0.003753) | 0.483896 / 0.540337 (-0.056441) | 0.690728 / 1.386936 (-0.696208) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006892 / 0.011353 (-0.004461) | 0.004087 / 0.011008 (-0.006921) | 0.084695 / 0.038508 (0.046187) | 0.078084 / 0.023109 (0.054975) | 0.322976 / 0.275898 (0.047078) | 0.355332 / 0.323480 (0.031852) | 0.004235 / 0.007986 (-0.003750) | 0.003450 / 0.004328 (-0.000879) | 0.065355 / 0.004250 (0.061104) | 0.058593 / 0.037052 (0.021541) | 0.335761 / 0.258489 (0.077272) | 0.370392 / 0.293841 (0.076551) | 0.031720 / 0.128546 (-0.096827) | 0.008611 / 0.075646 (-0.067036) | 0.288213 / 0.419271 (-0.131059) | 0.053374 / 0.043533 (0.009842) | 0.321863 / 0.255139 (0.066724) | 0.341587 / 0.283200 (0.058387) | 0.025694 / 0.141683 (-0.115989) | 1.470502 / 1.452155 (0.018348) | 1.565068 / 1.492716 (0.072352) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231063 / 0.018006 (0.213057) | 0.464996 / 0.000490 (0.464506) | 0.007316 / 0.000200 (0.007116) | 0.000288 / 0.000054 (0.000233) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029244 / 0.037411 (-0.008167) | 0.086303 / 0.014526 (0.071777) | 0.097281 / 0.176557 (-0.079276) | 0.153552 / 0.737135 (-0.583583) | 0.098488 / 0.296338 (-0.197850) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382753 / 0.215209 (0.167544) | 3.826503 / 2.077655 (1.748848) | 1.848439 / 1.504120 (0.344319) | 1.688519 / 1.541195 (0.147324) | 1.787867 / 1.468490 
(0.319377) | 0.489708 / 4.584777 (-4.095069) | 3.576780 / 3.745712 (-0.168932) | 3.341536 / 5.269862 (-1.928325) | 2.108787 / 4.565676 (-2.456889) | 0.057409 / 0.424275 (-0.366866) | 0.007325 / 0.007607 (-0.000282) | 0.459536 / 0.226044 (0.233492) | 4.590609 / 2.268929 (2.321681) | 2.313005 / 55.444624 (-53.131620) | 1.972389 / 6.876477 (-4.904087) | 2.218511 / 2.142072 (0.076439) | 0.613817 / 4.805227 (-4.191410) | 0.133846 / 6.500664 (-6.366818) | 0.062190 / 0.075469 (-0.013279) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.279860 / 1.841788 (-0.561928) | 19.549777 / 8.074308 (11.475469) | 14.225844 / 10.191392 (4.034452) | 0.164682 / 0.680424 (-0.515741) | 0.018321 / 0.534201 (-0.515880) | 0.389874 / 0.579283 (-0.189409) | 0.408597 / 0.434364 (-0.025767) | 0.454327 / 0.540337 (-0.086011) | 0.645571 / 1.386936 (-0.741365) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007021 / 0.011353 (-0.004332) | 0.004119 / 0.011008 (-0.006889) | 0.065393 / 0.038508 (0.026885) | 0.085005 / 0.023109 (0.061896) | 0.412221 / 0.275898 (0.136323) | 0.438266 / 0.323480 (0.114786) | 0.005594 / 0.007986 (-0.002392) | 0.003499 / 0.004328 (-0.000829) | 0.065053 / 0.004250 (0.060802) | 0.060608 / 0.037052 (0.023555) | 0.413938 / 0.258489 (0.155449) | 0.446192 / 0.293841 (0.152351) | 0.032232 / 0.128546 (-0.096314) | 0.008617 / 0.075646 (-0.067029) | 0.071296 / 0.419271 (-0.347976) | 0.048756 / 0.043533 (0.005223) | 0.404977 / 0.255139 (0.149838) | 0.426801 / 0.283200 (0.143602) | 0.023650 / 0.141683 (-0.118033) | 1.526928 / 1.452155 (0.074773) | 1.627504 / 1.492716 (0.134787) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224318 / 0.018006 (0.206312) | 0.469717 / 0.000490 (0.469227) | 0.005539 / 0.000200 (0.005339) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034240 / 0.037411 (-0.003171) | 0.096449 / 0.014526 (0.081923) | 0.107309 / 0.176557 (-0.069247) | 0.160246 / 0.737135 (-0.576889) | 0.107595 / 0.296338 (-0.188743) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434266 / 0.215209 (0.219057) | 4.325571 / 2.077655 (2.247916) | 2.324066 / 1.504120 (0.819946) | 2.140238 / 1.541195 (0.599044) | 2.244593 / 1.468490 (0.776103) | 0.486259 / 4.584777 (-4.098518) | 3.644120 / 3.745712 (-0.101592) | 3.372330 / 5.269862 (-1.897531) | 2.074779 / 4.565676 (-2.490897) | 0.057154 / 0.424275 (-0.367121) | 0.007304 / 0.007607 (-0.000303) | 0.516944 / 0.226044 (0.290899) | 5.174300 / 2.268929 (2.905372) | 2.816269 / 55.444624 (-52.628356) | 2.462943 / 6.876477 (-4.413534) | 2.735851 / 2.142072 (0.593779) | 0.589028 / 4.805227 (-4.216200) | 0.131804 / 6.500664 (-6.368860) | 0.060173 / 0.075469 (-0.015296) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.354540 / 1.841788 (-0.487248) | 20.436511 / 8.074308 (12.362203) | 15.541981 / 10.191392 (5.350589) | 0.168399 / 0.680424 (-0.512025) | 0.020716 / 0.534201 (-0.513485) | 0.396275 / 0.579283 (-0.183008) | 0.427232 / 0.434364 (-0.007132) | 0.475121 / 0.540337 (-0.065216) | 0.648579 / 1.386936 (-0.738357) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009071 / 0.011353 (-0.002282) | 0.005820 / 0.011008 (-0.005188) | 0.119974 / 0.038508 (0.081466) | 0.092145 / 0.023109 (0.069036) | 0.445349 / 0.275898 (0.169451) | 0.442488 / 0.323480 (0.119008) | 0.005352 / 0.007986 (-0.002634) | 0.004332 / 0.004328 (0.000003) | 0.084397 / 0.004250 (0.080147) | 0.064624 / 0.037052 (0.027572) | 0.430938 / 0.258489 (0.172448) | 0.503574 / 0.293841 (0.209733) | 0.047900 / 0.128546 (-0.080647) | 0.014237 / 0.075646 (-0.061409) | 0.366145 / 0.419271 (-0.053127) | 0.066344 / 0.043533 (0.022811) | 0.424582 / 0.255139 (0.169443) | 0.451845 / 0.283200 (0.168646) | 0.041409 / 0.141683 (-0.100274) | 1.886998 / 1.452155 (0.434843) | 2.011676 / 1.492716 (0.518960) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.301008 / 0.018006 (0.283001) | 0.608670 / 0.000490 (0.608180) | 0.011963 / 0.000200 (0.011763) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031996 / 0.037411 (-0.005415) | 0.102274 / 0.014526 (0.087748) | 0.121437 / 0.176557 (-0.055120) | 0.181647 / 0.737135 (-0.555489) | 0.121634 / 0.296338 (-0.174704) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.597070 / 0.215209 (0.381861) | 5.973808 / 2.077655 (3.896154) | 2.486345 / 1.504120 (0.982225) | 2.125395 / 1.541195 (0.584201) | 2.270864 / 1.468490 
(0.802374) | 0.880031 / 4.584777 (-3.704746) | 5.396522 / 3.745712 (1.650809) | 4.702005 / 5.269862 (-0.567857) | 3.023087 / 4.565676 (-1.542589) | 0.097093 / 0.424275 (-0.327182) | 0.008457 / 0.007607 (0.000850) | 0.712164 / 0.226044 (0.486120) | 7.112867 / 2.268929 (4.843938) | 3.364509 / 55.444624 (-52.080115) | 2.646953 / 6.876477 (-4.229524) | 2.795967 / 2.142072 (0.653894) | 1.067182 / 4.805227 (-3.738046) | 0.218297 / 6.500664 (-6.282368) | 0.071720 / 0.075469 (-0.003750) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.640477 / 1.841788 (-0.201311) | 24.875163 / 8.074308 (16.800855) | 22.125706 / 10.191392 (11.934314) | 0.247267 / 0.680424 (-0.433157) | 0.033717 / 0.534201 (-0.500484) | 0.492422 / 0.579283 (-0.086862) | 0.578323 / 0.434364 (0.143959) | 0.579503 / 0.540337 (0.039165) | 0.816721 / 1.386936 (-0.570215) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009372 / 0.011353 (-0.001981) | 0.005449 / 0.011008 (-0.005559) | 0.095371 / 0.038508 (0.056863) | 0.086320 / 0.023109 (0.063211) | 0.539573 / 0.275898 (0.263675) | 0.580338 / 0.323480 (0.256858) | 0.007028 / 0.007986 (-0.000958) | 0.004196 / 0.004328 (-0.000133) | 0.082710 / 0.004250 (0.078460) | 0.064336 / 0.037052 (0.027284) | 0.521490 / 0.258489 (0.263001) | 0.567942 / 0.293841 (0.274101) | 0.049659 / 0.128546 (-0.078887) | 0.017297 / 0.075646 (-0.058350) | 0.093874 / 0.419271 (-0.325398) | 0.061664 / 0.043533 (0.018131) | 0.524476 / 0.255139 (0.269337) | 0.563255 / 0.283200 (0.280055) | 0.039990 / 0.141683 (-0.101693) | 1.854438 / 1.452155 (0.402283) | 1.819321 / 1.492716 (0.326605) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.298817 / 0.018006 (0.280811) | 0.629381 / 0.000490 (0.628891) | 0.006259 / 0.000200 (0.006059) | 0.000690 / 0.000054 (0.000635) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.041009 / 0.037411 (0.003598) | 0.123845 / 0.014526 (0.109319) | 0.138606 / 0.176557 (-0.037951) | 0.215042 / 0.737135 (-0.522093) | 0.129572 / 0.296338 (-0.166767) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.668823 / 0.215209 (0.453614) | 6.596762 / 2.077655 (4.519108) | 3.275429 / 1.504120 (1.771309) | 2.921747 / 1.541195 (1.380553) | 2.963748 / 1.468490 (1.495258) | 0.897588 / 4.584777 (-3.687188) | 5.683618 / 3.745712 (1.937906) | 5.051102 / 5.269862 (-0.218760) | 3.178855 / 4.565676 (-1.386822) | 0.107446 / 0.424275 (-0.316829) | 0.008967 / 0.007607 (0.001360) | 0.785577 / 0.226044 (0.559532) | 8.236556 / 2.268929 (5.967628) | 3.914725 / 55.444624 (-51.529899) | 3.129068 / 6.876477 (-3.747409) | 3.368383 / 2.142072 (1.226310) | 1.004307 / 4.805227 (-3.800920) | 0.204788 / 6.500664 (-6.295876) | 0.078250 / 0.075469 (0.002780) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.778574 / 1.841788 (-0.063213) | 25.583659 / 8.074308 (17.509351) | 23.505866 / 10.191392 (13.314474) | 0.228759 / 0.680424 (-0.451665) | 0.038348 / 0.534201 (-0.495853) | 0.468980 / 0.579283 (-0.110303) | 0.630194 / 0.434364 (0.195830) | 0.587535 / 0.540337 (0.047198) | 0.831761 / 1.386936 (-0.555175) |\n\n</details>\n</details>\n\n\n",
"I've addressed the comments. Let me know if it looks all good now :)",
"Actually just found out that the current `**/*[-._ 0-9/]train[-._ 0-9/]**` doesn't match `data/train.csv` in bash (but does match in fsspec right now).\r\n\r\nSo there might be a risk that this pattern breaks in the future no ?",
"@lhoestq `fsspec` has tests to check their specific (non-posix) behavior, so I think merging in the current state is fine. And if they make a breaking change in the future, we can align the patterns once again :) ",
"Yea after more thoughts I also think it's fine. Feel free to merge !",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006920 / 0.011353 (-0.004433) | 0.004182 / 0.011008 (-0.006826) | 0.084629 / 0.038508 (0.046121) | 0.086052 / 0.023109 (0.062943) | 0.326062 / 0.275898 (0.050164) | 0.344190 / 0.323480 (0.020710) | 0.005393 / 0.007986 (-0.002593) | 0.003410 / 0.004328 (-0.000918) | 0.064327 / 0.004250 (0.060076) | 0.056556 / 0.037052 (0.019504) | 0.319255 / 0.258489 (0.060766) | 0.357943 / 0.293841 (0.064102) | 0.032097 / 0.128546 (-0.096450) | 0.008778 / 0.075646 (-0.066868) | 0.291057 / 0.419271 (-0.128215) | 0.053225 / 0.043533 (0.009692) | 0.307713 / 0.255139 (0.052574) | 0.350058 / 0.283200 (0.066858) | 0.024380 / 0.141683 (-0.117303) | 1.459482 / 1.452155 (0.007328) | 1.555711 / 1.492716 (0.062994) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239487 / 0.018006 (0.221480) | 0.467604 / 0.000490 (0.467114) | 0.010742 / 0.000200 (0.010542) | 0.000285 / 0.000054 (0.000230) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029394 / 0.037411 (-0.008018) | 0.087404 / 0.014526 (0.072879) | 0.098701 / 0.176557 (-0.077855) | 0.154145 / 0.737135 (-0.582990) | 0.099726 / 0.296338 (-0.196612) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.389008 / 0.215209 (0.173799) | 3.873165 / 2.077655 (1.795510) | 1.860676 / 1.504120 (0.356556) | 1.679668 / 1.541195 (0.138474) | 1.782347 / 1.468490 
(0.313857) | 0.489469 / 4.584777 (-4.095308) | 3.678706 / 3.745712 (-0.067006) | 3.404076 / 5.269862 (-1.865785) | 2.110972 / 4.565676 (-2.454704) | 0.057478 / 0.424275 (-0.366797) | 0.007443 / 0.007607 (-0.000164) | 0.464780 / 0.226044 (0.238736) | 4.643606 / 2.268929 (2.374678) | 2.355744 / 55.444624 (-53.088881) | 1.993992 / 6.876477 (-4.882485) | 2.245520 / 2.142072 (0.103447) | 0.592773 / 4.805227 (-4.212454) | 0.135369 / 6.500664 (-6.365295) | 0.062478 / 0.075469 (-0.012991) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257537 / 1.841788 (-0.584251) | 19.828010 / 8.074308 (11.753702) | 14.709260 / 10.191392 (4.517868) | 0.168359 / 0.680424 (-0.512065) | 0.018907 / 0.534201 (-0.515294) | 0.397223 / 0.579283 (-0.182060) | 0.421760 / 0.434364 (-0.012604) | 0.464597 / 0.540337 (-0.075740) | 0.665905 / 1.386936 (-0.721031) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007247 / 0.011353 (-0.004106) | 0.004104 / 0.011008 (-0.006904) | 0.065008 / 0.038508 (0.026500) | 0.083485 / 0.023109 (0.060376) | 0.399808 / 0.275898 (0.123910) | 0.433374 / 0.323480 (0.109894) | 0.005453 / 0.007986 (-0.002532) | 0.003479 / 0.004328 (-0.000850) | 0.065126 / 0.004250 (0.060876) | 0.059945 / 0.037052 (0.022893) | 0.402018 / 0.258489 (0.143529) | 0.437927 / 0.293841 (0.144086) | 0.032654 / 0.128546 (-0.095892) | 0.008717 / 0.075646 (-0.066929) | 0.071737 / 0.419271 (-0.347534) | 0.048903 / 0.043533 (0.005370) | 0.402107 / 0.255139 (0.146968) | 0.417602 / 0.283200 (0.134402) | 0.024821 / 0.141683 (-0.116862) | 1.474471 / 1.452155 (0.022316) | 1.559571 / 1.492716 (0.066855) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232010 / 0.018006 (0.214003) | 0.460768 / 0.000490 (0.460278) | 0.005250 / 0.000200 (0.005050) | 0.000109 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033839 / 0.037411 (-0.003573) | 0.101617 / 0.014526 (0.087091) | 0.107984 / 0.176557 (-0.068573) | 0.160923 / 0.737135 (-0.576212) | 0.110367 / 0.296338 (-0.185971) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433087 / 0.215209 (0.217878) | 4.324100 / 2.077655 (2.246445) | 2.312937 / 1.504120 (0.808817) | 2.159903 / 1.541195 (0.618708) | 2.240235 / 1.468490 (0.771745) | 0.500659 / 4.584777 (-4.084118) | 3.743801 / 3.745712 (-0.001911) | 3.441350 / 5.269862 (-1.828512) | 2.141370 / 4.565676 (-2.424306) | 0.059078 / 0.424275 (-0.365197) | 0.007468 / 0.007607 (-0.000139) | 0.508108 / 0.226044 (0.282064) | 5.076738 / 2.268929 (2.807809) | 2.825939 / 55.444624 (-52.618685) | 2.467762 / 6.876477 (-4.408715) | 2.705079 / 2.142072 (0.563006) | 0.603363 / 4.805227 (-4.201864) | 0.136267 / 6.500664 (-6.364397) | 0.062887 / 0.075469 (-0.012582) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.359344 / 1.841788 (-0.482443) | 20.581510 / 8.074308 (12.507202) | 15.534489 / 10.191392 (5.343097) | 0.192068 / 0.680424 (-0.488356) | 0.020831 / 0.534201 (-0.513370) | 0.403330 / 0.579283 (-0.175953) | 0.429536 / 0.434364 (-0.004828) | 0.479906 / 0.540337 (-0.060431) | 0.674170 / 1.386936 (-0.712766) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-15T17:58:25Z
| 2023-09-26T15:41:38Z
| 2023-09-26T15:32:51Z
|
COLLABORATOR
| null | null | null |
Fix #6214
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6244/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6244/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6244.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6244",
"merged_at": "2023-09-26T15:32:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6244.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6244"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5264
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5264/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5264/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5264/events
|
https://github.com/huggingface/datasets/issues/5264
| 1,455,252,906
|
I_kwDODunzps5WvWWq
| 5,264
|
`datasets` can't read a Parquet file in Python 3.9.13
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/loubnabnl",
"id": 44069155,
"login": "loubnabnl",
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/loubnabnl",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Could you share the full stack trace please ?\r\n\r\n\r\nCan you also try running this code ? It can be useful to determine if the issue comes from `datasets` or `fsspec` (streaming) or `pyarrow` (parquet reading):\r\n```python\r\nds = load_dataset(\"parquet\", data_files=a_parquet_file_url, use_auth_token=True)\r\n```",
"Here's the full trace\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/loubna_huggingface_co/load.py\", line 15, in <module>\r\n ds_all = load_dataset(\"bigcode/the-stack-dedup-pjj\", data_dir=\"data/java\",use_auth_token=True, split=\"train\", revision=\"v1.1.a1\")\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py\", line 1742, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py\", line 814, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py\", line 905, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py\", line 1502, in _prepare_split\r\n for key, table in logging.tqdm(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/tqdm/std.py\", line 1195, in __iter__\r\n for obj in iterable:\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py\", line 67, in _generate_tables\r\n parquet_file = pq.ParquetFile(f)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/pyarrow/parquet/__init__.py\", line 286, in __init__\r\n self.reader.open(\r\n File \"pyarrow/_parquet.pyx\", line 1227, in pyarrow._parquet.ParquetReader.open\r\n File \"pyarrow/error.pxi\", line 100, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.\r\n```\r\n\r\nwhen running\r\n```python\r\nds = load_dataset(\"parquet\", data_files=\"https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj/blob/v1.1.a1/data/java/data_0000.parquet\", use_auth_token=True)\r\n```\r\nI get 401 error, but that's the case for the python subset too which I can load properly\r\n```\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py\", line 1719, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py\", line 1497, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py\", line 1134, in dataset_module_factory\r\n return PackagedDatasetModuleFactory(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py\", line 707, in get_module\r\n data_files = DataFilesDict.from_local_or_remote(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/data_files.py\", line 795, in from_local_or_remote\r\n DataFilesList.from_local_or_remote(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/data_files.py\", line 764, in from_local_or_remote\r\n origin_metadata = _get_origin_metadata_locally_or_by_urls(data_files, use_auth_token=use_auth_token)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/data_files.py\", line 710, in _get_origin_metadata_locally_or_by_urls\r\n return thread_map(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/tqdm/contrib/concurrent.py\", line 94, in thread_map\r\n return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/tqdm/contrib/concurrent.py\", line 76, in _executor_map\r\n return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs))\r\n File 
\"/opt/conda/envs/venv/lib/python3.9/site-packages/tqdm/std.py\", line 1183, in __iter__\r\n for obj in iterable:\r\n File \"/opt/conda/envs/venv/lib/python3.9/concurrent/futures/_base.py\", line 609, in result_iterator\r\n yield fs.pop().result()\r\n File \"/opt/conda/envs/venv/lib/python3.9/concurrent/futures/_base.py\", line 446, in result\r\n return self.__get_result()\r\n File \"/opt/conda/envs/venv/lib/python3.9/concurrent/futures/_base.py\", line 391, in __get_result\r\n raise self._exception\r\n File \"/opt/conda/envs/venv/lib/python3.9/concurrent/futures/thread.py\", line 58, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/data_files.py\", line 701, in _get_single_origin_metadata_locally_or_by_urls\r\n return (request_etag(data_file, use_auth_token=use_auth_token),)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 411, in request_etag\r\n response.raise_for_status()\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/requests/models.py\", line 960, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj/blob/v1.1.a1/data/python/data_0000.parquet```",
"Can you check you used the right token ? You shouldn't get a 401 using your token",
"I checked it’s the right token, when loading the full dataset I get the error after data extraction so I can access the files. \r\n```\r\nDownloading and preparing dataset parquet/bigcode--the-stack-dedup-pjj to /home/loubna_huggingface_co/.cache/huggingface/datasets/bigcode___parquet/bigcode--the-stack-dedup-pjj-872ffac7f4bb46ca/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...\r\nDownloading data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 22.38it/s]\r\nExtracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 49.91it/s]\r\nTraceback (most recent call last):\r\n File \"/home/loubna_huggingface_co/load_ds.py\", line 5, in <module>\r\n ds = load_dataset(\"bigcode/the-stack-dedup-pjj\", data_dir=\"data/java\", use_auth_token=True,split=\"train\", revision=\"v1.1.a1\")\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py\", line 1742, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py\", line 814, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py\", line 905, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py\", line 1502, in _prepare_split\r\n for key, table in logging.tqdm(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/tqdm/std.py\", line 1195, in __iter__\r\n for obj in iterable:\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py\", line 67, in _generate_tables\r\n parquet_file = pq.ParquetFile(f)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/pyarrow/parquet/__init__.py\", line 286, in __init__\r\n self.reader.open(\r\n File \"pyarrow/_parquet.pyx\", line 1227, in pyarrow._parquet.ParquetReader.open\r\n File \"pyarrow/error.pxi\", line 100, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.\r\n```\r\nCould it be that I'm using a wrong url, I just copied it from the address bar",
"The URL is wrong indeed, the right one is the one with \"resolve\" (the one you get when clicking on \"download\")- otherwise you try to download an html page ;)\r\n```\r\nhttps://huggingface.co/datasets/bigcode/the-stack-dedup-pjj/resolve/v1.1.a1/data/java/data_0000.parquet\r\n```",
"Ah thanks! So I tried it with the first parquet file and it works, is there a way to know which parquet file was causing the issue since there are a lot of shards?",
"I think you have to try them all :/\r\n\r\nAlternatively you can add a try/catch in `parquet.py` in `datasets` to raise the name of the file that fails at doing `parquet_file = pq.ParquetFile(f)` when you run your initial code\r\n```python\r\nload_dataset(\"bigcode/the-stack-dedup-pjj\", data_dir=\"data/java\", split=\"train\", revision=\"v1.1.a1\", use_auth_token=True)\r\n```\r\nbut it will still iterate on all the files until it fails",
"Ok I will do that",
"I did find the file, and I get the same error as before \r\n```\r\nDownloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 8160.12it/s]\r\nExtracting data files: 100%|████████████████████| 1/1 [00:00<00:00, 1447.81it/s]\r\n \r\n---------------------------------------------------------------------------\r\nArrowInvalid Traceback (most recent call last)\r\nInput In [22], in <cell line: 7>()\r\n 4 data_features = (data[\"train\"].features)\r\n 6 url = \"/home/loubna_huggingface_co/.cache/huggingface/datasets/downloads/93431bc4380de07de8b0ab533666cb5a6120cbe266779e0a63c86bf7717475d7\"\r\n----> 7 data = load_dataset(\"parquet\", \r\n 8 data_files=url,\r\n 9 split=\"train\",\r\n 10 features=data_features,\r\n 11 use_auth_token=True)\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py:1742, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1739 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n 1741 # Download and prepare data\r\n-> 1742 builder_instance.download_and_prepare(\r\n 1743 download_config=download_config,\r\n 1744 download_mode=download_mode,\r\n 1745 ignore_verifications=ignore_verifications,\r\n 1746 try_from_hf_gcs=try_from_hf_gcs,\r\n 1747 use_auth_token=use_auth_token,\r\n 1748 )\r\n 1750 # Build dataset for splits\r\n 1751 keep_in_memory = (\r\n 1752 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)\r\n 1753 )\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py:814, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs)\r\n 808 if not downloaded_from_gcs:\r\n 809 prepare_split_kwargs = {\r\n 810 \"file_format\": file_format,\r\n 811 \"max_shard_size\": max_shard_size,\r\n 812 **download_and_prepare_kwargs,\r\n 813 }\r\n--> 814 self._download_and_prepare(\r\n 815 dl_manager=dl_manager,\r\n 816 verify_infos=verify_infos,\r\n 817 **prepare_split_kwargs,\r\n 818 **download_and_prepare_kwargs,\r\n 819 )\r\n 820 # Sync info\r\n 821 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py:905, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 901 split_dict.add(split_generator.split_info)\r\n 903 try:\r\n 904 # Prepare split will record examples associated to the split\r\n--> 905 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 906 except OSError as e:\r\n 907 raise OSError(\r\n 908 \"Cannot find data file. 
\"\r\n 909 + (self.manual_download_instructions or \"\")\r\n 910 + \"\\nOriginal error:\\n\"\r\n 911 + str(e)\r\n 912 ) from None\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py:1502, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, max_shard_size)\r\n 1500 total_num_examples, total_num_bytes = 0, 0\r\n 1501 try:\r\n-> 1502 for key, table in logging.tqdm(\r\n 1503 generator,\r\n 1504 unit=\" tables\",\r\n 1505 leave=False,\r\n 1506 disable=not logging.is_progress_bar_enabled(),\r\n 1507 ):\r\n 1508 if max_shard_size is not None and writer._num_bytes > max_shard_size:\r\n 1509 num_examples, num_bytes = writer.finalize()\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/tqdm/std.py:1195, in tqdm.__iter__(self)\r\n 1192 time = self._time\r\n 1194 try:\r\n-> 1195 for obj in iterable:\r\n 1196 yield obj\r\n 1197 # Update and possibly print the progressbar.\r\n 1198 # Note: does not call self.update(1) for speed optimisation.\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py:67, in Parquet._generate_tables(self, files)\r\n 65 for file_idx, file in enumerate(itertools.chain.from_iterable(files)):\r\n 66 with open(file, \"rb\") as f:\r\n---> 67 parquet_file = pq.ParquetFile(f)\r\n 68 try:\r\n 69 for batch_idx, record_batch in enumerate(\r\n 70 parquet_file.iter_batches(batch_size=self.config.batch_size, columns=self.config.columns)\r\n 71 ):\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/pyarrow/parquet/__init__.py:286, in ParquetFile.__init__(self, source, metadata, common_metadata, read_dictionary, memory_map, buffer_size, pre_buffer, coerce_int96_timestamp_unit, decryption_properties, thrift_string_size_limit, thrift_container_size_limit)\r\n 280 def __init__(self, source, *, metadata=None, common_metadata=None,\r\n 281 read_dictionary=None, memory_map=False, buffer_size=0,\r\n 282 pre_buffer=False, coerce_int96_timestamp_unit=None,\r\n 283 decryption_properties=None, thrift_string_size_limit=None,\r\n 284 thrift_container_size_limit=None):\r\n 285 self.reader = ParquetReader()\r\n--> 286 self.reader.open(\r\n 287 source, use_memory_map=memory_map,\r\n 288 buffer_size=buffer_size, pre_buffer=pre_buffer,\r\n 289 read_dictionary=read_dictionary, metadata=metadata,\r\n 290 coerce_int96_timestamp_unit=coerce_int96_timestamp_unit,\r\n 291 decryption_properties=decryption_properties,\r\n 292 thrift_string_size_limit=thrift_string_size_limit,\r\n 293 thrift_container_size_limit=thrift_container_size_limit,\r\n 294 )\r\n 295 self.common_metadata = common_metadata\r\n 296 self._nested_paths_by_prefix = self._build_nested_paths()\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/pyarrow/_parquet.pyx:1227, in pyarrow._parquet.ParquetReader.open()\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/pyarrow/error.pxi:100, in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.\r\n```",
"Can you check the JSON file associated to `/home/loubna_huggingface_co/.cache/huggingface/datasets/downloads/93431bc4380de07de8b0ab533666cb5a6120cbe266779e0a63c86bf7717475d7` ? In the JSON file we can know from where it was downloaded\r\n\r\nYou can find it at `/home/loubna_huggingface_co/.cache/huggingface/datasets/downloads/93431bc4380de07de8b0ab533666cb5a6120cbe266779e0a63c86bf7717475d7.json`",
"It's this file `https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj/resolve/f48656daa9f3a3607dacf8b57a65810a6a7a7f73/data/java/data_0022.parquet` loading it gives the same error",
"I'm able to load it properly using\r\n```python\r\nds = load_dataset(\"parquet\", data_files=a_parquet_file_url, use_auth_token=token)\r\n```\r\n\r\nMy guess is that your download was corrupted. Please delete `93431bc4380de07de8b0ab533666cb5a6120cbe266779e0a63c86bf7717475d7` and `93431bc4380de07de8b0ab533666cb5a6120cbe266779e0a63c86bf7717475d7.json` locally and try again",
"That worked, thanks! But I thought if something went wrong with a download `datasets` creates new cache for all the files, that's not the case? (at some point I even changed dataset versions so it was still using that cache?)",
"Cool !\r\n\r\n> But I thought if something went wrong with a download datasets creates new cache for all the files\r\n\r\nWe don't perform integrity verifications if we don't know in advance the hash of the file to download.\r\n\r\n> at some point I even changed dataset versions so it was still using that cache?\r\n\r\n`datasets` caches the files by URL and ETag. If the content of a file changes, then the ETag changes and so it redownloads the file",
"I see, thank you!\r\n",
"I experience the same error in v 2.12.0. But found out it was due to one column from polars was a categorical dtype (related to the error from #5706. Temporarily resolved it by casting the column to str instead."
] | 2022-11-18T14:44:01Z
| 2023-05-07T09:52:59Z
| 2022-11-22T11:18:08Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I get an error when trying to load this [dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj) (it's private, but I can add you to the bigcode org): `datasets` can't read one of the parquet files in the Java subset.
```python
from datasets import load_dataset
ds = load_dataset("bigcode/the-stack-dedup-pjj", data_dir="data/java", split="train", revision="v1.1.a1", use_auth_token=True)
```
```
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
It seems to be an issue with newer Python versions, because it works in these two environments:
```
- `datasets` version: 2.6.1
- Platform: Linux-5.4.0-131-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
```
```
- `datasets` version: 2.6.1
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.12
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
```
But not in this:
```
- `datasets` version: 2.6.1
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
```
### Steps to reproduce the bug
Load the dataset in Python 3.9.13.
### Expected behavior
Load the dataset without the pyarrow error.
### Environment info
```
- `datasets` version: 2.6.1
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/loubnabnl",
"id": 44069155,
"login": "loubnabnl",
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/loubnabnl",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5264/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5264/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7355
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7355/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7355/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7355/events
|
https://github.com/huggingface/datasets/issues/7355
| 2,768,958,211
|
I_kwDODunzps6lCvMD
| 7,355
|
Not available datasets[audio] on python 3.13
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/70306948?v=4",
"events_url": "https://api.github.com/users/sergiosinlimites/events{/privacy}",
"followers_url": "https://api.github.com/users/sergiosinlimites/followers",
"following_url": "https://api.github.com/users/sergiosinlimites/following{/other_user}",
"gists_url": "https://api.github.com/users/sergiosinlimites/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sergiosinlimites",
"id": 70306948,
"login": "sergiosinlimites",
"node_id": "MDQ6VXNlcjcwMzA2OTQ4",
"organizations_url": "https://api.github.com/users/sergiosinlimites/orgs",
"received_events_url": "https://api.github.com/users/sergiosinlimites/received_events",
"repos_url": "https://api.github.com/users/sergiosinlimites/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sergiosinlimites/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sergiosinlimites/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sergiosinlimites",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"It looks like an issue with `numba` which can't be installed on 3.13 ? `numba` is a dependency of `librosa`, used to decode audio files"
] | 2025-01-04T18:37:08Z
| 2025-01-10T10:46:00Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
This is the error I got; it seems the `numba` package does not support Python 3.13:
```
PS C:\Users\sergi\Documents> pip install datasets[audio]
Defaulting to user installation because normal site-packages is not writeable
Collecting datasets[audio]
Using cached datasets-3.2.0-py3-none-any.whl.metadata (20 kB)
... (OTHER PACKAGES)
Collecting numba>=0.51.0 (from librosa->datasets[audio])
Downloading numba-0.60.0.tar.gz (2.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.7/2.7 MB 44.1 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [24 lines of output]
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.13_3.13.496.0_x64__qbz5n2kfra8p0\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
~~~~^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.13_3.13.496.0_x64__qbz5n2kfra8p0\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.13_3.13.496.0_x64__qbz5n2kfra8p0\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "C:\Users\sergi\AppData\Local\Temp\pip-build-env-yauns_qh\overlay\Lib\site-packages\setuptools\build_meta.py", line 334, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sergi\AppData\Local\Temp\pip-build-env-yauns_qh\overlay\Lib\site-packages\setuptools\build_meta.py", line 304, in _get_build_requires
self.run_setup()
~~~~~~~~~~~~~~^^
RuntimeError: Cannot install on Python version 3.13.1; only versions >=3.9,<3.13 are supported.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
```
### Steps to reproduce the bug
1. Install Python >= 3.13
2. Run `pip install datasets[audio]`
### Expected behavior
`datasets[audio]` should be installable on Python 3.13.
### Environment info
python 3.13.1
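A minimal sketch of guarding against this before installing the audio extra, based on the `>=3.9,<3.13` range reported in the build output above (that bound is numba's at the time of this issue and may change):

```python
import sys

# numba (pulled in via librosa for audio decoding) only supports
# Python >=3.9,<3.13 here, so `datasets[audio]` cannot be installed on 3.13.
if not ((3, 9) <= sys.version_info[:2] < (3, 13)):
    raise RuntimeError(
        f"Python {sys.version.split()[0]} is outside numba's supported range; "
        "use Python 3.9-3.12 to install datasets[audio]."
    )
```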
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7355/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7355/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7362
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7362/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7362/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7362/events
|
https://github.com/huggingface/datasets/issues/7362
| 2,773,731,829
|
I_kwDODunzps6lU8n1
| 7,362
|
HuggingFace CLI dataset download raises error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3870355?v=4",
"events_url": "https://api.github.com/users/ajayvohra2005/events{/privacy}",
"followers_url": "https://api.github.com/users/ajayvohra2005/followers",
"following_url": "https://api.github.com/users/ajayvohra2005/following{/other_user}",
"gists_url": "https://api.github.com/users/ajayvohra2005/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ajayvohra2005",
"id": 3870355,
"login": "ajayvohra2005",
"node_id": "MDQ6VXNlcjM4NzAzNTU=",
"organizations_url": "https://api.github.com/users/ajayvohra2005/orgs",
"received_events_url": "https://api.github.com/users/ajayvohra2005/received_events",
"repos_url": "https://api.github.com/users/ajayvohra2005/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ajayvohra2005/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ajayvohra2005/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ajayvohra2005",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I got the same error and was able to resolve it by upgrading from 2.15.0 to 3.2.0.",
"> I got the same error and was able to resolve it by upgrading from 2.15.0 to 3.2.0.\r\n\r\nWhat is needed is upgrading `huggingface-hub==0.27.1`. `datasets` does not appear to have anything to do with the error. The upgrade is a workaround, if the workaround works for your use case. Otherwise, this issue breaks all existing Python clients not using some minimum version of `huggingface-hub`. ",
"Correct, this has to do with `huggingface_hub`, not `datasets`. Some old versions of `huggingface_hub` are unfortunately not robust to recent changes on HF. Updating `huggingface_hub` fixes the issue :)\r\n\r\nClosing this issue since it's not directly related to `datasets`"
] | 2025-01-07T21:03:30Z
| 2025-01-08T15:00:37Z
| 2025-01-08T14:35:52Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Trying to download Hugging Face datasets with the Hugging Face CLI raises an error. This error only started appearing after December 27th, 2024. For example:
```
huggingface-cli download --repo-type dataset gboleda/wikicorpus
Traceback (most recent call last):
File "/home/ubuntu/test_venv/bin/huggingface-cli", line 8, in <module>
sys.exit(main())
File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/commands/huggingface_cli.py", line 51, in main
service.run()
File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/commands/download.py", line 146, in run
print(self._download()) # Print path to downloaded files
File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/commands/download.py", line 180, in _download
return snapshot_download(
File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/_snapshot_download.py", line 164, in snapshot_download
repo_info = api.repo_info(repo_id=repo_id, repo_type=repo_type, revision=revision, token=token)
File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2491, in repo_info
return method(
File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2366, in dataset_info
return DatasetInfo(**data)
File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 799, in __init__
self.tags = kwargs.pop("tags")
KeyError: 'tags'
```
### Steps to reproduce the bug
```
1. huggingface-cli download --repo-type dataset gboleda/wikicorpus
```
### Expected behavior
There should be no error.
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-6.8.0-1015-aws-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.5
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.3.1
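As noted in the comments above, upgrading `huggingface_hub` resolves this. A minimal sketch of a version guard (the `0.27.1` threshold comes from this thread; `packaging` is an assumed helper dependency):

```python
from importlib.metadata import version

from packaging.version import Version

# Old huggingface_hub clients assume the Hub API always returns a "tags"
# field for datasets and crash with KeyError: 'tags' when it is absent.
if Version(version("huggingface_hub")) < Version("0.27.1"):
    raise RuntimeError("Upgrade with: pip install -U 'huggingface_hub>=0.27.1'")
```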
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7362/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7362/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5852
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5852/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5852/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5852/events
|
https://github.com/huggingface/datasets/pull/5852
| 1,707,927,165
|
PR_kwDODunzps5QZ1lj
| 5,852
|
Iterable torch formatting
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006567 / 0.011353 (-0.004786) | 0.004479 / 0.011008 (-0.006530) | 0.028286 / 0.038508 (-0.010222) | 0.033137 / 0.023109 (0.010028) | 0.305249 / 0.275898 (0.029351) | 0.330306 / 0.323480 (0.006826) | 0.003747 / 0.007986 (-0.004238) | 0.004409 / 0.004328 (0.000081) | 0.004742 / 0.004250 (0.000491) | 0.040780 / 0.037052 (0.003728) | 0.302879 / 0.258489 (0.044390) | 0.346880 / 0.293841 (0.053039) | 0.032908 / 0.128546 (-0.095638) | 0.010617 / 0.075646 (-0.065029) | 0.257996 / 0.419271 (-0.161275) | 0.051044 / 0.043533 (0.007511) | 0.306113 / 0.255139 (0.050974) | 0.324444 / 0.283200 (0.041244) | 0.100820 / 0.141683 (-0.040863) | 1.478402 / 1.452155 (0.026248) | 1.599398 / 1.492716 (0.106682) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216540 / 0.018006 (0.198534) | 0.433480 / 0.000490 (0.432991) | 0.004032 / 0.000200 (0.003832) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027807 / 0.037411 (-0.009604) | 0.107225 / 0.014526 (0.092699) | 0.120157 / 0.176557 (-0.056400) | 0.174130 / 0.737135 (-0.563005) | 0.128902 / 0.296338 (-0.167437) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395996 / 0.215209 (0.180787) | 3.936254 / 2.077655 (1.858599) | 1.808864 / 1.504120 (0.304744) | 1.608935 / 1.541195 (0.067741) | 1.646427 / 1.468490 
(0.177937) | 0.716026 / 4.584777 (-3.868751) | 3.815045 / 3.745712 (0.069333) | 2.271534 / 5.269862 (-2.998327) | 1.548728 / 4.565676 (-3.016948) | 0.076743 / 0.424275 (-0.347532) | 0.011575 / 0.007607 (0.003968) | 0.499202 / 0.226044 (0.273158) | 4.983754 / 2.268929 (2.714825) | 2.239319 / 55.444624 (-53.205306) | 1.919427 / 6.876477 (-4.957050) | 2.019664 / 2.142072 (-0.122408) | 0.866318 / 4.805227 (-3.938910) | 0.157309 / 6.500664 (-6.343355) | 0.063341 / 0.075469 (-0.012128) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180817 / 1.841788 (-0.660971) | 14.579869 / 8.074308 (6.505561) | 14.277848 / 10.191392 (4.086456) | 0.182560 / 0.680424 (-0.497863) | 0.017402 / 0.534201 (-0.516799) | 0.411549 / 0.579283 (-0.167734) | 0.432938 / 0.434364 (-0.001426) | 0.545067 / 0.540337 (0.004730) | 0.642173 / 1.386936 (-0.744763) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006753 / 0.011353 (-0.004600) | 0.004590 / 0.011008 (-0.006418) | 0.006111 / 0.038508 (-0.032397) | 0.032763 / 0.023109 (0.009654) | 0.401001 / 0.275898 (0.125103) | 0.428063 / 0.323480 (0.104583) | 0.003730 / 0.007986 (-0.004255) | 0.004617 / 0.004328 (0.000289) | 0.004770 / 0.004250 (0.000519) | 0.049718 / 0.037052 (0.012666) | 0.399724 / 0.258489 (0.141235) | 0.440292 / 0.293841 (0.146451) | 0.032846 / 0.128546 (-0.095700) | 0.010842 / 0.075646 (-0.064804) | 0.012642 / 0.419271 (-0.406630) | 0.046043 / 0.043533 (0.002510) | 0.390862 / 0.255139 (0.135723) | 0.407027 / 0.283200 (0.123828) | 0.099349 / 0.141683 (-0.042334) | 1.455739 / 1.452155 (0.003584) | 1.572214 / 1.492716 (0.079497) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227186 / 0.018006 (0.209180) | 0.447404 / 0.000490 (0.446914) | 0.000400 / 0.000200 (0.000200) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029830 / 0.037411 (-0.007581) | 0.112365 / 0.014526 (0.097839) | 0.125736 / 0.176557 (-0.050821) | 0.174781 / 0.737135 (-0.562354) | 0.129439 / 0.296338 (-0.166900) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444438 / 0.215209 (0.229229) | 4.459381 / 2.077655 (2.381726) | 2.264541 / 1.504120 (0.760421) | 2.075257 / 1.541195 (0.534062) | 2.181289 / 1.468490 (0.712799) | 0.725279 / 4.584777 (-3.859498) | 3.863253 / 3.745712 (0.117541) | 2.132498 / 5.269862 (-3.137364) | 1.402003 / 4.565676 (-3.163673) | 0.084268 / 0.424275 (-0.340007) | 0.011762 / 0.007607 (0.004155) | 0.556239 / 0.226044 (0.330194) | 5.617998 / 2.268929 (3.349070) | 2.754789 / 55.444624 (-52.689835) | 2.418418 / 6.876477 (-4.458059) | 2.479696 / 2.142072 (0.337624) | 0.870037 / 4.805227 (-3.935190) | 0.160480 / 6.500664 (-6.340184) | 0.064464 / 0.075469 (-0.011005) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.290916 / 1.841788 (-0.550872) | 14.783173 / 8.074308 (6.708865) | 13.355883 / 10.191392 (3.164491) | 0.169963 / 0.680424 (-0.510461) | 0.017657 / 0.534201 (-0.516544) | 0.409218 / 0.579283 (-0.170065) | 0.422942 / 0.434364 (-0.011422) | 0.494968 / 0.540337 (-0.045369) | 0.587044 / 1.386936 (-0.799892) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007183 / 0.011353 (-0.004169) | 0.004586 / 0.011008 (-0.006423) | 0.032668 / 0.038508 (-0.005840) | 0.040896 / 0.023109 (0.017787) | 0.358225 / 0.275898 (0.082327) | 0.395063 / 0.323480 (0.071583) | 0.004540 / 0.007986 (-0.003446) | 0.003849 / 0.004328 (-0.000480) | 0.005521 / 0.004250 (0.001271) | 0.053314 / 0.037052 (0.016262) | 0.362417 / 0.258489 (0.103928) | 0.414337 / 0.293841 (0.120496) | 0.030698 / 0.128546 (-0.097849) | 0.008823 / 0.075646 (-0.066823) | 0.303583 / 0.419271 (-0.115689) | 0.060277 / 0.043533 (0.016744) | 0.365938 / 0.255139 (0.110799) | 0.379554 / 0.283200 (0.096354) | 0.122545 / 0.141683 (-0.019138) | 1.712098 / 1.452155 (0.259943) | 1.802036 / 1.492716 (0.309319) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239508 / 0.018006 (0.221502) | 0.492194 / 0.000490 (0.491704) | 0.003280 / 0.000200 (0.003081) | 0.000096 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033301 / 0.037411 (-0.004110) | 0.125851 / 0.014526 (0.111325) | 0.137757 / 0.176557 (-0.038799) | 0.207603 / 0.737135 (-0.529533) | 0.143507 / 0.296338 (-0.152831) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.470662 / 0.215209 (0.255453) | 4.736017 / 2.077655 (2.658363) | 2.154152 / 1.504120 (0.650032) | 1.954243 / 1.541195 (0.413048) | 2.080186 / 1.468490 
(0.611696) | 0.622884 / 4.584777 (-3.961893) | 4.385885 / 3.745712 (0.640173) | 2.262085 / 5.269862 (-3.007776) | 1.454215 / 4.565676 (-3.111462) | 0.067342 / 0.424275 (-0.356933) | 0.012913 / 0.007607 (0.005306) | 0.600676 / 0.226044 (0.374631) | 5.915093 / 2.268929 (3.646164) | 2.664915 / 55.444624 (-52.779709) | 2.286986 / 6.876477 (-4.589490) | 2.387776 / 2.142072 (0.245704) | 0.757067 / 4.805227 (-4.048160) | 0.154625 / 6.500664 (-6.346039) | 0.074632 / 0.075469 (-0.000838) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.413229 / 1.841788 (-0.428558) | 17.433012 / 8.074308 (9.358704) | 16.980340 / 10.191392 (6.788948) | 0.218943 / 0.680424 (-0.461481) | 0.020525 / 0.534201 (-0.513676) | 0.451847 / 0.579283 (-0.127436) | 0.495587 / 0.434364 (0.061223) | 0.548739 / 0.540337 (0.008402) | 0.662120 / 1.386936 (-0.724816) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006775 / 0.011353 (-0.004577) | 0.004556 / 0.011008 (-0.006452) | 0.006462 / 0.038508 (-0.032046) | 0.039073 / 0.023109 (0.015964) | 0.429249 / 0.275898 (0.153351) | 0.469946 / 0.323480 (0.146467) | 0.004402 / 0.007986 (-0.003584) | 0.003798 / 0.004328 (-0.000530) | 0.005347 / 0.004250 (0.001097) | 0.053743 / 0.037052 (0.016691) | 0.434635 / 0.258489 (0.176146) | 0.475661 / 0.293841 (0.181820) | 0.029891 / 0.128546 (-0.098656) | 0.009058 / 0.075646 (-0.066588) | 0.010987 / 0.419271 (-0.408284) | 0.053877 / 0.043533 (0.010344) | 0.434428 / 0.255139 (0.179289) | 0.449637 / 0.283200 (0.166437) | 0.124331 / 0.141683 (-0.017352) | 1.736083 / 1.452155 (0.283928) | 1.831632 / 1.492716 (0.338916) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248428 / 0.018006 (0.230422) | 0.493113 / 0.000490 (0.492623) | 0.000429 / 0.000200 (0.000229) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031337 / 0.037411 (-0.006074) | 0.132360 / 0.014526 (0.117834) | 0.134734 / 0.176557 (-0.041822) | 0.193811 / 0.737135 (-0.543324) | 0.146883 / 0.296338 (-0.149456) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.510876 / 0.215209 (0.295666) | 5.170198 / 2.077655 (3.092543) | 2.572105 / 1.504120 (1.067985) | 2.316918 / 1.541195 (0.775723) | 2.449316 / 1.468490 (0.980826) | 0.612219 / 4.584777 (-3.972558) | 4.456740 / 3.745712 (0.711028) | 2.099757 / 5.269862 (-3.170105) | 1.293017 / 4.565676 (-3.272660) | 0.067922 / 0.424275 (-0.356353) | 0.013467 / 0.007607 (0.005860) | 0.634240 / 0.226044 (0.408196) | 6.373111 / 2.268929 (4.104182) | 3.171567 / 55.444624 (-52.273057) | 2.763411 / 6.876477 (-4.113066) | 2.845557 / 2.142072 (0.703485) | 0.763431 / 4.805227 (-4.041797) | 0.155949 / 6.500664 (-6.344715) | 0.076264 / 0.075469 (0.000795) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.468075 / 1.841788 (-0.373713) | 17.582354 / 8.074308 (9.508046) | 16.565964 / 10.191392 (6.374572) | 0.163779 / 0.680424 (-0.516644) | 0.020472 / 0.534201 (-0.513728) | 0.444416 / 0.579283 (-0.134867) | 0.488471 / 0.434364 (0.054107) | 0.550661 / 0.540337 (0.010323) | 0.667230 / 1.386936 (-0.719706) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006160 / 0.011353 (-0.005193) | 0.004093 / 0.011008 (-0.006915) | 0.056485 / 0.038508 (0.017977) | 0.033637 / 0.023109 (0.010528) | 0.296448 / 0.275898 (0.020550) | 0.332532 / 0.323480 (0.009052) | 0.003864 / 0.007986 (-0.004122) | 0.003446 / 0.004328 (-0.000883) | 0.034808 / 0.004250 (0.030558) | 0.048567 / 0.037052 (0.011514) | 0.296090 / 0.258489 (0.037601) | 0.336067 / 0.293841 (0.042226) | 0.026081 / 0.128546 (-0.102465) | 0.007875 / 0.075646 (-0.067771) | 0.286049 / 0.419271 (-0.133222) | 0.050411 / 0.043533 (0.006878) | 0.297016 / 0.255139 (0.041877) | 0.320030 / 0.283200 (0.036830) | 0.110374 / 0.141683 (-0.031308) | 1.432470 / 1.452155 (-0.019684) | 1.492479 / 1.492716 (-0.000238) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.262352 / 0.018006 (0.244346) | 0.557956 / 0.000490 (0.557467) | 0.010296 / 0.000200 (0.010096) | 0.000315 / 0.000054 (0.000260) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028801 / 0.037411 (-0.008611) | 0.109844 / 0.014526 (0.095318) | 0.122333 / 0.176557 (-0.054224) | 0.180571 / 0.737135 (-0.556564) | 0.125990 / 0.296338 (-0.170348) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401643 / 0.215209 (0.186434) | 4.020993 / 2.077655 (1.943338) | 1.815256 / 1.504120 (0.311136) | 1.619579 / 1.541195 (0.078384) | 1.708889 / 1.468490 
(0.240398) | 0.537847 / 4.584777 (-4.046930) | 3.743331 / 3.745712 (-0.002381) | 1.779891 / 5.269862 (-3.489970) | 1.021423 / 4.565676 (-3.544253) | 0.058869 / 0.424275 (-0.365406) | 0.011826 / 0.007607 (0.004218) | 0.499665 / 0.226044 (0.273621) | 4.980928 / 2.268929 (2.712000) | 2.285664 / 55.444624 (-53.158960) | 1.936553 / 6.876477 (-4.939923) | 2.090428 / 2.142072 (-0.051645) | 0.655218 / 4.805227 (-4.150009) | 0.133178 / 6.500664 (-6.367486) | 0.062991 / 0.075469 (-0.012478) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.168895 / 1.841788 (-0.672892) | 14.656773 / 8.074308 (6.582465) | 13.737921 / 10.191392 (3.546529) | 0.145383 / 0.680424 (-0.535041) | 0.017614 / 0.534201 (-0.516587) | 0.386499 / 0.579283 (-0.192784) | 0.425626 / 0.434364 (-0.008738) | 0.389572 / 0.540337 (-0.150766) | 0.386753 / 1.386936 (-1.000183) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005998 / 0.011353 (-0.005355) | 0.004265 / 0.011008 (-0.006743) | 0.034743 / 0.038508 (-0.003766) | 0.033929 / 0.023109 (0.010820) | 0.405535 / 0.275898 (0.129636) | 0.407235 / 0.323480 (0.083755) | 0.003972 / 0.007986 (-0.004013) | 0.003616 / 0.004328 (-0.000712) | 0.035278 / 0.004250 (0.031027) | 0.052990 / 0.037052 (0.015937) | 0.405228 / 0.258489 (0.146739) | 0.415007 / 0.293841 (0.121166) | 0.025951 / 0.128546 (-0.102595) | 0.007990 / 0.075646 (-0.067656) | 0.040492 / 0.419271 (-0.378779) | 0.049123 / 0.043533 (0.005591) | 0.399282 / 0.255139 (0.144143) | 0.384303 / 0.283200 (0.101103) | 0.115234 / 0.141683 (-0.026448) | 1.476904 / 1.452155 (0.024749) | 1.627191 / 1.492716 (0.134475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209211 / 0.018006 (0.191205) | 0.566718 / 0.000490 (0.566228) | 0.002094 / 0.000200 (0.001894) | 0.000104 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030885 / 0.037411 (-0.006526) | 0.110777 / 0.014526 (0.096251) | 0.124382 / 0.176557 (-0.052174) | 0.175081 / 0.737135 (-0.562054) | 0.130263 / 0.296338 (-0.166075) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448091 / 0.215209 (0.232882) | 4.484404 / 2.077655 (2.406749) | 2.278438 / 1.504120 (0.774318) | 2.087933 / 1.541195 (0.546738) | 2.186709 / 1.468490 (0.718219) | 0.534822 / 4.584777 (-4.049955) | 3.778229 / 3.745712 (0.032517) | 3.312334 / 5.269862 (-1.957528) | 1.557209 / 4.565676 (-3.008467) | 0.058923 / 0.424275 (-0.365352) | 0.011350 / 0.007607 (0.003743) | 0.550470 / 0.226044 (0.324426) | 5.480347 / 2.268929 (3.211419) | 2.781709 / 55.444624 (-52.662915) | 2.478729 / 6.876477 (-4.397748) | 2.492001 / 2.142072 (0.349929) | 0.652649 / 4.805227 (-4.152578) | 0.131334 / 6.500664 (-6.369330) | 0.065619 / 0.075469 (-0.009850) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253998 / 1.841788 (-0.587790) | 15.207433 / 8.074308 (7.133124) | 14.627842 / 10.191392 (4.436450) | 0.146947 / 0.680424 (-0.533477) | 0.017533 / 0.534201 (-0.516668) | 0.391627 / 0.579283 (-0.187656) | 0.431113 / 0.434364 (-0.003251) | 0.413886 / 0.540337 (-0.126451) | 0.414483 / 1.386936 (-0.972453) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007741 / 0.011353 (-0.003612) | 0.004584 / 0.011008 (-0.006424) | 0.067869 / 0.038508 (0.029361) | 0.041612 / 0.023109 (0.018503) | 0.377878 / 0.275898 (0.101980) | 0.421633 / 0.323480 (0.098153) | 0.004614 / 0.007986 (-0.003371) | 0.003824 / 0.004328 (-0.000504) | 0.041479 / 0.004250 (0.037229) | 0.053309 / 0.037052 (0.016256) | 0.390147 / 0.258489 (0.131658) | 0.437706 / 0.293841 (0.143865) | 0.035951 / 0.128546 (-0.092595) | 0.009231 / 0.075646 (-0.066415) | 0.357572 / 0.419271 (-0.061699) | 0.081332 / 0.043533 (0.037799) | 0.370076 / 0.255139 (0.114937) | 0.423653 / 0.283200 (0.140453) | 0.141401 / 0.141683 (-0.000282) | 1.722744 / 1.452155 (0.270589) | 1.914668 / 1.492716 (0.421952) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256568 / 0.018006 (0.238562) | 0.512243 / 0.000490 (0.511753) | 0.019913 / 0.000200 (0.019713) | 0.000136 / 0.000054 (0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031742 / 0.037411 (-0.005670) | 0.128537 / 0.014526 (0.114011) | 0.139962 / 0.176557 (-0.036594) | 0.210711 / 0.737135 (-0.526424) | 0.147162 / 0.296338 (-0.149177) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.509518 / 0.215209 (0.294309) | 5.083788 / 2.077655 (3.006134) | 2.455381 / 1.504120 (0.951262) | 2.208078 / 1.541195 (0.666883) | 2.341807 / 1.468490 
(0.873317) | 0.580014 / 4.584777 (-4.004763) | 4.599492 / 3.745712 (0.853780) | 2.403249 / 5.269862 (-2.866612) | 1.559177 / 4.565676 (-3.006500) | 0.072846 / 0.424275 (-0.351429) | 0.017327 / 0.007607 (0.009720) | 0.627747 / 0.226044 (0.401703) | 6.242586 / 2.268929 (3.973657) | 2.982875 / 55.444624 (-52.461750) | 2.588645 / 6.876477 (-4.287832) | 2.765915 / 2.142072 (0.623843) | 0.720455 / 4.805227 (-4.084772) | 0.157474 / 6.500664 (-6.343190) | 0.074295 / 0.075469 (-0.001174) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.540799 / 1.841788 (-0.300988) | 18.054632 / 8.074308 (9.980324) | 16.544036 / 10.191392 (6.352644) | 0.201423 / 0.680424 (-0.479001) | 0.020497 / 0.534201 (-0.513704) | 0.496275 / 0.579283 (-0.083008) | 0.547380 / 0.434364 (0.113017) | 0.614605 / 0.540337 (0.074267) | 0.749889 / 1.386936 (-0.637047) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006963 / 0.011353 (-0.004389) | 0.004543 / 0.011008 (-0.006465) | 0.039530 / 0.038508 (0.001022) | 0.038420 / 0.023109 (0.015311) | 0.454885 / 0.275898 (0.178987) | 0.491731 / 0.323480 (0.168251) | 0.004211 / 0.007986 (-0.003775) | 0.003673 / 0.004328 (-0.000655) | 0.038735 / 0.004250 (0.034484) | 0.052085 / 0.037052 (0.015032) | 0.448924 / 0.258489 (0.190435) | 0.499254 / 0.293841 (0.205413) | 0.030069 / 0.128546 (-0.098477) | 0.009082 / 0.075646 (-0.066565) | 0.047181 / 0.419271 (-0.372090) | 0.054758 / 0.043533 (0.011225) | 0.445035 / 0.255139 (0.189896) | 0.475090 / 0.283200 (0.191891) | 0.122641 / 0.141683 (-0.019042) | 1.706514 / 1.452155 (0.254360) | 1.855726 / 1.492716 (0.363010) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246028 / 0.018006 (0.228022) | 0.486382 / 0.000490 (0.485892) | 0.003038 / 0.000200 (0.002838) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034298 / 0.037411 (-0.003113) | 0.135364 / 0.014526 (0.120838) | 0.146102 / 0.176557 (-0.030455) | 0.207997 / 0.737135 (-0.529139) | 0.153119 / 0.296338 (-0.143219) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.528758 / 0.215209 (0.313549) | 5.243303 / 2.077655 (3.165648) | 2.617194 / 1.504120 (1.113074) | 2.400740 / 1.541195 (0.859545) | 2.534692 / 1.468490 (1.066202) | 0.585825 / 4.584777 (-3.998952) | 4.879766 / 3.745712 (1.134054) | 2.377419 / 5.269862 (-2.892443) | 1.460711 / 4.565676 (-3.104966) | 0.075572 / 0.424275 (-0.348703) | 0.013650 / 0.007607 (0.006042) | 0.697103 / 0.226044 (0.471058) | 6.444984 / 2.268929 (4.176055) | 3.227662 / 55.444624 (-52.216963) | 2.875163 / 6.876477 (-4.001314) | 2.860953 / 2.142072 (0.718881) | 0.718908 / 4.805227 (-4.086319) | 0.158005 / 6.500664 (-6.342659) | 0.077581 / 0.075469 (0.002112) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.653027 / 1.841788 (-0.188760) | 18.789342 / 8.074308 (10.715034) | 16.762678 / 10.191392 (6.571286) | 0.238920 / 0.680424 (-0.441504) | 0.020698 / 0.534201 (-0.513502) | 0.512634 / 0.579283 (-0.066649) | 0.542235 / 0.434364 (0.107871) | 0.626634 / 0.540337 (0.086297) | 0.753324 / 1.386936 (-0.633612) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005737 / 0.011353 (-0.005616) | 0.003767 / 0.011008 (-0.007241) | 0.097792 / 0.038508 (0.059284) | 0.028466 / 0.023109 (0.005356) | 0.317703 / 0.275898 (0.041805) | 0.359512 / 0.323480 (0.036032) | 0.003428 / 0.007986 (-0.004558) | 0.002848 / 0.004328 (-0.001481) | 0.075668 / 0.004250 (0.071418) | 0.037165 / 0.037052 (0.000113) | 0.329539 / 0.258489 (0.071050) | 0.361365 / 0.293841 (0.067524) | 0.024777 / 0.128546 (-0.103769) | 0.008324 / 0.075646 (-0.067323) | 0.317346 / 0.419271 (-0.101926) | 0.043296 / 0.043533 (-0.000237) | 0.315318 / 0.255139 (0.060179) | 0.347641 / 0.283200 (0.064441) | 0.089551 / 0.141683 (-0.052132) | 1.506335 / 1.452155 (0.054180) | 1.573931 / 1.492716 (0.081215) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208041 / 0.018006 (0.190034) | 0.428198 / 0.000490 (0.427708) | 0.002568 / 0.000200 (0.002369) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023745 / 0.037411 (-0.013667) | 0.096256 / 0.014526 (0.081730) | 0.104917 / 0.176557 (-0.071639) | 0.164341 / 0.737135 (-0.572794) | 0.107972 / 0.296338 (-0.188367) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.453995 / 0.215209 (0.238786) | 4.546892 / 2.077655 (2.469238) | 2.185498 / 1.504120 (0.681378) | 1.989156 / 1.541195 (0.447962) | 2.053443 / 1.468490 
(0.584953) | 0.559940 / 4.584777 (-4.024837) | 3.420759 / 3.745712 (-0.324954) | 1.771528 / 5.269862 (-3.498333) | 1.139692 / 4.565676 (-3.425984) | 0.067686 / 0.424275 (-0.356589) | 0.011729 / 0.007607 (0.004122) | 0.558001 / 0.226044 (0.331957) | 5.583886 / 2.268929 (3.314957) | 2.678726 / 55.444624 (-52.765899) | 2.324127 / 6.876477 (-4.552350) | 2.472805 / 2.142072 (0.330733) | 0.663163 / 4.805227 (-4.142065) | 0.134892 / 6.500664 (-6.365772) | 0.066722 / 0.075469 (-0.008747) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.195200 / 1.841788 (-0.646587) | 13.602517 / 8.074308 (5.528209) | 14.036344 / 10.191392 (3.844952) | 0.143759 / 0.680424 (-0.536665) | 0.017215 / 0.534201 (-0.516986) | 0.383749 / 0.579283 (-0.195534) | 0.388229 / 0.434364 (-0.046134) | 0.469366 / 0.540337 (-0.070971) | 0.560408 / 1.386936 (-0.826528) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005953 / 0.011353 (-0.005400) | 0.003840 / 0.011008 (-0.007168) | 0.077481 / 0.038508 (0.038973) | 0.028318 / 0.023109 (0.005209) | 0.403991 / 0.275898 (0.128093) | 0.433374 / 0.323480 (0.109894) | 0.003572 / 0.007986 (-0.004414) | 0.003033 / 0.004328 (-0.001295) | 0.075873 / 0.004250 (0.071623) | 0.039321 / 0.037052 (0.002269) | 0.416790 / 0.258489 (0.158301) | 0.459368 / 0.293841 (0.165527) | 0.025270 / 0.128546 (-0.103276) | 0.008574 / 0.075646 (-0.067072) | 0.083376 / 0.419271 (-0.335896) | 0.043206 / 0.043533 (-0.000327) | 0.404831 / 0.255139 (0.149692) | 0.418559 / 0.283200 (0.135360) | 0.099135 / 0.141683 (-0.042548) | 1.501315 / 1.452155 (0.049160) | 1.583912 / 1.492716 (0.091195) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241510 / 0.018006 (0.223504) | 0.410473 / 0.000490 (0.409983) | 0.001857 / 0.000200 (0.001657) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025366 / 0.037411 (-0.012045) | 0.103353 / 0.014526 (0.088828) | 0.107934 / 0.176557 (-0.068622) | 0.162388 / 0.737135 (-0.574747) | 0.113550 / 0.296338 (-0.182789) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463529 / 0.215209 (0.248320) | 4.657688 / 2.077655 (2.580034) | 2.455088 / 1.504120 (0.950968) | 2.304833 / 1.541195 (0.763638) | 2.317520 / 1.468490 (0.849029) | 0.563395 / 4.584777 (-4.021382) | 3.408489 / 3.745712 (-0.337223) | 2.636379 / 5.269862 (-2.633482) | 1.425355 / 4.565676 (-3.140322) | 0.068335 / 0.424275 (-0.355940) | 0.011713 / 0.007607 (0.004106) | 0.550230 / 0.226044 (0.324186) | 5.519843 / 2.268929 (3.250915) | 2.864986 / 55.444624 (-52.579639) | 2.604821 / 6.876477 (-4.271655) | 2.701501 / 2.142072 (0.559428) | 0.668193 / 4.805227 (-4.137034) | 0.134739 / 6.500664 (-6.365925) | 0.067110 / 0.075469 (-0.008359) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.326358 / 1.841788 (-0.515430) | 14.184172 / 8.074308 (6.109864) | 14.139245 / 10.191392 (3.947853) | 0.151881 / 0.680424 (-0.528542) | 0.016718 / 0.534201 (-0.517483) | 0.367035 / 0.579283 (-0.212248) | 0.393512 / 0.434364 (-0.040852) | 0.441261 / 0.540337 (-0.099076) | 0.533907 / 1.386936 (-0.853029) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006275 / 0.011353 (-0.005078) | 0.003980 / 0.011008 (-0.007028) | 0.097617 / 0.038508 (0.059109) | 0.034089 / 0.023109 (0.010980) | 0.297381 / 0.275898 (0.021483) | 0.330106 / 0.323480 (0.006626) | 0.003838 / 0.007986 (-0.004148) | 0.004042 / 0.004328 (-0.000287) | 0.074305 / 0.004250 (0.070055) | 0.048318 / 0.037052 (0.011265) | 0.295585 / 0.258489 (0.037096) | 0.346924 / 0.293841 (0.053083) | 0.027397 / 0.128546 (-0.101150) | 0.008452 / 0.075646 (-0.067194) | 0.326837 / 0.419271 (-0.092435) | 0.049515 / 0.043533 (0.005982) | 0.303931 / 0.255139 (0.048792) | 0.317647 / 0.283200 (0.034447) | 0.098280 / 0.141683 (-0.043403) | 1.442603 / 1.452155 (-0.009552) | 1.524050 / 1.492716 (0.031334) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215095 / 0.018006 (0.197089) | 0.437662 / 0.000490 (0.437173) | 0.009771 / 0.000200 (0.009571) | 0.000401 / 0.000054 (0.000346) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027169 / 0.037411 (-0.010243) | 0.111383 / 0.014526 (0.096857) | 0.116163 / 0.176557 (-0.060394) | 0.173134 / 0.737135 (-0.564001) | 0.122376 / 0.296338 (-0.173962) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398332 / 0.215209 (0.183123) | 3.974166 / 2.077655 (1.896511) | 1.793847 / 1.504120 (0.289727) | 1.615117 / 1.541195 (0.073922) | 1.660288 / 1.468490 
(0.191798) | 0.523833 / 4.584777 (-4.060944) | 3.704273 / 3.745712 (-0.041439) | 1.873308 / 5.269862 (-3.396554) | 1.203546 / 4.565676 (-3.362131) | 0.064949 / 0.424275 (-0.359326) | 0.011830 / 0.007607 (0.004223) | 0.497294 / 0.226044 (0.271250) | 4.948663 / 2.268929 (2.679735) | 2.233391 / 55.444624 (-53.211234) | 1.903208 / 6.876477 (-4.973269) | 2.067908 / 2.142072 (-0.074164) | 0.644256 / 4.805227 (-4.160971) | 0.142798 / 6.500664 (-6.357866) | 0.064734 / 0.075469 (-0.010735) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.172313 / 1.841788 (-0.669475) | 14.665853 / 8.074308 (6.591545) | 13.147051 / 10.191392 (2.955659) | 0.139338 / 0.680424 (-0.541086) | 0.017452 / 0.534201 (-0.516749) | 0.395660 / 0.579283 (-0.183623) | 0.410138 / 0.434364 (-0.024226) | 0.460357 / 0.540337 (-0.079980) | 0.555670 / 1.386936 (-0.831266) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006247 / 0.011353 (-0.005106) | 0.004098 / 0.011008 (-0.006910) | 0.075050 / 0.038508 (0.036542) | 0.033232 / 0.023109 (0.010122) | 0.384139 / 0.275898 (0.108241) | 0.420865 / 0.323480 (0.097385) | 0.003889 / 0.007986 (-0.004096) | 0.003336 / 0.004328 (-0.000993) | 0.073837 / 0.004250 (0.069587) | 0.048775 / 0.037052 (0.011723) | 0.386373 / 0.258489 (0.127884) | 0.421718 / 0.293841 (0.127878) | 0.027553 / 0.128546 (-0.100993) | 0.008724 / 0.075646 (-0.066922) | 0.080970 / 0.419271 (-0.338302) | 0.045981 / 0.043533 (0.002448) | 0.364381 / 0.255139 (0.109242) | 0.391203 / 0.283200 (0.108004) | 0.101681 / 0.141683 (-0.040002) | 1.469533 / 1.452155 (0.017378) | 1.562016 / 1.492716 (0.069300) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222318 / 0.018006 (0.204312) | 0.441395 / 0.000490 (0.440905) | 0.000408 / 0.000200 (0.000208) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030291 / 0.037411 (-0.007120) | 0.114053 / 0.014526 (0.099527) | 0.123124 / 0.176557 (-0.053433) | 0.173474 / 0.737135 (-0.563661) | 0.129946 / 0.296338 (-0.166393) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430342 / 0.215209 (0.215133) | 4.309782 / 2.077655 (2.232128) | 2.110668 / 1.504120 (0.606548) | 1.922881 / 1.541195 (0.381687) | 1.993562 / 1.468490 (0.525072) | 0.523682 / 4.584777 (-4.061095) | 3.774152 / 3.745712 (0.028440) | 3.354783 / 5.269862 (-1.915079) | 1.489793 / 4.565676 (-3.075884) | 0.065169 / 0.424275 (-0.359107) | 0.011626 / 0.007607 (0.004019) | 0.539126 / 0.226044 (0.313081) | 5.372593 / 2.268929 (3.103664) | 2.570652 / 55.444624 (-52.873973) | 2.253353 / 6.876477 (-4.623123) | 2.312876 / 2.142072 (0.170804) | 0.644241 / 4.805227 (-4.160986) | 0.138326 / 6.500664 (-6.362338) | 0.064491 / 0.075469 (-0.010979) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.344164 / 1.841788 (-0.497624) | 15.124679 / 8.074308 (7.050371) | 14.799310 / 10.191392 (4.607918) | 0.149054 / 0.680424 (-0.531370) | 0.017564 / 0.534201 (-0.516637) | 0.394593 / 0.579283 (-0.184690) | 0.428768 / 0.434364 (-0.005596) | 0.468235 / 0.540337 (-0.072103) | 0.557384 / 1.386936 (-0.829552) |\n\n</details>\n</details>\n\n\n",
"@albertvillanova could you take a look at this one ? It directly follows the arrow formatting PR",
"I added tests for the `__array__` case which lets you go from any tensor format to any other tensor format.\r\n\r\nI also properly deprecated format_type and added a warning message.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007838 / 0.011353 (-0.003515) | 0.005177 / 0.011008 (-0.005831) | 0.131058 / 0.038508 (0.092550) | 0.035959 / 0.023109 (0.012850) | 0.414071 / 0.275898 (0.138173) | 0.429628 / 0.323480 (0.106148) | 0.005151 / 0.007986 (-0.002834) | 0.003979 / 0.004328 (-0.000349) | 0.103209 / 0.004250 (0.098958) | 0.046200 / 0.037052 (0.009148) | 0.414020 / 0.258489 (0.155531) | 0.475748 / 0.293841 (0.181907) | 0.041031 / 0.128546 (-0.087515) | 0.014462 / 0.075646 (-0.061185) | 0.423706 / 0.419271 (0.004434) | 0.063488 / 0.043533 (0.019955) | 0.404937 / 0.255139 (0.149798) | 0.404973 / 0.283200 (0.121773) | 0.114982 / 0.141683 (-0.026701) | 1.911867 / 1.452155 (0.459713) | 1.925274 / 1.492716 (0.432557) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284656 / 0.018006 (0.266650) | 0.588329 / 0.000490 (0.587840) | 0.007092 / 0.000200 (0.006892) | 0.000143 / 0.000054 (0.000089) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025136 / 0.037411 (-0.012275) | 0.109514 / 0.014526 (0.094988) | 0.117953 / 0.176557 (-0.058603) | 0.195454 / 0.737135 (-0.541682) | 0.134243 / 0.296338 (-0.162096) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.584045 / 0.215209 (0.368836) | 6.456922 / 2.077655 (4.379267) | 2.759728 / 1.504120 (1.255608) | 2.260913 / 1.541195 (0.719718) | 2.292535 / 1.468490 
(0.824045) | 0.906873 / 4.584777 (-3.677904) | 5.554455 / 3.745712 (1.808743) | 4.881557 / 5.269862 (-0.388305) | 2.509121 / 4.565676 (-2.056555) | 0.107191 / 0.424275 (-0.317084) | 0.014684 / 0.007607 (0.007077) | 0.761625 / 0.226044 (0.535580) | 7.582708 / 2.268929 (5.313780) | 3.150160 / 55.444624 (-52.294464) | 2.792284 / 6.876477 (-4.084193) | 2.881321 / 2.142072 (0.739248) | 1.108353 / 4.805227 (-3.696874) | 0.220129 / 6.500664 (-6.280535) | 0.075877 / 0.075469 (0.000408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.465743 / 1.841788 (-0.376045) | 17.679219 / 8.074308 (9.604911) | 18.929399 / 10.191392 (8.738007) | 0.219488 / 0.680424 (-0.460935) | 0.028435 / 0.534201 (-0.505766) | 0.512623 / 0.579283 (-0.066660) | 0.619983 / 0.434364 (0.185619) | 0.603430 / 0.540337 (0.063092) | 0.730416 / 1.386936 (-0.656520) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008285 / 0.011353 (-0.003068) | 0.005771 / 0.011008 (-0.005237) | 0.106444 / 0.038508 (0.067936) | 0.035078 / 0.023109 (0.011969) | 0.441198 / 0.275898 (0.165300) | 0.536279 / 0.323480 (0.212800) | 0.004561 / 0.007986 (-0.003424) | 0.006623 / 0.004328 (0.002294) | 0.102392 / 0.004250 (0.098142) | 0.051736 / 0.037052 (0.014684) | 0.479113 / 0.258489 (0.220624) | 0.535088 / 0.293841 (0.241247) | 0.041805 / 0.128546 (-0.086741) | 0.014031 / 0.075646 (-0.061615) | 0.115795 / 0.419271 (-0.303477) | 0.057913 / 0.043533 (0.014380) | 0.435847 / 0.255139 (0.180708) | 0.524831 / 0.283200 (0.241632) | 0.119419 / 0.141683 (-0.022263) | 1.835577 / 1.452155 (0.383423) | 1.936990 / 1.492716 (0.444273) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288422 / 0.018006 (0.270416) | 0.569776 / 0.000490 (0.569287) | 0.005652 / 0.000200 (0.005452) | 0.000139 / 0.000054 (0.000085) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034632 / 0.037411 (-0.002779) | 0.136217 / 0.014526 (0.121691) | 0.139468 / 0.176557 (-0.037089) | 0.206804 / 0.737135 (-0.530331) | 0.148733 / 0.296338 (-0.147606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.667728 / 0.215209 (0.452518) | 6.548972 / 2.077655 (4.471317) | 3.051537 / 1.504120 (1.547417) | 2.581173 / 1.541195 (1.039978) | 2.653443 / 1.468490 (1.184953) | 0.906606 / 4.584777 (-3.678171) | 5.704384 / 3.745712 (1.958672) | 2.848618 / 5.269862 (-2.421244) | 1.821402 / 4.565676 (-2.744274) | 0.118018 / 0.424275 (-0.306257) | 0.014821 / 0.007607 (0.007214) | 0.821967 / 0.226044 (0.595923) | 8.165818 / 2.268929 (5.896889) | 3.744509 / 55.444624 (-51.700116) | 2.901097 / 6.876477 (-3.975380) | 3.018068 / 2.142072 (0.875996) | 1.106155 / 4.805227 (-3.699072) | 0.263118 / 6.500664 (-6.237546) | 0.088508 / 0.075469 (0.013039) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.725860 / 1.841788 (-0.115928) | 19.411246 / 8.074308 (11.336938) | 20.807499 / 10.191392 (10.616107) | 0.238417 / 0.680424 (-0.442007) | 0.026550 / 0.534201 (-0.507651) | 0.500715 / 0.579283 (-0.078568) | 0.615547 / 0.434364 (0.181183) | 0.614361 / 0.540337 (0.074023) | 0.720365 / 1.386936 (-0.666571) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006640 / 0.011353 (-0.004713) | 0.004079 / 0.011008 (-0.006930) | 0.100555 / 0.038508 (0.062046) | 0.037318 / 0.023109 (0.014209) | 0.320050 / 0.275898 (0.044152) | 0.358860 / 0.323480 (0.035380) | 0.003828 / 0.007986 (-0.004158) | 0.003215 / 0.004328 (-0.001113) | 0.076577 / 0.004250 (0.072326) | 0.048080 / 0.037052 (0.011028) | 0.324759 / 0.258489 (0.066270) | 0.361862 / 0.293841 (0.068021) | 0.030759 / 0.128546 (-0.097787) | 0.008998 / 0.075646 (-0.066648) | 0.329105 / 0.419271 (-0.090167) | 0.051407 / 0.043533 (0.007875) | 0.311067 / 0.255139 (0.055928) | 0.334401 / 0.283200 (0.051201) | 0.098307 / 0.141683 (-0.043376) | 1.500931 / 1.452155 (0.048776) | 1.574646 / 1.492716 (0.081930) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219080 / 0.018006 (0.201073) | 0.447117 / 0.000490 (0.446627) | 0.009091 / 0.000200 (0.008891) | 0.000396 / 0.000054 (0.000341) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026048 / 0.037411 (-0.011363) | 0.112714 / 0.014526 (0.098188) | 0.116426 / 0.176557 (-0.060131) | 0.172187 / 0.737135 (-0.564948) | 0.121707 / 0.296338 (-0.174632) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.358898 / 0.215209 (0.143689) | 3.589212 / 2.077655 (1.511557) | 1.677927 / 1.504120 (0.173807) | 1.515861 / 1.541195 (-0.025334) | 1.598479 / 1.468490 
(0.129989) | 0.478265 / 4.584777 (-4.106512) | 3.834982 / 3.745712 (0.089270) | 1.933815 / 5.269862 (-3.336047) | 1.122769 / 4.565676 (-3.442908) | 0.066984 / 0.424275 (-0.357291) | 0.011276 / 0.007607 (0.003669) | 0.512530 / 0.226044 (0.286486) | 5.112667 / 2.268929 (2.843739) | 2.266336 / 55.444624 (-53.178288) | 1.929671 / 6.876477 (-4.946806) | 2.127231 / 2.142072 (-0.014842) | 0.671307 / 4.805227 (-4.133920) | 0.143919 / 6.500664 (-6.356745) | 0.066086 / 0.075469 (-0.009383) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.208767 / 1.841788 (-0.633021) | 15.008415 / 8.074308 (6.934106) | 14.085442 / 10.191392 (3.894050) | 0.184164 / 0.680424 (-0.496260) | 0.017619 / 0.534201 (-0.516582) | 0.394443 / 0.579283 (-0.184840) | 0.457653 / 0.434364 (0.023289) | 0.473169 / 0.540337 (-0.067169) | 0.571332 / 1.386936 (-0.815604) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007009 / 0.011353 (-0.004344) | 0.004330 / 0.011008 (-0.006678) | 0.077462 / 0.038508 (0.038954) | 0.034780 / 0.023109 (0.011671) | 0.395573 / 0.275898 (0.119675) | 0.425444 / 0.323480 (0.101964) | 0.004119 / 0.007986 (-0.003866) | 0.003597 / 0.004328 (-0.000731) | 0.075209 / 0.004250 (0.070958) | 0.050871 / 0.037052 (0.013819) | 0.402990 / 0.258489 (0.144500) | 0.445334 / 0.293841 (0.151493) | 0.032492 / 0.128546 (-0.096054) | 0.009066 / 0.075646 (-0.066581) | 0.083073 / 0.419271 (-0.336198) | 0.051661 / 0.043533 (0.008128) | 0.395207 / 0.255139 (0.140068) | 0.409556 / 0.283200 (0.126356) | 0.106035 / 0.141683 (-0.035648) | 1.506255 / 1.452155 (0.054101) | 1.598724 / 1.492716 (0.106008) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194733 / 0.018006 (0.176727) | 0.444920 / 0.000490 (0.444431) | 0.002402 / 0.000200 (0.002202) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030464 / 0.037411 (-0.006947) | 0.119153 / 0.014526 (0.104627) | 0.126081 / 0.176557 (-0.050476) | 0.179692 / 0.737135 (-0.557444) | 0.131834 / 0.296338 (-0.164504) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440153 / 0.215209 (0.224944) | 4.397504 / 2.077655 (2.319850) | 2.138320 / 1.504120 (0.634200) | 1.950596 / 1.541195 (0.409402) | 2.079792 / 1.468490 (0.611302) | 0.537606 / 4.584777 (-4.047171) | 3.689420 / 3.745712 (-0.056292) | 2.960732 / 5.269862 (-2.309129) | 1.585652 / 4.565676 (-2.980024) | 0.066102 / 0.424275 (-0.358173) | 0.011429 / 0.007607 (0.003821) | 0.537011 / 0.226044 (0.310967) | 5.342171 / 2.268929 (3.073242) | 2.624446 / 55.444624 (-52.820179) | 2.313311 / 6.876477 (-4.563166) | 2.389166 / 2.142072 (0.247094) | 0.657547 / 4.805227 (-4.147681) | 0.141640 / 6.500664 (-6.359025) | 0.066102 / 0.075469 (-0.009367) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.130471 / 1.841788 (-0.711317) | 14.824792 / 8.074308 (6.750484) | 13.436463 / 10.191392 (3.245071) | 0.155688 / 0.680424 (-0.524736) | 0.015811 / 0.534201 (-0.518390) | 0.355623 / 0.579283 (-0.223660) | 0.450604 / 0.434364 (0.016241) | 0.472542 / 0.540337 (-0.067796) | 0.563584 / 1.386936 (-0.823352) |\n\n</details>\n</details>\n\n\n"
] | 2023-05-12T16:48:49Z
| 2023-06-13T16:04:05Z
| 2023-06-13T15:57:05Z
|
MEMBER
| null | null | null |
Used the TorchFormatter to get torch tensors in iterable datasets when the format is set to "torch".
It uses the data from Arrow if possible, otherwise it applies recursive_tensorize.
When the format is set back to format_type=None, cast_to_python_objects is used.
requires https://github.com/huggingface/datasets/pull/5821
close https://github.com/huggingface/datasets/issues/5793
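A minimal usage sketch of the behavior this enables (the dataset name is a placeholder, not from this PR):
```python
from datasets import load_dataset

# Hypothetical dataset name for illustration; any streaming-capable dataset works.
ds = load_dataset("user/some_dataset", split="train", streaming=True)
ds = ds.with_format("torch")  # examples are now yielded as torch tensors

for example in ds:
    # each value that can be tensorized is returned as a torch.Tensor
    print({k: type(v) for k, v in example.items()})
    break
```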
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5852/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5852/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5852.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5852",
"merged_at": "2023-06-13T15:57:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5852.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5852"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6504
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6504/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6504/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6504/events
|
https://github.com/huggingface/datasets/issues/6504
| 2,044,541,154
|
I_kwDODunzps553Tji
| 6,504
|
Error Pushing to Hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/55055083?v=4",
"events_url": "https://api.github.com/users/Jiayi-Pan/events{/privacy}",
"followers_url": "https://api.github.com/users/Jiayi-Pan/followers",
"following_url": "https://api.github.com/users/Jiayi-Pan/following{/other_user}",
"gists_url": "https://api.github.com/users/Jiayi-Pan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Jiayi-Pan",
"id": 55055083,
"login": "Jiayi-Pan",
"node_id": "MDQ6VXNlcjU1MDU1MDgz",
"organizations_url": "https://api.github.com/users/Jiayi-Pan/orgs",
"received_events_url": "https://api.github.com/users/Jiayi-Pan/received_events",
"repos_url": "https://api.github.com/users/Jiayi-Pan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Jiayi-Pan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jiayi-Pan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Jiayi-Pan",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2023-12-16T01:05:22Z
| 2023-12-16T06:20:53Z
| 2023-12-16T06:20:53Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Error when trying to push a dataset with an `Array2D` feature to the Hub
### Steps to reproduce the bug
```python
import datasets
from datasets import Dataset
dataset_dict = {
"filename": ["apple", "banana"],
"token": [[[1,2],[3,4]],[[1,2],[3,4]]],
"label": [0, 1],
}
dataset = Dataset.from_dict(dataset_dict)
dataset = dataset.cast_column("token", datasets.features.features.Array2D(shape=(2, 2),dtype="int16"))
dataset.push_to_hub("SequenceModel/imagenet_val_256")
```
Error:
```
...
ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/tuple'
in "<unicode string>", line 8, column 16:
shape: !!python/tuple
^
```
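For context, the traceback comes from PyYAML's safe loader, which refuses Python-specific tags such as `!!python/tuple`; a minimal sketch reproducing the same error outside `datasets`:
```python
import yaml

doc = "shape: !!python/tuple [2, 2]"

# SafeLoader rejects python-specific tags, raising the same ConstructorError
try:
    yaml.safe_load(doc)
except yaml.constructor.ConstructorError as e:
    print(e)

# The unsafe loader can construct the tuple (at the cost of safety)
print(yaml.unsafe_load(doc))  # {'shape': (2, 2)}
```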
### Expected behavior
Dataset being pushed to hub
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.19.0-1022-gcp-x86_64-with-glibc2.35
- Python version: 3.11.5
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/55055083?v=4",
"events_url": "https://api.github.com/users/Jiayi-Pan/events{/privacy}",
"followers_url": "https://api.github.com/users/Jiayi-Pan/followers",
"following_url": "https://api.github.com/users/Jiayi-Pan/following{/other_user}",
"gists_url": "https://api.github.com/users/Jiayi-Pan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Jiayi-Pan",
"id": 55055083,
"login": "Jiayi-Pan",
"node_id": "MDQ6VXNlcjU1MDU1MDgz",
"organizations_url": "https://api.github.com/users/Jiayi-Pan/orgs",
"received_events_url": "https://api.github.com/users/Jiayi-Pan/received_events",
"repos_url": "https://api.github.com/users/Jiayi-Pan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Jiayi-Pan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jiayi-Pan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Jiayi-Pan",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6504/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6504/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4630
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4630/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4630/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4630/events
|
https://github.com/huggingface/datasets/pull/4630
| 1,293,470,728
|
PR_kwDODunzps460HFM
| 4,630
|
fix(dataset_wrappers): Fixes access to fsspec.asyn in torch_iterable_dataset.py.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4120639?v=4",
"events_url": "https://api.github.com/users/gugarosa/events{/privacy}",
"followers_url": "https://api.github.com/users/gugarosa/followers",
"following_url": "https://api.github.com/users/gugarosa/following{/other_user}",
"gists_url": "https://api.github.com/users/gugarosa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gugarosa",
"id": 4120639,
"login": "gugarosa",
"node_id": "MDQ6VXNlcjQxMjA2Mzk=",
"organizations_url": "https://api.github.com/users/gugarosa/orgs",
"received_events_url": "https://api.github.com/users/gugarosa/received_events",
"repos_url": "https://api.github.com/users/gugarosa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gugarosa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gugarosa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gugarosa",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-04T18:26:55Z
| 2022-07-05T15:19:52Z
| 2022-07-05T15:08:21Z
|
CONTRIBUTOR
| null | null | null |
Fix #4612.
Apparently, recent `fsspec` versions no longer expose attribute-based submodules such as `fsspec.asyn` unless they are explicitly imported.
Thus, @mariosasko suggested adding the missing import to the module so that the attribute can be accessed.
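A minimal sketch of the kind of change (the worker-reset pattern below is an assumption based on what `torch_iterable_dataset.py` does, not a quote of the diff):
```python
import fsspec
import fsspec.asyn  # explicit import; attribute access on fsspec alone may fail

def _set_fsspec_for_multiprocess() -> None:
    """Clear the fsspec event loop and IO thread before dataloader workers
    fork, so that each worker process creates its own."""
    fsspec.asyn.iothread[0] = None
    fsspec.asyn.loop[0] = None
```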
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4630/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4630/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4630.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4630",
"merged_at": "2022-07-05T15:08:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4630.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4630"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7006
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7006/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7006/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7006/events
|
https://github.com/huggingface/datasets/issues/7006
| 2,379,581,543
|
I_kwDODunzps6N1Yhn
| 7,006
|
CI is broken after ruff-0.5.0: E721
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[] | 2024-06-28T05:03:28Z
| 2024-06-28T05:25:18Z
| 2024-06-28T05:25:18Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
After the ruff-0.5.0 release (https://github.com/astral-sh/ruff/releases/tag/0.5.0), our CI is broken due to the E721 rule.
See: https://github.com/huggingface/datasets/actions/runs/9707641618/job/26793170961?pr=6983
> src/datasets/features/features.py:844:12: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
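For reference, the pattern E721 flags and its accepted rewrites (illustrative snippet, not the actual `features.py` code):
```python
x = {}

# Flagged by E721: equality comparison between types
if type(x) == dict:
    pass

# Accepted: identity comparison for exact type checks
if type(x) is dict:
    pass

# Accepted: isinstance() when subclasses should also match
if isinstance(x, dict):
    pass
```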
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7006/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7006/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4797
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4797/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4797/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4797/events
|
https://github.com/huggingface/datasets/pull/4797
| 1,330,000,998
|
PR_kwDODunzps48uL-t
| 4,797
|
Torgo dataset creation
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/75192317?v=4",
"events_url": "https://api.github.com/users/YingLi001/events{/privacy}",
"followers_url": "https://api.github.com/users/YingLi001/followers",
"following_url": "https://api.github.com/users/YingLi001/following{/other_user}",
"gists_url": "https://api.github.com/users/YingLi001/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/YingLi001",
"id": 75192317,
"login": "YingLi001",
"node_id": "MDQ6VXNlcjc1MTkyMzE3",
"organizations_url": "https://api.github.com/users/YingLi001/orgs",
"received_events_url": "https://api.github.com/users/YingLi001/received_events",
"repos_url": "https://api.github.com/users/YingLi001/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/YingLi001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YingLi001/subscriptions",
"type": "User",
"url": "https://api.github.com/users/YingLi001",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @YingLi001, thanks for your proposal to add this dataset.\r\n\r\nHowever, now we add datasets directly to the Hub (instead of our GitHub repository). You have the instructions in our docs: \r\n- [Create a dataset loading script](https://huggingface.co/docs/datasets/dataset_script)\r\n- [Create a dataset card](https://huggingface.co/docs/datasets/dataset_card)\r\n- [Share](https://huggingface.co/docs/datasets/share)\r\n\r\nFeel free to ask if you need any additional support/help."
] | 2022-08-05T14:18:26Z
| 2022-08-09T18:46:00Z
| 2022-08-09T18:46:00Z
|
NONE
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4797/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4797/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4797.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4797",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4797.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4797"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6483
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6483/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6483/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6483/events
|
https://github.com/huggingface/datasets/issues/6483
| 2,032,946,981
|
I_kwDODunzps55LE8l
| 6,483
|
Iterable Dataset: rename column clashes with remove column
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanchit-gandhi",
"id": 93869735,
"login": "sanchit-gandhi",
"node_id": "U_kgDOBZhWpw",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanchit-gandhi",
"user_view_type": "public"
}
|
[
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] |
closed
| false
| null |
[] | null |
[
"Column \"text\" doesn't exist anymore so you can't remove it",
"You can get the expected result by fixing typos in the snippet :)\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# load LS in streaming mode\r\ndataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# check original features\r\ndataset_features = dataset.features.keys()\r\nprint(\"Original features: \", dataset_features)\r\n\r\n# rename \"text\" -> \"sentence\"\r\ndataset = dataset.rename_column(\"text\", \"sentence\")\r\n\r\n# remove unwanted columns\r\nCOLUMNS_TO_KEEP = {\"audio\", \"sentence\"}\r\ndataset = dataset.remove_columns(set(dataset.features) - COLUMNS_TO_KEEP)\r\n\r\n# stream first sample, should return \"audio\" and \"sentence\" columns\r\nprint(next(iter(dataset)))\r\n```",
"Fixed code:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# load LS in streaming mode\r\ndataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# check original features\r\ndataset_features = dataset.features.keys()\r\nprint(\"Original features: \", dataset_features)\r\n\r\n# rename \"text\" -> \"sentence\"\r\ndataset = dataset.rename_column(\"text\", \"sentence\")\r\ndataset_features = dataset.features.keys()\r\n\r\n# remove unwanted columns\r\nCOLUMNS_TO_KEEP = {\"audio\", \"sentence\"}\r\ndataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))\r\n\r\n# stream first sample, should return \"audio\" and \"sentence\" columns\r\nprint(next(iter(dataset)))\r\n```",
"Whoops 😅 Thanks for the swift reply both! Works like a charm!"
] | 2023-12-08T16:11:30Z
| 2023-12-08T16:27:16Z
| 2023-12-08T16:27:04Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Suppose I have a two iterable datasets, one with the features:
* `{"audio", "text", "column_a"}`
And the other with the features:
* `{"audio", "sentence", "column_b"}`
I want to combine both datasets using `interleave_datasets`, which requires me to unify the column names. I would typically do this by:
1. Renaming the common columns to the same name (e.g. `"text"` -> `"sentence"`)
2. Removing the unwanted columns (e.g. `"column_a"`, `"column_b"`)
However, renaming and then removing columns in an iterable dataset doesn't work: the iterator still expects the original text column, meaning we can't combine the datasets. A sketch of the intended end-to-end workflow is shown below.
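For reference, here is a minimal sketch of the workflow I am after (the second dataset name is purely illustrative; any streaming dataset with a differently named text column would do):
```python
from datasets import interleave_datasets, load_dataset

# unify the column names of two streaming datasets, then interleave them
ds_a = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
ds_b = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="validation", streaming=True)

# rename the common column so both datasets agree on "sentence"
ds_a = ds_a.rename_column("text", "sentence")

# drop everything except the shared columns
COLUMNS_TO_KEEP = {"audio", "sentence"}
ds_a = ds_a.remove_columns(set(ds_a.features) - COLUMNS_TO_KEEP)
ds_b = ds_b.remove_columns(set(ds_b.features) - COLUMNS_TO_KEEP)

# alternate between the two streams
dataset = interleave_datasets([ds_a, ds_b])
print(next(iter(dataset)))
```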
### Steps to reproduce the bug
```python
from datasets import load_dataset
# load LS in streaming mode
dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
# check original features
dataset_features = dataset.features.keys()
print("Original features: ", dataset_features)
# rename "text" -> "sentence"
dataset = dataset.rename_column("text", "sentence")
# remove unwanted columns
COLUMNS_TO_KEEP = {"audio", "sentence"}
dataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))
# stream first sample, should return "audio" and "sentence" columns
print(next(iter(dataset)))
```
Traceback:
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[5], line 17
14 COLUMNS_TO_KEEP = {"audio", "sentence"}
15 dataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))
---> 17 print(next(iter(dataset)))
File ~/datasets/src/datasets/iterable_dataset.py:1353, in IterableDataset.__iter__(self)
1350 yield formatter.format_row(pa_table)
1351 return
-> 1353 for key, example in ex_iterable:
1354 if self.features:
1355 # `IterableDataset` automatically fills missing columns with None.
1356 # This is done with `_apply_feature_types_on_example`.
1357 example = _apply_feature_types_on_example(
1358 example, self.features, token_per_repo_id=self._token_per_repo_id
1359 )
File ~/datasets/src/datasets/iterable_dataset.py:652, in MappedExamplesIterable.__iter__(self)
650 yield from ArrowExamplesIterable(self._iter_arrow, {})
651 else:
--> 652 yield from self._iter()
File ~/datasets/src/datasets/iterable_dataset.py:729, in MappedExamplesIterable._iter(self)
727 if self.remove_columns:
728 for c in self.remove_columns:
--> 729 del transformed_example[c]
730 yield key, transformed_example
731 current_idx += 1
KeyError: 'text'
```
=> we see that `datasets` is still looking for the column "text", even though we've renamed it to "sentence" and then removed the unwanted "text" column from our dataset.
### Expected behavior
Should be able to rename and remove columns from iterable dataset.
### Environment info
- `datasets` version: 2.15.1.dev0
- Platform: macOS-13.5.1-arm64-arm-64bit
- Python version: 3.11.6
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.2
- `fsspec` version: 2023.9.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanchit-gandhi",
"id": 93869735,
"login": "sanchit-gandhi",
"node_id": "U_kgDOBZhWpw",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanchit-gandhi",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6483/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6483/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6550
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6550/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6550/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6550/events
|
https://github.com/huggingface/datasets/pull/6550
| 2,062,556,493
|
PR_kwDODunzps5jD1OL
| 6,550
|
Multi gpu docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6550). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks @lhoestq . This is a very important fix for code to run on multiple GPUs. Otherwise, only one GPU is working. I wish it can be merged soon. \r\nI also wrote a [blog post](https://forrestbao.github.io/2024/01/30/datasets_map_with_rank_multiple_GPUs.html) with a complete example in case it can be helpful to someone. Please feel free to use complete example in any documentation. \r\n",
"Thanks a lot @forrestbao ! I reused parts of your code for the documentation, I'm sure it will be useful to many people !",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005662 / 0.011353 (-0.005691) | 0.003930 / 0.011008 (-0.007078) | 0.063807 / 0.038508 (0.025299) | 0.030227 / 0.023109 (0.007118) | 0.235338 / 0.275898 (-0.040560) | 0.264433 / 0.323480 (-0.059047) | 0.004226 / 0.007986 (-0.003759) | 0.002847 / 0.004328 (-0.001481) | 0.048998 / 0.004250 (0.044747) | 0.042713 / 0.037052 (0.005660) | 0.250504 / 0.258489 (-0.007985) | 0.281101 / 0.293841 (-0.012740) | 0.029123 / 0.128546 (-0.099423) | 0.011388 / 0.075646 (-0.064258) | 0.211342 / 0.419271 (-0.207930) | 0.036437 / 0.043533 (-0.007096) | 0.238909 / 0.255139 (-0.016230) | 0.255853 / 0.283200 (-0.027347) | 0.018852 / 0.141683 (-0.122831) | 1.131870 / 1.452155 (-0.320284) | 1.209007 / 1.492716 (-0.283710) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092433 / 0.018006 (0.074427) | 0.303045 / 0.000490 (0.302556) | 0.000291 / 0.000200 (0.000091) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018349 / 0.037411 (-0.019062) | 0.062527 / 0.014526 (0.048002) | 0.075347 / 0.176557 (-0.101210) | 0.120587 / 0.737135 (-0.616549) | 0.075171 / 0.296338 (-0.221167) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288364 / 0.215209 (0.073155) | 2.775779 / 2.077655 (0.698124) | 1.490875 / 1.504120 (-0.013245) | 1.375451 / 1.541195 (-0.165744) | 1.398923 / 
1.468490 (-0.069567) | 0.588659 / 4.584777 (-3.996117) | 2.458114 / 3.745712 (-1.287598) | 2.928910 / 5.269862 (-2.340951) | 1.834221 / 4.565676 (-2.731456) | 0.064503 / 0.424275 (-0.359772) | 0.005028 / 0.007607 (-0.002580) | 0.340386 / 0.226044 (0.114341) | 3.408697 / 2.268929 (1.139769) | 1.843613 / 55.444624 (-53.601012) | 1.569300 / 6.876477 (-5.307177) | 1.636761 / 2.142072 (-0.505312) | 0.687854 / 4.805227 (-4.117374) | 0.123462 / 6.500664 (-6.377202) | 0.042877 / 0.075469 (-0.032593) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.984054 / 1.841788 (-0.857734) | 12.243934 / 8.074308 (4.169626) | 10.835244 / 10.191392 (0.643852) | 0.131609 / 0.680424 (-0.548815) | 0.014000 / 0.534201 (-0.520201) | 0.292070 / 0.579283 (-0.287213) | 0.271958 / 0.434364 (-0.162406) | 0.326866 / 0.540337 (-0.213471) | 0.440880 / 1.386936 (-0.946056) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005954 / 0.011353 (-0.005399) | 0.004123 / 0.011008 (-0.006885) | 0.050371 / 0.038508 (0.011863) | 0.034387 / 0.023109 (0.011277) | 0.273254 / 0.275898 (-0.002644) | 0.297785 / 0.323480 (-0.025695) | 0.004619 / 0.007986 (-0.003367) | 0.002884 / 0.004328 (-0.001444) | 0.050236 / 0.004250 (0.045986) | 0.048586 / 0.037052 (0.011533) | 0.283878 / 0.258489 (0.025389) | 0.315218 / 0.293841 (0.021377) | 0.060688 / 0.128546 (-0.067859) | 0.011991 / 0.075646 (-0.063655) | 0.059518 / 0.419271 (-0.359753) | 0.036113 / 0.043533 (-0.007420) | 0.274767 / 0.255139 (0.019628) | 0.290620 / 0.283200 (0.007420) | 0.020070 / 0.141683 (-0.121613) | 1.164635 / 1.452155 (-0.287519) | 1.189482 / 1.492716 (-0.303234) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095171 / 0.018006 (0.077165) | 0.307129 / 0.000490 (0.306639) | 0.000227 / 0.000200 (0.000027) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022777 / 0.037411 (-0.014634) | 0.076761 / 0.014526 (0.062235) | 0.087654 / 0.176557 (-0.088902) | 0.126729 / 0.737135 (-0.610406) | 0.089491 / 0.296338 (-0.206847) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292208 / 0.215209 (0.076999) | 2.890491 / 2.077655 (0.812836) | 1.625696 / 1.504120 (0.121576) | 1.463484 / 1.541195 (-0.077710) | 1.490889 / 1.468490 (0.022399) | 0.582155 / 4.584777 (-4.002622) | 2.492209 / 3.745712 (-1.253503) | 2.817020 / 5.269862 (-2.452842) | 1.806812 / 4.565676 (-2.758864) | 0.065830 / 0.424275 (-0.358445) | 0.005089 / 0.007607 (-0.002518) | 0.356067 / 0.226044 (0.130022) | 3.489652 / 2.268929 (1.220723) | 1.959276 / 55.444624 (-53.485348) | 1.678819 / 6.876477 (-5.197657) | 1.853581 / 2.142072 (-0.288491) | 0.660515 / 4.805227 (-4.144712) | 0.119884 / 6.500664 (-6.380780) | 0.041713 / 0.075469 (-0.033757) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.021701 / 1.841788 (-0.820087) | 12.918290 / 8.074308 (4.843982) | 11.469371 / 10.191392 (1.277979) | 0.144830 / 0.680424 (-0.535594) | 0.015858 / 0.534201 (-0.518343) | 0.290136 / 0.579283 (-0.289148) | 0.277894 / 0.434364 (-0.156470) | 0.330091 / 0.540337 (-0.210247) | 0.422697 / 1.386936 (-0.964240) |\n\n</details>\n</details>\n\n\n"
] | 2024-01-02T15:11:58Z
| 2024-01-31T13:45:15Z
| 2024-01-31T13:38:59Z
|
MEMBER
| null | null | null |
After discussions in https://github.com/huggingface/datasets/pull/6415.
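The pattern these docs cover looks roughly like this (a sketch of the approach rather than the exact snippet from the PR; the toy column names are illustrative):
```python
import torch
from multiprocess import set_start_method
from datasets import Dataset

def gpu_computation(batch, rank):
    # each worker process pins itself to one GPU based on its rank
    device = f"cuda:{rank % torch.cuda.device_count()}"
    x = torch.tensor(batch["value"], device=device)
    batch["double"] = (x * 2).cpu().tolist()
    return batch

if __name__ == "__main__":
    set_start_method("spawn")  # required to initialize CUDA in worker processes
    ds = Dataset.from_dict({"value": list(range(1024))})
    ds = ds.map(
        gpu_computation,
        batched=True,
        with_rank=True,
        num_proc=torch.cuda.device_count(),  # one process per GPU
    )
    print(ds[0])
```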
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6550/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6550/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6550.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6550",
"merged_at": "2024-01-31T13:38:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6550.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6550"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6709
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6709/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6709/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6709/events
|
https://github.com/huggingface/datasets/pull/6709
| 2,164,169,913
|
PR_kwDODunzps5oc2Fg
| 6,709
|
set dev version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6709). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005081 / 0.011353 (-0.006272) | 0.004182 / 0.011008 (-0.006826) | 0.063377 / 0.038508 (0.024869) | 0.027880 / 0.023109 (0.004770) | 0.247260 / 0.275898 (-0.028638) | 0.273580 / 0.323480 (-0.049900) | 0.002995 / 0.007986 (-0.004991) | 0.002804 / 0.004328 (-0.001524) | 0.049669 / 0.004250 (0.045418) | 0.042469 / 0.037052 (0.005417) | 0.268606 / 0.258489 (0.010117) | 0.292867 / 0.293841 (-0.000973) | 0.028077 / 0.128546 (-0.100469) | 0.011031 / 0.075646 (-0.064615) | 0.210225 / 0.419271 (-0.209047) | 0.035723 / 0.043533 (-0.007810) | 0.252131 / 0.255139 (-0.003008) | 0.272895 / 0.283200 (-0.010304) | 0.019809 / 0.141683 (-0.121874) | 1.138500 / 1.452155 (-0.313655) | 1.167752 / 1.492716 (-0.324964) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094881 / 0.018006 (0.076875) | 0.300168 / 0.000490 (0.299678) | 0.000207 / 0.000200 (0.000007) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017917 / 0.037411 (-0.019494) | 0.061854 / 0.014526 (0.047328) | 0.074481 / 0.176557 (-0.102075) | 0.120075 / 0.737135 (-0.617061) | 0.074627 / 0.296338 (-0.221711) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287888 / 0.215209 (0.072679) | 2.770165 / 2.077655 (0.692510) | 1.500071 / 1.504120 (-0.004049) | 1.374857 / 1.541195 (-0.166338) | 1.427291 / 
1.468490 (-0.041200) | 0.558431 / 4.584777 (-4.026346) | 2.439352 / 3.745712 (-1.306361) | 2.787471 / 5.269862 (-2.482391) | 1.742636 / 4.565676 (-2.823041) | 0.061716 / 0.424275 (-0.362559) | 0.004961 / 0.007607 (-0.002646) | 0.345209 / 0.226044 (0.119164) | 3.360253 / 2.268929 (1.091325) | 1.847945 / 55.444624 (-53.596680) | 1.595733 / 6.876477 (-5.280744) | 1.642350 / 2.142072 (-0.499723) | 0.638639 / 4.805227 (-4.166588) | 0.116918 / 6.500664 (-6.383746) | 0.042132 / 0.075469 (-0.033338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.980602 / 1.841788 (-0.861185) | 11.545402 / 8.074308 (3.471094) | 9.452471 / 10.191392 (-0.738921) | 0.129930 / 0.680424 (-0.550494) | 0.014143 / 0.534201 (-0.520058) | 0.290302 / 0.579283 (-0.288981) | 0.263785 / 0.434364 (-0.170579) | 0.339580 / 0.540337 (-0.200758) | 0.450355 / 1.386936 (-0.936581) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005565 / 0.011353 (-0.005788) | 0.003764 / 0.011008 (-0.007244) | 0.050082 / 0.038508 (0.011574) | 0.030354 / 0.023109 (0.007245) | 0.250609 / 0.275898 (-0.025289) | 0.277200 / 0.323480 (-0.046280) | 0.004276 / 0.007986 (-0.003710) | 0.002805 / 0.004328 (-0.001523) | 0.048765 / 0.004250 (0.044514) | 0.045477 / 0.037052 (0.008425) | 0.267704 / 0.258489 (0.009215) | 0.303214 / 0.293841 (0.009373) | 0.029393 / 0.128546 (-0.099153) | 0.010623 / 0.075646 (-0.065023) | 0.058201 / 0.419271 (-0.361070) | 0.053131 / 0.043533 (0.009599) | 0.258682 / 0.255139 (0.003543) | 0.276069 / 0.283200 (-0.007131) | 0.018260 / 0.141683 (-0.123423) | 1.141542 / 1.452155 (-0.310613) | 1.185780 / 1.492716 (-0.306936) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096857 / 0.018006 (0.078850) | 0.300656 / 0.000490 (0.300167) | 0.000450 / 0.000200 (0.000250) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022416 / 0.037411 (-0.014995) | 0.074781 / 0.014526 (0.060255) | 0.087299 / 0.176557 (-0.089257) | 0.127616 / 0.737135 (-0.609519) | 0.088382 / 0.296338 (-0.207957) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298639 / 0.215209 (0.083430) | 2.940002 / 2.077655 (0.862347) | 1.709707 / 1.504120 (0.205587) | 1.556502 / 1.541195 (0.015307) | 1.592841 / 1.468490 (0.124351) | 0.570237 / 4.584777 (-4.014539) | 2.467576 / 3.745712 (-1.278137) | 2.741021 / 5.269862 (-2.528840) | 1.776526 / 4.565676 (-2.789151) | 0.063999 / 0.424275 (-0.360276) | 0.005068 / 0.007607 (-0.002539) | 0.360727 / 0.226044 (0.134682) | 3.535404 / 2.268929 (1.266476) | 2.035345 / 55.444624 (-53.409279) | 1.755916 / 6.876477 (-5.120561) | 1.889281 / 2.142072 (-0.252791) | 0.649025 / 4.805227 (-4.156202) | 0.118210 / 6.500664 (-6.382454) | 0.040815 / 0.075469 (-0.034654) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005650 / 1.841788 (-0.836138) | 12.228314 / 8.074308 (4.154006) | 10.147363 / 10.191392 (-0.044029) | 0.159258 / 0.680424 (-0.521166) | 0.015288 / 0.534201 (-0.518913) | 0.288144 / 0.579283 (-0.291139) | 0.281319 / 0.434364 (-0.153045) | 0.323380 / 0.540337 (-0.216958) | 0.426887 / 1.386936 (-0.960049) |\n\n</details>\n</details>\n\n\n"
] | 2024-03-01T21:01:14Z
| 2024-03-01T21:07:35Z
| 2024-03-01T21:01:23Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6709/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6709/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6709.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6709",
"merged_at": "2024-03-01T21:01:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6709.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6709"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7189
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7189/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7189/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7189/events
|
https://github.com/huggingface/datasets/issues/7189
| 2,562,152,845
|
I_kwDODunzps6Yt1mN
| 7,189
|
Audio preview in dataset viewer for audio array data without a path/filename
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7157234?v=4",
"events_url": "https://api.github.com/users/Lauler/events{/privacy}",
"followers_url": "https://api.github.com/users/Lauler/followers",
"following_url": "https://api.github.com/users/Lauler/following{/other_user}",
"gists_url": "https://api.github.com/users/Lauler/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lauler",
"id": 7157234,
"login": "Lauler",
"node_id": "MDQ6VXNlcjcxNTcyMzQ=",
"organizations_url": "https://api.github.com/users/Lauler/orgs",
"received_events_url": "https://api.github.com/users/Lauler/received_events",
"repos_url": "https://api.github.com/users/Lauler/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lauler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lauler/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lauler",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2024-10-02T16:38:38Z
| 2024-10-02T17:01:40Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Hugging Face has quite a comprehensive set of guides for [audio datasets](https://huggingface.co/docs/datasets/en/audio_dataset). It seems, however, that all these guides assume the audio array data decoded into or inserted into an HF dataset always originates from individual files. The [Audio dataclass](https://github.com/huggingface/datasets/blob/3.0.1/src/datasets/features/audio.py#L20) appears designed with this assumption in mind: looking at its source code, it returns a dictionary with the keys `path`, `array` and `sampling_rate`.
However, users sometimes have pipelines where they decode the audio arrays themselves. This feature request asks for clarification in the guides on whether it is possible, and if so how, to insert already decoded audio array data into datasets (a pandas DataFrame, an HF dataset, or similar) that are later saved as Parquet, and still get a functioning audio preview in the dataset viewer.
Do I perhaps need to write a tempfile of my audio array slice to wav and capture the bytes object with `io.BytesIO` and pass that to `Audio()`?
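For what it's worth, encoding the slice to WAV bytes in memory and storing the bytes directly in the `Audio` column should avoid the tempfile entirely. A minimal sketch under that assumption (not an official recipe; the output path is illustrative):
```python
import io

import numpy as np
import soundfile as sf
from datasets import Audio, Dataset

# stand-in for an "already decoded" slice: 1 second of noise at 16 kHz
array = np.random.uniform(-1.0, 1.0, 16_000).astype(np.float32)
sampling_rate = 16_000

# encode the slice to WAV bytes without touching the filesystem
buf = io.BytesIO()
sf.write(buf, array, sampling_rate, format="WAV")

# store the bytes directly; no path/filename is needed
ds = Dataset.from_dict({"audio": [{"bytes": buf.getvalue(), "path": None}]})
ds = ds.cast_column("audio", Audio(sampling_rate=sampling_rate))
print(ds[0]["audio"]["array"].shape)  # decodes back from the in-memory bytes
ds.to_parquet("audio_test.parquet")
```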
### Motivation
I'm working with large audio datasets, and my pipeline reads (decodes) audio from larger files, and slices the relevant portions of audio from that larger file based on metadata I have available.
The pipeline is designed this way to avoid having to store multiple copies of data, and to avoid having to store tens of millions of small files.
I tried [test-uploading parquet files](https://huggingface.co/datasets/Lauler/riksdagen_test) where I store the audio array data of decoded slices of audio in an `audio` column with a dictionary with the keys `path`, `array` and `sampling_rate`. But I don't know the secret sauce of what the Huggingface Hub expects and requires to be able to display audio previews correctly.
### Your contribution
I could contribute a tool-agnostic guide on creating HF audio datasets directly as Parquet to the HF documentation if there is interest, provided you help me figure out what the dataset viewer expects in order to display the preview correctly.
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7189/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7189/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5866
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5866/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5866/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5866/events
|
https://github.com/huggingface/datasets/issues/5866
| 1,710,496,993
|
I_kwDODunzps5l9Bzh
| 5,866
|
Issue with Sequence features
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/14365168?v=4",
"events_url": "https://api.github.com/users/alialamiidrissi/events{/privacy}",
"followers_url": "https://api.github.com/users/alialamiidrissi/followers",
"following_url": "https://api.github.com/users/alialamiidrissi/following{/other_user}",
"gists_url": "https://api.github.com/users/alialamiidrissi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alialamiidrissi",
"id": 14365168,
"login": "alialamiidrissi",
"node_id": "MDQ6VXNlcjE0MzY1MTY4",
"organizations_url": "https://api.github.com/users/alialamiidrissi/orgs",
"received_events_url": "https://api.github.com/users/alialamiidrissi/received_events",
"repos_url": "https://api.github.com/users/alialamiidrissi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alialamiidrissi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alialamiidrissi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alialamiidrissi",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting! I've opened a PR with a fix."
] | 2023-05-15T17:13:29Z
| 2023-05-26T11:57:17Z
| 2023-05-26T11:57:17Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Sequence features sometimes cause errors when the specified length is not -1
### Steps to reproduce the bug
```python
import numpy as np
from datasets import Features, ClassLabel, Sequence, Value, Dataset
feats = Features(
    **{
        "target": ClassLabel(names=[0, 1]),
        "x": Sequence(feature=Value(dtype="float64", id=None), length=2, id=None),
    }
)
Dataset.from_dict(
    {"target": np.ones(2000).astype(int), "x": np.random.rand(2000, 2)},
    features=feats,
).flatten_indices()
```
Throws:
```
TypeError: Couldn't cast array of type
fixed_size_list<item: double>[2]
to
Sequence(feature=Value(dtype='float64', id=None), length=2, id=None)
```
The same code works without any issues when `length = -1`
EDIT: The error seems to happen only when the length of the dataset is bigger than 1000 for some reason
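Until this is fixed, a possible workaround sketch (mirroring the snippet above, only with an unbounded length) avoids the failing cast:
```python
import numpy as np
from datasets import ClassLabel, Dataset, Features, Sequence, Value

# same data as above, but with length=-1 the cast succeeds
feats = Features(
    **{
        "target": ClassLabel(names=[0, 1]),
        "x": Sequence(feature=Value(dtype="float64"), length=-1),
    }
)
Dataset.from_dict(
    {"target": np.ones(2000).astype(int), "x": np.random.rand(2000, 2)},
    features=feats,
).flatten_indices()  # no longer raises the fixed_size_list cast error
```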
### Expected behavior
No exception
### Environment info
- `datasets` version: 2.10.1
- Python version: 3.9.5
- PyArrow version: 11.0.0
- Pandas version: 1.4.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5866/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5866/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5585
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5585/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5585/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5585/events
|
https://github.com/huggingface/datasets/issues/5585
| 1,602,190,030
|
I_kwDODunzps5ff3rO
| 5,585
|
Cache is not transportable
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidgilbertson",
"id": 4443482,
"login": "davidgilbertson",
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidgilbertson",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! No the cache is not transportable in general. It will work on a shared filesystem if you use the same python environment, but not across machines/os/environments.\r\n\r\nIn particular, reloading cached datasets does work, but reloading cached processed datasets (e.g. from `map`) may not work. This is because some hashes used by caching are based on pickle dumps of the function you pass to `map`.\r\n\r\nFinally you may copy the cache to another machine, but all the `cached-*.arrow` files are unlikely to be reloaded.",
"OK good to know. Thanks @lhoestq !"
] | 2023-02-28T00:53:06Z
| 2023-02-28T21:26:52Z
| 2023-02-28T21:26:52Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I would like to share cache between two machines (a Windows host machine and a WSL instance).
I run most of my code in WSL. I have just run out of space in the virtual drive. Rather than expand the drive size, I plan to move the cache to the host Windows machine, thereby sharing the downloads.
I'm hoping that I can just copy/paste the cache files, but I notice that a lot of the file names start with the path name, e.g. `_home_davidg_.cache_huggingface_datasets_conll2003_default-451...98.lock` where `home/davidg` is where the cache is in WSL.
This seems to suggest that the cache is not portable and cannot be centralised or shared. Is this the case, or are the files that start with path names not integral to the caching mechanism? Copying the cache files _seems_ to work, but I'm not confident that nothing is going to break.
A related issue, when trying to load a dataset that should come from cache (running in WSL, pointing to cache on the Windows host) it seemed to work fine, but it still uses a WSL directory for `.cache\huggingface\modules\datasets_modules`. I see nothing in the docs about this, or how to point it to a different place.
I have asked a related question on the forum: https://discuss.huggingface.co/t/is-datasets-cache-operating-system-agnostic/32656
### Steps to reproduce the bug
View the cache directory in WSL/Windows.
### Expected behavior
Cache can be shared between (virtual) machines and be transportable.
It would be nice to have a simple way to say "Dear Hugging Face packages, please put ALL your cache in `blah/de/blah`" and have all the Hugging Face packages respect that single location.
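The existing knobs get partway there: `HF_DATASETS_CACHE` moves only the `datasets` cache, while `HF_HOME` moves the whole Hugging Face cache tree. A sketch (the paths are illustrative, and as the reply above notes, already-written `cached-*.arrow` files may still not reload across environments):
```python
import os

# set the variables before importing any Hugging Face library
os.environ["HF_HOME"] = "/mnt/c/hf_cache"                     # whole HF cache tree
os.environ["HF_DATASETS_CACHE"] = "/mnt/c/hf_cache/datasets"  # datasets cache only

import datasets  # noqa: E402  (imported after the env vars are set)

print(datasets.config.HF_DATASETS_CACHE)
```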
### Environment info
```
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidgilbertson",
"id": 4443482,
"login": "davidgilbertson",
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidgilbertson",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5585/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5585/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5870
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5870/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5870/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5870/events
|
https://github.com/huggingface/datasets/issues/5870
| 1,712,156,282
|
I_kwDODunzps5mDW56
| 5,870
|
Behaviour difference between datasets.map and IterableDatasets.map
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/30209072?v=4",
"events_url": "https://api.github.com/users/llStringll/events{/privacy}",
"followers_url": "https://api.github.com/users/llStringll/followers",
"following_url": "https://api.github.com/users/llStringll/following{/other_user}",
"gists_url": "https://api.github.com/users/llStringll/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/llStringll",
"id": 30209072,
"login": "llStringll",
"node_id": "MDQ6VXNlcjMwMjA5MDcy",
"organizations_url": "https://api.github.com/users/llStringll/orgs",
"received_events_url": "https://api.github.com/users/llStringll/received_events",
"repos_url": "https://api.github.com/users/llStringll/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/llStringll/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/llStringll/subscriptions",
"type": "User",
"url": "https://api.github.com/users/llStringll",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"PS - some work is definitely needed for 'special cases' docs, not explanations, just usages of 'functions' under mixture of special cases, like a combination of custom databuilder + iterable dataset for large size + dynamic .map() application."
] | 2023-05-16T14:32:57Z
| 2023-05-16T14:36:05Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
All the examples throughout the Hugging Face datasets docs correspond to the `Dataset` object, not the `IterableDataset` object. At one point in time they might have been in sync, but the code for datasets version >= 2.9.0 is very different from the docs.
I basically need to .map() a transform on images in an iterable dataset, which was made using a custom databuilder config.
This works very well on map-style datasets, but .map() fails on IterableDatasets with the following behaviour:
a KeyError for the "pixel_values" key in the examples dict passed into the transform function for map; the same function works fine with map style, even batched.
In iterable style, the dict passed into the .map() callable is completely different from what is shown in all the examples.
Please look into this. Thank you
My databuilder class is inherited as such:
```python
def _info(self):
    print("Config: ", self.config.__dict__.keys())
    return datasets.DatasetInfo(
        description=_DESCRIPTION,
        features=datasets.Features(
            {
                "labels": datasets.Sequence(datasets.Value("uint16")),
                # "labels_name": datasets.Value("string"),
                # "pixel_values": datasets.Array3D(shape=(3, 1280, 960), dtype="float32"),
                "pixel_values": datasets.Array3D(shape=(1280, 960, 3), dtype="uint8"),
                "image_s3_path": datasets.Value("string"),
            }
        ),
        supervised_keys=None,
        homepage="none",
        citation="",
    )

def _split_generators(self, dl_manager):
    records_train = list(db.mini_set.find({'split': 'train'}, {'image_s3_path': 1, 'ocwen_template_name': 1}))[:10000]
    records_val = list(db.mini_set.find({'split': 'val'}, {'image_s3_path': 1, 'ocwen_template_name': 1}))[:1000]
    # print(len(records), self.config.num_shards)
    # shard_size_train = len(records_train) // self.config.num_shards
    # sharded_records_train = [records_train[i:i + shard_size_train] for i in range(0, len(records_train), shard_size_train)]
    # shard_size_val = len(records_val) // self.config.num_shards
    # sharded_records_val = [records_val[i:i + shard_size_val] for i in range(0, len(records_val), shard_size_val)]
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            gen_kwargs={"records": records_train},  # passing a list of records, for sharding to take over
        ),
        datasets.SplitGenerator(
            name=datasets.Split.VALIDATION,
            gen_kwargs={"records": records_val},  # passing a list of records, for sharding to take over
        ),
    ]

def _generate_examples(self, records):
    # print("Generating examples for [{}] shards".format(len(shards)))
    # initiate_db_connection()
    # records = list(db.mini_set.find({'split': split}, {'image_s3_path': 1, 'ocwen_template_name': 1}))[:10]
    id_ = 0
    # for records in shards:
    for i, rec in enumerate(records):
        img_local_path = fetch_file(rec['image_s3_path'], self.config.buffer_dir)
        # t = self.config.processor(Image.open(img_local_path), random_padding=True, return_tensors="np").pixel_values.squeeze()
        # print(t.shape, type(t), type(t[0][0][0]))
        # sys.exit()
        # the image object is w x h, so resize as per that; its numpy array is h x w x c
        pvs = np.array(Image.open(img_local_path).resize((1280, 960)))
        # pvs = self.config.processor(Image.open(img_local_path), random_padding=True, return_tensors="np").pixel_values.astype(np.float16).squeeze()
        # print(type(pvs[0][0][0]))
        # take padding later, as per batch collating
        lblids = self.config.processor.tokenizer(
            '<s_class>' + rec['ocwen_template_name'] + '</s_class>' + '</s>',
            add_special_tokens=False, padding=False, truncation=False, return_tensors="np",
        )["input_ids"].squeeze(0)
        # print(len(lblids), type(lblids[0]))
        # print(type(pvs), pvs.shape, type(pvs[0][0][0]), type(lblids))
        yield id_, {"labels": lblids, "pixel_values": pvs, "image_s3_path": rec['image_s3_path']}
        id_ += 1
        os.remove(img_local_path)
```
and I load it inside my trainer script as such
`ds = load_dataset("/tmp/DonutDS/dataset/", split="train", streaming=True) # iterable dataset, where .map() fails`
or also as
`ds = load_from_disk('/tmp/DonutDS/dataset/') #map style dataset`
Thank you to the team for having such a great library, and for this bug fix in advance!
### Steps to reproduce the bug
Above config can allow one to reproduce the said bug
### Expected behavior
.map() should behave consistently between map-style and iterable-style datasets, or at least the docs should address iterable-style dataset behaviour and examples; as they stand, I honestly cannot make much use of them. A minimal debugging sketch to compare what each .map() callable receives is shown below.
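The sketch (the local path mirrors the report and is illustrative; `inspect_batch` is a hypothetical helper):
```python
from datasets import load_dataset, load_from_disk

def inspect_batch(examples):
    # print the structure actually handed to the transform
    print(type(examples), sorted(examples.keys()))
    return examples

# map-style: with batched=True the transform sees a dict of column -> list of values
ds_map = load_from_disk("/tmp/DonutDS/dataset/")
ds_map.map(inspect_batch, batched=True, batch_size=4)

# iterable-style: the transform only runs lazily, on iteration
ds_stream = load_dataset("/tmp/DonutDS/dataset/", split="train", streaming=True)
next(iter(ds_stream.map(inspect_batch, batched=True, batch_size=4)))
```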
### Environment info
datasets==2.9.0
transformers==4.26.0
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5870/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5870/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/4542
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4542/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4542/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4542/events
|
https://github.com/huggingface/datasets/issues/4542
| 1,280,269,445
|
I_kwDODunzps5MT1yF
| 4,542
|
[to_tf_dataset] Use Feather for better compatibility with TensorFlow ?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] |
open
| false
| null |
[] | null |
[
"This has so much potential to be great! Also I think you tagged some poor random dude on the internet whose name is also Joao, lol, edited that for you! ",
"cc @sayakpaul here too, since he was interested in our new approaches to converting datasets!",
"Noted and I will look into the thread in detail tomorrow once I log back in. ",
"@lhoestq I have used TFRecords with `tf.data` for both vision and text and I can say that they are quite performant. I haven't worked with Feather yet as similarly as I have with TFRecords. If you haven't started the benchmarking script yet, I can prepare a Colab notebook that loads Feather files, converts them into a `tf.data` pipeline, and does some basic preprocessing. \r\n\r\nBut in my limited understanding, Feather might be better suited for CSV files. Not yet sure if it's good for modalities like images. ",
"> Not yet sure if it's good for modalities like images.\r\n\r\nWe store images pretty much the same way as tensorflow_datasets (i.e. storing the encoded image bytes, or a path to the local image, so that the image can be decoded on-the-fly), so as long as we use something similar as TFDS for image decoding it should be ok",
"So for image datasets, we could potentially store the paths in the feather format and decode and read them on the fly? But it introduces an I/O redundancy of having to read the images every time.\r\n\r\nWith caching it could be somewhat mitigated but it's not a good solution for bigger image datasets. ",
"> So for image datasets, we could potentially store the paths in the feather format and decode and read them on the fly?\r\n\r\nhopefully yes :) \r\n\r\nI double-checked the TFDS source code and they always save the bytes actually, not the path. Anyway we'll see if we run into issues or not (as a first step we can require the bytes to be in the feather file)",
"Yes. For images, TFDS actually prepares TFRecords first for encoding and then reuses them for every subsequent call. ",
"@lhoestq @Rocketknight1 I worked on [this PoC](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59) that\r\n\r\n* Creates Feather files from a medium resolution dataset (`tf_flowers`).\r\n* Explores different options with TensorFlow IO to load the Feather files. \r\n\r\nI haven't benchmarked those different options yet. There's also a gotcha that I have noted in the PoC. I hope it gets us started but I'm sorry if this is redundant. ",
"Cool thanks ! If I understand correctly in your PoC you store the flattened array of pixels in the feather file. This will take a lot of disk space.\r\n\r\nMaybe we could just save the encoded bytes and let users apply a `map` to decode/transform them into the format they need for training ? Users can use tf.image to do so for example",
"@lhoestq this is what I tried:\r\n\r\n```py\r\ndef read_image(path):\r\n with open(path, \"rb\") as f:\r\n return f.read()\r\n\r\n\r\ntotal_images_written = 0\r\n\r\nfor step in tqdm.tnrange(int(math.ceil(len(image_paths) / batch_size))):\r\n batch_image_paths = image_paths[step * batch_size : (step + 1) * batch_size]\r\n batch_image_labels = all_integer_labels[step * batch_size : (step + 1) * batch_size]\r\n\r\n data = [read_image(path) for path in batch_image_paths]\r\n table = pa.Table.from_arrays([data, batch_image_labels], [\"data\", \"labels\"])\r\n write_feather(table, f\"/tmp/flowers_feather_{step}.feather\", chunksize=chunk_size)\r\n total_images_written += len(batch_image_paths)\r\n print(f\"Total images written: {total_images_written}.\")\r\n\r\n del data\r\n```\r\n\r\nI got the feather files done (no resizing required as you can see):\r\n\r\n```sh\r\nls -lh /tmp/*.feather\r\n\r\n-rw-r--r-- 1 sayakpaul wheel 64M Jun 24 09:28 /tmp/flowers_feather_0.feather\r\n-rw-r--r-- 1 sayakpaul wheel 59M Jun 24 09:28 /tmp/flowers_feather_1.feather\r\n-rw-r--r-- 1 sayakpaul wheel 51M Jun 24 09:28 /tmp/flowers_feather_2.feather\r\n-rw-r--r-- 1 sayakpaul wheel 45M Jun 24 09:28 /tmp/flowers_feather_3.feather\r\n```\r\n\r\nNow there seems to be a problem with `tfio.arrow`:\r\n\r\n```py\r\nimport tensorflow_io.arrow as arrow_io\r\n\r\n\r\ndataset = arrow_io.ArrowFeatherDataset(\r\n [\"/tmp/flowers_feather_0.feather\"],\r\n columns=(0, 1),\r\n output_types=(tf.string, tf.int64),\r\n output_shapes=([], []),\r\n batch_mode=\"auto\",\r\n)\r\n\r\nprint(dataset.element_spec) \r\n```\r\n\r\nPrints:\r\n\r\n```\r\n(TensorSpec(shape=(None,), dtype=tf.string, name=None),\r\n TensorSpec(shape=(None,), dtype=tf.int64, name=None))\r\n```\r\n\r\nBut when I do `sample = next(iter(dataset))` it goes into:\r\n\r\n```py\r\nInternalError Traceback (most recent call last)\r\nInput In [30], in <cell line: 1>()\r\n----> 1 sample = next(iter(dataset))\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/data/ops/iterator_ops.py:766, in OwnedIterator.__next__(self)\r\n 764 def __next__(self):\r\n 765 try:\r\n--> 766 return self._next_internal()\r\n 767 except errors.OutOfRangeError:\r\n 768 raise StopIteration\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/data/ops/iterator_ops.py:749, in OwnedIterator._next_internal(self)\r\n 746 # TODO(b/77291417): This runs in sync mode as iterators use an error status\r\n 747 # to communicate that there is no more data to iterate over.\r\n 748 with context.execution_mode(context.SYNC):\r\n--> 749 ret = gen_dataset_ops.iterator_get_next(\r\n 750 self._iterator_resource,\r\n 751 output_types=self._flat_output_types,\r\n 752 output_shapes=self._flat_output_shapes)\r\n 754 try:\r\n 755 # Fast path for the case `self._structure` is not a nested structure.\r\n 756 return self._element_spec._from_compatible_tensor_list(ret) # pylint: disable=protected-access\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/ops/gen_dataset_ops.py:3017, in iterator_get_next(iterator, output_types, output_shapes, name)\r\n 3015 return _result\r\n 3016 except _core._NotOkStatusException as e:\r\n-> 3017 _ops.raise_from_not_ok_status(e, name)\r\n 3018 except _core._FallbackException:\r\n 3019 pass\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:7164, in raise_from_not_ok_status(e, name)\r\n 7162 def raise_from_not_ok_status(e, name):\r\n 7163 
e.message += (\" name: \" + name if name is not None else \"\")\r\n-> 7164 raise core._status_to_exception(e) from None\r\n\r\nInternalError: Invalid: INVALID_ARGUMENT: arrow data type 0x7ff9899d8038 is not supported: Type error: Arrow data type is not supported [Op:IteratorGetNext]\r\n```\r\n\r\nSome additional notes:\r\n\r\n* I can actually decode an image encoded with `read_image()` (shown earlier):\r\n\r\n ```py\r\n sample_image_path = image_paths[0]\r\n encoded_image = read_image(sample_image_path)\r\n image = tf.image.decode_png(encoded_image, 3)\r\n print(image.shape)\r\n ```\r\n\r\n* If the above `tf.data.Dataset` object would have succeeded my plan was to just map the decoder like so:\r\n\r\n ```py\r\n autotune = tf.data.AUTOTUNE\r\n dataset = dataset.map(lambda x, y: (tf.image.decode_png(x, 3), y), num_parallel_calls=autotune)\r\n ```",
"@lhoestq I think I was able to make it work in the way you were envisioning. Here's the PoC:\r\nhttps://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb\r\n\r\nSome details:\r\n\r\n* I am currently serializing the images as strings with `base64`). In comparison to the flattened arrays as before, the size of the individual feather files has reduced (144 MB -> 85 MB, largest).\r\n* When decoding, I am first decoding the base64 string and then decoding that string (with `tf.io.decode_base64`) as an image with `tf.image.decode_png()`. \r\n* The entire workflow (from generating the Feather files to loading them and preparing the batched `tf.data` pipeline) involves the following libraries: `pyarraow`, `tensorflow-io`, and `tensorflow`. \r\n\r\nCc: @Rocketknight1 @gante ",
"Cool thanks ! Too bad the Arrow binary type doesn't seem to be supported in `arrow_io.ArrowFeatherDataset` :/ We would also need it to support Arrow struct type. Indeed images in `datasets` are represented using an Arrow type\r\n```python\r\npa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n```\r\nnot sure yet how hard it is to support this though.\r\n\r\nChanging the typing on our side would create concerning breaking changes, that's why it would be awesome if it could work using these types",
"If the ArrowFeatherDataset doesn't yet support it, I guess our hands are a bit tied at the moment. \r\n\r\nIIUC, in my [latest PoC notebook](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb), you wanted to see each entry in the feather file to be represented like so?\r\n\r\n```\r\npa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n``` \r\n\r\nIn that case, `pa.binary()` isn't yet supported.",
"> IIUC, in my [latest PoC notebook](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb), you wanted to see each entry in the feather file to be represented like so?\r\n> \r\n> pa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n\r\nYea because that's the data format we're using. If we were to use base64, then we would have to process the full dataset to convert it, which can take some time. Converting to TFRecords would be simpler than converting to base64 in Feather files.\r\n\r\nMaybe it would take too much time to be worth exploring, but according to https://github.com/tensorflow/io/issues/1361#issuecomment-819029002 it's possible to add support for binary type in ArrowFeatherDataset. What do you think ? Any other alternative in mind ?",
"> Maybe it would take too much time to be worth exploring, but according to https://github.com/tensorflow/io/issues/1361#issuecomment-819029002 it's possible to add support for binary type in ArrowFeatherDataset.\r\n\r\nShould be possible as per the comment but there hasn't been any progress and it's been more than a year. \r\n\r\n> If we were to use base64, then we would have to process the full dataset to convert it, which can take some time.\r\n\r\nI don't understand this. I would think TFRecords would also need something similar but I need the context you're coming from. \r\n\r\n> What do you think ? Any other alternative in mind ?\r\n\r\nTFRecords since the TensorFlow ecosystem has developed good support for it over the years. ",
"> I don't understand this. I would think TFRecords would also need something similar but I need the context you're coming from.\r\n\r\nUsers already have a copy of the dataset in Arrow format (we can change this to Feather). So to load the Arrow/feather files to a TF dataset we need TF IO or something like that. Otherwise the user has to convert all the files from Arrow to TFRecords to use TF data efficiently. But the conversion needs resources: CPU, disk, time. Converting the images to base64 require the same sort of resources.\r\n\r\nSo the issue we're trying to tackle is how to load the Arrow data in TF without having to convert anything ^^",
"Yeah, it looks like in its current state the tfio support for `Feather` is incomplete, so we'd end up having to write a lot of it, or do a conversion that defeats the whole point (because if we're going to convert the whole dataset we might as well convert to `TFRecord`).",
"Understood @lhoestq. Thanks for explaining!\r\n\r\nAgreed with @Rocketknight1. ",
"@lhoestq Although I think this is a dead-end for now unfortunately, because of the limitations at TF's end, we could still explore automatic conversion to TFRecord, or I could dive into refining `to_tf_dataset()` to yield unbatched samples and/or load samples with multiprocessing to improve throughput. Do you have any preferences there?",
"> @lhoestq Although I think this is a dead-end for now unfortunately, because of the limitations at TF's end, we could still explore automatic conversion to TFRecord, or I could dive into refining `to_tf_dataset()` to yield unbatched samples and/or load samples with multiprocessing to improve throughput. Do you have any preferences there?\r\n\r\nHappy to take part there @Rocketknight1.",
"If `to_tf_dataset` can be unbatched, then it should be fairly easy for users to convert the TF dataset to TFRecords right ?",
"@lhoestq why one would convert to TFRecords after unbatching? ",
"> If to_tf_dataset can be unbatched, then it should be fairly easy for users to convert the TF dataset to TFRecords right ?\r\n\r\nSort of! A `tf.data.Dataset` is more like an iterator, and does not support sample indexing. `to_tf_dataset()` creates an iterator, but to convert that to `TFRecord`, the user would have to iterate over the whole thing and manually save the stream of samples to files. ",
"Someone would like to try to dive into tfio to fix this ? Sounds like a good opportunity to learn what are the best ways to load a dataset for TF, and also the connections between Arrow and TF.\r\n\r\nIf we can at least have the Arrow `binary` type working for TF that would be awesome already (issue https://github.com/tensorflow/io/issues/1361)\r\n\r\nalso cc @nateraw in case you'd be interested ;)",
"> Sounds like a good opportunity to learn what are the best ways to load a dataset for TF\r\n\r\nThe recommended way would likely be a combination of TFRecords and `tf.data`. \r\n\r\nExploring the connection between Arrow and TensorFlow is definitely worth pursuing though. But I am not sure about the implications of storing images in a format supported by Arrow. I guess we'll know more once we have at least figured out the support for `binary` type for TFIO. I will spend some time on it and keep this thread updated. ",
"I am currently working on a fine-tuning notebook for the TFSegFormer model (Semantic Segmentation). The resolution is high for both the input images and the labels - (512, 512, 3). Here's the [Colab Notebook](https://colab.research.google.com/drive/1jAtR7Z0lYX6m6JsDI5VByh5vFaNhHIbP?usp=sharing) (it's a WIP so please bear that in mind).\r\n\r\nI think the current implementation of `to_tf_dataset()` does create a bottleneck here since the GPU utilization is quite low. ",
"Here's a notebook showing the performance difference: https://colab.research.google.com/gist/sayakpaul/d7ca67c90beb47e354942c9d8c0bd8ef/scratchpad.ipynb. \r\n\r\nNote that I acknowledge that it's not an apples-to-apples comparison in many aspects (the dataset isn't the same, data serialization format isn't the same, etc.) but this is the best I could do. ",
"Thanks ! I think the speed difference can be partly explained: you use ds.shuffle in your dataset, which is an exact shuffling (compared to TFDS which does buffer shuffling): it slows down query time by 2x to 10x since it has to play with data that are not contiguous.\r\n\r\nThe rest of the speed difference seems to be caused by image decoding (from 330µs/image to 30ms/image)",
"Fair enough. Can do one without shuffling too. But it's an important one to consider I guess. "
] | 2022-06-22T14:42:00Z
| 2022-10-11T08:45:45Z
| null |
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
To have better performance in TensorFlow, it is important to provide lists of data files in supported formats. For example, sharded TFRecord datasets are extremely performant. This is because tf.data can better leverage parallelism in this case, loading one file at a time into memory.
It seems that using `tensorflow_io` we could have something similar for `to_tf_dataset` if we provide sharded Feather files: https://www.tensorflow.org/io/api_docs/python/tfio/arrow/ArrowFeatherDataset
Feather is a format almost equivalent to the Arrow IPC Stream format we're using in `datasets`: Feather V2 is equivalent to Arrow IPC File format, which is an extension of the stream format (it has an extra footer). Therefore we could store datasets as Feather instead of Arrow IPC Stream format without breaking the whole library.
Here are a few points to explore
- [ ] check the performance of ArrowFeatherDataset in tf.data
- [ ] check what would change if we were to switch to Feather, in particular that the following are fine: memory mapping, typing, writing, reading to Python objects, etc.
We would also need to implement sharding when loading a dataset (this will be done anyway for #546)
cc @Rocketknight1 @gante feel free to comment in case I missed anything !
I'll share some files and scripts so that we can benchmark the performance of Feather files with tf.data. A minimal sketch of the intended loading pattern is shown below.
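Here is a rough sketch, assuming sharded Feather files already exist on disk, of how they could be fed to tf.data with `tensorflow_io` (the shard paths, column indices, and output types are placeholders that depend on the actual schema):

```python
import tensorflow as tf
import tensorflow_io.arrow as arrow_io

# hypothetical shard paths: one Feather file per shard
shard_paths = [f"/tmp/dataset_shard_{i}.feather" for i in range(4)]

dataset = arrow_io.ArrowFeatherDataset(
    shard_paths,
    columns=(0, 1),                      # column indices in the Feather schema
    output_types=(tf.string, tf.int64),  # must match the Arrow column types
    output_shapes=([], []),
    batch_mode="auto",                   # batch according to the record batches
)

# iterate as with any tf.data pipeline
for features, labels in dataset.take(1):
    print(features.shape, labels.shape)
```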
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4542/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4542/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5397
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5397/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5397/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5397/events
|
https://github.com/huggingface/datasets/pull/5397
| 1,514,412,246
|
PR_kwDODunzps5GYirs
| 5,397
|
Unpin pydantic test dependency
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012922 / 0.011353 (0.001569) | 0.006568 / 0.011008 (-0.004440) | 0.139567 / 0.038508 (0.101059) | 0.039362 / 0.023109 (0.016253) | 0.444238 / 0.275898 (0.168340) | 0.529102 / 0.323480 (0.205622) | 0.010275 / 0.007986 (0.002290) | 0.006134 / 0.004328 (0.001805) | 0.107506 / 0.004250 (0.103255) | 0.047948 / 0.037052 (0.010896) | 0.460469 / 0.258489 (0.201980) | 0.516817 / 0.293841 (0.222976) | 0.058637 / 0.128546 (-0.069909) | 0.019516 / 0.075646 (-0.056130) | 0.464111 / 0.419271 (0.044839) | 0.062140 / 0.043533 (0.018607) | 0.445004 / 0.255139 (0.189865) | 0.460117 / 0.283200 (0.176917) | 0.116591 / 0.141683 (-0.025092) | 1.936834 / 1.452155 (0.484680) | 1.941837 / 1.492716 (0.449120) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284130 / 0.018006 (0.266124) | 0.588109 / 0.000490 (0.587619) | 0.004383 / 0.000200 (0.004183) | 0.000143 / 0.000054 (0.000089) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032984 / 0.037411 (-0.004427) | 0.132811 / 0.014526 (0.118285) | 0.150932 / 0.176557 (-0.025625) | 0.203759 / 0.737135 (-0.533377) | 0.149612 / 0.296338 (-0.146726) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.677666 / 0.215209 (0.462457) | 6.627611 / 2.077655 (4.549956) | 2.679526 / 1.504120 (1.175406) | 2.272536 / 1.541195 (0.731342) | 2.371179 / 1.468490 
(0.902689) | 1.205282 / 4.584777 (-3.379495) | 5.733537 / 3.745712 (1.987825) | 3.165279 / 5.269862 (-2.104583) | 2.287918 / 4.565676 (-2.277759) | 0.144581 / 0.424275 (-0.279695) | 0.016812 / 0.007607 (0.009205) | 0.841719 / 0.226044 (0.615675) | 8.379119 / 2.268929 (6.110191) | 3.507169 / 55.444624 (-51.937456) | 2.756666 / 6.876477 (-4.119811) | 2.814091 / 2.142072 (0.672018) | 1.495835 / 4.805227 (-3.309392) | 0.253651 / 6.500664 (-6.247013) | 0.081258 / 0.075469 (0.005789) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.651586 / 1.841788 (-0.190202) | 19.039628 / 8.074308 (10.965320) | 21.269814 / 10.191392 (11.078421) | 0.241024 / 0.680424 (-0.439400) | 0.047975 / 0.534201 (-0.486225) | 0.563727 / 0.579283 (-0.015556) | 0.666808 / 0.434364 (0.232445) | 0.661065 / 0.540337 (0.120728) | 0.762884 / 1.386936 (-0.624052) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010141 / 0.011353 (-0.001212) | 0.006216 / 0.011008 (-0.004792) | 0.135491 / 0.038508 (0.096983) | 0.035439 / 0.023109 (0.012330) | 0.482789 / 0.275898 (0.206891) | 0.520673 / 0.323480 (0.197193) | 0.006358 / 0.007986 (-0.001627) | 0.005432 / 0.004328 (0.001104) | 0.094448 / 0.004250 (0.090197) | 0.048379 / 0.037052 (0.011326) | 0.509359 / 0.258489 (0.250870) | 0.539583 / 0.293841 (0.245742) | 0.054621 / 0.128546 (-0.073925) | 0.021382 / 0.075646 (-0.054265) | 0.435539 / 0.419271 (0.016267) | 0.060630 / 0.043533 (0.017097) | 0.469593 / 0.255139 (0.214454) | 0.507838 / 0.283200 (0.224639) | 0.112062 / 0.141683 (-0.029621) | 1.829694 / 1.452155 (0.377539) | 1.972266 / 1.492716 (0.479549) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.291669 / 0.018006 (0.273663) | 0.590104 / 0.000490 (0.589614) | 0.000661 / 0.000200 (0.000461) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034933 / 0.037411 (-0.002479) | 0.134867 / 0.014526 (0.120341) | 0.138892 / 0.176557 (-0.037665) | 0.192619 / 0.737135 (-0.544516) | 0.153787 / 0.296338 (-0.142551) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.666762 / 0.215209 (0.451553) | 6.741736 / 2.077655 (4.664082) | 2.988712 / 1.504120 (1.484592) | 2.554823 / 1.541195 (1.013628) | 2.655651 / 1.468490 (1.187161) | 1.276603 / 4.584777 (-3.308174) | 5.827960 / 3.745712 (2.082247) | 5.046876 / 5.269862 (-0.222985) | 2.829775 / 4.565676 (-1.735902) | 0.151525 / 0.424275 (-0.272750) | 0.016504 / 0.007607 (0.008897) | 0.849749 / 0.226044 (0.623704) | 8.331675 / 2.268929 (6.062747) | 3.664529 / 55.444624 (-51.780096) | 2.976495 / 6.876477 (-3.899982) | 3.034737 / 2.142072 (0.892664) | 1.499036 / 4.805227 (-3.306191) | 0.261027 / 6.500664 (-6.239637) | 0.088306 / 0.075469 (0.012837) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.693506 / 1.841788 (-0.148282) | 18.939914 / 8.074308 (10.865605) | 20.685460 / 10.191392 (10.494068) | 0.218316 / 0.680424 (-0.462108) | 0.029010 / 0.534201 (-0.505191) | 0.565246 / 0.579283 (-0.014037) | 0.633573 / 0.434364 (0.199209) | 0.656895 / 0.540337 (0.116558) | 0.781975 / 1.386936 (-0.604961) |\n\n</details>\n</details>\n\n\n"
] | 2022-12-30T10:22:09Z
| 2022-12-30T10:53:11Z
| 2022-12-30T10:43:40Z
|
MEMBER
| null | null | null |
Once pydantic-1.10.3 has been yanked, we can unpin it: https://pypi.org/project/pydantic/1.10.3/
See reply by pydantic team https://github.com/pydantic/pydantic/issues/4885#issuecomment-1367819807
```
v1.10.3 has been yanked.
```
in response to spacy request: https://github.com/pydantic/pydantic/issues/4885#issuecomment-1367810049
```
On behalf of spacy-related packages: would it be possible for you to temporarily yank v1.10.3?
To address this and be compatible with v1.10.4, we'd have to release new versions of a whole series of packages and nearly everyone (including me) is currently on vacation. Even if v1.10.4 is released with a fix, pip would still back off to v1.10.3 for spacy, etc. because of its current pins for typing_extensions. If it could instead back off to v1.10.2, we'd have a bit more breathing room to make the updates on our end.
```
Close #5398.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5397/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5397/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5397.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5397",
"merged_at": "2022-12-30T10:43:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5397.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5397"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5505
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5505/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5505/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5505/events
|
https://github.com/huggingface/datasets/issues/5505
| 1,571,720,814
|
I_kwDODunzps5dro5u
| 5,505
|
PyTorch BatchSampler still loads from Dataset one-by-one
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidgilbertson",
"id": 4443482,
"login": "davidgilbertson",
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidgilbertson",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"This change seems to come from a few months ago in the PyTorch side. That's good news and it means we may not need to pass a batch_sampler as soon as we add `Dataset.__getitems__` to get the optimal speed :)\r\n\r\nThanks for reporting ! Would you like to open a PR to add `__getitems__` and remove this outdated documentation ?",
"Yeah I figured this was the sort of thing that probably once worked. I can confirm that you no longer need the batch sampler, just `batch_size=n` in the `DataLoader`.\r\n\r\nI'll pass on the PR, I'm flat out right now, sorry."
] | 2023-02-06T01:14:55Z
| 2023-02-19T18:27:30Z
| 2023-02-19T18:27:30Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
In [the docs here](https://huggingface.co/docs/datasets/use_with_pytorch#use-a-batchsampler), it mentions the issue of the Dataset being read one-by-one, then states that using a BatchSampler resolves the issue.
I'm not sure if this is a mistake in the docs or the code, but it seems that the only way for a Dataset to be passed a list of indexes by PyTorch (instead of one index at a time) is to define a `__getitems__` method (note the plural) on the Dataset object, and since the HF Dataset doesn't have this, PyTorch executes [this line of code](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/_utils/fetch.py#L58), reverting to fetching one-by-one.
### Steps to reproduce the bug
You can put a breakpoint in `Dataset.__getitem__()` or just print the args from there and see that it's called multiple times for a single `next(iter(dataloader))`, even when using the code from the docs:
```py
from torch.utils.data.sampler import BatchSampler, RandomSampler
batch_sampler = BatchSampler(RandomSampler(ds), batch_size=32, drop_last=False)
dataloader = DataLoader(ds, batch_sampler=batch_sampler)
```
### Expected behavior
The expected behaviour would be for it to fetch batches from the dataset, rather than one-by-one.
To demonstrate that there is room for improvement: once I have a HF dataset `ds`, if I just add this line:
```py
ds.__getitems__ = ds.__getitem__
```
...then the time taken to loop over the dataset improves considerably (for wikitext-103, from one minute to 13 seconds with batch size 32). Probably not a big deal in the grand scheme of things, but it seems like an easy win. A sketch of a dedicated batched getter is shown below.
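For illustration, a rough sketch of what a dedicated batched getter could look like (an assumption about a possible fix, not the actual method): PyTorch's fetcher passes the whole list of batch indices to `__getitems__` when it exists, so the method can do a single Arrow query and unpack it into per-example dicts for collation:

```py
# sketch of a hypothetical Dataset.__getitems__ (names are illustrative)
def __getitems__(self, keys):
    batch = self.__getitem__(keys)  # one query for the whole batch (column dict)
    n_examples = len(keys)
    # unpack the column-oriented batch into a list of per-example dicts
    return [{col: batch[col][i] for col in batch} for i in range(n_examples)]
```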
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5505/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5505/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6503
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6503/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6503/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6503/events
|
https://github.com/huggingface/datasets/pull/6503
| 2,043,847,591
|
PR_kwDODunzps5iHgZf
| 6,503
|
Fix streaming xnli
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6503). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005003 / 0.011353 (-0.006350) | 0.003020 / 0.011008 (-0.007988) | 0.061370 / 0.038508 (0.022862) | 0.050996 / 0.023109 (0.027887) | 0.243434 / 0.275898 (-0.032464) | 0.266317 / 0.323480 (-0.057163) | 0.003888 / 0.007986 (-0.004098) | 0.002607 / 0.004328 (-0.001721) | 0.047541 / 0.004250 (0.043290) | 0.037933 / 0.037052 (0.000881) | 0.259695 / 0.258489 (0.001206) | 0.279374 / 0.293841 (-0.014467) | 0.027258 / 0.128546 (-0.101288) | 0.010184 / 0.075646 (-0.065462) | 0.207412 / 0.419271 (-0.211860) | 0.034978 / 0.043533 (-0.008554) | 0.247871 / 0.255139 (-0.007267) | 0.265273 / 0.283200 (-0.017927) | 0.017886 / 0.141683 (-0.123796) | 1.090451 / 1.452155 (-0.361704) | 1.152034 / 1.492716 (-0.340682) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094383 / 0.018006 (0.076377) | 0.301151 / 0.000490 (0.300661) | 0.000211 / 0.000200 (0.000011) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018927 / 0.037411 (-0.018484) | 0.062152 / 0.014526 (0.047626) | 0.072177 / 0.176557 (-0.104380) | 0.119792 / 0.737135 (-0.617343) | 0.073333 / 0.296338 (-0.223005) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282671 / 0.215209 (0.067462) | 2.721148 / 2.077655 (0.643494) | 1.472689 / 1.504120 (-0.031431) | 1.355226 / 1.541195 (-0.185969) | 1.375935 / 
1.468490 (-0.092556) | 0.562600 / 4.584777 (-4.022177) | 2.364046 / 3.745712 (-1.381666) | 2.714984 / 5.269862 (-2.554878) | 1.738413 / 4.565676 (-2.827263) | 0.062564 / 0.424275 (-0.361711) | 0.004964 / 0.007607 (-0.002643) | 0.341300 / 0.226044 (0.115255) | 3.345187 / 2.268929 (1.076259) | 1.857822 / 55.444624 (-53.586803) | 1.581002 / 6.876477 (-5.295475) | 1.585919 / 2.142072 (-0.556153) | 0.640105 / 4.805227 (-4.165122) | 0.117880 / 6.500664 (-6.382784) | 0.042032 / 0.075469 (-0.033437) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962701 / 1.841788 (-0.879086) | 11.309251 / 8.074308 (3.234943) | 10.462520 / 10.191392 (0.271128) | 0.127399 / 0.680424 (-0.553025) | 0.014549 / 0.534201 (-0.519652) | 0.297017 / 0.579283 (-0.282266) | 0.266152 / 0.434364 (-0.168212) | 0.349252 / 0.540337 (-0.191085) | 0.457015 / 1.386936 (-0.929921) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005341 / 0.011353 (-0.006012) | 0.003108 / 0.011008 (-0.007900) | 0.048862 / 0.038508 (0.010353) | 0.053354 / 0.023109 (0.030245) | 0.274499 / 0.275898 (-0.001399) | 0.296698 / 0.323480 (-0.026782) | 0.003974 / 0.007986 (-0.004012) | 0.002631 / 0.004328 (-0.001697) | 0.048013 / 0.004250 (0.043762) | 0.040416 / 0.037052 (0.003363) | 0.276581 / 0.258489 (0.018092) | 0.301296 / 0.293841 (0.007455) | 0.029049 / 0.128546 (-0.099497) | 0.010253 / 0.075646 (-0.065393) | 0.057157 / 0.419271 (-0.362114) | 0.031830 / 0.043533 (-0.011703) | 0.274341 / 0.255139 (0.019202) | 0.292583 / 0.283200 (0.009383) | 0.018449 / 0.141683 (-0.123234) | 1.145099 / 1.452155 (-0.307055) | 1.192958 / 1.492716 (-0.299758) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091596 / 0.018006 (0.073590) | 0.300917 / 0.000490 (0.300427) | 0.000225 / 0.000200 (0.000025) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021657 / 0.037411 (-0.015754) | 0.068464 / 0.014526 (0.053938) | 0.079869 / 0.176557 (-0.096687) | 0.117523 / 0.737135 (-0.619613) | 0.081257 / 0.296338 (-0.215082) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294876 / 0.215209 (0.079667) | 2.879372 / 2.077655 (0.801718) | 1.619887 / 1.504120 (0.115767) | 1.482154 / 1.541195 (-0.059041) | 1.494656 / 1.468490 (0.026166) | 0.558914 / 4.584777 (-4.025862) | 2.420948 / 3.745712 (-1.324765) | 2.728992 / 5.269862 (-2.540869) | 1.722135 / 4.565676 (-2.843542) | 0.062182 / 0.424275 (-0.362093) | 0.004933 / 0.007607 (-0.002674) | 0.342759 / 0.226044 (0.116715) | 3.424083 / 2.268929 (1.155154) | 1.950673 / 55.444624 (-53.493951) | 1.683126 / 6.876477 (-5.193351) | 1.673135 / 2.142072 (-0.468937) | 0.633711 / 4.805227 (-4.171516) | 0.114898 / 6.500664 (-6.385766) | 0.040332 / 0.075469 (-0.035137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975102 / 1.841788 (-0.866685) | 11.975731 / 8.074308 (3.901423) | 10.961103 / 10.191392 (0.769711) | 0.131152 / 0.680424 (-0.549272) | 0.016268 / 0.534201 (-0.517933) | 0.285031 / 0.579283 (-0.294252) | 0.279556 / 0.434364 (-0.154808) | 0.324183 / 0.540337 (-0.216154) | 0.571404 / 1.386936 (-0.815532) |\n\n</details>\n</details>\n\n\n"
] | 2023-12-15T14:40:57Z
| 2023-12-15T14:51:06Z
| 2023-12-15T14:44:47Z
|
MEMBER
| null | null | null |
This code was failing
```python
In [1]: from datasets import load_dataset
In [2]:
...: ds = load_dataset("xnli", "all_languages", split="test", streaming=True)
...:
...: sample_data = next(iter(ds))["premise"] # pick up one data
...: input_text = list(sample_data.values())
```
```
File ~/hf/datasets/src/datasets/features/translation.py:104, in TranslationVariableLanguages.encode_example(self, translation_dict)
102 return translation_dict
103 elif self.languages and set(translation_dict) - lang_set:
--> 104 raise ValueError(
105 f'Some languages in example ({", ".join(sorted(set(translation_dict) - lang_set))}) are not in valid set ({", ".join(lang_set)}).'
106 )
108 # Convert dictionary into tuples, splitting out cases where there are
109 # multiple translations for a single language.
110 translation_tuples = []
ValueError: Some languages in example (language, translation) are not in valid set (ur, fr, hi, sw, vi, el, de, th, en, tr, zh, ar, bg, ru, es).
```
because in streaming mode we expect the features' encode methods to be no-ops if the example is already encoded.
I fixed `TranslationVariableLanguages` to account for that. The idea is sketched below.
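A simplified illustration of the idempotency check (a sketch of the idea, not the exact patch):

```python
# simplified sketch: encode_example becomes a no-op for already-encoded input
def encode_example(self, translation_dict):
    lang_set = set(self.languages) if self.languages else None
    if set(translation_dict) == {"language", "translation"}:
        # already in the encoded form, e.g. when re-encoding in streaming mode
        return translation_dict
    if lang_set is not None and set(translation_dict) - lang_set:
        raise ValueError(
            f"Some languages in example ({', '.join(sorted(set(translation_dict) - lang_set))}) "
            f"are not in valid set ({', '.join(lang_set)})."
        )
    # ... continue with the usual dict -> (language, translation) encoding
```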
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6503/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6503/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6503.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6503",
"merged_at": "2023-12-15T14:44:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6503.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6503"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5511
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5511/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5511/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5511/events
|
https://github.com/huggingface/datasets/issues/5511
| 1,575,851,768
|
I_kwDODunzps5d7Zb4
| 5,511
|
Creating a dummy dataset from a bigger one
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Update `datasets` or downgrade `huggingface-hub` ;)\r\n\r\nThe `huggingface-hub` lib did a breaking change a few months ago, and you're using an old version of `datasets` that does't support it",
"Awesome thanks a lot! Everything works just fine with `datasets==2.9.0` :-) ",
"Getting same error with latest versions.\r\n\r\n\r\n```shell\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nCell In[99], line 1\r\n----> 1 dataset.push_to_hub(\"mirfan899/kids_phoneme_asr\")\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:3538, in Dataset.push_to_hub(self, repo_id, split, private, token, branch, shard_size, embed_external_files)\r\n 3493 def push_to_hub(\r\n 3494 self,\r\n 3495 repo_id: str,\r\n (...)\r\n 3501 embed_external_files: bool = True,\r\n 3502 ):\r\n 3503 \"\"\"Pushes the dataset to the hub.\r\n 3504 The dataset is pushed using HTTP requests and does not need to have neither git or git-lfs installed.\r\n 3505 \r\n (...)\r\n 3536 ```\r\n 3537 \"\"\"\r\n-> 3538 repo_id, split, uploaded_size, dataset_nbytes = self._push_parquet_shards_to_hub(\r\n 3539 repo_id=repo_id,\r\n 3540 split=split,\r\n 3541 private=private,\r\n 3542 token=token,\r\n 3543 branch=branch,\r\n 3544 shard_size=shard_size,\r\n 3545 embed_external_files=embed_external_files,\r\n 3546 )\r\n 3547 organization, dataset_name = repo_id.split(\"/\")\r\n 3548 info_to_dump = self.info.copy()\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:3474, in Dataset._push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, shard_size, embed_external_files)\r\n 3472 shard.to_parquet(buffer)\r\n 3473 uploaded_size += buffer.tell()\r\n-> 3474 _retry(\r\n 3475 api.upload_file,\r\n 3476 func_kwargs=dict(\r\n 3477 path_or_fileobj=buffer.getvalue(),\r\n 3478 path_in_repo=path_in_repo(index),\r\n 3479 repo_id=repo_id,\r\n 3480 token=token,\r\n 3481 repo_type=\"dataset\",\r\n 3482 revision=branch,\r\n 3483 identical_ok=True,\r\n 3484 ),\r\n 3485 exceptions=HTTPError,\r\n 3486 status_codes=[504],\r\n 3487 base_wait_time=2.0,\r\n 3488 max_retries=5,\r\n 3489 max_wait_time=20.0,\r\n 3490 )\r\n 3491 return repo_id, split, uploaded_size, dataset_nbytes\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py:330, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)\r\n 328 while True:\r\n 329 try:\r\n--> 330 return func(*func_args, **func_kwargs)\r\n 331 except exceptions as err:\r\n 332 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:120, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)\r\n 117 if check_use_auth_token:\r\n 118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)\r\n--> 120 return fn(*args, **kwargs)\r\n\r\nTypeError: HfApi.upload_file() got an unexpected keyword argument 'identical_ok'\r\n```",
"Feel free to update `datasets` and `huggingface-hub`, it should fix it :)",
"I went ahead and upgraded both datasets and hub and still getting the same error\r\n",
"Which version do you have ? It's been a while since it has been fixed",
"huggingface 0.0.1\r\nhuggingface-hub 0.17.1\r\ndatasets 2.14.5\r\n\r\nstill has the issue!!",
"I face the same issue even after upgrading :/"
] | 2023-02-08T10:18:41Z
| 2023-12-28T18:21:01Z
| 2023-02-08T10:35:48Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I often want to create a dummy dataset from a bigger dataset for fast iteration when training. However, I'm having a hard time doing this, especially when trying to upload the dataset to the Hub.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("lambdalabs/pokemon-blip-captions")
dataset["train"] = dataset["train"].select(range(20))
dataset.push_to_hub("patrickvonplaten/dummy_image_data")
```
gives:
```
~/python_bin/datasets/arrow_dataset.py in _push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, max_shard_size, embed_external_files)
4003 base_wait_time=2.0,
4004 max_retries=5,
-> 4005 max_wait_time=20.0,
4006 )
4007 return repo_id, split, uploaded_size, dataset_nbytes
~/python_bin/datasets/utils/file_utils.py in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)
328 while True:
329 try:
--> 330 return func(*func_args, **func_kwargs)
331 except exceptions as err:
332 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):
~/hf/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py in _inner_fn(*args, **kwargs)
122 )
123
--> 124 return fn(*args, **kwargs)
125
126 return _inner_fn # type: ignore
TypeError: upload_file() got an unexpected keyword argument 'identical_ok'
In [2]:
```
### Expected behavior
I would have expected this to work. It's for me the most intuitive way of creating a dummy dataset.
### Environment info
```
- `datasets` version: 2.1.1.dev0
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.3
- PyArrow version: 11.0.0
- Pandas version: 1.3.5
```
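As an aside, a small subset can also be sliced directly at load time with split slicing, which avoids calling `select` afterwards (a usage sketch; the target repo id below is hypothetical):

```python
from datasets import load_dataset

# load only the first 20 training examples via split slicing
dummy = load_dataset("lambdalabs/pokemon-blip-captions", split="train[:20]")
dummy.push_to_hub("my-username/dummy_image_data")  # hypothetical repo id
```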
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5511/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5511/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5366
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5366/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5366/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5366/events
|
https://github.com/huggingface/datasets/pull/5366
| 1,498,530,851
|
PR_kwDODunzps5FjSFl
| 5,366
|
ExamplesIterable fixes
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-15T14:23:05Z
| 2022-12-15T14:44:47Z
| 2022-12-15T14:41:45Z
|
MEMBER
| null | null | null |
fix typing and ExamplesIterable.shard_data_sources
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5366/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5366/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5366.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5366",
"merged_at": "2022-12-15T14:41:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5366.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5366"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4660
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4660/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4660/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4660/events
|
https://github.com/huggingface/datasets/pull/4660
| 1,297,128,387
|
PR_kwDODunzps47AYDq
| 4,660
|
Fix _resolve_single_pattern_locally on Windows with multiple drives
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Good catch ! Sorry I forgot (again) about windows paths when writing this x)"
] | 2022-07-07T09:57:30Z
| 2022-07-07T17:03:36Z
| 2022-07-07T16:52:07Z
|
MEMBER
| null | null | null |
Currently, when `_resolve_single_pattern_locally` is called from a drive different from the one in `pattern`, it raises an exception:
```
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\io\parquet.py:35: in __init__
**kwargs,
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\builder.py:287: in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:761: in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:723: in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:321: in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:239: in _resolve_single_pattern_locally
for filepath in glob_iter
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:242: in <listcomp>
os.path.relpath(filepath, base_path), os.path.relpath(pattern, base_path)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
path = 'C:\\Users\\runneradmin\\AppData\\Local\\Temp\\pytest-of-runneradmin\\pytest-0\\popen-gw0\\data6\\dataset.parquet'
start = '/'
...
E ValueError: path is on mount 'C:', start on mount 'D:'
```
This PR makes sure that `base_path` is on the same drive as `pattern`.
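For context, a minimal standalone sketch (not the `datasets` source) of the failure mode and of a guard in the spirit of this fix. It uses `ntpath` so the Windows path semantics are reproducible on any OS; `safe_relpath` is a hypothetical helper name:
```python
import ntpath  # Windows path semantics, importable on any OS for illustration

# relpath raises when the path and the start point live on different drives
try:
    ntpath.relpath("C:\\data\\dataset.parquet", start="D:\\")
except ValueError as err:
    print(err)  # path is on mount 'C:', start on mount 'D:'

# hypothetical guard in the spirit of this fix: fall back to the absolute
# path when the two sides do not share a drive
def safe_relpath(path: str, base_path: str) -> str:
    if ntpath.splitdrive(path)[0].lower() != ntpath.splitdrive(base_path)[0].lower():
        return path
    return ntpath.relpath(path, base_path)
```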
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4660/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4660/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4660.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4660",
"merged_at": "2022-07-07T16:52:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4660.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4660"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7480
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7480/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7480/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7480/events
|
https://github.com/huggingface/datasets/issues/7480
| 2,950,315,214
|
I_kwDODunzps6v2jzO
| 7,480
|
HF_DATASETS_CACHE ignored?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31896?v=4",
"events_url": "https://api.github.com/users/stephenroller/events{/privacy}",
"followers_url": "https://api.github.com/users/stephenroller/followers",
"following_url": "https://api.github.com/users/stephenroller/following{/other_user}",
"gists_url": "https://api.github.com/users/stephenroller/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stephenroller",
"id": 31896,
"login": "stephenroller",
"node_id": "MDQ6VXNlcjMxODk2",
"organizations_url": "https://api.github.com/users/stephenroller/orgs",
"received_events_url": "https://api.github.com/users/stephenroller/received_events",
"repos_url": "https://api.github.com/users/stephenroller/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stephenroller/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stephenroller/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stephenroller",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"FWIW, it does eventually write to /tmp/roller/datasets when generating the final version.",
"Hey, I’d love to work on this issue but I am a beginner, can I work it with you?",
"Hi @lhoestq,\nI'd like to look into this issue but I'm still learning. Could you share any quick pointers on the HF_DATASETS_CACHE behavior here? Thanks!",
"Hi ! `HF_DATASETS_CACHE` is only for the cache files of the `datasets` library, not for the `huggingface_hub` cache for files downloaded from the Hugging Face Hub.\n\nYou should either specify `HF_HOME` (parent cache path for everything HF) or both `HF_DATASETS_CACHE` and `HF_HUB_CACHE`"
] | 2025-03-26T17:19:34Z
| 2025-04-08T13:04:45Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I'm struggling to get `datasets` to respect HF_DATASETS_CACHE; instead, it seems to rely mostly on HF_HUB_CACHE.
Rationale: I'm on a system that uses NFS for the home directory, so downloading to NFS is expensive, slow, and wastes valuable quota compared to local disk.
Current version: 3.2.1dev. In the process of testing 3.4.0.
### Steps to reproduce the bug
[Currently writing using datasets 3.2.1dev. Will follow up with 3.4.0 results]
dump.py:
```python
from datasets import load_dataset
dataset = load_dataset("HuggingFaceFW/fineweb", name="sample-100BT", split="train")
```
Repro steps
```bash
# ensure no cache
$ mv ~/.cache/huggingface ~/.cache/huggingface.bak
$ export HF_DATASETS_CACHE=/tmp/roller/datasets
$ rm -rf ${HF_DATASETS_CACHE}
$ env | grep HF | grep -v TOKEN
HF_DATASETS_CACHE=/tmp/roller/datasets
$ python dump.py
# (omitted for brevity)
# (while downloading)
$ du -hcs ~/.cache/huggingface/hub
18G hub
18G total
# (after downloading)
$ du -hcs ~/.cache/huggingface/hub
```
It's a shame, because `datasets` supports S3 (which I could really use right now) but `huggingface_hub` does not.
### Expected behavior
* ~/.cache/huggingface/hub stays empty
* /tmp/roller/datasets becomes full of stuff
### Environment info
[Currently writing using datasets 3.2.1dev. Will follow up with 3.4.0 results]
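Per the maintainer's comment above, a minimal sketch of the workaround: redirect the caches before importing `datasets` so both the datasets cache and the Hub download cache land off NFS (`/tmp/roller/hf` is just a path in the spirit of this report):
```python
import os

# set before importing datasets/huggingface_hub so both caches are redirected
os.environ["HF_HOME"] = "/tmp/roller/hf"  # parent path for all HF caches
# or set the two caches individually:
# os.environ["HF_DATASETS_CACHE"] = "/tmp/roller/datasets"
# os.environ["HF_HUB_CACHE"] = "/tmp/roller/hub"

from datasets import load_dataset

dataset = load_dataset("HuggingFaceFW/fineweb", name="sample-100BT", split="train")
```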
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7480/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7480/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/4893
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4893/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4893/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4893/events
|
https://github.com/huggingface/datasets/issues/4893
| 1,350,655,674
|
I_kwDODunzps5QgV66
| 4,893
|
Oversampling strategy for iterable datasets in `interleave_datasets`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ylacombe",
"id": 52246514,
"login": "ylacombe",
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ylacombe",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ylacombe",
"id": 52246514,
"login": "ylacombe",
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ylacombe",
"user_view_type": "public"
}
] | null |
[
"Hi @lhoestq,\r\nI plunged into the code and it should be manageable for me to work on it!\r\n#take\r\n\r\nAlso, setting `d1`, `d2` and `d3` as you did raised a `SyntaxError: 'yield' inside list comprehension` for me, on Python 3.8.10.\r\nThe following snippet works for me though:\r\n```\r\nd1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [0, 1, 2]])), {}))\r\nd2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [10, 11, 12, 13]])), {}))\r\nd3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [20, 21, 22, 23, 24]])), {}))\r\n```\r\n\r\n",
"Great @ylacombe thanks ! I'm assigning you this issue",
"Hi @ylacombe :) Is there anything I can do to help ? Feel free to ping me if you have any question :)",
"Hi @lhoestq,\r\n\r\nI actually have already wrote the code last time [on this commit](https://github.com/ylacombe/datasets/commit/84769db97facc78a33ec53f7b1b395951e1804df) but I still have to change the docs and write some tests though. I'm working on it.\r\n\r\nHowever, I still your advice on one matter. \r\nIn #4831, when using a `Dataset` list with probabilities, I had change the original behavior so that it stops as soon as one or all datasets are out of samples. By nature, this behavior can't be applied with an `IterableDataset` because one only knows an iterable dataset is out of sample when receiving a StopIteration error after calling the iterator once again. \r\nTo sum up, as it is right know, the behavior is not consistent with an `IterableDataset` list or a `Dataset` list, when using probabilities.\r\nTo be honest, I think that the current behavior with a `Dataset` list is desirable and avoid having too many samples, so I would recommand keeping that as it is, but I can understand the desire to have the same behavior for both classes. \r\nWhat do you think ? Please let me know if you need more details.\r\n\r\n\r\nEDIT:\r\nHere is an example:\r\n```\r\n>>> from tests.test_iterable_dataset import *\r\n>>> d1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [0, 1, 2]])), {}))\r\n>>> d2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [10, 11, 12, 13]])), {}))\r\n>>> d3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [20, 21, 22, 23, 24]])), {}))\r\n>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)\r\n>>> [x[\"a\"] for x in dataset]\r\n[10, 0, 11, 1, 2, 20, 12, 13]\r\n>>> from tests.test_arrow_dataset import *\r\n>>> d1 = Dataset.from_dict({\"a\": [0, 1, 2]})\r\n>>> d2 = Dataset.from_dict({\"a\": [10, 11, 12]})\r\n>>> d3 = Dataset.from_dict({\"a\": [20, 21, 22]})\r\n>>> interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)[\"a\"]\r\n[10, 0, 11, 1, 2]\r\n[10, 0, 11, 1, 2]\r\n```\r\n ",
"Hi ! Awesome :) \r\n\r\nMaybe you can pre-load the next sample to know if the dataset is empty or not ?\r\nThis way it should be possible to have the same behavior for `IterableDataset`",
"Hi @ylacombe let us know if we can help with anything :)",
"Hi @lhoestq, I've finally made some advances in the matter. I've modified the `IterableDataset` behavior so that it aligns with the `Dataset` behavior as we have discussed. The documentation has been dealt with too. \r\nIt works as expected on my examples. However I'm having trouble figuring out how to test `interleave_datasets` on `test_iterable_datasets.py` as I have never worked with pytest. Could you help me on that or give me some indications? \r\n",
"Thanks @ylacombe :)\r\n\r\nUsing the `pytest` command, you can run all the functions in a python file that start with \"test_*\" and make sure they return not errors:\r\n```\r\npytest tests/test_iterable_dataset.py\r\n```\r\n\r\nIn our case it can be nice to define a `test_interleave_datasets_with_oversampling` function. This function can contain the code example that we mentioned earlier in this github issue to make sure it works as expected.",
"Resolved via #5036."
] | 2022-08-25T10:06:55Z
| 2022-10-03T12:37:46Z
| 2022-10-03T12:37:46Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
In https://github.com/huggingface/datasets/pull/4831 @ylacombe added an oversampling strategy for `interleave_datasets`. However, right now it doesn't work for datasets loaded using `load_dataset(..., streaming=True)`, which are `IterableDataset` objects.
It would be nice to extend `interleave_datasets` to iterable datasets as well, to support this oversampling strategy:
```python
>>> from datasets import interleave_datasets
>>> from datasets.iterable_dataset import IterableDataset, ExamplesIterable
>>> d1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [0, 1, 2]])), {}))
>>> d2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [10, 11, 12, 13]])), {}))
>>> d3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [20, 21, 22, 23, 24]])), {}))
>>> dataset = interleave_datasets([d1, d2, d3]) # is supported
>>> [x["a"] for x in dataset]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted") # is not supported yet
>>> [x["a"] for x in dataset]
[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24]
```
This can be implemented by adding the strategy to both `CyclingMultiSourcesExamplesIterable` and `RandomlyCyclingMultiSourcesExamplesIterable`, used in `_interleave_iterable_datasets` in `iterable_dataset.py`; a standalone sketch of the cycling logic follows below.
I would be happy to share some guidance if anyone would like to give it a shot :)
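For anyone picking this up, a minimal standalone sketch of the "all_exhausted" cycling over plain Python iterables (not the `datasets` internals; `interleave_all_exhausted` is a hypothetical name). It assumes non-empty sources, and the exact restart/stop ordering in the library may differ:
```python
from itertools import cycle

def interleave_all_exhausted(*sources):
    # Round-robin over the sources, restarting any source that runs out,
    # and stop once every source has been exhausted at least once.
    iterators = [iter(source) for source in sources]
    exhausted = [False] * len(sources)
    for i in cycle(range(len(sources))):
        if all(exhausted):
            return
        try:
            yield next(iterators[i])
        except StopIteration:
            exhausted[i] = True
            iterators[i] = iter(sources[i])  # restart the exhausted source
            yield next(iterators[i])

print(list(interleave_all_exhausted([0, 1, 2], [10, 11, 12, 13])))
# [0, 10, 1, 11, 2, 12, 0, 13, 1, 10]
```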
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4893/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4893/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6289
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6289/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6289/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6289/events
|
https://github.com/huggingface/datasets/pull/6289
| 1,935,628,506
|
PR_kwDODunzps5cZiay
| 6,289
|
testing doc-builder
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mishig25",
"id": 11827707,
"login": "mishig25",
"node_id": "MDQ6VXNlcjExODI3NzA3",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"repos_url": "https://api.github.com/users/mishig25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mishig25",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006424 / 0.011353 (-0.004929) | 0.003960 / 0.011008 (-0.007048) | 0.084022 / 0.038508 (0.045514) | 0.070770 / 0.023109 (0.047661) | 0.320525 / 0.275898 (0.044627) | 0.354507 / 0.323480 (0.031027) | 0.003939 / 0.007986 (-0.004047) | 0.004161 / 0.004328 (-0.000168) | 0.064754 / 0.004250 (0.060503) | 0.053630 / 0.037052 (0.016578) | 0.323948 / 0.258489 (0.065459) | 0.376908 / 0.293841 (0.083067) | 0.031063 / 0.128546 (-0.097483) | 0.008470 / 0.075646 (-0.067177) | 0.288110 / 0.419271 (-0.131161) | 0.053062 / 0.043533 (0.009529) | 0.328176 / 0.255139 (0.073037) | 0.345203 / 0.283200 (0.062003) | 0.024579 / 0.141683 (-0.117104) | 1.471649 / 1.452155 (0.019495) | 1.561458 / 1.492716 (0.068742) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223591 / 0.018006 (0.205585) | 0.450758 / 0.000490 (0.450269) | 0.003751 / 0.000200 (0.003552) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027859 / 0.037411 (-0.009552) | 0.080607 / 0.014526 (0.066081) | 0.093835 / 0.176557 (-0.082722) | 0.150466 / 0.737135 (-0.586669) | 0.094381 / 0.296338 (-0.201957) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.394011 / 0.215209 (0.178802) | 3.918318 / 2.077655 (1.840664) | 1.928684 / 1.504120 (0.424564) | 1.765944 / 1.541195 (0.224749) | 1.784716 / 1.468490 
(0.316226) | 0.487189 / 4.584777 (-4.097588) | 3.537705 / 3.745712 (-0.208008) | 3.312162 / 5.269862 (-1.957699) | 2.024520 / 4.565676 (-2.541156) | 0.057571 / 0.424275 (-0.366704) | 0.007203 / 0.007607 (-0.000404) | 0.467253 / 0.226044 (0.241208) | 4.659934 / 2.268929 (2.391005) | 2.377764 / 55.444624 (-53.066860) | 2.021984 / 6.876477 (-4.854492) | 2.197468 / 2.142072 (0.055395) | 0.586415 / 4.805227 (-4.218812) | 0.136636 / 6.500664 (-6.364028) | 0.060885 / 0.075469 (-0.014584) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.241879 / 1.841788 (-0.599908) | 18.719327 / 8.074308 (10.645019) | 14.408689 / 10.191392 (4.217297) | 0.155778 / 0.680424 (-0.524646) | 0.018475 / 0.534201 (-0.515726) | 0.392316 / 0.579283 (-0.186967) | 0.409803 / 0.434364 (-0.024561) | 0.458701 / 0.540337 (-0.081637) | 0.630561 / 1.386936 (-0.756375) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006541 / 0.011353 (-0.004812) | 0.003915 / 0.011008 (-0.007094) | 0.064292 / 0.038508 (0.025784) | 0.069174 / 0.023109 (0.046065) | 0.402048 / 0.275898 (0.126150) | 0.423960 / 0.323480 (0.100480) | 0.005355 / 0.007986 (-0.002631) | 0.003295 / 0.004328 (-0.001033) | 0.065212 / 0.004250 (0.060962) | 0.054292 / 0.037052 (0.017240) | 0.402930 / 0.258489 (0.144441) | 0.441840 / 0.293841 (0.147999) | 0.032732 / 0.128546 (-0.095814) | 0.008565 / 0.075646 (-0.067081) | 0.070705 / 0.419271 (-0.348567) | 0.047908 / 0.043533 (0.004375) | 0.401400 / 0.255139 (0.146261) | 0.422682 / 0.283200 (0.139483) | 0.022244 / 0.141683 (-0.119439) | 1.532018 / 1.452155 (0.079864) | 1.597955 / 1.492716 (0.105239) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226277 / 0.018006 (0.208271) | 0.475578 / 0.000490 (0.475088) | 0.005456 / 0.000200 (0.005256) | 0.000140 / 0.000054 (0.000085) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033111 / 0.037411 (-0.004300) | 0.093138 / 0.014526 (0.078613) | 0.104619 / 0.176557 (-0.071937) | 0.157972 / 0.737135 (-0.579164) | 0.105017 / 0.296338 (-0.191321) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441771 / 0.215209 (0.226562) | 4.396981 / 2.077655 (2.319326) | 2.410745 / 1.504120 (0.906625) | 2.258359 / 1.541195 (0.717164) | 2.372628 / 1.468490 (0.904138) | 0.491411 / 4.584777 (-4.093366) | 3.650084 / 3.745712 (-0.095628) | 3.279557 / 5.269862 (-1.990304) | 2.011377 / 4.565676 (-2.554300) | 0.058283 / 0.424275 (-0.365992) | 0.007435 / 0.007607 (-0.000172) | 0.507212 / 0.226044 (0.281167) | 5.080104 / 2.268929 (2.811176) | 2.822680 / 55.444624 (-52.621945) | 2.507608 / 6.876477 (-4.368869) | 2.719349 / 2.142072 (0.577277) | 0.586157 / 4.805227 (-4.219071) | 0.132851 / 6.500664 (-6.367813) | 0.059944 / 0.075469 (-0.015525) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.374801 / 1.841788 (-0.466987) | 19.089359 / 8.074308 (11.015051) | 14.525861 / 10.191392 (4.334469) | 0.184758 / 0.680424 (-0.495666) | 0.020206 / 0.534201 (-0.513995) | 0.397309 / 0.579283 (-0.181975) | 0.418120 / 0.434364 (-0.016244) | 0.471817 / 0.540337 (-0.068520) | 0.681691 / 1.386936 (-0.705245) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2023-10-10T15:17:29Z
| 2023-10-13T08:57:14Z
| 2023-10-13T08:56:48Z
|
NONE
| null | null | null |
testing https://github.com/huggingface/doc-builder/pull/426
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mishig25",
"id": 11827707,
"login": "mishig25",
"node_id": "MDQ6VXNlcjExODI3NzA3",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"repos_url": "https://api.github.com/users/mishig25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mishig25",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6289/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6289/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6289.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6289",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6289.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6289"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5201
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5201/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5201/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5201/events
|
https://github.com/huggingface/datasets/pull/5201
| 1,435,881,554
|
PR_kwDODunzps5CM0zn
| 5,201
|
Do not sort splits in dataset info
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"It would be coherent with https://github.com/huggingface/datasets-server/issues/614#issuecomment-1290534153",
"I think we started working on this issue nearly at the same time... :sweat_smile: \r\n- CI was fixed with this: https://huggingface.co/datasets/paws/discussions/1\r\n\r\nRelated issue:\r\n- #5202",
"@albertvillanova yeah I noticed it right after the PR :smile: thank you! the fix of the dataset info yaml fixes tests on CI, but in general order of splits in yaml influences the order in which they are displayed in the viewer, if I understand it correctly. So I suggest not to sort splits in yaml initially to avoid this for other datasets in the future. I think [this change](https://github.com/huggingface/datasets/pull/5201/files#diff-198ba4fdf2f94cb3e1aba8a0170a43b08d4ab5636d682374321c5a383a8be24dR571) should work for it. \r\n\r\nChanges to tests here maybe can be reverted considering that order in yaml now corresponds to the one in tests, thanks to your change in the dataset info.",
"Hehe, @polinaeterna, we make comments nearly at the same time as well... :laughing: "
] | 2022-11-04T10:47:21Z
| 2022-11-04T14:47:37Z
| 2022-11-04T14:45:09Z
|
CONTRIBUTOR
| null | null | null |
I suggest not sorting splits by name in `dataset_info` in the README, so that they are displayed in the order specified in the loading script. Otherwise the `test` split is displayed first; see this repo: https://huggingface.co/datasets/paws
What do you think?
I did add sorting in the tests, though, to fix CI (for the same dataset).
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5201/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5201/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5201.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5201",
"merged_at": "2022-11-04T14:45:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5201.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5201"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5267
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5267/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5267/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5267/events
|
https://github.com/huggingface/datasets/pull/5267
| 1,455,466,464
|
PR_kwDODunzps5DOlFR
| 5,267
|
Fix `max_shard_size` docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-18T16:55:22Z
| 2022-11-18T17:28:58Z
| 2022-11-18T17:25:27Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5267/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5267/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5267.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5267",
"merged_at": "2022-11-18T17:25:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5267.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5267"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7504
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7504/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7504/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7504/events
|
https://github.com/huggingface/datasets/issues/7504
| 2,979,410,641
|
I_kwDODunzps6xljLR
| 7,504
|
BuilderConfig ParquetConfig(...) doesn't have a 'use_auth_token' key.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20015750?v=4",
"events_url": "https://api.github.com/users/tteguayco/events{/privacy}",
"followers_url": "https://api.github.com/users/tteguayco/followers",
"following_url": "https://api.github.com/users/tteguayco/following{/other_user}",
"gists_url": "https://api.github.com/users/tteguayco/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tteguayco",
"id": 20015750,
"login": "tteguayco",
"node_id": "MDQ6VXNlcjIwMDE1NzUw",
"organizations_url": "https://api.github.com/users/tteguayco/orgs",
"received_events_url": "https://api.github.com/users/tteguayco/received_events",
"repos_url": "https://api.github.com/users/tteguayco/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tteguayco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tteguayco/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tteguayco",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"I encountered the same error, have you resolved it?",
"Hi ! `use_auth_token` has been deprecated and removed some time ago. You should use `token` instead in `load_dataset()`"
] | 2025-04-08T10:55:03Z
| 2025-04-15T12:36:28Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I am trying to run the following fine-tuning script (based on [this repo](https://github.com/huggingface/instruction-tuned-sd)):
```
! accelerate launch /content/instruction-tuned-sd/finetune_instruct_pix2pix.py \
--pretrained_model_name_or_path=${MODEL_ID} \
--dataset_name=${DATASET_NAME} \
--use_ema \
--enable_xformers_memory_efficient_attention \
--resolution=512 --random_flip \
--train_batch_size=2 --gradient_accumulation_steps=4 --gradient_checkpointing \
--max_train_steps=500 \
--checkpointing_steps=25 --checkpoints_total_limit=1 \
--learning_rate=5e-05 --max_grad_norm=1 --lr_warmup_steps=20 \
--conditioning_dropout_prob=0.1 \
--mixed_precision=fp16 \
--seed=42 \
--output_dir=${OUTPUT_DIR} \
--original_image_column=before \
--edit_prompt=prompt \
--edited_image=after
```
but I keep getting the following error:
```
Traceback (most recent call last):
File "/content/instruction-tuned-sd/finetune_instruct_pix2pix.py", line 1137, in <module>
main()
File "/content/instruction-tuned-sd/finetune_instruct_pix2pix.py", line 652, in main
dataset = load_dataset(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/load.py", line 2129, in load_dataset
builder_instance = load_dataset_builder(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/load.py", line 1886, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 342, in __init__
self.config, self.config_id = self._create_builder_config(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 590, in _create_builder_config
raise ValueError(f"BuilderConfig {builder_config} doesn't have a '{key}' key.")
ValueError: BuilderConfig ParquetConfig(name='default', version=0.0.0, data_dir=None, data_files={'train': ['data/train-*']}, description=None, batch_size=None, columns=None, features=None, filters=None) doesn't have a 'use_auth_token' key.
Traceback (most recent call last):
File "/usr/local/bin/accelerate", line 10, in <module>
sys.exit(main())
^^^^^^
```
Any ideas? `datasets` version should be `3.2.0`.
### Steps to reproduce the bug
Just running the script above.
### Expected behavior
No errors
### Environment info
Python 3.11.11
datasets==3.2.0
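Per the maintainer's comment above, a minimal sketch of the fix on the calling side (the repo id and token are placeholders): pass the Hub token via `token` rather than the removed `use_auth_token`:
```python
from datasets import load_dataset

# `use_auth_token` was removed; recent `datasets` versions take `token` instead
dataset = load_dataset("my-org/my-dataset", token="hf_...")  # placeholder repo id and token
```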
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7504/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7504/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/4986
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4986/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4986/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4986/events
|
https://github.com/huggingface/datasets/pull/4986
| 1,375,895,035
|
PR_kwDODunzps4_GNSd
| 4,986
|
[doc] Fix broken snippet that had too many quotes
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4",
"events_url": "https://api.github.com/users/tomaarsen/events{/privacy}",
"followers_url": "https://api.github.com/users/tomaarsen/followers",
"following_url": "https://api.github.com/users/tomaarsen/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tomaarsen",
"id": 37621491,
"login": "tomaarsen",
"node_id": "MDQ6VXNlcjM3NjIxNDkx",
"organizations_url": "https://api.github.com/users/tomaarsen/orgs",
"received_events_url": "https://api.github.com/users/tomaarsen/received_events",
"repos_url": "https://api.github.com/users/tomaarsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tomaarsen",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Spent the day familiarising myself with the huggingface line of products, and happened to run into some small issues here and there. Magically, I've found exactly one small issue in `transformers`, one in `accelerate` and now one in `datasets`, hah!\r\n\r\nAs for this PR, the issue seems solved according to the [new PR documentation](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4986/en/process#map):\r\n\r\n"
] | 2022-09-16T12:41:07Z
| 2022-09-16T22:12:21Z
| 2022-09-16T17:32:14Z
|
MEMBER
| null | null | null |
Hello!
### Pull request overview
* Fix broken snippet in https://huggingface.co/docs/datasets/main/en/process that has too many quotes
### Details
The snippet in question can be found here: https://huggingface.co/docs/datasets/main/en/process#map
This screenshot shows the issue: there is one quote too many, which causes the snippet to be colored incorrectly:

The change speaks for itself.
Thank you for the detailed documentation, by the way.
- Tom Aarsen
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4986/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4986/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4986.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4986",
"merged_at": "2022-09-16T17:32:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4986.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4986"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7488
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7488/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7488/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7488/events
|
https://github.com/huggingface/datasets/pull/7488
| 2,956,559,358
|
PR_kwDODunzps6QlLmn
| 7,488
|
Support underscore int read instruction
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7488). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"you rock, Quentin - thank you!"
] | 2025-03-28T16:01:15Z
| 2025-03-28T16:20:44Z
| 2025-03-28T16:20:43Z
|
MEMBER
| null | null | null |
close https://github.com/huggingface/datasets/issues/7481
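A minimal usage sketch of what this enables, assuming the linked issue asks for underscore digit separators in split read instructions (the repo id is a placeholder):
```python
from datasets import load_dataset

# with this change, underscore separators in slice bounds are accepted,
# mirroring Python integer literals
ds = load_dataset("my-org/my-dataset", split="train[:100_000]")  # placeholder repo id
```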
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7488/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7488/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7488.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7488",
"merged_at": "2025-03-28T16:20:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7488.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7488"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6148
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6148/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6148/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6148/events
|
https://github.com/huggingface/datasets/pull/6148
| 1,849,524,683
|
PR_kwDODunzps5X3oqv
| 6,148
|
Ignore parallel warning in map_nested
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006818 / 0.011353 (-0.004534) | 0.004166 / 0.011008 (-0.006842) | 0.086756 / 0.038508 (0.048248) | 0.084444 / 0.023109 (0.061335) | 0.319249 / 0.275898 (0.043351) | 0.358689 / 0.323480 (0.035209) | 0.004344 / 0.007986 (-0.003641) | 0.003564 / 0.004328 (-0.000765) | 0.065021 / 0.004250 (0.060771) | 0.055991 / 0.037052 (0.018939) | 0.319573 / 0.258489 (0.061084) | 0.373239 / 0.293841 (0.079398) | 0.031431 / 0.128546 (-0.097115) | 0.008671 / 0.075646 (-0.066975) | 0.288484 / 0.419271 (-0.130788) | 0.053501 / 0.043533 (0.009968) | 0.316934 / 0.255139 (0.061795) | 0.354233 / 0.283200 (0.071034) | 0.028088 / 0.141683 (-0.113595) | 1.510905 / 1.452155 (0.058750) | 1.568614 / 1.492716 (0.075898) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292343 / 0.018006 (0.274337) | 0.592309 / 0.000490 (0.591819) | 0.003850 / 0.000200 (0.003650) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033510 / 0.037411 (-0.003901) | 0.089546 / 0.014526 (0.075020) | 0.104909 / 0.176557 (-0.071648) | 0.162219 / 0.737135 (-0.574916) | 0.104137 / 0.296338 (-0.192202) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407993 / 0.215209 (0.192784) | 4.063423 / 2.077655 (1.985768) | 2.050237 / 1.504120 (0.546117) | 1.888939 / 1.541195 (0.347744) | 2.015195 / 1.468490 
(0.546704) | 0.492617 / 4.584777 (-4.092160) | 3.595871 / 3.745712 (-0.149841) | 3.320467 / 5.269862 (-1.949395) | 2.099987 / 4.565676 (-2.465690) | 0.058513 / 0.424275 (-0.365762) | 0.007709 / 0.007607 (0.000102) | 0.479277 / 0.226044 (0.253233) | 4.790712 / 2.268929 (2.521783) | 2.517292 / 55.444624 (-52.927332) | 2.167461 / 6.876477 (-4.709016) | 2.432011 / 2.142072 (0.289939) | 0.600537 / 4.805227 (-4.204690) | 0.133538 / 6.500664 (-6.367126) | 0.059621 / 0.075469 (-0.015848) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.280375 / 1.841788 (-0.561413) | 20.777971 / 8.074308 (12.703663) | 14.869539 / 10.191392 (4.678147) | 0.159372 / 0.680424 (-0.521052) | 0.018096 / 0.534201 (-0.516105) | 0.393945 / 0.579283 (-0.185338) | 0.409598 / 0.434364 (-0.024766) | 0.459202 / 0.540337 (-0.081136) | 0.632298 / 1.386936 (-0.754638) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006694 / 0.011353 (-0.004659) | 0.004299 / 0.011008 (-0.006709) | 0.064880 / 0.038508 (0.026372) | 0.083233 / 0.023109 (0.060124) | 0.366488 / 0.275898 (0.090590) | 0.405049 / 0.323480 (0.081569) | 0.005602 / 0.007986 (-0.002384) | 0.003623 / 0.004328 (-0.000705) | 0.064410 / 0.004250 (0.060160) | 0.057962 / 0.037052 (0.020910) | 0.365318 / 0.258489 (0.106829) | 0.403151 / 0.293841 (0.109310) | 0.031285 / 0.128546 (-0.097261) | 0.008867 / 0.075646 (-0.066780) | 0.071137 / 0.419271 (-0.348135) | 0.048398 / 0.043533 (0.004865) | 0.360187 / 0.255139 (0.105048) | 0.383872 / 0.283200 (0.100673) | 0.023232 / 0.141683 (-0.118451) | 1.526980 / 1.452155 (0.074826) | 1.587265 / 1.492716 (0.094549) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.362603 / 0.018006 (0.344596) | 0.557034 / 0.000490 (0.556544) | 0.025303 / 0.000200 (0.025103) | 0.000562 / 0.000054 (0.000508) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030636 / 0.037411 (-0.006775) | 0.088085 / 0.014526 (0.073559) | 0.103238 / 0.176557 (-0.073318) | 0.155208 / 0.737135 (-0.581928) | 0.106661 / 0.296338 (-0.189678) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413660 / 0.215209 (0.198451) | 4.122717 / 2.077655 (2.045063) | 2.097656 / 1.504120 (0.593536) | 1.931995 / 1.541195 (0.390801) | 2.071497 / 1.468490 (0.603007) | 0.490257 / 4.584777 (-4.094520) | 3.588076 / 3.745712 (-0.157636) | 3.423087 / 5.269862 (-1.846774) | 2.147974 / 4.565676 (-2.417703) | 0.058783 / 0.424275 (-0.365492) | 0.007456 / 0.007607 (-0.000151) | 0.492350 / 0.226044 (0.266305) | 4.935935 / 2.268929 (2.667006) | 2.604217 / 55.444624 (-52.840407) | 2.333723 / 6.876477 (-4.542754) | 2.585293 / 2.142072 (0.443220) | 0.608800 / 4.805227 (-4.196427) | 0.135806 / 6.500664 (-6.364858) | 0.062716 / 0.075469 (-0.012753) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.347359 / 1.841788 (-0.494429) | 21.420505 / 8.074308 (13.346197) | 14.325914 / 10.191392 (4.134522) | 0.159617 / 0.680424 (-0.520806) | 0.018769 / 0.534201 (-0.515432) | 0.399677 / 0.579283 (-0.179606) | 0.402992 / 0.434364 (-0.031372) | 0.484629 / 0.540337 (-0.055709) | 0.656007 / 1.386936 (-0.730929) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007291 / 0.011353 (-0.004062) | 0.004501 / 0.011008 (-0.006508) | 0.097529 / 0.038508 (0.059021) | 0.079257 / 0.023109 (0.056147) | 0.356390 / 0.275898 (0.080492) | 0.390065 / 0.323480 (0.066585) | 0.006071 / 0.007986 (-0.001914) | 0.003783 / 0.004328 (-0.000546) | 0.074598 / 0.004250 (0.070348) | 0.059626 / 0.037052 (0.022574) | 0.395344 / 0.258489 (0.136855) | 0.418564 / 0.293841 (0.124723) | 0.041843 / 0.128546 (-0.086704) | 0.009293 / 0.075646 (-0.066354) | 0.332668 / 0.419271 (-0.086604) | 0.065753 / 0.043533 (0.022220) | 0.357285 / 0.255139 (0.102146) | 0.402974 / 0.283200 (0.119775) | 0.028714 / 0.141683 (-0.112968) | 1.733913 / 1.452155 (0.281759) | 1.802574 / 1.492716 (0.309858) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253114 / 0.018006 (0.235108) | 0.606338 / 0.000490 (0.605848) | 0.006871 / 0.000200 (0.006671) | 0.000126 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031850 / 0.037411 (-0.005562) | 0.095148 / 0.014526 (0.080622) | 0.111499 / 0.176557 (-0.065057) | 0.174653 / 0.737135 (-0.562483) | 0.109396 / 0.296338 (-0.186943) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440442 / 0.215209 (0.225233) | 4.408792 / 2.077655 (2.331137) | 2.149778 / 1.504120 (0.645658) | 1.922430 / 1.541195 (0.381235) | 2.029281 / 1.468490 
(0.560791) | 0.611586 / 4.584777 (-3.973191) | 4.204571 / 3.745712 (0.458859) | 3.638194 / 5.269862 (-1.631668) | 2.336146 / 4.565676 (-2.229531) | 0.065383 / 0.424275 (-0.358892) | 0.008441 / 0.007607 (0.000834) | 0.527357 / 0.226044 (0.301313) | 5.247892 / 2.268929 (2.978963) | 2.654005 / 55.444624 (-52.790620) | 2.256596 / 6.876477 (-4.619881) | 2.432191 / 2.142072 (0.290119) | 0.672759 / 4.805227 (-4.132469) | 0.148494 / 6.500664 (-6.352170) | 0.068248 / 0.075469 (-0.007221) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.544250 / 1.841788 (-0.297538) | 21.882016 / 8.074308 (13.807708) | 16.470182 / 10.191392 (6.278790) | 0.166107 / 0.680424 (-0.514317) | 0.021305 / 0.534201 (-0.512896) | 0.445069 / 0.579283 (-0.134214) | 0.500631 / 0.434364 (0.066267) | 0.525801 / 0.540337 (-0.014536) | 0.806534 / 1.386936 (-0.580402) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007322 / 0.011353 (-0.004030) | 0.004206 / 0.011008 (-0.006802) | 0.074827 / 0.038508 (0.036319) | 0.084759 / 0.023109 (0.061650) | 0.421204 / 0.275898 (0.145306) | 0.464442 / 0.323480 (0.140962) | 0.006523 / 0.007986 (-0.001463) | 0.003613 / 0.004328 (-0.000716) | 0.073796 / 0.004250 (0.069545) | 0.066609 / 0.037052 (0.029557) | 0.430108 / 0.258489 (0.171619) | 0.463165 / 0.293841 (0.169324) | 0.036015 / 0.128546 (-0.092532) | 0.009696 / 0.075646 (-0.065951) | 0.083326 / 0.419271 (-0.335946) | 0.056804 / 0.043533 (0.013271) | 0.423333 / 0.255139 (0.168194) | 0.450538 / 0.283200 (0.167338) | 0.027067 / 0.141683 (-0.114616) | 1.700563 / 1.452155 (0.248408) | 1.748738 / 1.492716 (0.256021) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.395682 / 0.018006 (0.377675) | 0.540192 / 0.000490 (0.539702) | 0.140049 / 0.000200 (0.139849) | 0.000694 / 0.000054 (0.000639) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036643 / 0.037411 (-0.000769) | 0.104422 / 0.014526 (0.089896) | 0.113072 / 0.176557 (-0.063484) | 0.179561 / 0.737135 (-0.557575) | 0.118620 / 0.296338 (-0.177718) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.476547 / 0.215209 (0.261338) | 4.716009 / 2.077655 (2.638354) | 2.412111 / 1.504120 (0.907991) | 2.246389 / 1.541195 (0.705194) | 2.307058 / 1.468490 (0.838568) | 0.552759 / 4.584777 (-4.032018) | 4.172484 / 3.745712 (0.426771) | 3.848419 / 5.269862 (-1.421443) | 2.310338 / 4.565676 (-2.255339) | 0.071757 / 0.424275 (-0.352518) | 0.011206 / 0.007607 (0.003599) | 0.609526 / 0.226044 (0.383482) | 5.583065 / 2.268929 (3.314136) | 3.081227 / 55.444624 (-52.363397) | 2.637782 / 6.876477 (-4.238695) | 2.887561 / 2.142072 (0.745489) | 0.667227 / 4.805227 (-4.138000) | 0.154421 / 6.500664 (-6.346243) | 0.070772 / 0.075469 (-0.004697) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.605500 / 1.841788 (-0.236288) | 22.872717 / 8.074308 (14.798409) | 15.865333 / 10.191392 (5.673941) | 0.170353 / 0.680424 (-0.510071) | 0.021854 / 0.534201 (-0.512347) | 0.461467 / 0.579283 (-0.117816) | 0.477743 / 0.434364 (0.043379) | 0.597234 / 0.540337 (0.056896) | 0.800416 / 1.386936 (-0.586520) |\n\n</details>\n</details>\n\n\n"
] | 2023-08-14T10:43:41Z
| 2023-08-17T08:54:06Z
| 2023-08-17T08:43:58Z
|
MEMBER
| null | null | null |
This warning message was shown every time you passed `num_proc` to `load_dataset`, because of `map_nested`:
```
parallel_map is experimental and might be subject to breaking changes in the future
```
This PR removes it for `map_nested`. If someone uses another parallel backend, they're already warned when `parallel_backend` is called anyway.
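For context, here is a minimal sketch of a call that used to emit the warning on every invocation; the file pattern is illustrative:
```
from datasets import load_dataset

# Passing num_proc makes load_dataset parallelize work via map_nested,
# which previously printed the experimental parallel_map warning.
ds = load_dataset("json", data_files="data/*.jsonl", num_proc=4)
```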
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6148/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6148/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6148.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6148",
"merged_at": "2023-08-17T08:43:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6148.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6148"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6055
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6055/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6055/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6055/events
|
https://github.com/huggingface/datasets/issues/6055
| 1,813,524,145
|
I_kwDODunzps5sGC6x
| 6,055
|
Fix host URL in The Pile datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7540752?v=4",
"events_url": "https://api.github.com/users/nickovchinnikov/events{/privacy}",
"followers_url": "https://api.github.com/users/nickovchinnikov/followers",
"following_url": "https://api.github.com/users/nickovchinnikov/following{/other_user}",
"gists_url": "https://api.github.com/users/nickovchinnikov/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nickovchinnikov",
"id": 7540752,
"login": "nickovchinnikov",
"node_id": "MDQ6VXNlcjc1NDA3NTI=",
"organizations_url": "https://api.github.com/users/nickovchinnikov/orgs",
"received_events_url": "https://api.github.com/users/nickovchinnikov/received_events",
"repos_url": "https://api.github.com/users/nickovchinnikov/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nickovchinnikov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nickovchinnikov/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nickovchinnikov",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2023-07-20T09:08:52Z
| 2023-07-20T09:09:37Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
In #3627 and #5543, you tried to fix the host URL in The Pile datasets, but neither URL is working now:
`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`
And
`ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))`
### Steps to reproduce the bug
```
from datasets import load_dataset
# This takes a few minutes to run, so go grab a tea or coffee while you wait :)
data_files = "https://mystic.the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
pubmed_dataset
```
Result:
`ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))`
And
```
from datasets import load_dataset
# This takes a few minutes to run, so go grab a tea or coffee while you wait :)
data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
pubmed_dataset
```
Result:
`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`
### Expected behavior
Downloading as normal.
### Environment info
- `datasets` version: 2.9.0
- Platform: Windows
- Python version: 3.9.13
| null |
{
"+1": 5,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6055/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6055/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5465
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5465/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5465/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5465/events
|
https://github.com/huggingface/datasets/issues/5465
| 1,557,510,618
|
I_kwDODunzps5c1bna
| 5,465
|
audiofolder creates empty dataset even though the dataset passed in follows the correct structure
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4",
"events_url": "https://api.github.com/users/jcho19/events{/privacy}",
"followers_url": "https://api.github.com/users/jcho19/followers",
"following_url": "https://api.github.com/users/jcho19/following{/other_user}",
"gists_url": "https://api.github.com/users/jcho19/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jcho19",
"id": 107211437,
"login": "jcho19",
"node_id": "U_kgDOBmPqrQ",
"organizations_url": "https://api.github.com/users/jcho19/orgs",
"received_events_url": "https://api.github.com/users/jcho19/received_events",
"repos_url": "https://api.github.com/users/jcho19/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jcho19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcho19/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jcho19",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2023-01-26T01:45:45Z
| 2023-01-26T08:48:45Z
| 2023-01-26T08:48:45Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
The structure of my dataset folder, called "my_dataset", is: a `data` subfolder and a `metadata.csv` file.
The `data` folder contains all the mp3 files, and `metadata.csv` consists of file locations like 'data/...mp3' and transcriptions. There are 400+ mp3 files and corresponding transcriptions in my dataset.
When I run the following:
ds = load_dataset("audiofolder", data_dir="my_dataset")
I get:
Using custom data configuration default-...
Downloading and preparing dataset audiofolder/default to /...
Downloading data files: 0%| | 0/2 [00:00<?, ?it/s]
Downloading data files: 0it [00:00, ?it/s]
Extracting data files: 0it [00:00, ?it/s]
Generating train split: 0 examples [00:00, ? examples/s]
Dataset audiofolder downloaded and prepared to /.... Subsequent calls will reuse this data.
0%| | 0/1 [00:00<?, ?it/s]
DatasetDict({
train: Dataset({
features: ['audio', 'transcription'],
num_rows: 1
})
})
### Steps to reproduce the bug
Create a dataset folder called 'my_dataset' with a subfolder called 'data' that has mp3 files. Also, create metadata.csv that has file locations like 'data/...mp3' and their corresponding transcription.
Run:
ds = load_dataset("audiofolder", data_dir="my_dataset")
### Expected behavior
It should generate a dataset with numerous rows.
### Environment info
Run on Jupyter notebook
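For reference, here is a minimal sketch of the layout `audiofolder` is documented to expect; the clip names are illustrative, and the key assumption is that `metadata.csv` has a `file_name` column whose paths are relative to its own directory:
```
from datasets import load_dataset

# my_dataset/
# ├── metadata.csv        # columns: file_name,transcription
# └── data/
#     ├── clip_001.mp3    # referenced as "data/clip_001.mp3" in metadata.csv
#     └── clip_002.mp3
ds = load_dataset("audiofolder", data_dir="my_dataset")
print(ds["train"].num_rows)  # should match the number of rows in metadata.csv
```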
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4",
"events_url": "https://api.github.com/users/jcho19/events{/privacy}",
"followers_url": "https://api.github.com/users/jcho19/followers",
"following_url": "https://api.github.com/users/jcho19/following{/other_user}",
"gists_url": "https://api.github.com/users/jcho19/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jcho19",
"id": 107211437,
"login": "jcho19",
"node_id": "U_kgDOBmPqrQ",
"organizations_url": "https://api.github.com/users/jcho19/orgs",
"received_events_url": "https://api.github.com/users/jcho19/received_events",
"repos_url": "https://api.github.com/users/jcho19/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jcho19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcho19/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jcho19",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5465/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5465/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5603
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5603/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5603/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5603/events
|
https://github.com/huggingface/datasets/pull/5603
| 1,607,143,509
|
PR_kwDODunzps5LJZzG
| 5,603
|
Don't compute checksums if not necessary in `datasets-cli test`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008550 / 0.011353 (-0.002803) | 0.004476 / 0.011008 (-0.006532) | 0.100902 / 0.038508 (0.062394) | 0.029684 / 0.023109 (0.006575) | 0.308081 / 0.275898 (0.032183) | 0.363435 / 0.323480 (0.039955) | 0.006987 / 0.007986 (-0.000999) | 0.003401 / 0.004328 (-0.000927) | 0.078218 / 0.004250 (0.073967) | 0.036657 / 0.037052 (-0.000395) | 0.319670 / 0.258489 (0.061181) | 0.349952 / 0.293841 (0.056111) | 0.033416 / 0.128546 (-0.095130) | 0.011511 / 0.075646 (-0.064135) | 0.323888 / 0.419271 (-0.095384) | 0.042429 / 0.043533 (-0.001104) | 0.307310 / 0.255139 (0.052171) | 0.329459 / 0.283200 (0.046259) | 0.085209 / 0.141683 (-0.056474) | 1.475893 / 1.452155 (0.023739) | 1.502782 / 1.492716 (0.010065) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200137 / 0.018006 (0.182131) | 0.411269 / 0.000490 (0.410780) | 0.000415 / 0.000200 (0.000215) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022626 / 0.037411 (-0.014785) | 0.097045 / 0.014526 (0.082519) | 0.102955 / 0.176557 (-0.073602) | 0.148411 / 0.737135 (-0.588725) | 0.107238 / 0.296338 (-0.189100) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421683 / 0.215209 (0.206474) | 4.203031 / 2.077655 (2.125376) | 1.908232 / 1.504120 (0.404112) | 1.698867 / 1.541195 (0.157672) | 1.743561 / 1.468490 
(0.275071) | 0.693199 / 4.584777 (-3.891578) | 3.361022 / 3.745712 (-0.384690) | 2.989610 / 5.269862 (-2.280251) | 1.533036 / 4.565676 (-3.032641) | 0.082675 / 0.424275 (-0.341601) | 0.012419 / 0.007607 (0.004812) | 0.531543 / 0.226044 (0.305499) | 5.330595 / 2.268929 (3.061666) | 2.347519 / 55.444624 (-53.097105) | 1.975672 / 6.876477 (-4.900804) | 2.039541 / 2.142072 (-0.102532) | 0.810281 / 4.805227 (-3.994946) | 0.148917 / 6.500664 (-6.351747) | 0.065441 / 0.075469 (-0.010028) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266213 / 1.841788 (-0.575574) | 13.628106 / 8.074308 (5.553798) | 13.852191 / 10.191392 (3.660799) | 0.149004 / 0.680424 (-0.531420) | 0.028549 / 0.534201 (-0.505652) | 0.399824 / 0.579283 (-0.179459) | 0.401231 / 0.434364 (-0.033133) | 0.473251 / 0.540337 (-0.067086) | 0.561094 / 1.386936 (-0.825842) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006669 / 0.011353 (-0.004684) | 0.004477 / 0.011008 (-0.006532) | 0.077514 / 0.038508 (0.039006) | 0.027489 / 0.023109 (0.004380) | 0.341935 / 0.275898 (0.066037) | 0.377392 / 0.323480 (0.053912) | 0.004947 / 0.007986 (-0.003039) | 0.004600 / 0.004328 (0.000271) | 0.075938 / 0.004250 (0.071687) | 0.039586 / 0.037052 (0.002534) | 0.344966 / 0.258489 (0.086477) | 0.392181 / 0.293841 (0.098340) | 0.031838 / 0.128546 (-0.096708) | 0.011572 / 0.075646 (-0.064075) | 0.085811 / 0.419271 (-0.333461) | 0.042250 / 0.043533 (-0.001283) | 0.345605 / 0.255139 (0.090466) | 0.367814 / 0.283200 (0.084615) | 0.090683 / 0.141683 (-0.051000) | 1.483168 / 1.452155 (0.031014) | 1.559724 / 1.492716 (0.067008) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235655 / 0.018006 (0.217649) | 0.399016 / 0.000490 (0.398527) | 0.003096 / 0.000200 (0.002896) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024454 / 0.037411 (-0.012957) | 0.100710 / 0.014526 (0.086185) | 0.107950 / 0.176557 (-0.068606) | 0.161560 / 0.737135 (-0.575576) | 0.111840 / 0.296338 (-0.184498) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441362 / 0.215209 (0.226153) | 4.428105 / 2.077655 (2.350450) | 2.074501 / 1.504120 (0.570381) | 1.866672 / 1.541195 (0.325477) | 1.928266 / 1.468490 (0.459776) | 0.703561 / 4.584777 (-3.881216) | 3.396537 / 3.745712 (-0.349175) | 3.047369 / 5.269862 (-2.222492) | 1.595133 / 4.565676 (-2.970543) | 0.084028 / 0.424275 (-0.340247) | 0.012349 / 0.007607 (0.004741) | 0.539354 / 0.226044 (0.313310) | 5.401535 / 2.268929 (3.132606) | 2.499874 / 55.444624 (-52.944750) | 2.161406 / 6.876477 (-4.715071) | 2.197385 / 2.142072 (0.055313) | 0.810864 / 4.805227 (-3.994363) | 0.152277 / 6.500664 (-6.348387) | 0.067266 / 0.075469 (-0.008203) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.280900 / 1.841788 (-0.560887) | 13.815731 / 8.074308 (5.741423) | 13.007438 / 10.191392 (2.816046) | 0.129711 / 0.680424 (-0.550713) | 0.016852 / 0.534201 (-0.517349) | 0.380775 / 0.579283 (-0.198508) | 0.384143 / 0.434364 (-0.050221) | 0.459954 / 0.540337 (-0.080383) | 0.549335 / 1.386936 (-0.837601) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009570 / 0.011353 (-0.001783) | 0.005219 / 0.011008 (-0.005789) | 0.098472 / 0.038508 (0.059964) | 0.035429 / 0.023109 (0.012320) | 0.303086 / 0.275898 (0.027188) | 0.365926 / 0.323480 (0.042446) | 0.008797 / 0.007986 (0.000811) | 0.004220 / 0.004328 (-0.000108) | 0.076670 / 0.004250 (0.072419) | 0.045596 / 0.037052 (0.008543) | 0.309476 / 0.258489 (0.050987) | 0.343958 / 0.293841 (0.050117) | 0.038741 / 0.128546 (-0.089805) | 0.011990 / 0.075646 (-0.063657) | 0.332326 / 0.419271 (-0.086945) | 0.048897 / 0.043533 (0.005364) | 0.296002 / 0.255139 (0.040863) | 0.322048 / 0.283200 (0.038849) | 0.104403 / 0.141683 (-0.037280) | 1.461777 / 1.452155 (0.009622) | 1.516362 / 1.492716 (0.023645) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201565 / 0.018006 (0.183559) | 0.435781 / 0.000490 (0.435291) | 0.004215 / 0.000200 (0.004015) | 0.000282 / 0.000054 (0.000227) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027272 / 0.037411 (-0.010139) | 0.106157 / 0.014526 (0.091631) | 0.116948 / 0.176557 (-0.059609) | 0.160404 / 0.737135 (-0.576731) | 0.122518 / 0.296338 (-0.173820) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397721 / 0.215209 (0.182512) | 3.966433 / 2.077655 (1.888778) | 1.755410 / 1.504120 (0.251290) | 1.566480 / 1.541195 (0.025285) | 1.623684 / 1.468490 
(0.155194) | 0.696820 / 4.584777 (-3.887957) | 3.750437 / 3.745712 (0.004725) | 2.105875 / 5.269862 (-3.163986) | 1.442026 / 4.565676 (-3.123650) | 0.085026 / 0.424275 (-0.339249) | 0.012239 / 0.007607 (0.004632) | 0.502613 / 0.226044 (0.276569) | 5.049016 / 2.268929 (2.780087) | 2.314499 / 55.444624 (-53.130126) | 1.967943 / 6.876477 (-4.908534) | 2.033507 / 2.142072 (-0.108565) | 0.861908 / 4.805227 (-3.943319) | 0.167784 / 6.500664 (-6.332880) | 0.063022 / 0.075469 (-0.012447) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.210434 / 1.841788 (-0.631353) | 14.979319 / 8.074308 (6.905011) | 14.095263 / 10.191392 (3.903871) | 0.174203 / 0.680424 (-0.506221) | 0.028547 / 0.534201 (-0.505654) | 0.442509 / 0.579283 (-0.136774) | 0.445811 / 0.434364 (0.011447) | 0.531313 / 0.540337 (-0.009024) | 0.636541 / 1.386936 (-0.750395) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007341 / 0.011353 (-0.004012) | 0.005197 / 0.011008 (-0.005811) | 0.075413 / 0.038508 (0.036905) | 0.033261 / 0.023109 (0.010152) | 0.339596 / 0.275898 (0.063698) | 0.376051 / 0.323480 (0.052571) | 0.005827 / 0.007986 (-0.002159) | 0.005473 / 0.004328 (0.001144) | 0.074851 / 0.004250 (0.070600) | 0.049059 / 0.037052 (0.012007) | 0.357182 / 0.258489 (0.098693) | 0.384589 / 0.293841 (0.090748) | 0.037122 / 0.128546 (-0.091424) | 0.012298 / 0.075646 (-0.063348) | 0.088191 / 0.419271 (-0.331081) | 0.052002 / 0.043533 (0.008469) | 0.343216 / 0.255139 (0.088077) | 0.364534 / 0.283200 (0.081334) | 0.105462 / 0.141683 (-0.036221) | 1.486717 / 1.452155 (0.034562) | 1.584725 / 1.492716 (0.092009) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199210 / 0.018006 (0.181203) | 0.439069 / 0.000490 (0.438580) | 0.000436 / 0.000200 (0.000236) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029931 / 0.037411 (-0.007480) | 0.109564 / 0.014526 (0.095038) | 0.122284 / 0.176557 (-0.054273) | 0.170819 / 0.737135 (-0.566317) | 0.125886 / 0.296338 (-0.170452) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422724 / 0.215209 (0.207515) | 4.210304 / 2.077655 (2.132650) | 2.001481 / 1.504120 (0.497361) | 1.810818 / 1.541195 (0.269623) | 1.901367 / 1.468490 (0.432877) | 0.686004 / 4.584777 (-3.898773) | 3.768850 / 3.745712 (0.023138) | 2.079501 / 5.269862 (-3.190360) | 1.326970 / 4.565676 (-3.238706) | 0.085991 / 0.424275 (-0.338284) | 0.012298 / 0.007607 (0.004690) | 0.526878 / 0.226044 (0.300833) | 5.267241 / 2.268929 (2.998312) | 2.451781 / 55.444624 (-52.992843) | 2.109143 / 6.876477 (-4.767333) | 2.185426 / 2.142072 (0.043353) | 0.830165 / 4.805227 (-3.975063) | 0.166167 / 6.500664 (-6.334497) | 0.064077 / 0.075469 (-0.011392) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.270430 / 1.841788 (-0.571358) | 14.844852 / 8.074308 (6.770544) | 13.196672 / 10.191392 (3.005280) | 0.162853 / 0.680424 (-0.517571) | 0.017727 / 0.534201 (-0.516474) | 0.424803 / 0.579283 (-0.154480) | 0.439970 / 0.434364 (0.005606) | 0.530691 / 0.540337 (-0.009647) | 0.630474 / 1.386936 (-0.756462) |\n\n</details>\n</details>\n\n\n"
] | 2023-03-02T16:42:39Z
| 2023-03-03T15:45:32Z
| 2023-03-03T15:38:28Z
|
MEMBER
| null | null | null |
We only need to compute the checksums if a `dataset_infos.json` file exists.
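A minimal sketch of the gating idea; `should_compute_checksums` is a hypothetical name, not the actual function touched by the PR:
```
import os

def should_compute_checksums(dataset_dir: str) -> bool:
    # Checksums are only useful for verification against an existing
    # dataset_infos.json, so skip the expensive computation otherwise.
    return os.path.exists(os.path.join(dataset_dir, "dataset_infos.json"))
```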
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5603/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5603/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5603.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5603",
"merged_at": "2023-03-03T15:38:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5603.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5603"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5826
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5826/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5826/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5826/events
|
https://github.com/huggingface/datasets/pull/5826
| 1,698,155,751
|
PR_kwDODunzps5P5FYZ
| 5,826
|
Support working_dir in from_spark
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4",
"events_url": "https://api.github.com/users/maddiedawson/events{/privacy}",
"followers_url": "https://api.github.com/users/maddiedawson/followers",
"following_url": "https://api.github.com/users/maddiedawson/following{/other_user}",
"gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/maddiedawson",
"id": 106995444,
"login": "maddiedawson",
"node_id": "U_kgDOBmCe9A",
"organizations_url": "https://api.github.com/users/maddiedawson/orgs",
"received_events_url": "https://api.github.com/users/maddiedawson/received_events",
"repos_url": "https://api.github.com/users/maddiedawson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/maddiedawson",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Added env var",
"@lhoestq would you or another maintainer be able to review please? :)",
"I removed the env var",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005771 / 0.011353 (-0.005582) | 0.004086 / 0.011008 (-0.006922) | 0.097170 / 0.038508 (0.058661) | 0.027464 / 0.023109 (0.004355) | 0.305425 / 0.275898 (0.029527) | 0.343869 / 0.323480 (0.020389) | 0.004899 / 0.007986 (-0.003087) | 0.003294 / 0.004328 (-0.001034) | 0.074710 / 0.004250 (0.070459) | 0.034982 / 0.037052 (-0.002070) | 0.306063 / 0.258489 (0.047574) | 0.343115 / 0.293841 (0.049274) | 0.025155 / 0.128546 (-0.103392) | 0.008429 / 0.075646 (-0.067217) | 0.318680 / 0.419271 (-0.100591) | 0.043304 / 0.043533 (-0.000229) | 0.306703 / 0.255139 (0.051564) | 0.335535 / 0.283200 (0.052335) | 0.087428 / 0.141683 (-0.054255) | 1.483769 / 1.452155 (0.031614) | 1.538753 / 1.492716 (0.046037) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203313 / 0.018006 (0.185307) | 0.413864 / 0.000490 (0.413375) | 0.003186 / 0.000200 (0.002986) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022862 / 0.037411 (-0.014550) | 0.097306 / 0.014526 (0.082780) | 0.102823 / 0.176557 (-0.073733) | 0.162803 / 0.737135 (-0.574333) | 0.106311 / 0.296338 (-0.190028) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451710 / 0.215209 (0.236501) | 4.508520 / 2.077655 (2.430865) | 2.181118 / 1.504120 (0.676998) | 1.977607 / 1.541195 (0.436412) | 2.008366 / 1.468490 
(0.539876) | 0.565388 / 4.584777 (-4.019389) | 3.439318 / 3.745712 (-0.306394) | 1.747512 / 5.269862 (-3.522349) | 1.102124 / 4.565676 (-3.463553) | 0.069212 / 0.424275 (-0.355063) | 0.011926 / 0.007607 (0.004318) | 0.553414 / 0.226044 (0.327370) | 5.548959 / 2.268929 (3.280031) | 2.628769 / 55.444624 (-52.815856) | 2.301003 / 6.876477 (-4.575473) | 2.341744 / 2.142072 (0.199672) | 0.673092 / 4.805227 (-4.132135) | 0.137722 / 6.500664 (-6.362942) | 0.066909 / 0.075469 (-0.008560) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196854 / 1.841788 (-0.644934) | 13.421776 / 8.074308 (5.347468) | 13.839760 / 10.191392 (3.648368) | 0.140557 / 0.680424 (-0.539867) | 0.016619 / 0.534201 (-0.517582) | 0.357985 / 0.579283 (-0.221298) | 0.387018 / 0.434364 (-0.047346) | 0.452798 / 0.540337 (-0.087540) | 0.542085 / 1.386936 (-0.844851) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005868 / 0.011353 (-0.005484) | 0.004103 / 0.011008 (-0.006905) | 0.076126 / 0.038508 (0.037618) | 0.027744 / 0.023109 (0.004635) | 0.357257 / 0.275898 (0.081359) | 0.387981 / 0.323480 (0.064501) | 0.004807 / 0.007986 (-0.003178) | 0.003337 / 0.004328 (-0.000991) | 0.075486 / 0.004250 (0.071236) | 0.035121 / 0.037052 (-0.001931) | 0.361385 / 0.258489 (0.102896) | 0.399346 / 0.293841 (0.105505) | 0.025263 / 0.128546 (-0.103284) | 0.008571 / 0.075646 (-0.067075) | 0.081815 / 0.419271 (-0.337457) | 0.041114 / 0.043533 (-0.002418) | 0.362840 / 0.255139 (0.107701) | 0.380926 / 0.283200 (0.097727) | 0.092728 / 0.141683 (-0.048955) | 1.517647 / 1.452155 (0.065492) | 1.534914 / 1.492716 (0.042198) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199669 / 0.018006 (0.181663) | 0.399070 / 0.000490 (0.398580) | 0.002014 / 0.000200 (0.001814) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024541 / 0.037411 (-0.012870) | 0.099676 / 0.014526 (0.085151) | 0.106503 / 0.176557 (-0.070054) | 0.153755 / 0.737135 (-0.583380) | 0.108564 / 0.296338 (-0.187775) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443842 / 0.215209 (0.228633) | 4.441158 / 2.077655 (2.363503) | 2.159496 / 1.504120 (0.655376) | 1.955358 / 1.541195 (0.414163) | 1.973864 / 1.468490 (0.505374) | 0.550467 / 4.584777 (-4.034310) | 3.381831 / 3.745712 (-0.363881) | 2.561192 / 5.269862 (-2.708670) | 1.361684 / 4.565676 (-3.203992) | 0.068140 / 0.424275 (-0.356135) | 0.012005 / 0.007607 (0.004398) | 0.551921 / 0.226044 (0.325877) | 5.503591 / 2.268929 (3.234662) | 2.591609 / 55.444624 (-52.853015) | 2.246681 / 6.876477 (-4.629796) | 2.290941 / 2.142072 (0.148868) | 0.655212 / 4.805227 (-4.150015) | 0.136013 / 6.500664 (-6.364651) | 0.066995 / 0.075469 (-0.008474) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.300438 / 1.841788 (-0.541350) | 13.866224 / 8.074308 (5.791916) | 13.932624 / 10.191392 (3.741232) | 0.144345 / 0.680424 (-0.536079) | 0.016623 / 0.534201 (-0.517578) | 0.357629 / 0.579283 (-0.221654) | 0.389759 / 0.434364 (-0.044605) | 0.417704 / 0.540337 (-0.122633) | 0.501358 / 1.386936 (-0.885578) |\n\n</details>\n</details>\n\n\n",
"Thank you!"
] | 2023-05-05T20:22:40Z
| 2023-05-25T17:45:54Z
| 2023-05-25T08:46:15Z
|
CONTRIBUTOR
| null | null | null |
Accept `working_dir` as an argument to `Dataset.from_spark`. Setting a non-NFS working directory for Spark workers to materialize the dataset to improves write performance.
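A minimal usage sketch, assuming a running Spark session; the path below is illustrative and should point at fast local (non-NFS) storage:
```
from pyspark.sql import SparkSession
from datasets import Dataset

spark = SparkSession.builder.appName("example").getOrCreate()
df = spark.createDataFrame([("hello", 0), ("world", 1)], ["text", "label"])

# Spark workers write their intermediate files under working_dir.
ds = Dataset.from_spark(df, working_dir="/local_disk0/datasets_tmp")
```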
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5826/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5826/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5826.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5826",
"merged_at": "2023-05-25T08:46:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5826.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5826"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5596
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5596/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5596/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5596/events
|
https://github.com/huggingface/datasets/issues/5596
| 1,604,919,993
|
I_kwDODunzps5fqSK5
| 5,596
|
[TypeError: Couldn't cast array of type] Can only load a subset of the dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/loubnabnl",
"id": 44069155,
"login": "loubnabnl",
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/loubnabnl",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Apparently some JSON objects have a `\"labels\"` field. Since this field is not present in every object, you must specify all the fields types in the README.md\r\n\r\nEDIT: actually specifying the feature types doesn’t solve the issue, it raises an error because “labels” is missing in the data",
"We've updated the dataset to remove the extra `labels` field from some files, closing this issue. Thanks!",
"A similar error occurs in the Pile dataset (EleutherAI/the_pile)\r\n\r\nLoading the dataset produces the following error.\r\n\r\n```\r\nTypeError: Couldn't cast array of type\r\nstruct<file: string, id: string>\r\nto\r\n{'id': Value(dtype='string', id=None)}\r\n```\r\n",
"I think this was fixed in https://huggingface.co/datasets/EleutherAI/the_pile/discussions/11",
"i have the same problem ,how to solve :\r\n raise TypeError(f\"Couldn't cast array of type\\n{array.type}\\nto\\n{feature}\")\r\nTypeError: Couldn't cast array of type\r\nlist<item: string>\r\nto\r\n{'content': Value(dtype='string', id=None), 'role': Value(dtype='string', id=None)}"
] | 2023-03-01T12:53:08Z
| 2023-12-05T03:22:00Z
| 2023-03-02T11:12:11Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 1839, in wrapper
return func(array, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 2132, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
struct<type: string, action: string, datetime: timestamp[s], author: string, title: string, description: string, comment_id: int64, comment: string, labels: list<item: string>>
to
{'type': Value(dtype='string', id=None), 'action': Value(dtype='string', id=None), 'datetime': Value(dtype='timestamp[s]', id=None), 'author': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'comment_id': Value(dtype='int64', id=None), 'comment': Value(dtype='string', id=None)}
```
But I can successfully load a subset of the dataset; for example, this works:
```python
ds = load_dataset('bigcode-data/the-stack-gh-issues', split="train", data_files=[f"data/data-{x}.jsonl" for x in range(10)])
```
and `ds.features` returns:
```
{'repo': Value(dtype='string', id=None),
'org': Value(dtype='string', id=None),
'issue_id': Value(dtype='int64', id=None),
'issue_number': Value(dtype='int64', id=None),
'pull_request': {'user_login': Value(dtype='string', id=None),
'repo': Value(dtype='string', id=None),
'number': Value(dtype='int64', id=None)},
'events': [{'type': Value(dtype='string', id=None),
'action': Value(dtype='string', id=None),
'datetime': Value(dtype='timestamp[s]', id=None),
'author': Value(dtype='string', id=None),
'title': Value(dtype='string', id=None),
'description': Value(dtype='string', id=None),
'comment_id': Value(dtype='int64', id=None),
'comment': Value(dtype='string', id=None)}]}
```
So I'm not sure if there's an issue with just some of the files. I'd be grateful for any suggestions to fix the issue.
Side note:
I saw this related [issue](https://github.com/huggingface/datasets/issues/3637) and tried to write a loading script to have `events` as a `Sequence` and not a `list` [here](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues/blob/main/loading.py) (the script was renamed). It worked with a subset locally, but it doesn't work for the remote dataset because it can't find https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues/resolve/main/data.
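For reference, a minimal sketch of the explicit-schema workaround discussed above. The field names are copied from the error message and `ds.features` output; as the comments note, this alone still errors when `labels` is absent from a file, so treat it as an illustration rather than a confirmed fix:
```python
from datasets import load_dataset, Features, Sequence, Value

# Explicit schema including the optional "labels" field (hypothetical sketch).
features = Features({
    "repo": Value("string"),
    "org": Value("string"),
    "issue_id": Value("int64"),
    "issue_number": Value("int64"),
    "pull_request": {
        "user_login": Value("string"),
        "repo": Value("string"),
        "number": Value("int64"),
    },
    "events": Sequence({
        "type": Value("string"),
        "action": Value("string"),
        "datetime": Value("timestamp[s]"),
        "author": Value("string"),
        "title": Value("string"),
        "description": Value("string"),
        "comment_id": Value("int64"),
        "comment": Value("string"),
        "labels": Sequence(Value("string")),
    }),
})

ds = load_dataset("bigcode-data/the-stack-gh-issues", split="train", features=features)
```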
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('bigcode-data/the-stack-gh-issues', split="train")
```
### Expected behavior
Load the entire dataset successfully.
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.12
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/loubnabnl",
"id": 44069155,
"login": "loubnabnl",
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/loubnabnl",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5596/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5596/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5468
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5468/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5468/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5468/events
|
https://github.com/huggingface/datasets/issues/5468
| 1,558,066,625
|
I_kwDODunzps5c3jXB
| 5,468
|
Allow opposite of remove_columns on Dataset and DatasetDict
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hollance",
"id": 346853,
"login": "hollance",
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"repos_url": "https://api.github.com/users/hollance/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hollance",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
| null |
[] | null |
[
"Hi! I agree it would be nice to have a method like that. Instead of `keep_columns`, we can name it `select_columns` to be more aligned with PyArrow's naming convention (`pa.Table.select`).",
"Hi, I am a newbie to open source and would like to contribute. @mariosasko can I take up this issue ?",
"Hey, I also want to work on this issue I am a newbie to open source. ",
"This sounds related to https://github.com/huggingface/datasets/issues/5474\r\n\r\nI'm fine with `select_columns`, or we could also override `select` to also accept a list of columns maybe ?",
"@lhoestq, I am planning to add a member function to the dataset class to perform the selection operation. Do you think its the right way to proceed? or there is a better option ?",
"Unless @mariosasko thinks otherwise, I think it can go in `Dataset.select()` :)\r\nThough some parameters like keep_in_memory, indices_cache_file_name or writer_batch_size wouldn't when selecting columns, so we would need to update the docstring as well",
"If someone wants to give it a shot, feel free to comment `#self-assign` and it will assign the issue to you.\r\n\r\nFeel free to ping us here if you have questions or if we can help :)",
"I would rather have this functionality as a separate method. IMO it's always better to be explicit than to have an API where a single method can do different/uncorrelated things (somewhat reminds me of Pandas, and there is probably a good reason why PyArrow is more rigid in this aspect).",
"In the end I also think it would be nice to have it as a separate method, this way we can also have it for `IterableDataset` (which can't have `select` for indices)"
] | 2023-01-26T12:28:09Z
| 2023-02-13T09:59:38Z
| 2023-02-13T09:59:38Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
In this blog post https://huggingface.co/blog/audio-datasets, I noticed the following code:
```python
COLUMNS_TO_KEEP = ["text", "audio"]
all_columns = gigaspeech["train"].column_names
columns_to_remove = set(all_columns) - set(COLUMNS_TO_KEEP)
gigaspeech = gigaspeech.remove_columns(columns_to_remove)
```
This kind of thing happens a lot when you don't need to keep all columns from the dataset. It would be more convenient (and less error-prone) if you could just write:
```python
gigaspeech = gigaspeech.keep_columns(["text", "audio"])
```
Internally, `keep_columns` could still call `remove_columns`, but it expresses more clearly what the user's intent is.
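A minimal sketch of that idea as a standalone helper (the name `keep_columns` is just the one proposed in this issue):
```python
from datasets import Dataset

def keep_columns(dataset: Dataset, columns_to_keep: list) -> Dataset:
    """Drop every column not listed in `columns_to_keep` (hypothetical helper)."""
    columns_to_remove = set(dataset.column_names) - set(columns_to_keep)
    return dataset.remove_columns(list(columns_to_remove))

# gigaspeech["train"] = keep_columns(gigaspeech["train"], ["text", "audio"])
```
For what it's worth, `datasets` has since shipped `Dataset.select_columns`, which covers exactly this use case.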
### Motivation
Less code to write for the user of the dataset.
### Your contribution
-
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5468/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5468/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4706
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4706/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4706/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4706/events
|
https://github.com/huggingface/datasets/pull/4706
| 1,308,198,454
|
PR_kwDODunzps47lNBg
| 4,706
|
Fix empty examples in xtreme dataset for bucc18 config
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I guess the report link is this instead: https://huggingface.co/datasets/xtreme/discussions/1"
] | 2022-07-18T16:22:46Z
| 2022-07-19T06:41:14Z
| 2022-07-19T06:29:17Z
|
MEMBER
| null | null | null |
As reported in https://huggingface.co/muibk, there are empty examples in xtreme/bucc18.de
I applied your fix @mustaszewski
I also used a dict to make the dataset generation much faster
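A runnable sketch of the dict-based pairing mentioned above (the data and variable names are illustrative, not the actual script code):
```python
# Index sentences by id once, then pair gold ids via O(1) dict lookups
# instead of scanning a list for every pair. Data here is illustrative.
source_lines = [("src-1", "Guten Tag"), ("src-2", "Hallo Welt")]
target_lines = [("tgt-9", "Good day"), ("tgt-7", "Hello world")]
gold_pairs = [("src-1", "tgt-9"), ("src-2", "tgt-7")]

sources = dict(source_lines)
targets = dict(target_lines)

for src_id, tgt_id in gold_pairs:
    print(sources[src_id], "<->", targets[tgt_id])
```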
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4706/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4706/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4706.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4706",
"merged_at": "2022-07-19T06:29:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4706.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4706"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7476
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7476/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7476/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7476/events
|
https://github.com/huggingface/datasets/pull/7476
| 2,946,997,924
|
PR_kwDODunzps6QEbmO
| 7,476
|
Prioritize JSON
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7476). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-03-25T15:44:31Z
| 2025-03-25T15:47:00Z
| 2025-03-25T15:45:00Z
|
MEMBER
| null | null | null |
`datasets` should load the JSON data in https://huggingface.co/datasets/facebook/natural_reasoning, not the PDF
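Independently of this fix, a user-side sketch that forces the JSON files to be loaded (the glob pattern is an assumption about the repository layout):
```python
from datasets import load_dataset

# Select the JSON files explicitly so the PDF is never considered;
# the glob pattern is an assumption about the repo layout.
ds = load_dataset("facebook/natural_reasoning", data_files="**/*.json*")
```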
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7476/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7476/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7476.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7476",
"merged_at": "2025-03-25T15:45:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7476.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7476"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5321
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5321/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5321/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5321/events
|
https://github.com/huggingface/datasets/pull/5321
| 1,471,430,667
|
PR_kwDODunzps5EEOhE
| 5,321
|
Fix loading from HF GCP cache
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Do you know why this stopped working?\r\n\r\nIt comes from the changes in https://github.com/huggingface/datasets/pull/5107/files#diff-355ae5c229f95f86895404b72378ecd6e966c41cbeebb674af6fe6e9611bc126"
] | 2022-12-01T14:39:06Z
| 2022-12-01T16:10:09Z
| 2022-12-01T16:07:02Z
|
MEMBER
| null | null | null |
As reported in https://discuss.huggingface.co/t/error-loading-wikipedia-dataset/26599/4 it's not possible to download a cached version of Wikipedia from the HF GCP cache
I fixed it and added an integration test (runs in 10sec)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5321/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5321/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5321.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5321",
"merged_at": "2022-12-01T16:07:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5321.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5321"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6103
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6103/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6103/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6103/events
|
https://github.com/huggingface/datasets/pull/6103
| 1,828,515,165
|
PR_kwDODunzps5Ww2gV
| 6,103
|
Set dev version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6103). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006528 / 0.011353 (-0.004825) | 0.003909 / 0.011008 (-0.007099) | 0.083954 / 0.038508 (0.045446) | 0.070513 / 0.023109 (0.047404) | 0.344362 / 0.275898 (0.068464) | 0.370278 / 0.323480 (0.046798) | 0.005395 / 0.007986 (-0.002591) | 0.003323 / 0.004328 (-0.001005) | 0.064538 / 0.004250 (0.060288) | 0.055616 / 0.037052 (0.018564) | 0.353590 / 0.258489 (0.095101) | 0.382159 / 0.293841 (0.088318) | 0.031133 / 0.128546 (-0.097414) | 0.008429 / 0.075646 (-0.067217) | 0.288665 / 0.419271 (-0.130606) | 0.052626 / 0.043533 (0.009093) | 0.347676 / 0.255139 (0.092537) | 0.363726 / 0.283200 (0.080526) | 0.021956 / 0.141683 (-0.119727) | 1.506091 / 1.452155 (0.053936) | 1.563940 / 1.492716 (0.071223) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207658 / 0.018006 (0.189652) | 0.473411 / 0.000490 (0.472922) | 0.005437 / 0.000200 (0.005237) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027769 / 0.037411 (-0.009643) | 0.082566 / 0.014526 (0.068040) | 0.092700 / 0.176557 (-0.083857) | 0.152589 / 0.737135 (-0.584546) | 0.093772 / 0.296338 (-0.202566) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401072 / 0.215209 (0.185863) | 3.997922 / 2.077655 (1.920267) | 2.028223 / 1.504120 (0.524103) | 1.845229 / 1.541195 (0.304035) | 1.883980 / 1.468490 
(0.415489) | 0.485112 / 4.584777 (-4.099665) | 3.657048 / 3.745712 (-0.088664) | 4.998475 / 5.269862 (-0.271386) | 3.007417 / 4.565676 (-1.558259) | 0.057003 / 0.424275 (-0.367272) | 0.007270 / 0.007607 (-0.000338) | 0.482220 / 0.226044 (0.256176) | 4.817560 / 2.268929 (2.548631) | 2.484285 / 55.444624 (-52.960340) | 2.163327 / 6.876477 (-4.713149) | 2.326412 / 2.142072 (0.184339) | 0.600349 / 4.805227 (-4.204878) | 0.134245 / 6.500664 (-6.366419) | 0.060705 / 0.075469 (-0.014764) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281440 / 1.841788 (-0.560347) | 19.165591 / 8.074308 (11.091283) | 14.007728 / 10.191392 (3.816336) | 0.168367 / 0.680424 (-0.512057) | 0.018149 / 0.534201 (-0.516052) | 0.391688 / 0.579283 (-0.187595) | 0.414528 / 0.434364 (-0.019836) | 0.456964 / 0.540337 (-0.083373) | 0.613807 / 1.386936 (-0.773129) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006502 / 0.011353 (-0.004851) | 0.003956 / 0.011008 (-0.007052) | 0.064297 / 0.038508 (0.025789) | 0.073430 / 0.023109 (0.050321) | 0.364113 / 0.275898 (0.088215) | 0.389021 / 0.323480 (0.065541) | 0.005375 / 0.007986 (-0.002611) | 0.003363 / 0.004328 (-0.000966) | 0.064404 / 0.004250 (0.060153) | 0.056664 / 0.037052 (0.019612) | 0.365504 / 0.258489 (0.107015) | 0.398477 / 0.293841 (0.104636) | 0.031739 / 0.128546 (-0.096807) | 0.008663 / 0.075646 (-0.066984) | 0.070757 / 0.419271 (-0.348515) | 0.051014 / 0.043533 (0.007481) | 0.368287 / 0.255139 (0.113148) | 0.382941 / 0.283200 (0.099742) | 0.024642 / 0.141683 (-0.117041) | 1.516721 / 1.452155 (0.064567) | 1.557625 / 1.492716 (0.064908) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208248 / 0.018006 (0.190242) | 0.443560 / 0.000490 (0.443070) | 0.004004 / 0.000200 (0.003805) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031116 / 0.037411 (-0.006295) | 0.086814 / 0.014526 (0.072288) | 0.099111 / 0.176557 (-0.077445) | 0.155032 / 0.737135 (-0.582104) | 0.098938 / 0.296338 (-0.197401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413080 / 0.215209 (0.197871) | 4.115546 / 2.077655 (2.037891) | 2.162073 / 1.504120 (0.657953) | 2.008107 / 1.541195 (0.466912) | 2.052317 / 1.468490 (0.583827) | 0.485158 / 4.584777 (-4.099619) | 3.617478 / 3.745712 (-0.128234) | 5.030564 / 5.269862 (-0.239298) | 2.787812 / 4.565676 (-1.777865) | 0.057466 / 0.424275 (-0.366809) | 0.007656 / 0.007607 (0.000049) | 0.490037 / 0.226044 (0.263993) | 4.887896 / 2.268929 (2.618968) | 2.639644 / 55.444624 (-52.804981) | 2.258051 / 6.876477 (-4.618426) | 2.417573 / 2.142072 (0.275500) | 0.604473 / 4.805227 (-4.200754) | 0.134770 / 6.500664 (-6.365894) | 0.061709 / 0.075469 (-0.013760) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.342500 / 1.841788 (-0.499288) | 19.354990 / 8.074308 (11.280682) | 14.161975 / 10.191392 (3.970583) | 0.157084 / 0.680424 (-0.523339) | 0.018227 / 0.534201 (-0.515974) | 0.391819 / 0.579283 (-0.187464) | 0.399157 / 0.434364 (-0.035207) | 0.460582 / 0.540337 (-0.079756) | 0.612183 / 1.386936 (-0.774753) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009318 / 0.011353 (-0.002035) | 0.005515 / 0.011008 (-0.005493) | 0.108532 / 0.038508 (0.070024) | 0.103583 / 0.023109 (0.080473) | 0.419249 / 0.275898 (0.143351) | 0.453573 / 0.323480 (0.130093) | 0.006601 / 0.007986 (-0.001384) | 0.005297 / 0.004328 (0.000968) | 0.082737 / 0.004250 (0.078487) | 0.064708 / 0.037052 (0.027656) | 0.425679 / 0.258489 (0.167190) | 0.462028 / 0.293841 (0.168187) | 0.048104 / 0.128546 (-0.080442) | 0.014069 / 0.075646 (-0.061577) | 0.377780 / 0.419271 (-0.041491) | 0.067510 / 0.043533 (0.023977) | 0.422421 / 0.255139 (0.167282) | 0.447127 / 0.283200 (0.163927) | 0.037745 / 0.141683 (-0.103938) | 1.855306 / 1.452155 (0.403152) | 1.943876 / 1.492716 (0.451160) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280161 / 0.018006 (0.262155) | 0.598001 / 0.000490 (0.597512) | 0.001130 / 0.000200 (0.000930) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036064 / 0.037411 (-0.001347) | 0.113256 / 0.014526 (0.098730) | 0.120598 / 0.176557 (-0.055959) | 0.191386 / 0.737135 (-0.545750) | 0.118125 / 0.296338 (-0.178214) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.616887 / 0.215209 (0.401678) | 6.085498 / 2.077655 (4.007844) | 2.639428 / 1.504120 (1.135308) | 2.215444 / 1.541195 (0.674249) | 2.311990 / 1.468490 
(0.843500) | 0.820539 / 4.584777 (-3.764238) | 5.306010 / 3.745712 (1.560298) | 4.731726 / 5.269862 (-0.538136) | 3.053933 / 4.565676 (-1.511744) | 0.098862 / 0.424275 (-0.325413) | 0.009456 / 0.007607 (0.001849) | 0.725455 / 0.226044 (0.499411) | 7.367385 / 2.268929 (5.098457) | 3.464921 / 55.444624 (-51.979703) | 2.833868 / 6.876477 (-4.042608) | 3.033008 / 2.142072 (0.890935) | 1.036751 / 4.805227 (-3.768476) | 0.243646 / 6.500664 (-6.257018) | 0.081079 / 0.075469 (0.005610) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.584695 / 1.841788 (-0.257093) | 25.150355 / 8.074308 (17.076047) | 21.826622 / 10.191392 (11.635230) | 0.212502 / 0.680424 (-0.467921) | 0.029865 / 0.534201 (-0.504335) | 0.496814 / 0.579283 (-0.082470) | 0.611959 / 0.434364 (0.177595) | 0.550434 / 0.540337 (0.010097) | 0.800897 / 1.386936 (-0.586039) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009117 / 0.011353 (-0.002236) | 0.005236 / 0.011008 (-0.005772) | 0.082402 / 0.038508 (0.043894) | 0.090578 / 0.023109 (0.067468) | 0.487302 / 0.275898 (0.211404) | 0.523639 / 0.323480 (0.200159) | 0.006684 / 0.007986 (-0.001302) | 0.004306 / 0.004328 (-0.000023) | 0.083273 / 0.004250 (0.079023) | 0.068585 / 0.037052 (0.031532) | 0.487751 / 0.258489 (0.229262) | 0.538972 / 0.293841 (0.245131) | 0.048915 / 0.128546 (-0.079632) | 0.014312 / 0.075646 (-0.061335) | 0.091863 / 0.419271 (-0.327409) | 0.066114 / 0.043533 (0.022581) | 0.483552 / 0.255139 (0.228413) | 0.522250 / 0.283200 (0.239050) | 0.038533 / 0.141683 (-0.103150) | 1.803834 / 1.452155 (0.351680) | 1.891927 / 1.492716 (0.399211) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.336662 / 0.018006 (0.318656) | 0.611408 / 0.000490 (0.610918) | 0.014310 / 0.000200 (0.014110) | 0.000152 / 0.000054 (0.000097) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034755 / 0.037411 (-0.002656) | 0.101008 / 0.014526 (0.086483) | 0.124530 / 0.176557 (-0.052026) | 0.179844 / 0.737135 (-0.557292) | 0.125027 / 0.296338 (-0.171312) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.618341 / 0.215209 (0.403132) | 6.146848 / 2.077655 (4.069193) | 2.893305 / 1.504120 (1.389185) | 2.608722 / 1.541195 (1.067528) | 2.671276 / 1.468490 (1.202786) | 0.860096 / 4.584777 (-3.724681) | 5.440671 / 3.745712 (1.694959) | 4.776958 / 5.269862 (-0.492903) | 3.098300 / 4.565676 (-1.467376) | 0.098664 / 0.424275 (-0.325611) | 0.009270 / 0.007607 (0.001663) | 0.712780 / 0.226044 (0.486735) | 7.199721 / 2.268929 (4.930793) | 3.620723 / 55.444624 (-51.823902) | 3.052218 / 6.876477 (-3.824259) | 3.321093 / 2.142072 (1.179021) | 1.070992 / 4.805227 (-3.734235) | 0.224091 / 6.500664 (-6.276573) | 0.083395 / 0.075469 (0.007926) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.716867 / 1.841788 (-0.124921) | 25.534617 / 8.074308 (17.460309) | 25.221014 / 10.191392 (15.029621) | 0.248098 / 0.680424 (-0.432326) | 0.029659 / 0.534201 (-0.504542) | 0.492929 / 0.579283 (-0.086355) | 0.618253 / 0.434364 (0.183889) | 0.577108 / 0.540337 (0.036771) | 0.803188 / 1.386936 (-0.583748) |\n\n</details>\n</details>\n\n\n"
] | 2023-07-31T06:44:05Z
| 2023-07-31T06:55:58Z
| 2023-07-31T06:45:41Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6103/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6103/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6103.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6103",
"merged_at": "2023-07-31T06:45:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6103.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6103"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5609
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5609/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5609/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5609/events
|
https://github.com/huggingface/datasets/issues/5609
| 1,610,062,862
|
I_kwDODunzps5f95wO
| 5,609
|
`load_from_disk` vs `load_dataset` performance.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidgilbertson",
"id": 4443482,
"login": "davidgilbertson",
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidgilbertson",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi! We've recently made some improvements to `save_to_disk`/`list_to_disk` (100x faster in some scenarios), so it would help if you could install `datasets` directly from `main` (`pip install git+https://github.com/huggingface/datasets.git`) and re-run the \"benchmark\".",
"Great to hear! I'll give it a try when I've got a moment.",
"@mariosasko is that fix released to pip in the meantime? Asking cause im facing still the same issue (regarding loading images from local paths):\r\n```\r\ndataset = load_dataset(\"csv\", cache_dir=\"cache\", data_files=[\"/STORAGE/DATA/mijam/vit/code/list_filtered.csv\"], num_proc=16, split=\"train\").cast_column(\"image\", Image())\r\ndataset = dataset.class_encode_column(\"label\")\r\n```\r\nquite fast. \r\n\r\nThen I do `save_to_disk()` and some time later:\r\n```\r\ndataset = load_from_disk('/STORAGE/DATA/mijam/accel/saved_arrow_big')\r\n```\r\nreally slow. In theory it should be quicked since it only loads arrow files, no conversions and so on.\r\n",
"@mjamroz I assume your CSV file stores image file paths. This means `save_to_disk` needs to embed the image bytes resulting in a much bigger Arrow file (than the initial one). Maybe specifying `num_shards` to make the Arrow files smaller can help (large Arrow files on some systems take a long time to load)."
] | 2023-03-05T05:27:15Z
| 2023-07-13T18:48:05Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I have downloaded `openwebtext` (~12GB) and filtered out a small amount of junk (it's still huge). Now, I would like to use this filtered version for future work. It seems I have two choices:
1. Use `load_dataset` each time, relying on the cache mechanism, and re-run my filtering.
2. `save_to_disk` and then use `load_from_disk` to load the filtered version.
The performance of these two approaches is wildly different:
* Using `load_dataset` takes about 20 seconds to load the dataset, and a few seconds to re-filter (thanks to the brilliant filter/map caching)
* Using `load_from_disk` takes 14 minutes! And the second time I tried, the session just crashed (on a machine with 32GB of RAM)
I don't know if you'd call this a bug, but it seems like there shouldn't need to be two methods to load from disk, or that they should not take such wildly different amounts of time, or that one should not crash. Or maybe the docs could offer some guidance about when to pick which method and why two methods exist, or simply explain how most people do this.
Something I couldn't work out from reading the docs was this: can I modify a dataset from the hub, save it (locally) and use `load_dataset` to load it? This [post seemed to suggest that the answer is no](https://discuss.huggingface.co/t/save-and-load-datasets/9260).
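A minimal sketch of the two approaches side by side, including the `num_shards` hint from the comments (paths and the filter are placeholders):
```python
from datasets import load_dataset, load_from_disk

# Approach 1: load from the hub cache and re-apply the (cached) filter.
ds = load_dataset("openwebtext", split="train")
ds = ds.filter(lambda ex: len(ex["text"]) > 0)  # placeholder filter

# Approach 2: persist the filtered copy. More shards keep each Arrow
# file small, which can make load_from_disk much faster on some systems.
ds.save_to_disk("/data/openwebtext_filtered", num_shards=64)
ds2 = load_from_disk("/data/openwebtext_filtered")
```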
### Steps to reproduce the bug
See above
### Expected behavior
Load times should be about the same.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| null |
{
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5609/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5609/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6628
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6628/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6628/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6628/events
|
https://github.com/huggingface/datasets/pull/6628
| 2,105,760,502
|
PR_kwDODunzps5lVxXU
| 6,628
|
Make CLI test support multi-processing
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6628). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@huggingface/datasets, feel free to review this PR so that it can be included in the next release.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004907 / 0.011353 (-0.006446) | 0.003200 / 0.011008 (-0.007808) | 0.062601 / 0.038508 (0.024093) | 0.028607 / 0.023109 (0.005498) | 0.242688 / 0.275898 (-0.033210) | 0.263754 / 0.323480 (-0.059726) | 0.003084 / 0.007986 (-0.004901) | 0.002744 / 0.004328 (-0.001585) | 0.048686 / 0.004250 (0.044436) | 0.040734 / 0.037052 (0.003682) | 0.262585 / 0.258489 (0.004096) | 0.282822 / 0.293841 (-0.011019) | 0.027470 / 0.128546 (-0.101076) | 0.010356 / 0.075646 (-0.065290) | 0.206397 / 0.419271 (-0.212874) | 0.035440 / 0.043533 (-0.008093) | 0.248599 / 0.255139 (-0.006540) | 0.268869 / 0.283200 (-0.014331) | 0.018542 / 0.141683 (-0.123141) | 1.128139 / 1.452155 (-0.324016) | 1.172115 / 1.492716 (-0.320602) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.107939 / 0.018006 (0.089933) | 0.301801 / 0.000490 (0.301311) | 0.000207 / 0.000200 (0.000007) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018505 / 0.037411 (-0.018906) | 0.061350 / 0.014526 (0.046824) | 0.072645 / 0.176557 (-0.103912) | 0.119459 / 0.737135 (-0.617676) | 0.074711 / 0.296338 (-0.221628) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275132 / 0.215209 (0.059922) | 2.714936 / 2.077655 (0.637281) | 1.434204 / 1.504120 (-0.069916) | 1.328358 / 1.541195 (-0.212837) | 1.320706 / 
1.468490 (-0.147784) | 0.555723 / 4.584777 (-4.029054) | 2.401335 / 3.745712 (-1.344378) | 2.765609 / 5.269862 (-2.504253) | 1.715207 / 4.565676 (-2.850470) | 0.074990 / 0.424275 (-0.349285) | 0.004999 / 0.007607 (-0.002608) | 0.328435 / 0.226044 (0.102390) | 3.254945 / 2.268929 (0.986017) | 1.781105 / 55.444624 (-53.663519) | 1.509491 / 6.876477 (-5.366985) | 1.520670 / 2.142072 (-0.621402) | 0.636411 / 4.805227 (-4.168817) | 0.115616 / 6.500664 (-6.385048) | 0.041633 / 0.075469 (-0.033836) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975462 / 1.841788 (-0.866326) | 11.480359 / 8.074308 (3.406051) | 10.528665 / 10.191392 (0.337273) | 0.141323 / 0.680424 (-0.539100) | 0.013510 / 0.534201 (-0.520691) | 0.293570 / 0.579283 (-0.285713) | 0.259956 / 0.434364 (-0.174408) | 0.331440 / 0.540337 (-0.208898) | 0.453487 / 1.386936 (-0.933449) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005278 / 0.011353 (-0.006075) | 0.003400 / 0.011008 (-0.007608) | 0.049442 / 0.038508 (0.010934) | 0.031738 / 0.023109 (0.008628) | 0.292334 / 0.275898 (0.016436) | 0.308931 / 0.323480 (-0.014549) | 0.004290 / 0.007986 (-0.003696) | 0.002738 / 0.004328 (-0.001591) | 0.048944 / 0.004250 (0.044694) | 0.044273 / 0.037052 (0.007221) | 0.301434 / 0.258489 (0.042945) | 0.333067 / 0.293841 (0.039226) | 0.048741 / 0.128546 (-0.079805) | 0.010357 / 0.075646 (-0.065289) | 0.057777 / 0.419271 (-0.361495) | 0.033892 / 0.043533 (-0.009641) | 0.286921 / 0.255139 (0.031782) | 0.306204 / 0.283200 (0.023005) | 0.018764 / 0.141683 (-0.122919) | 1.142000 / 1.452155 (-0.310155) | 1.206728 / 1.492716 (-0.285988) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094233 / 0.018006 (0.076227) | 0.302553 / 0.000490 (0.302063) | 0.000213 / 0.000200 (0.000013) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021814 / 0.037411 (-0.015598) | 0.075143 / 0.014526 (0.060617) | 0.087717 / 0.176557 (-0.088840) | 0.126079 / 0.737135 (-0.611056) | 0.089083 / 0.296338 (-0.207255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293844 / 0.215209 (0.078635) | 2.859481 / 2.077655 (0.781827) | 1.580366 / 1.504120 (0.076246) | 1.462633 / 1.541195 (-0.078562) | 1.471052 / 1.468490 (0.002562) | 0.574755 / 4.584777 (-4.010022) | 2.408925 / 3.745712 (-1.336787) | 2.673618 / 5.269862 (-2.596243) | 1.746218 / 4.565676 (-2.819459) | 0.063435 / 0.424275 (-0.360840) | 0.005023 / 0.007607 (-0.002584) | 0.341990 / 0.226044 (0.115946) | 3.430862 / 2.268929 (1.161933) | 1.953869 / 55.444624 (-53.490755) | 1.661276 / 6.876477 (-5.215201) | 1.761575 / 2.142072 (-0.380498) | 0.656388 / 4.805227 (-4.148839) | 0.117774 / 6.500664 (-6.382890) | 0.040290 / 0.075469 (-0.035179) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004315 / 1.841788 (-0.837473) | 12.249719 / 8.074308 (4.175411) | 10.942703 / 10.191392 (0.751311) | 0.128552 / 0.680424 (-0.551872) | 0.015958 / 0.534201 (-0.518242) | 0.287330 / 0.579283 (-0.291953) | 0.274336 / 0.434364 (-0.160028) | 0.326233 / 0.540337 (-0.214104) | 0.414548 / 1.386936 (-0.972388) |\n\n</details>\n</details>\n\n\n"
] | 2024-01-29T15:30:09Z
| 2024-02-05T10:29:20Z
| 2024-02-05T10:23:13Z
|
MEMBER
| null | null | null |
Support passing `--num_proc` to CLI test.
This was really useful recently for running the command on `pubmed`: https://huggingface.co/datasets/pubmed/discussions/11
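For context, a minimal sketch of the equivalent knob in the Python API (an illustration, not this PR's diff; it assumes the CLI flag forwards to the same `num_proc` multiprocessing option that `load_dataset` exposes for download and preparation in recent `datasets` versions):

```python
# Sketch: parallel download/preparation from the Python API.
# "pubmed" is the dataset from the linked discussion; num_proc splits
# the download and prepare work across that many processes.
from datasets import load_dataset

ds = load_dataset("pubmed", num_proc=8)
print(ds)
```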
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6628/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6628/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6628.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6628",
"merged_at": "2024-02-05T10:23:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6628.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6628"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7165
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7165/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7165/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7165/events
|
https://github.com/huggingface/datasets/pull/7165
| 2,544,972,541
|
PR_kwDODunzps58fva1
| 7,165
|
fix increase_load_count
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7165). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I tested a few load_dataset and they do show up in download stats now",
"Thanks for having noticed and fixed."
] | 2024-09-24T10:14:40Z
| 2024-09-24T17:31:07Z
| 2024-09-24T13:48:00Z
|
MEMBER
| null | null | null |
It had been failing since 3.0 and was therefore not updating download counts on HF or in our dashboard.
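For illustration only, the fire-and-forget pattern such a counter update typically relies on (the real `increase_load_count` lives in `datasets/load.py`; this hypothetical sketch is not its actual implementation, and the endpoint is an assumption):

```python
# Hypothetical sketch of a download-count ping: telemetry must never
# break dataset loading, so every failure is swallowed.
import requests

def ping_load_count(dataset_name: str) -> None:
    url = f"https://huggingface.co/datasets/{dataset_name}"  # assumed endpoint
    try:
        requests.head(url, timeout=1)
    except Exception:
        pass  # a failed ping only means a missed count, never an error

ping_load_count("glue")
```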
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7165/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7165/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7165.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7165",
"merged_at": "2024-09-24T13:48:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7165.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7165"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4789
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4789/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4789/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4789/events
|
https://github.com/huggingface/datasets/pull/4789
| 1,328,409,253
|
PR_kwDODunzps48o3Kk
| 4,789
|
Update doc upload_dataset.mdx
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mishig25",
"id": 11827707,
"login": "mishig25",
"node_id": "MDQ6VXNlcjExODI3NzA3",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"repos_url": "https://api.github.com/users/mishig25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mishig25",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-08-04T10:24:00Z
| 2022-09-09T16:37:10Z
| 2022-09-09T16:34:58Z
|
NONE
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4789/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4789/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4789.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4789",
"merged_at": "2022-09-09T16:34:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4789.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4789"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6216
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6216/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6216/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6216/events
|
https://github.com/huggingface/datasets/pull/6216
| 1,883,492,703
|
PR_kwDODunzps5Zp8al
| 6,216
|
Release: 2.13.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007801 / 0.011353 (-0.003552) | 0.004831 / 0.011008 (-0.006177) | 0.123101 / 0.038508 (0.084593) | 0.053246 / 0.023109 (0.030137) | 0.381787 / 0.275898 (0.105889) | 0.461822 / 0.323480 (0.138342) | 0.004655 / 0.007986 (-0.003331) | 0.004818 / 0.004328 (0.000490) | 0.090865 / 0.004250 (0.086614) | 0.070626 / 0.037052 (0.033574) | 0.409122 / 0.258489 (0.150633) | 0.449627 / 0.293841 (0.155787) | 0.037477 / 0.128546 (-0.091069) | 0.010677 / 0.075646 (-0.064970) | 0.419970 / 0.419271 (0.000699) | 0.064626 / 0.043533 (0.021093) | 0.379536 / 0.255139 (0.124397) | 0.405790 / 0.283200 (0.122590) | 0.027290 / 0.141683 (-0.114393) | 1.884973 / 1.452155 (0.432819) | 1.960547 / 1.492716 (0.467831) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.259393 / 0.018006 (0.241386) | 0.502130 / 0.000490 (0.501640) | 0.013053 / 0.000200 (0.012853) | 0.000336 / 0.000054 (0.000281) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033459 / 0.037411 (-0.003953) | 0.135888 / 0.014526 (0.121362) | 0.145354 / 0.176557 (-0.031203) | 0.213289 / 0.737135 (-0.523847) | 0.151239 / 0.296338 (-0.145100) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.510817 / 0.215209 (0.295608) | 5.077888 / 2.077655 (3.000234) | 2.502991 / 1.504120 (0.998871) | 2.275566 / 1.541195 (0.734371) | 2.353025 / 1.468490 
(0.884535) | 0.659062 / 4.584777 (-3.925715) | 4.411399 / 3.745712 (0.665686) | 2.227395 / 5.269862 (-3.042467) | 1.306771 / 4.565676 (-3.258905) | 0.081121 / 0.424275 (-0.343154) | 0.014252 / 0.007607 (0.006645) | 0.635040 / 0.226044 (0.408996) | 6.357500 / 2.268929 (4.088572) | 3.056647 / 55.444624 (-52.387977) | 2.671997 / 6.876477 (-4.204480) | 2.847955 / 2.142072 (0.705883) | 0.808163 / 4.805227 (-3.997064) | 0.177176 / 6.500664 (-6.323488) | 0.079984 / 0.075469 (0.004515) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.490471 / 1.841788 (-0.351317) | 17.927433 / 8.074308 (9.853124) | 17.744967 / 10.191392 (7.553575) | 0.171034 / 0.680424 (-0.509390) | 0.021432 / 0.534201 (-0.512769) | 0.515745 / 0.579283 (-0.063538) | 0.504746 / 0.434364 (0.070382) | 0.630862 / 0.540337 (0.090524) | 0.755275 / 1.386936 (-0.631662) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008227 / 0.011353 (-0.003126) | 0.004864 / 0.011008 (-0.006144) | 0.092801 / 0.038508 (0.054293) | 0.054996 / 0.023109 (0.031887) | 0.500348 / 0.275898 (0.224450) | 0.565028 / 0.323480 (0.241548) | 0.004792 / 0.007986 (-0.003194) | 0.005052 / 0.004328 (0.000723) | 0.090640 / 0.004250 (0.086390) | 0.074427 / 0.037052 (0.037374) | 0.499908 / 0.258489 (0.241419) | 0.566260 / 0.293841 (0.272419) | 0.040011 / 0.128546 (-0.088536) | 0.010438 / 0.075646 (-0.065208) | 0.099385 / 0.419271 (-0.319887) | 0.060485 / 0.043533 (0.016952) | 0.480603 / 0.255139 (0.225464) | 0.508807 / 0.283200 (0.225607) | 0.025976 / 0.141683 (-0.115707) | 1.870860 / 1.452155 (0.418705) | 1.943460 / 1.492716 (0.450744) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227753 / 0.018006 (0.209747) | 0.501859 / 0.000490 (0.501369) | 0.008211 / 0.000200 (0.008011) | 0.000127 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038329 / 0.037411 (0.000918) | 0.148214 / 0.014526 (0.133688) | 0.162704 / 0.176557 (-0.013852) | 0.218543 / 0.737135 (-0.518592) | 0.162992 / 0.296338 (-0.133347) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.553195 / 0.215209 (0.337986) | 5.568080 / 2.077655 (3.490425) | 2.936616 / 1.504120 (1.432496) | 2.712624 / 1.541195 (1.171429) | 2.713245 / 1.468490 (1.244755) | 0.648593 / 4.584777 (-3.936184) | 4.641361 / 3.745712 (0.895648) | 2.207064 / 5.269862 (-3.062798) | 1.315325 / 4.565676 (-3.250351) | 0.080285 / 0.424275 (-0.343990) | 0.014143 / 0.007607 (0.006536) | 0.672467 / 0.226044 (0.446423) | 6.730262 / 2.268929 (4.461333) | 3.344468 / 55.444624 (-52.100157) | 2.927837 / 6.876477 (-3.948640) | 3.124735 / 2.142072 (0.982662) | 0.795894 / 4.805227 (-4.009333) | 0.170985 / 6.500664 (-6.329679) | 0.077406 / 0.075469 (0.001937) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.598059 / 1.841788 (-0.243729) | 18.531854 / 8.074308 (10.457546) | 18.394895 / 10.191392 (8.203503) | 0.195702 / 0.680424 (-0.484722) | 0.023633 / 0.534201 (-0.510568) | 0.518110 / 0.579283 (-0.061173) | 0.517773 / 0.434364 (0.083409) | 0.617902 / 0.540337 (0.077565) | 0.736459 / 1.386936 (-0.650477) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006943 / 0.011353 (-0.004410) | 0.004524 / 0.011008 (-0.006485) | 0.121603 / 0.038508 (0.083095) | 0.047462 / 0.023109 (0.024353) | 0.362393 / 0.275898 (0.086495) | 0.440577 / 0.323480 (0.117098) | 0.004153 / 0.007986 (-0.003832) | 0.003778 / 0.004328 (-0.000550) | 0.090402 / 0.004250 (0.086152) | 0.066268 / 0.037052 (0.029216) | 0.380721 / 0.258489 (0.122232) | 0.442959 / 0.293841 (0.149118) | 0.035228 / 0.128546 (-0.093318) | 0.010217 / 0.075646 (-0.065429) | 0.408587 / 0.419271 (-0.010684) | 0.062609 / 0.043533 (0.019076) | 0.372682 / 0.255139 (0.117543) | 0.389270 / 0.283200 (0.106070) | 0.026699 / 0.141683 (-0.114984) | 1.760476 / 1.452155 (0.308321) | 1.795081 / 1.492716 (0.302365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229912 / 0.018006 (0.211906) | 0.476837 / 0.000490 (0.476348) | 0.008178 / 0.000200 (0.007978) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031116 / 0.037411 (-0.006296) | 0.126767 / 0.014526 (0.112241) | 0.134242 / 0.176557 (-0.042315) | 0.202120 / 0.737135 (-0.535016) | 0.142777 / 0.296338 (-0.153561) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.470690 / 0.215209 (0.255481) | 4.723198 / 2.077655 (2.645543) | 2.163870 / 1.504120 (0.659750) | 1.914177 / 1.541195 (0.372982) | 2.034529 / 1.468490 
(0.566038) | 0.620472 / 4.584777 (-3.964305) | 4.391008 / 3.745712 (0.645296) | 2.100966 / 5.269862 (-3.168896) | 1.225945 / 4.565676 (-3.339732) | 0.076279 / 0.424275 (-0.347996) | 0.013551 / 0.007607 (0.005944) | 0.600989 / 0.226044 (0.374945) | 5.946715 / 2.268929 (3.677787) | 2.665117 / 55.444624 (-52.779508) | 2.320004 / 6.876477 (-4.556473) | 2.413131 / 2.142072 (0.271059) | 0.771908 / 4.805227 (-4.033320) | 0.165438 / 6.500664 (-6.335226) | 0.074512 / 0.075469 (-0.000957) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.432728 / 1.841788 (-0.409060) | 17.398133 / 8.074308 (9.323824) | 16.819152 / 10.191392 (6.627760) | 0.191849 / 0.680424 (-0.488575) | 0.021557 / 0.534201 (-0.512644) | 0.514380 / 0.579283 (-0.064903) | 0.501453 / 0.434364 (0.067089) | 0.634091 / 0.540337 (0.093753) | 0.756786 / 1.386936 (-0.630150) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007946 / 0.011353 (-0.003407) | 0.004751 / 0.011008 (-0.006257) | 0.090190 / 0.038508 (0.051682) | 0.052841 / 0.023109 (0.029732) | 0.480150 / 0.275898 (0.204252) | 0.537509 / 0.323480 (0.214029) | 0.004833 / 0.007986 (-0.003153) | 0.004796 / 0.004328 (0.000467) | 0.090616 / 0.004250 (0.086366) | 0.074325 / 0.037052 (0.037273) | 0.483776 / 0.258489 (0.225287) | 0.552094 / 0.293841 (0.258254) | 0.039240 / 0.128546 (-0.089307) | 0.010416 / 0.075646 (-0.065230) | 0.100275 / 0.419271 (-0.318996) | 0.058086 / 0.043533 (0.014553) | 0.468989 / 0.255139 (0.213850) | 0.485502 / 0.283200 (0.202302) | 0.027514 / 0.141683 (-0.114169) | 1.849625 / 1.452155 (0.397470) | 1.919515 / 1.492716 (0.426798) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248061 / 0.018006 (0.230055) | 0.475630 / 0.000490 (0.475141) | 0.006248 / 0.000200 (0.006048) | 0.000105 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037746 / 0.037411 (0.000335) | 0.141638 / 0.014526 (0.127112) | 0.149530 / 0.176557 (-0.027026) | 0.209255 / 0.737135 (-0.527880) | 0.156447 / 0.296338 (-0.139892) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.544640 / 0.215209 (0.329431) | 5.493152 / 2.077655 (3.415497) | 2.869733 / 1.504120 (1.365613) | 2.624216 / 1.541195 (1.083022) | 2.710818 / 1.468490 (1.242328) | 0.640626 / 4.584777 (-3.944151) | 4.516130 / 3.745712 (0.770418) | 2.128097 / 5.269862 (-3.141765) | 1.278990 / 4.565676 (-3.286686) | 0.077114 / 0.424275 (-0.347161) | 0.013280 / 0.007607 (0.005673) | 0.655552 / 0.226044 (0.429507) | 6.526875 / 2.268929 (4.257947) | 3.347072 / 55.444624 (-52.097553) | 2.992435 / 6.876477 (-3.884041) | 3.124351 / 2.142072 (0.982278) | 0.778523 / 4.805227 (-4.026704) | 0.161873 / 6.500664 (-6.338791) | 0.072897 / 0.075469 (-0.002572) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.587058 / 1.841788 (-0.254730) | 18.170612 / 8.074308 (10.096304) | 17.220483 / 10.191392 (7.029091) | 0.207863 / 0.680424 (-0.472561) | 0.023746 / 0.534201 (-0.510455) | 0.512607 / 0.579283 (-0.066676) | 0.513258 / 0.434364 (0.078894) | 0.597880 / 0.540337 (0.057543) | 0.714974 / 1.386936 (-0.671962) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006224 / 0.011353 (-0.005128) | 0.003857 / 0.011008 (-0.007151) | 0.099786 / 0.038508 (0.061278) | 0.037919 / 0.023109 (0.014810) | 0.315294 / 0.275898 (0.039396) | 0.390178 / 0.323480 (0.066698) | 0.005358 / 0.007986 (-0.002628) | 0.002989 / 0.004328 (-0.001340) | 0.077834 / 0.004250 (0.073583) | 0.053315 / 0.037052 (0.016263) | 0.325155 / 0.258489 (0.066666) | 0.374712 / 0.293841 (0.080871) | 0.029176 / 0.128546 (-0.099370) | 0.008658 / 0.075646 (-0.066988) | 0.314245 / 0.419271 (-0.105027) | 0.046684 / 0.043533 (0.003151) | 0.316473 / 0.255139 (0.061334) | 0.346119 / 0.283200 (0.062919) | 0.022452 / 0.141683 (-0.119230) | 1.540497 / 1.452155 (0.088343) | 1.594888 / 1.492716 (0.102172) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204349 / 0.018006 (0.186343) | 0.426842 / 0.000490 (0.426353) | 0.003060 / 0.000200 (0.002860) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023611 / 0.037411 (-0.013801) | 0.100247 / 0.014526 (0.085721) | 0.107824 / 0.176557 (-0.068733) | 0.166845 / 0.737135 (-0.570291) | 0.112782 / 0.296338 (-0.183556) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423053 / 0.215209 (0.207844) | 4.235553 / 2.077655 (2.157899) | 1.936589 / 1.504120 (0.432469) | 1.738519 / 1.541195 (0.197325) | 1.787905 / 1.468490 
(0.319415) | 0.573362 / 4.584777 (-4.011414) | 3.395272 / 3.745712 (-0.350440) | 1.765977 / 5.269862 (-3.503884) | 1.049596 / 4.565676 (-3.516081) | 0.068868 / 0.424275 (-0.355407) | 0.011028 / 0.007607 (0.003421) | 0.532835 / 0.226044 (0.306791) | 5.314890 / 2.268929 (3.045962) | 2.368733 / 55.444624 (-53.075891) | 2.033959 / 6.876477 (-4.842518) | 2.130481 / 2.142072 (-0.011591) | 0.689360 / 4.805227 (-4.115867) | 0.140271 / 6.500664 (-6.360393) | 0.068198 / 0.075469 (-0.007271) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.237212 / 1.841788 (-0.604576) | 14.182215 / 8.074308 (6.107907) | 14.972608 / 10.191392 (4.781216) | 0.133977 / 0.680424 (-0.546447) | 0.016759 / 0.534201 (-0.517442) | 0.361552 / 0.579283 (-0.217731) | 0.394932 / 0.434364 (-0.039432) | 0.442601 / 0.540337 (-0.097736) | 0.535709 / 1.386936 (-0.851227) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006327 / 0.011353 (-0.005026) | 0.003780 / 0.011008 (-0.007228) | 0.078358 / 0.038508 (0.039850) | 0.037271 / 0.023109 (0.014162) | 0.456766 / 0.275898 (0.180868) | 0.515721 / 0.323480 (0.192241) | 0.004770 / 0.007986 (-0.003216) | 0.002942 / 0.004328 (-0.001387) | 0.077383 / 0.004250 (0.073132) | 0.051773 / 0.037052 (0.014721) | 0.460722 / 0.258489 (0.202233) | 0.519997 / 0.293841 (0.226157) | 0.030461 / 0.128546 (-0.098085) | 0.008622 / 0.075646 (-0.067024) | 0.083271 / 0.419271 (-0.336000) | 0.042242 / 0.043533 (-0.001291) | 0.447691 / 0.255139 (0.192552) | 0.481965 / 0.283200 (0.198765) | 0.019510 / 0.141683 (-0.122173) | 1.536718 / 1.452155 (0.084563) | 1.588433 / 1.492716 (0.095717) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215880 / 0.018006 (0.197874) | 0.426102 / 0.000490 (0.425612) | 0.003976 / 0.000200 (0.003776) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026168 / 0.037411 (-0.011243) | 0.105786 / 0.014526 (0.091260) | 0.113772 / 0.176557 (-0.062785) | 0.166576 / 0.737135 (-0.570559) | 0.117560 / 0.296338 (-0.178779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.490485 / 0.215209 (0.275276) | 4.890105 / 2.077655 (2.812450) | 2.515099 / 1.504120 (1.010979) | 2.306591 / 1.541195 (0.765396) | 2.383634 / 1.468490 (0.915144) | 0.573780 / 4.584777 (-4.010997) | 3.474394 / 3.745712 (-0.271318) | 1.746795 / 5.269862 (-3.523067) | 1.044678 / 4.565676 (-3.520998) | 0.069176 / 0.424275 (-0.355099) | 0.011045 / 0.007607 (0.003438) | 0.597234 / 0.226044 (0.371189) | 5.979614 / 2.268929 (3.710685) | 3.024203 / 55.444624 (-52.420422) | 2.687502 / 6.876477 (-4.188975) | 2.781637 / 2.142072 (0.639565) | 0.690482 / 4.805227 (-4.114745) | 0.150138 / 6.500664 (-6.350526) | 0.077076 / 0.075469 (0.001607) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.307501 / 1.841788 (-0.534287) | 14.366780 / 8.074308 (6.292471) | 14.966981 / 10.191392 (4.775589) | 0.153829 / 0.680424 (-0.526594) | 0.018047 / 0.534201 (-0.516154) | 0.361391 / 0.579283 (-0.217892) | 0.398345 / 0.434364 (-0.036019) | 0.424574 / 0.540337 (-0.115764) | 0.517165 / 1.386936 (-0.869771) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006944 / 0.011353 (-0.004409) | 0.004504 / 0.011008 (-0.006504) | 0.105224 / 0.038508 (0.066716) | 0.047830 / 0.023109 (0.024721) | 0.339723 / 0.275898 (0.063825) | 0.419249 / 0.323480 (0.095769) | 0.005510 / 0.007986 (-0.002476) | 0.003574 / 0.004328 (-0.000754) | 0.079879 / 0.004250 (0.075628) | 0.066610 / 0.037052 (0.029557) | 0.353818 / 0.258489 (0.095329) | 0.397992 / 0.293841 (0.104151) | 0.031551 / 0.128546 (-0.096995) | 0.009037 / 0.075646 (-0.066610) | 0.355310 / 0.419271 (-0.063961) | 0.054931 / 0.043533 (0.011398) | 0.335153 / 0.255139 (0.080014) | 0.357460 / 0.283200 (0.074260) | 0.026031 / 0.141683 (-0.115652) | 1.546705 / 1.452155 (0.094550) | 1.627324 / 1.492716 (0.134608) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.276708 / 0.018006 (0.258701) | 0.589402 / 0.000490 (0.588912) | 0.009560 / 0.000200 (0.009360) | 0.000095 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031041 / 0.037411 (-0.006370) | 0.117219 / 0.014526 (0.102693) | 0.125200 / 0.176557 (-0.051356) | 0.181528 / 0.737135 (-0.555607) | 0.131898 / 0.296338 (-0.164440) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409965 / 0.215209 (0.194756) | 4.102700 / 2.077655 (2.025045) | 1.887578 / 1.504120 (0.383458) | 1.696490 / 1.541195 (0.155295) | 1.821352 / 1.468490 
(0.352862) | 0.545422 / 4.584777 (-4.039355) | 3.933784 / 3.745712 (0.188071) | 1.934254 / 5.269862 (-3.335607) | 1.114935 / 4.565676 (-3.450742) | 0.067615 / 0.424275 (-0.356660) | 0.012004 / 0.007607 (0.004397) | 0.522048 / 0.226044 (0.296004) | 5.209224 / 2.268929 (2.940296) | 2.369911 / 55.444624 (-53.074714) | 2.032960 / 6.876477 (-4.843517) | 2.228874 / 2.142072 (0.086802) | 0.673172 / 4.805227 (-4.132055) | 0.147017 / 6.500664 (-6.353647) | 0.067020 / 0.075469 (-0.008449) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281490 / 1.841788 (-0.560298) | 16.129701 / 8.074308 (8.055393) | 15.474730 / 10.191392 (5.283338) | 0.143934 / 0.680424 (-0.536490) | 0.018311 / 0.534201 (-0.515890) | 0.435940 / 0.579283 (-0.143343) | 0.446846 / 0.434364 (0.012482) | 0.543943 / 0.540337 (0.003605) | 0.648041 / 1.386936 (-0.738895) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007380 / 0.011353 (-0.003973) | 0.004510 / 0.011008 (-0.006499) | 0.080741 / 0.038508 (0.042233) | 0.050907 / 0.023109 (0.027797) | 0.425548 / 0.275898 (0.149650) | 0.487959 / 0.323480 (0.164479) | 0.005887 / 0.007986 (-0.002099) | 0.003689 / 0.004328 (-0.000639) | 0.079588 / 0.004250 (0.075338) | 0.071841 / 0.037052 (0.034788) | 0.425172 / 0.258489 (0.166683) | 0.471185 / 0.293841 (0.177344) | 0.035768 / 0.128546 (-0.092779) | 0.009229 / 0.075646 (-0.066418) | 0.086021 / 0.419271 (-0.333250) | 0.052424 / 0.043533 (0.008891) | 0.413634 / 0.255139 (0.158495) | 0.422310 / 0.283200 (0.139111) | 0.026019 / 0.141683 (-0.115664) | 1.616861 / 1.452155 (0.164707) | 1.653660 / 1.492716 (0.160943) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280096 / 0.018006 (0.262090) | 0.587853 / 0.000490 (0.587363) | 0.006560 / 0.000200 (0.006360) | 0.000181 / 0.000054 (0.000127) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033747 / 0.037411 (-0.003665) | 0.125089 / 0.014526 (0.110564) | 0.137995 / 0.176557 (-0.038561) | 0.188192 / 0.737135 (-0.548943) | 0.141438 / 0.296338 (-0.154900) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471524 / 0.215209 (0.256315) | 4.713988 / 2.077655 (2.636334) | 2.414785 / 1.504120 (0.910665) | 2.226815 / 1.541195 (0.685620) | 2.259222 / 1.468490 (0.790732) | 0.551663 / 4.584777 (-4.033114) | 4.031399 / 3.745712 (0.285686) | 1.966917 / 5.269862 (-3.302945) | 1.154487 / 4.565676 (-3.411190) | 0.068500 / 0.424275 (-0.355775) | 0.012127 / 0.007607 (0.004520) | 0.579342 / 0.226044 (0.353298) | 5.757415 / 2.268929 (3.488486) | 2.820012 / 55.444624 (-52.624613) | 2.521783 / 6.876477 (-4.354694) | 2.699994 / 2.142072 (0.557921) | 0.686152 / 4.805227 (-4.119075) | 0.148521 / 6.500664 (-6.352143) | 0.068478 / 0.075469 (-0.006991) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336260 / 1.841788 (-0.505528) | 17.016935 / 8.074308 (8.942627) | 16.406951 / 10.191392 (6.215559) | 0.166907 / 0.680424 (-0.513517) | 0.020166 / 0.534201 (-0.514035) | 0.437690 / 0.579283 (-0.141593) | 0.480337 / 0.434364 (0.045973) | 0.518065 / 0.540337 (-0.022272) | 0.625904 / 1.386936 (-0.761032) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-06T08:15:32Z
| 2023-09-06T08:52:18Z
| 2023-09-06T08:22:43Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6216/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6216/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6216.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6216",
"merged_at": "2023-09-06T08:22:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6216.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6216"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7300
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7300/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7300/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7300/events
|
https://github.com/huggingface/datasets/pull/7300
| 2,701,424,320
|
PR_kwDODunzps6Dcba8
| 7,300
|
fix: update elasticsearch version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31152346?v=4",
"events_url": "https://api.github.com/users/ruidazeng/events{/privacy}",
"followers_url": "https://api.github.com/users/ruidazeng/followers",
"following_url": "https://api.github.com/users/ruidazeng/following{/other_user}",
"gists_url": "https://api.github.com/users/ruidazeng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ruidazeng",
"id": 31152346,
"login": "ruidazeng",
"node_id": "MDQ6VXNlcjMxMTUyMzQ2",
"organizations_url": "https://api.github.com/users/ruidazeng/orgs",
"received_events_url": "https://api.github.com/users/ruidazeng/received_events",
"repos_url": "https://api.github.com/users/ruidazeng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ruidazeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruidazeng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ruidazeng",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"May I request a review @lhoestq",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7300). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-11-28T09:14:21Z
| 2024-12-03T14:36:56Z
| 2024-12-03T14:24:42Z
|
CONTRIBUTOR
| null | null | null |
This should fix the `test_py311 (windows latest, deps-latest)` errors.
```
=========================== short test summary info ===========================
ERROR tests/test_search.py - AttributeError: `np.float_` was removed in the NumPy 2.0 release. Use `np.float64` instead.
ERROR tests/test_search.py - AttributeError: `np.float_` was removed in the NumPy 2.0 release. Use `np.float64` instead.
===== 2822 passed, 54 skipped, 10 warnings, 2 errors in 373.36s (0:06:13) =====
Error: Process completed with exit code 1.
```
The elasticsearch version used is `elasticsearch==7.9.1`, which is four years old and still uses the removed `numpy.float_`.
elasticsearch-py fixed this in [https://github.com/elastic/elasticsearch-py/pull/2551](https://github.com/elastic/elasticsearch-py/pull/2551) and released the fix in 8.15.0 (August 2024) and 7.17.12 (September 2024).
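A minimal repro of the incompatibility, assuming NumPy >= 2.0 is installed (on older NumPy the alias still resolves and the `except` branch is never taken):

```python
import numpy as np

# On NumPy >= 2.0 this attribute access raises the AttributeError seen in CI:
# "`np.float_` was removed in the NumPy 2.0 release. Use `np.float64` instead."
try:
    np.float_
except AttributeError as err:
    print(err)

x = np.float64(0.5)  # the supported replacement on all NumPy versions
print(x, type(x))
```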
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7300/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7300/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7300.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7300",
"merged_at": "2024-12-03T14:24:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7300.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7300"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4647
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4647/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4647/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4647/events
|
https://github.com/huggingface/datasets/issues/4647
| 1,296,311,270
|
I_kwDODunzps5NRCPm
| 4,647
|
Add Reddit dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omarespejel",
"id": 4755430,
"login": "omarespejel",
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omarespejel",
"user_view_type": "public"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
open
| false
| null |
[] | null |
[] | 2022-07-06T19:49:18Z
| 2022-07-06T19:49:18Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Adding a Dataset
- **Name:** *Reddit comments (2015-2018)*
- **Description:** *Reddit is an American social news aggregation website, where users can post links and take part in discussions on these posts. These threaded discussions provide a large corpus, which is converted into a conversational dataset using the tools in this directory.*
- **Paper:** *https://arxiv.org/abs/1904.06472*
- **Data:** *https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4647/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4647/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6201
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6201/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6201/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6201/events
|
https://github.com/huggingface/datasets/pull/6201
| 1,875,256,775
|
PR_kwDODunzps5ZOVbV
| 6,201
|
Fix to_json ValueError and remove pandas pin
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006852 / 0.011353 (-0.004501) | 0.004195 / 0.011008 (-0.006813) | 0.095008 / 0.038508 (0.056500) | 0.073469 / 0.023109 (0.050360) | 0.350170 / 0.275898 (0.074272) | 0.394309 / 0.323480 (0.070829) | 0.004391 / 0.007986 (-0.003595) | 0.003432 / 0.004328 (-0.000896) | 0.072849 / 0.004250 (0.068599) | 0.058595 / 0.037052 (0.021543) | 0.372335 / 0.258489 (0.113846) | 0.410616 / 0.293841 (0.116775) | 0.034477 / 0.128546 (-0.094069) | 0.009426 / 0.075646 (-0.066220) | 0.329262 / 0.419271 (-0.090009) | 0.057941 / 0.043533 (0.014408) | 0.358624 / 0.255139 (0.103485) | 0.413803 / 0.283200 (0.130604) | 0.025845 / 0.141683 (-0.115837) | 1.684289 / 1.452155 (0.232134) | 1.791567 / 1.492716 (0.298850) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222731 / 0.018006 (0.204724) | 0.511615 / 0.000490 (0.511126) | 0.004163 / 0.000200 (0.003963) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033260 / 0.037411 (-0.004152) | 0.091685 / 0.014526 (0.077159) | 0.105655 / 0.176557 (-0.070901) | 0.167973 / 0.737135 (-0.569163) | 0.105458 / 0.296338 (-0.190880) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441789 / 0.215209 (0.226580) | 4.404803 / 2.077655 (2.327148) | 2.163739 / 1.504120 (0.659620) | 1.956828 / 1.541195 (0.415633) | 2.042183 / 1.468490 
(0.573693) | 0.552221 / 4.584777 (-4.032556) | 3.951769 / 3.745712 (0.206057) | 3.591983 / 5.269862 (-1.677878) | 2.225058 / 4.565676 (-2.340619) | 0.064528 / 0.424275 (-0.359747) | 0.008403 / 0.007607 (0.000796) | 0.528830 / 0.226044 (0.302786) | 5.233686 / 2.268929 (2.964757) | 2.681156 / 55.444624 (-52.763468) | 2.261188 / 6.876477 (-4.615289) | 2.470037 / 2.142072 (0.327964) | 0.661793 / 4.805227 (-4.143434) | 0.150138 / 6.500664 (-6.350527) | 0.068663 / 0.075469 (-0.006807) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.463086 / 1.841788 (-0.378701) | 21.408232 / 8.074308 (13.333924) | 15.521718 / 10.191392 (5.330326) | 0.164587 / 0.680424 (-0.515837) | 0.021035 / 0.534201 (-0.513166) | 0.445466 / 0.579283 (-0.133817) | 0.462489 / 0.434364 (0.028125) | 0.517733 / 0.540337 (-0.022604) | 0.724242 / 1.386936 (-0.662694) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007117 / 0.011353 (-0.004236) | 0.004230 / 0.011008 (-0.006778) | 0.072186 / 0.038508 (0.033678) | 0.076758 / 0.023109 (0.053648) | 0.452606 / 0.275898 (0.176708) | 0.491872 / 0.323480 (0.168392) | 0.005989 / 0.007986 (-0.001996) | 0.003611 / 0.004328 (-0.000717) | 0.072642 / 0.004250 (0.068392) | 0.058985 / 0.037052 (0.021933) | 0.463414 / 0.258489 (0.204925) | 0.497538 / 0.293841 (0.203697) | 0.036325 / 0.128546 (-0.092221) | 0.009814 / 0.075646 (-0.065832) | 0.078745 / 0.419271 (-0.340527) | 0.054308 / 0.043533 (0.010775) | 0.468210 / 0.255139 (0.213071) | 0.476434 / 0.283200 (0.193234) | 0.023683 / 0.141683 (-0.118000) | 1.706457 / 1.452155 (0.254302) | 1.775855 / 1.492716 (0.283139) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241599 / 0.018006 (0.223592) | 0.483859 / 0.000490 (0.483370) | 0.006432 / 0.000200 (0.006233) | 0.000177 / 0.000054 (0.000123) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034723 / 0.037411 (-0.002688) | 0.104420 / 0.014526 (0.089894) | 0.121071 / 0.176557 (-0.055486) | 0.174899 / 0.737135 (-0.562237) | 0.119587 / 0.296338 (-0.176751) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492731 / 0.215209 (0.277522) | 4.898621 / 2.077655 (2.820967) | 2.710931 / 1.504120 (1.206811) | 2.513889 / 1.541195 (0.972694) | 2.578073 / 1.468490 (1.109583) | 0.548318 / 4.584777 (-4.036459) | 4.048603 / 3.745712 (0.302891) | 3.637654 / 5.269862 (-1.632208) | 2.263682 / 4.565676 (-2.301994) | 0.065786 / 0.424275 (-0.358489) | 0.008119 / 0.007607 (0.000512) | 0.578693 / 0.226044 (0.352649) | 5.780619 / 2.268929 (3.511691) | 3.224625 / 55.444624 (-52.220000) | 2.838750 / 6.876477 (-4.037726) | 2.970276 / 2.142072 (0.828204) | 0.654423 / 4.805227 (-4.150805) | 0.148696 / 6.500664 (-6.351969) | 0.066469 / 0.075469 (-0.009000) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.574772 / 1.841788 (-0.267015) | 21.822356 / 8.074308 (13.748048) | 16.504127 / 10.191392 (6.312735) | 0.183357 / 0.680424 (-0.497067) | 0.022759 / 0.534201 (-0.511442) | 0.453746 / 0.579283 (-0.125537) | 0.447037 / 0.434364 (0.012673) | 0.536562 / 0.540337 (-0.003775) | 0.731063 / 1.386936 (-0.655873) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008542 / 0.011353 (-0.002811) | 0.005481 / 0.011008 (-0.005527) | 0.100122 / 0.038508 (0.061614) | 0.078968 / 0.023109 (0.055858) | 0.403751 / 0.275898 (0.127853) | 0.457559 / 0.323480 (0.134079) | 0.006152 / 0.007986 (-0.001834) | 0.003805 / 0.004328 (-0.000523) | 0.072787 / 0.004250 (0.068536) | 0.054794 / 0.037052 (0.017741) | 0.419815 / 0.258489 (0.161326) | 0.437453 / 0.293841 (0.143612) | 0.044641 / 0.128546 (-0.083905) | 0.013755 / 0.075646 (-0.061892) | 0.374683 / 0.419271 (-0.044589) | 0.071442 / 0.043533 (0.027909) | 0.395814 / 0.255139 (0.140675) | 0.439042 / 0.283200 (0.155842) | 0.034596 / 0.141683 (-0.107087) | 1.655056 / 1.452155 (0.202902) | 1.826410 / 1.492716 (0.333694) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278667 / 0.018006 (0.260661) | 0.617354 / 0.000490 (0.616864) | 0.004111 / 0.000200 (0.003911) | 0.000138 / 0.000054 (0.000083) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025905 / 0.037411 (-0.011506) | 0.084721 / 0.014526 (0.070195) | 0.099737 / 0.176557 (-0.076819) | 0.163016 / 0.737135 (-0.574119) | 0.095104 / 0.296338 (-0.201234) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.531589 / 0.215209 (0.316380) | 5.455303 / 2.077655 (3.377649) | 2.495112 / 1.504120 (0.990992) | 2.234139 / 1.541195 (0.692944) | 2.295090 / 1.468490 
(0.826599) | 0.777627 / 4.584777 (-3.807150) | 5.053069 / 3.745712 (1.307357) | 4.488715 / 5.269862 (-0.781147) | 2.775991 / 4.565676 (-1.789686) | 0.094175 / 0.424275 (-0.330100) | 0.008681 / 0.007607 (0.001074) | 0.668174 / 0.226044 (0.442130) | 6.631876 / 2.268929 (4.362948) | 3.118055 / 55.444624 (-52.326569) | 2.480355 / 6.876477 (-4.396122) | 2.706643 / 2.142072 (0.564571) | 0.927173 / 4.805227 (-3.878054) | 0.217385 / 6.500664 (-6.283279) | 0.067110 / 0.075469 (-0.008359) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.517926 / 1.841788 (-0.323861) | 21.420546 / 8.074308 (13.346238) | 21.108266 / 10.191392 (10.916874) | 0.222449 / 0.680424 (-0.457975) | 0.027969 / 0.534201 (-0.506232) | 0.459484 / 0.579283 (-0.119799) | 0.582629 / 0.434364 (0.148265) | 0.520971 / 0.540337 (-0.019366) | 0.694270 / 1.386936 (-0.692666) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008257 / 0.011353 (-0.003096) | 0.004511 / 0.011008 (-0.006497) | 0.075031 / 0.038508 (0.036523) | 0.070526 / 0.023109 (0.047416) | 0.445595 / 0.275898 (0.169697) | 0.512312 / 0.323480 (0.188832) | 0.005933 / 0.007986 (-0.002052) | 0.003814 / 0.004328 (-0.000515) | 0.073553 / 0.004250 (0.069302) | 0.058174 / 0.037052 (0.021121) | 0.472307 / 0.258489 (0.213818) | 0.519679 / 0.293841 (0.225838) | 0.046027 / 0.128546 (-0.082520) | 0.011757 / 0.075646 (-0.063889) | 0.084883 / 0.419271 (-0.334388) | 0.056476 / 0.043533 (0.012943) | 0.475608 / 0.255139 (0.220469) | 0.507588 / 0.283200 (0.224388) | 0.031661 / 0.141683 (-0.110022) | 1.673183 / 1.452155 (0.221028) | 1.736836 / 1.492716 (0.244120) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.350887 / 0.018006 (0.332881) | 0.589796 / 0.000490 (0.589306) | 0.023066 / 0.000200 (0.022867) | 0.000106 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030764 / 0.037411 (-0.006647) | 0.116967 / 0.014526 (0.102441) | 0.102760 / 0.176557 (-0.073796) | 0.167690 / 0.737135 (-0.569445) | 0.111350 / 0.296338 (-0.184988) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.584565 / 0.215209 (0.369356) | 5.898081 / 2.077655 (3.820426) | 2.770374 / 1.504120 (1.266254) | 2.467519 / 1.541195 (0.926324) | 2.463319 / 1.468490 (0.994829) | 0.794294 / 4.584777 (-3.790483) | 5.272285 / 3.745712 (1.526573) | 4.514830 / 5.269862 (-0.755032) | 2.937259 / 4.565676 (-1.628417) | 0.093702 / 0.424275 (-0.330574) | 0.008012 / 0.007607 (0.000405) | 0.772371 / 0.226044 (0.546327) | 7.574941 / 2.268929 (5.306013) | 3.710965 / 55.444624 (-51.733659) | 2.927964 / 6.876477 (-3.948513) | 3.256036 / 2.142072 (1.113964) | 1.051649 / 4.805227 (-3.753578) | 0.203055 / 6.500664 (-6.297609) | 0.081072 / 0.075469 (0.005603) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.574251 / 1.841788 (-0.267537) | 22.340801 / 8.074308 (14.266493) | 20.497769 / 10.191392 (10.306377) | 0.228725 / 0.680424 (-0.451699) | 0.029095 / 0.534201 (-0.505106) | 0.452460 / 0.579283 (-0.126823) | 0.586419 / 0.434364 (0.152055) | 0.571237 / 0.540337 (0.030900) | 0.745069 / 1.386936 (-0.641867) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006529 / 0.011353 (-0.004824) | 0.004062 / 0.011008 (-0.006946) | 0.083712 / 0.038508 (0.045204) | 0.072378 / 0.023109 (0.049269) | 0.358779 / 0.275898 (0.082881) | 0.387216 / 0.323480 (0.063736) | 0.004038 / 0.007986 (-0.003948) | 0.003316 / 0.004328 (-0.001013) | 0.065207 / 0.004250 (0.060956) | 0.054439 / 0.037052 (0.017386) | 0.370689 / 0.258489 (0.112200) | 0.411008 / 0.293841 (0.117167) | 0.031133 / 0.128546 (-0.097413) | 0.008600 / 0.075646 (-0.067047) | 0.287753 / 0.419271 (-0.131518) | 0.051845 / 0.043533 (0.008312) | 0.360327 / 0.255139 (0.105188) | 0.394791 / 0.283200 (0.111591) | 0.025139 / 0.141683 (-0.116544) | 1.488151 / 1.452155 (0.035996) | 1.556776 / 1.492716 (0.064059) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209462 / 0.018006 (0.191456) | 0.459168 / 0.000490 (0.458678) | 0.006037 / 0.000200 (0.005837) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028444 / 0.037411 (-0.008967) | 0.082974 / 0.014526 (0.068448) | 0.094919 / 0.176557 (-0.081638) | 0.151875 / 0.737135 (-0.585260) | 0.096143 / 0.296338 (-0.200195) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402675 / 0.215209 (0.187466) | 4.014787 / 2.077655 (1.937133) | 2.015793 / 1.504120 (0.511673) | 1.838976 / 1.541195 (0.297782) | 1.931733 / 1.468490 
(0.463243) | 0.489435 / 4.584777 (-4.095342) | 3.581662 / 3.745712 (-0.164050) | 3.315392 / 5.269862 (-1.954469) | 2.053369 / 4.565676 (-2.512307) | 0.057749 / 0.424275 (-0.366526) | 0.007720 / 0.007607 (0.000113) | 0.483388 / 0.226044 (0.257343) | 4.820798 / 2.268929 (2.551870) | 2.544264 / 55.444624 (-52.900361) | 2.170513 / 6.876477 (-4.705963) | 2.416976 / 2.142072 (0.274903) | 0.588351 / 4.805227 (-4.216876) | 0.136988 / 6.500664 (-6.363676) | 0.062294 / 0.075469 (-0.013175) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263807 / 1.841788 (-0.577980) | 19.888202 / 8.074308 (11.813894) | 14.352977 / 10.191392 (4.161585) | 0.167200 / 0.680424 (-0.513224) | 0.018449 / 0.534201 (-0.515752) | 0.393262 / 0.579283 (-0.186021) | 0.407854 / 0.434364 (-0.026510) | 0.455852 / 0.540337 (-0.084485) | 0.629024 / 1.386936 (-0.757912) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006642 / 0.011353 (-0.004710) | 0.004041 / 0.011008 (-0.006967) | 0.065823 / 0.038508 (0.027315) | 0.076810 / 0.023109 (0.053701) | 0.397680 / 0.275898 (0.121782) | 0.430104 / 0.323480 (0.106624) | 0.006035 / 0.007986 (-0.001951) | 0.003389 / 0.004328 (-0.000939) | 0.066056 / 0.004250 (0.061806) | 0.054222 / 0.037052 (0.017170) | 0.397964 / 0.258489 (0.139475) | 0.439277 / 0.293841 (0.145436) | 0.032394 / 0.128546 (-0.096152) | 0.008586 / 0.075646 (-0.067060) | 0.072538 / 0.419271 (-0.346734) | 0.048346 / 0.043533 (0.004813) | 0.399631 / 0.255139 (0.144492) | 0.418684 / 0.283200 (0.135484) | 0.022570 / 0.141683 (-0.119113) | 1.519788 / 1.452155 (0.067633) | 1.581457 / 1.492716 (0.088740) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243443 / 0.018006 (0.225436) | 0.453095 / 0.000490 (0.452606) | 0.009940 / 0.000200 (0.009740) | 0.000121 / 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032293 / 0.037411 (-0.005118) | 0.091681 / 0.014526 (0.077155) | 0.103729 / 0.176557 (-0.072827) | 0.156361 / 0.737135 (-0.580775) | 0.105034 / 0.296338 (-0.191305) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427761 / 0.215209 (0.212551) | 4.266044 / 2.077655 (2.188390) | 2.285161 / 1.504120 (0.781041) | 2.118652 / 1.541195 (0.577457) | 2.203469 / 1.468490 (0.734979) | 0.494587 / 4.584777 (-4.090190) | 3.676706 / 3.745712 (-0.069006) | 3.252478 / 5.269862 (-2.017383) | 2.027432 / 4.565676 (-2.538245) | 0.057856 / 0.424275 (-0.366419) | 0.007279 / 0.007607 (-0.000328) | 0.502767 / 0.226044 (0.276723) | 5.031409 / 2.268929 (2.762480) | 2.741767 / 55.444624 (-52.702858) | 2.408480 / 6.876477 (-4.467997) | 2.607193 / 2.142072 (0.465121) | 0.590787 / 4.805227 (-4.214440) | 0.133633 / 6.500664 (-6.367031) | 0.061195 / 0.075469 (-0.014274) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.342824 / 1.841788 (-0.498964) | 20.137195 / 8.074308 (12.062887) | 14.986743 / 10.191392 (4.795351) | 0.168218 / 0.680424 (-0.512206) | 0.020209 / 0.534201 (-0.513992) | 0.397446 / 0.579283 (-0.181837) | 0.427496 / 0.434364 (-0.006868) | 0.475058 / 0.540337 (-0.065279) | 0.648439 / 1.386936 (-0.738497) |\n\n</details>\n</details>\n\n\n"
] | 2023-08-31T10:38:08Z
| 2023-09-05T11:07:07Z
| 2023-09-05T10:58:21Z
|
MEMBER
| null | null | null |
This PR fixes the root cause of the issue:
- #6197
This PR also removes the temporary pin of `pandas` introduced by:
- #6200
Note that for orient in `['records', 'values']` the index value is ignored, but:
- in `pandas` < 2.1.0, a `ValueError` is raised if not `index` and orient not in `['split', 'table']`
  - for orient = 'records', we need `index = True`
  - the default index value is `True`
- in `pandas` 2.1.0, a `ValueError` is raised if `index` is `True` and orient in `['records', 'values']`
  - for orient = 'records', we need `index = False` or `None`
  - the default index value is `None`

This PR fixes the issue by not passing `index` (thus using the default index value, which is valid for all pandas versions), unless orient is 'split' or 'table', where we pass `index = False` as was done before this fix.
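A minimal sketch of the version-agnostic keyword handling described above (the helper name is illustrative, not the actual `datasets` code):
```python
def build_to_json_kwargs(orient: str, **user_kwargs) -> dict:
    """Only pass `index` where every pandas version accepts it."""
    kwargs = dict(user_kwargs, orient=orient)
    if orient in ("split", "table"):
        # Explicitly drop the index column; valid across pandas versions.
        kwargs["index"] = False
    # For 'records'/'values', omit `index` so each pandas version applies
    # its own valid default (True in < 2.1.0, None in 2.1.0).
    return kwargs

# e.g. df.to_json(path, **build_to_json_kwargs("records", lines=True))
```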
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6201/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6201/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6201.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6201",
"merged_at": "2023-09-05T10:58:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6201.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6201"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5490
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5490/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5490/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5490/events
|
https://github.com/huggingface/datasets/pull/5490
| 1,565,842,327
|
PR_kwDODunzps5I_nz-
| 5,490
|
Do not add index column by default when exporting to CSV
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008581 / 0.011353 (-0.002772) | 0.004519 / 0.011008 (-0.006490) | 0.099721 / 0.038508 (0.061213) | 0.029217 / 0.023109 (0.006107) | 0.298229 / 0.275898 (0.022331) | 0.332605 / 0.323480 (0.009125) | 0.006880 / 0.007986 (-0.001106) | 0.003324 / 0.004328 (-0.001005) | 0.078143 / 0.004250 (0.073892) | 0.034262 / 0.037052 (-0.002790) | 0.304162 / 0.258489 (0.045673) | 0.342351 / 0.293841 (0.048510) | 0.033387 / 0.128546 (-0.095159) | 0.011397 / 0.075646 (-0.064249) | 0.321527 / 0.419271 (-0.097744) | 0.040886 / 0.043533 (-0.002647) | 0.299968 / 0.255139 (0.044829) | 0.322484 / 0.283200 (0.039285) | 0.083832 / 0.141683 (-0.057851) | 1.482241 / 1.452155 (0.030086) | 1.548438 / 1.492716 (0.055721) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191002 / 0.018006 (0.172996) | 0.403423 / 0.000490 (0.402933) | 0.002493 / 0.000200 (0.002293) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023720 / 0.037411 (-0.013691) | 0.100806 / 0.014526 (0.086281) | 0.105314 / 0.176557 (-0.071242) | 0.141490 / 0.737135 (-0.595645) | 0.108695 / 0.296338 (-0.187644) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412250 / 0.215209 (0.197041) | 4.124830 / 2.077655 (2.047175) | 1.851948 / 1.504120 (0.347828) | 1.651597 / 1.541195 (0.110403) | 1.712486 / 1.468490 
(0.243996) | 0.696634 / 4.584777 (-3.888143) | 3.304220 / 3.745712 (-0.441492) | 1.862776 / 5.269862 (-3.407086) | 1.159452 / 4.565676 (-3.406224) | 0.082930 / 0.424275 (-0.341345) | 0.012586 / 0.007607 (0.004979) | 0.524499 / 0.226044 (0.298455) | 5.249235 / 2.268929 (2.980307) | 2.293187 / 55.444624 (-53.151437) | 1.950101 / 6.876477 (-4.926376) | 2.008274 / 2.142072 (-0.133799) | 0.811641 / 4.805227 (-3.993586) | 0.148785 / 6.500664 (-6.351879) | 0.064461 / 0.075469 (-0.011008) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.232227 / 1.841788 (-0.609561) | 13.235896 / 8.074308 (5.161588) | 13.837420 / 10.191392 (3.646028) | 0.135586 / 0.680424 (-0.544838) | 0.028935 / 0.534201 (-0.505266) | 0.397064 / 0.579283 (-0.182220) | 0.393814 / 0.434364 (-0.040549) | 0.480450 / 0.540337 (-0.059887) | 0.561159 / 1.386936 (-0.825777) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006696 / 0.011353 (-0.004657) | 0.004528 / 0.011008 (-0.006480) | 0.077335 / 0.038508 (0.038827) | 0.027181 / 0.023109 (0.004072) | 0.345379 / 0.275898 (0.069481) | 0.372544 / 0.323480 (0.049064) | 0.006808 / 0.007986 (-0.001178) | 0.003284 / 0.004328 (-0.001045) | 0.077379 / 0.004250 (0.073129) | 0.039954 / 0.037052 (0.002901) | 0.348094 / 0.258489 (0.089605) | 0.382315 / 0.293841 (0.088474) | 0.031694 / 0.128546 (-0.096852) | 0.011714 / 0.075646 (-0.063933) | 0.086425 / 0.419271 (-0.332846) | 0.041778 / 0.043533 (-0.001754) | 0.342161 / 0.255139 (0.087022) | 0.363798 / 0.283200 (0.080599) | 0.091315 / 0.141683 (-0.050368) | 1.462066 / 1.452155 (0.009912) | 1.541417 / 1.492716 (0.048700) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235840 / 0.018006 (0.217834) | 0.397096 / 0.000490 (0.396606) | 0.004597 / 0.000200 (0.004397) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024296 / 0.037411 (-0.013115) | 0.099167 / 0.014526 (0.084641) | 0.108257 / 0.176557 (-0.068299) | 0.143434 / 0.737135 (-0.593701) | 0.111933 / 0.296338 (-0.184406) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440306 / 0.215209 (0.225096) | 4.374065 / 2.077655 (2.296410) | 2.072653 / 1.504120 (0.568533) | 1.864829 / 1.541195 (0.323635) | 1.927970 / 1.468490 (0.459479) | 0.710118 / 4.584777 (-3.874659) | 3.391216 / 3.745712 (-0.354496) | 1.888847 / 5.269862 (-3.381015) | 1.178740 / 4.565676 (-3.386936) | 0.083950 / 0.424275 (-0.340325) | 0.012567 / 0.007607 (0.004960) | 0.540557 / 0.226044 (0.314513) | 5.437621 / 2.268929 (3.168692) | 2.531165 / 55.444624 (-52.913460) | 2.181450 / 6.876477 (-4.695027) | 2.209108 / 2.142072 (0.067035) | 0.814236 / 4.805227 (-3.990991) | 0.153000 / 6.500664 (-6.347664) | 0.066769 / 0.075469 (-0.008700) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.301057 / 1.841788 (-0.540731) | 14.066786 / 8.074308 (5.992478) | 13.641455 / 10.191392 (3.450063) | 0.138838 / 0.680424 (-0.541586) | 0.016733 / 0.534201 (-0.517468) | 0.391823 / 0.579283 (-0.187460) | 0.390817 / 0.434364 (-0.043547) | 0.487682 / 0.540337 (-0.052656) | 0.581134 / 1.386936 (-0.805802) |\n\n</details>\n</details>\n\n\n"
] | 2023-02-01T10:20:55Z
| 2023-02-09T09:29:08Z
| 2023-02-09T09:22:23Z
|
MEMBER
| null | null | null |
As pointed out by @merveenoyan, the default behavior of `Dataset.to_csv` adds the index as an additional, unnamed column.
This PR changes the default behavior so that the index column is no longer written.
To add the index column, you now need to pass `index=True` and also `index_label=<name of the index column>` to name that column.
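For example (a usage sketch of the new behavior; `ds` stands for any `Dataset` instance):
```python
ds.to_csv("data.csv")                                 # no index column (new default)
ds.to_csv("data.csv", index=True, index_label="idx")  # opt back in and name the column
```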
CC: @merveenoyan
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5490/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5490/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5490.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5490",
"merged_at": "2023-02-09T09:22:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5490.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5490"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4843
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4843/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4843/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4843/events
|
https://github.com/huggingface/datasets/pull/4843
| 1,337,668,699
|
PR_kwDODunzps49HaWT
| 4,843
|
Fix typo in streaming docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/flozi00",
"id": 47894090,
"login": "flozi00",
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"repos_url": "https://api.github.com/users/flozi00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/flozi00",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-08-12T20:18:21Z
| 2022-08-14T11:43:30Z
| 2022-08-14T11:02:09Z
|
CONTRIBUTOR
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4843/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4843/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4843.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4843",
"merged_at": "2022-08-14T11:02:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4843.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4843"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6867
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6867/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6867/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6867/events
|
https://github.com/huggingface/datasets/issues/6867
| 2,279,059,787
|
I_kwDODunzps6H17FL
| 6,867
|
Improve performance of JSON loader
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Thanks! Feel free to ping me for examples. May not respond immediately because we're all busy but would like to help.",
"Hi @natolambert, could you please give some examples of JSON files to benchmark?\r\n\r\nPlease note that this JSON file (https://huggingface.co/datasets/allenai/reward-bench-results/blob/main/eval-set-scores/Ray2333/reward-model-Mistral-7B-instruct-Unified-Feedback.json) is not in \"records\" orient; instead it has the following structure:\r\n```json\r\n{\r\n \"chat_template\": \"tulu\",\r\n \"id\": [30, 34, 35,...],\r\n \"model\": \"Ray2333/reward-model-Mistral-7B-instruct-Unified-Feedback\",\r\n \"model_type\": \"Seq. Classifier\",\r\n \"results\": [1, 1, 1, ...],\r\n \"scores_chosen\": [4.421875, 1.8916015625, 3.8515625,...],\r\n \"scores_rejected\": [-2.416015625, -1.47265625, -0.9912109375,...],\r\n \"subset\": [\"alpacaeval-easy\", \"alpacaeval-easy\", \"alpacaeval-easy\",...]\r\n \"text_chosen\": [\"<s>[INST] How do I detail a...\",...],\r\n \"text_rejected\": [\"<s>[INST] How do I detail a...\",...]\r\n}\r\n```\r\n\r\nNote that \"records\" orient should be a list (not a dict) with each row as one item of the list:\r\n```json\r\n[\r\n {\"chat_template\": \"tulu\", \"id\": 30,... },\r\n {\"chat_template\": \"tulu\", \"id\": 34,... },\r\n ...\r\n]\r\n```",
"We use a mix (which is a mess), here's an example with the records orient\r\nhttps://huggingface.co/datasets/allenai/reward-bench-results/blob/main/best-of-n/alpaca_eval/tulu-13b/OpenAssistant/oasst-rm-2.1-pythia-1.4b-epoch-2.5.json\r\n\r\nThere are more in that folder, ~40mb maybe?",
"@albertvillanova here's a snippet so you don't need to click\r\n```\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 0\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.076171875\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 1\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.87890625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 2\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.287109375\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 3\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 1.6337890625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 4\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 5.27734375\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 5\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.0625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 6\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 2.29296875\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 7\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 6.77734375\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 8\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.853515625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 9\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 4.86328125\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 10\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 2.890625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 11\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 4.70703125\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 12\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 4.45703125\r\n}\r\n```",
"Thanks again for your feedback, @natolambert.\r\n\r\nHowever, strictly speaking, the last file is not in JSON format but in kind of JSON-Lines like format (although not properly either because there are multiple newline characters within each object). Not even pandas can read that file format.\r\n\r\nAnyway, for JSON-Lines, I would expect that `datasets` and `pandas` have the same performance for JSON Lines files, as both use `pyarrow` under the hood...\r\n\r\nA proper JSON file in records orient should be a list (a JSON array): the first character should be `[`.\r\n\r\nAnyway, I am generating a JSON file from your JSON-Lines file to test performance."
] | 2024-05-04T15:04:16Z
| 2024-05-17T16:22:28Z
| 2024-05-17T16:22:28Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
As reported by @natolambert, loading regular JSON files with `datasets` shows poor performance.
The cause is that we use the `json` Python standard library instead of other faster libraries. See my old comment: https://github.com/huggingface/datasets/pull/2638#pullrequestreview-706983714
> There are benchmarks that compare different JSON packages, with the standard library one among the worst performers:
> - https://github.com/ultrajson/ultrajson#benchmarks
> - https://github.com/ijl/orjson#performance
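For a quick local reproduction of the gap, here is a micro-benchmark sketch (it assumes `ujson` is installed; numbers vary by machine and payload):
```python
import json
import timeit

import ujson  # assumed installed: pip install ujson

# A records-orient payload similar to the files discussed above.
payload = json.dumps([{"id": i, "score": i * 0.5} for i in range(10_000)])

print("json :", timeit.timeit(lambda: json.loads(payload), number=100))
print("ujson:", timeit.timeit(lambda: ujson.loads(payload), number=100))
```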
I remember having a discussion about this and it was decided that it was better not to include an additional dependency on a 3rd-party library.
However:
- We already depend on `pandas` and `pandas` depends on `ujson`: so we have an indirect dependency on `ujson`
- Even if the above were not the case, we could always include `ujson` as an optional extra dependency and check at runtime whether it is installed to decide which library to use, either `json` or `ujson`
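As an illustration of the optional-dependency idea, a minimal sketch (the `loads` helper is hypothetical, not an existing `datasets` API):
```python
# Hypothetical helper: prefer ujson when available, fall back to the stdlib.
try:
    import ujson as json_impl  # optional, faster third-party implementation
except ImportError:
    import json as json_impl  # standard library fallback

def loads(s: str):
    # Dispatches to whichever implementation was importable at runtime.
    return json_impl.loads(s)
```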
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6867/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6867/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5038
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5038/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5038/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5038/events
|
https://github.com/huggingface/datasets/issues/5038
| 1,389,631,122
|
I_kwDODunzps5S1BaS
| 5,038
|
`Dataset.unique` showing wrong output after filtering
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4904985?v=4",
"events_url": "https://api.github.com/users/mxschmdt/events{/privacy}",
"followers_url": "https://api.github.com/users/mxschmdt/followers",
"following_url": "https://api.github.com/users/mxschmdt/following{/other_user}",
"gists_url": "https://api.github.com/users/mxschmdt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mxschmdt",
"id": 4904985,
"login": "mxschmdt",
"node_id": "MDQ6VXNlcjQ5MDQ5ODU=",
"organizations_url": "https://api.github.com/users/mxschmdt/orgs",
"received_events_url": "https://api.github.com/users/mxschmdt/received_events",
"repos_url": "https://api.github.com/users/mxschmdt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mxschmdt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxschmdt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mxschmdt",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi! It seems like `flatten_indices` (called in `unique`) doesn't know how to handle empty indices mappings. I'm working on the fix.",
"Thanks, that was fast!"
] | 2022-09-28T16:20:35Z
| 2022-09-30T15:44:25Z
| 2022-09-30T15:44:25Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Describe the bug
After filtering a dataset, if no samples remain, `Dataset.unique` will return the unique values of the unfiltered dataset.
## Steps to reproduce the bug
```python
from datasets import Dataset
dataset = Dataset.from_dict({'id': [0]})
dataset = dataset.filter(lambda _: False)
print(dataset.unique('id'))
```
## Expected results
The above code should return an empty list since the dataset is empty.
## Actual results
```bash
[0]
```
## Environment info
- `datasets` version: 2.5.1
- Platform: Linux-5.18.19-100.fc35.x86_64-x86_64-with-glibc2.34
- Python version: 3.9.14
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
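Until a fix lands, a minimal workaround sketch is to guard for the empty case before calling `unique` (written against the repro above):
```python
from datasets import Dataset

dataset = Dataset.from_dict({"id": [0]})
dataset = dataset.filter(lambda _: False)

# Work around the bug: an empty dataset has no unique values.
unique_ids = [] if len(dataset) == 0 else dataset.unique("id")
print(unique_ids)  # []
```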
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5038/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5038/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4595
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4595/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4595/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4595/events
|
https://github.com/huggingface/datasets/issues/4595
| 1,288,275,976
|
I_kwDODunzps5MyYgI
| 4,595
|
Dataset Viewer issue with False positive PII redaction
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cakiki",
"id": 3664563,
"login": "cakiki",
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"repos_url": "https://api.github.com/users/cakiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cakiki",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The value is in the data, it's not an issue with the \"dataset-viewer\".\r\n\r\n<img width=\"1161\" alt=\"Capture d’écran 2022-06-29 à 10 25 51\" src=\"https://user-images.githubusercontent.com/1676121/176389325-4d2a9a7f-1583-45b8-aa7a-960ffaa6a36a.png\">\r\n\r\n Maybe open a PR: https://huggingface.co/datasets/cakiki/rosetta-code/discussions\r\n",
"This was indeed a scraping issue which I assumed was a display issue; sorry about that!"
] | 2022-06-29T07:15:57Z
| 2022-06-29T08:29:41Z
| 2022-06-29T08:27:49Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Link
https://huggingface.co/datasets/cakiki/rosetta-code
### Description
Hello, I just noticed an entry being redacted that shouldn't have been:
`RootMeanSquare@Range[10]` is being displayed as `[email protected][10]`
### Owner
_No response_
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cakiki",
"id": 3664563,
"login": "cakiki",
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"repos_url": "https://api.github.com/users/cakiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cakiki",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4595/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4595/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6001
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6001/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6001/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6001/events
|
https://github.com/huggingface/datasets/pull/6001
| 1,782,516,627
|
PR_kwDODunzps5UVMMh
| 6,001
|
Align `column_names` type check with type hint in `sort`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006038 / 0.011353 (-0.005315) | 0.003797 / 0.011008 (-0.007211) | 0.097686 / 0.038508 (0.059178) | 0.035235 / 0.023109 (0.012126) | 0.317294 / 0.275898 (0.041396) | 0.377682 / 0.323480 (0.054202) | 0.003485 / 0.007986 (-0.004501) | 0.003603 / 0.004328 (-0.000725) | 0.077268 / 0.004250 (0.073017) | 0.054649 / 0.037052 (0.017597) | 0.322293 / 0.258489 (0.063804) | 0.372277 / 0.293841 (0.078436) | 0.027927 / 0.128546 (-0.100619) | 0.008495 / 0.075646 (-0.067151) | 0.313078 / 0.419271 (-0.106193) | 0.046974 / 0.043533 (0.003441) | 0.313848 / 0.255139 (0.058709) | 0.338454 / 0.283200 (0.055255) | 0.020462 / 0.141683 (-0.121221) | 1.473027 / 1.452155 (0.020873) | 1.539468 / 1.492716 (0.046752) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221429 / 0.018006 (0.203423) | 0.412044 / 0.000490 (0.411555) | 0.005866 / 0.000200 (0.005666) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022870 / 0.037411 (-0.014541) | 0.099129 / 0.014526 (0.084603) | 0.103463 / 0.176557 (-0.073094) | 0.164969 / 0.737135 (-0.572166) | 0.110000 / 0.296338 (-0.186339) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431311 / 0.215209 (0.216102) | 4.293562 / 2.077655 (2.215907) | 1.961209 / 1.504120 (0.457089) | 1.733680 / 1.541195 (0.192485) | 1.793171 / 1.468490 
(0.324681) | 0.568566 / 4.584777 (-4.016211) | 3.401794 / 3.745712 (-0.343918) | 1.827949 / 5.269862 (-3.441913) | 1.055963 / 4.565676 (-3.509714) | 0.068459 / 0.424275 (-0.355816) | 0.011586 / 0.007607 (0.003979) | 0.533936 / 0.226044 (0.307891) | 5.347637 / 2.268929 (3.078708) | 2.378056 / 55.444624 (-53.066569) | 2.032159 / 6.876477 (-4.844318) | 2.159064 / 2.142072 (0.016991) | 0.674528 / 4.805227 (-4.130699) | 0.136859 / 6.500664 (-6.363805) | 0.066629 / 0.075469 (-0.008840) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218084 / 1.841788 (-0.623704) | 14.141710 / 8.074308 (6.067402) | 13.588415 / 10.191392 (3.397023) | 0.155104 / 0.680424 (-0.525320) | 0.017160 / 0.534201 (-0.517041) | 0.375558 / 0.579283 (-0.203725) | 0.386293 / 0.434364 (-0.048071) | 0.459476 / 0.540337 (-0.080862) | 0.548561 / 1.386936 (-0.838375) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005878 / 0.011353 (-0.005475) | 0.003750 / 0.011008 (-0.007259) | 0.077720 / 0.038508 (0.039212) | 0.034955 / 0.023109 (0.011846) | 0.357480 / 0.275898 (0.081582) | 0.418210 / 0.323480 (0.094730) | 0.004566 / 0.007986 (-0.003419) | 0.002918 / 0.004328 (-0.001410) | 0.076517 / 0.004250 (0.072266) | 0.050202 / 0.037052 (0.013150) | 0.368166 / 0.258489 (0.109677) | 0.415681 / 0.293841 (0.121840) | 0.029496 / 0.128546 (-0.099050) | 0.008547 / 0.075646 (-0.067099) | 0.083037 / 0.419271 (-0.336234) | 0.045001 / 0.043533 (0.001468) | 0.356503 / 0.255139 (0.101364) | 0.383747 / 0.283200 (0.100547) | 0.025071 / 0.141683 (-0.116612) | 1.541985 / 1.452155 (0.089830) | 1.594710 / 1.492716 (0.101994) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204491 / 0.018006 (0.186484) | 0.408686 / 0.000490 (0.408196) | 0.002505 / 0.000200 (0.002305) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024446 / 0.037411 (-0.012965) | 0.101432 / 0.014526 (0.086906) | 0.108105 / 0.176557 (-0.068452) | 0.161195 / 0.737135 (-0.575940) | 0.112671 / 0.296338 (-0.183667) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.459697 / 0.215209 (0.244488) | 4.570071 / 2.077655 (2.492416) | 2.211547 / 1.504120 (0.707427) | 1.996651 / 1.541195 (0.455457) | 2.015621 / 1.468490 (0.547131) | 0.567423 / 4.584777 (-4.017354) | 3.408027 / 3.745712 (-0.337685) | 2.913824 / 5.269862 (-2.356038) | 1.423223 / 4.565676 (-3.142453) | 0.068740 / 0.424275 (-0.355535) | 0.010997 / 0.007607 (0.003390) | 0.567340 / 0.226044 (0.341296) | 5.666280 / 2.268929 (3.397351) | 2.804934 / 55.444624 (-52.639690) | 2.430761 / 6.876477 (-4.445716) | 2.451820 / 2.142072 (0.309748) | 0.681926 / 4.805227 (-4.123301) | 0.137761 / 6.500664 (-6.362903) | 0.067173 / 0.075469 (-0.008296) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.329853 / 1.841788 (-0.511934) | 14.436232 / 8.074308 (6.361924) | 14.398645 / 10.191392 (4.207253) | 0.147421 / 0.680424 (-0.533002) | 0.016743 / 0.534201 (-0.517458) | 0.364964 / 0.579283 (-0.214319) | 0.387072 / 0.434364 (-0.047292) | 0.423892 / 0.540337 (-0.116445) | 0.521304 / 1.386936 (-0.865632) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006463 / 0.011353 (-0.004889) | 0.003923 / 0.011008 (-0.007086) | 0.102096 / 0.038508 (0.063588) | 0.040230 / 0.023109 (0.017121) | 0.384688 / 0.275898 (0.108789) | 0.445574 / 0.323480 (0.122094) | 0.003590 / 0.007986 (-0.004395) | 0.004023 / 0.004328 (-0.000306) | 0.080125 / 0.004250 (0.075875) | 0.057406 / 0.037052 (0.020354) | 0.395049 / 0.258489 (0.136560) | 0.438065 / 0.293841 (0.144224) | 0.028963 / 0.128546 (-0.099583) | 0.008693 / 0.075646 (-0.066954) | 0.317158 / 0.419271 (-0.102114) | 0.047930 / 0.043533 (0.004397) | 0.382442 / 0.255139 (0.127303) | 0.410665 / 0.283200 (0.127466) | 0.020127 / 0.141683 (-0.121555) | 1.558554 / 1.452155 (0.106400) | 1.590959 / 1.492716 (0.098242) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208826 / 0.018006 (0.190820) | 0.432037 / 0.000490 (0.431547) | 0.006509 / 0.000200 (0.006309) | 0.000285 / 0.000054 (0.000230) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023460 / 0.037411 (-0.013951) | 0.099070 / 0.014526 (0.084545) | 0.105771 / 0.176557 (-0.070785) | 0.166683 / 0.737135 (-0.570452) | 0.108755 / 0.296338 (-0.187583) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424324 / 0.215209 (0.209115) | 4.225696 / 2.077655 (2.148042) | 1.910955 / 1.504120 (0.406835) | 1.704493 / 1.541195 (0.163298) | 1.782784 / 1.468490 
(0.314293) | 0.562927 / 4.584777 (-4.021850) | 3.380163 / 3.745712 (-0.365550) | 1.779641 / 5.269862 (-3.490221) | 1.029134 / 4.565676 (-3.536543) | 0.068325 / 0.424275 (-0.355950) | 0.011528 / 0.007607 (0.003921) | 0.530141 / 0.226044 (0.304097) | 5.323443 / 2.268929 (3.054514) | 2.346956 / 55.444624 (-53.097668) | 2.013335 / 6.876477 (-4.863142) | 2.118531 / 2.142072 (-0.023541) | 0.675206 / 4.805227 (-4.130021) | 0.135473 / 6.500664 (-6.365191) | 0.064804 / 0.075469 (-0.010665) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.240179 / 1.841788 (-0.601608) | 14.692449 / 8.074308 (6.618141) | 13.672223 / 10.191392 (3.480831) | 0.147748 / 0.680424 (-0.532676) | 0.017119 / 0.534201 (-0.517082) | 0.369481 / 0.579283 (-0.209802) | 0.390133 / 0.434364 (-0.044231) | 0.458768 / 0.540337 (-0.081569) | 0.548989 / 1.386936 (-0.837947) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006319 / 0.011353 (-0.005034) | 0.003975 / 0.011008 (-0.007033) | 0.077886 / 0.038508 (0.039378) | 0.038322 / 0.023109 (0.015213) | 0.379851 / 0.275898 (0.103953) | 0.456749 / 0.323480 (0.133269) | 0.005320 / 0.007986 (-0.002665) | 0.003135 / 0.004328 (-0.001194) | 0.078272 / 0.004250 (0.074022) | 0.059919 / 0.037052 (0.022866) | 0.430062 / 0.258489 (0.171573) | 0.477432 / 0.293841 (0.183591) | 0.029713 / 0.128546 (-0.098833) | 0.008704 / 0.075646 (-0.066942) | 0.082488 / 0.419271 (-0.336784) | 0.044667 / 0.043533 (0.001134) | 0.354910 / 0.255139 (0.099771) | 0.434637 / 0.283200 (0.151438) | 0.026402 / 0.141683 (-0.115281) | 1.528825 / 1.452155 (0.076671) | 1.548209 / 1.492716 (0.055493) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237988 / 0.018006 (0.219982) | 0.420402 / 0.000490 (0.419913) | 0.003098 / 0.000200 (0.002898) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026253 / 0.037411 (-0.011159) | 0.106137 / 0.014526 (0.091611) | 0.110273 / 0.176557 (-0.066284) | 0.165316 / 0.737135 (-0.571819) | 0.115720 / 0.296338 (-0.180619) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.454244 / 0.215209 (0.239035) | 4.526018 / 2.077655 (2.448364) | 2.395985 / 1.504120 (0.891865) | 2.234822 / 1.541195 (0.693627) | 2.370235 / 1.468490 (0.901745) | 0.567607 / 4.584777 (-4.017169) | 3.650156 / 3.745712 (-0.095556) | 3.360094 / 5.269862 (-1.909768) | 1.415252 / 4.565676 (-3.150424) | 0.068012 / 0.424275 (-0.356263) | 0.011135 / 0.007607 (0.003528) | 0.561967 / 0.226044 (0.335923) | 5.621819 / 2.268929 (3.352890) | 2.676912 / 55.444624 (-52.767712) | 2.338306 / 6.876477 (-4.538171) | 2.430888 / 2.142072 (0.288815) | 0.684576 / 4.805227 (-4.120651) | 0.138923 / 6.500664 (-6.361741) | 0.069933 / 0.075469 (-0.005536) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.313383 / 1.841788 (-0.528405) | 15.125088 / 8.074308 (7.050780) | 14.801501 / 10.191392 (4.610109) | 0.134235 / 0.680424 (-0.546189) | 0.017058 / 0.534201 (-0.517143) | 0.365166 / 0.579283 (-0.214117) | 0.395415 / 0.434364 (-0.038949) | 0.419355 / 0.540337 (-0.120983) | 0.513411 / 1.386936 (-0.873525) |\n\n</details>\n</details>\n\n\n"
] | 2023-06-30T13:15:50Z
| 2023-06-30T14:18:32Z
| 2023-06-30T14:11:24Z
|
COLLABORATOR
| null | null | null |
Fix #5998
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6001/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6001/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6001.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6001",
"merged_at": "2023-06-30T14:11:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6001.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6001"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5119
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5119/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5119/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5119/events
|
https://github.com/huggingface/datasets/pull/5119
| 1,410,561,363
|
PR_kwDODunzps5A4IQp
| 5,119
|
[TYPO] Update new_dataset_script.py
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cakiki",
"id": 3664563,
"login": "cakiki",
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"repos_url": "https://api.github.com/users/cakiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cakiki",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-16T17:36:49Z
| 2022-10-19T09:48:19Z
| 2022-10-19T09:45:59Z
|
CONTRIBUTOR
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5119/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5119/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5119.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5119",
"merged_at": "2022-10-19T09:45:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5119.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5119"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6591
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6591/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6591/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6591/events
|
https://github.com/huggingface/datasets/issues/6591
| 2,082,378,957
|
I_kwDODunzps58HpTN
| 6,591
|
The datasets models housed in Dropbox can't support a lot of users downloading them
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4933774?v=4",
"events_url": "https://api.github.com/users/RDaneelOlivav/events{/privacy}",
"followers_url": "https://api.github.com/users/RDaneelOlivav/followers",
"following_url": "https://api.github.com/users/RDaneelOlivav/following{/other_user}",
"gists_url": "https://api.github.com/users/RDaneelOlivav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RDaneelOlivav",
"id": 4933774,
"login": "RDaneelOlivav",
"node_id": "MDQ6VXNlcjQ5MzM3NzQ=",
"organizations_url": "https://api.github.com/users/RDaneelOlivav/orgs",
"received_events_url": "https://api.github.com/users/RDaneelOlivav/received_events",
"repos_url": "https://api.github.com/users/RDaneelOlivav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RDaneelOlivav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RDaneelOlivav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RDaneelOlivav",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! Indeed, Dropbox is not a reliable host. I've just merged https://huggingface.co/datasets/PolyAI/minds14/discussions/24 to fix this by hosting the data files inside the repo."
] | 2024-01-15T16:43:38Z
| 2024-01-22T23:18:09Z
| 2024-01-22T23:18:09Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I'm using the `datasets` library:
```python
from datasets import load_dataset, Audio
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
And it seems that sometimes, when a lot of users access the same resources at once, the Dropbox host fails:
`raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})") ConnectionError: Couldn't reach https://www.dropbox.com/s/e2us0hcs3ilr20e/MInDS-14.zip?dl=1 (error 429)`
My question is whether we can host these files somewhere else, whether the limit on simultaneous access can be raised, or whether there is any other solution.
Also, has anyone had this issue before?
Thanks
### Steps to reproduce the bug
1: Create a Python script like so:
```python
from datasets import load_dataset, Audio
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
2: Execute this script as several users at the same time
### Expected behavior
I would expect that this shouldn't happen unless there is a huge number of users, which is not the case.
### Environment info
This was done in an Ubuntu 22 environment.
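As a stopgap on the client side, here is a retry sketch around `load_dataset` (a workaround, not a fix for the rate limit; the backoff values are arbitrary):
```python
import time
from datasets import load_dataset

def load_with_retries(*args, max_retries=5, base_delay=10.0, **kwargs):
    # Retry with linear backoff when the remote host rate-limits us (HTTP 429).
    for attempt in range(max_retries):
        try:
            return load_dataset(*args, **kwargs)
        except ConnectionError:
            time.sleep(base_delay * (attempt + 1))
    raise ConnectionError("exhausted retries while downloading the dataset")

dataset = load_with_retries("PolyAI/minds14", name="en-US", split="train")
```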
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6591/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6591/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4871
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4871/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4871/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4871/events
|
https://github.com/huggingface/datasets/pull/4871
| 1,346,703,568
|
PR_kwDODunzps49k9Rm
| 4,871
|
Fix: wmt datasets - fix CWMT zh subsets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4871). All of your documentation changes will be reflected on that endpoint."
] | 2022-08-22T16:42:09Z
| 2022-08-23T10:00:20Z
| 2022-08-23T10:00:19Z
|
MEMBER
| null | null | null |
Fix https://github.com/huggingface/datasets/issues/4575
TODO: run `datasets-cli test`:
- [x] wmt17
- [x] wmt18
- [x] wmt19
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4871/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4871/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4871.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4871",
"merged_at": "2022-08-23T10:00:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4871.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4871"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6087
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6087/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6087/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6087/events
|
https://github.com/huggingface/datasets/issues/6087
| 1,825,133,741
|
I_kwDODunzps5syVSt
| 6,087
|
fsspec dependency is set too low
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1085885?v=4",
"events_url": "https://api.github.com/users/iXce/events{/privacy}",
"followers_url": "https://api.github.com/users/iXce/followers",
"following_url": "https://api.github.com/users/iXce/following{/other_user}",
"gists_url": "https://api.github.com/users/iXce/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iXce",
"id": 1085885,
"login": "iXce",
"node_id": "MDQ6VXNlcjEwODU4ODU=",
"organizations_url": "https://api.github.com/users/iXce/orgs",
"received_events_url": "https://api.github.com/users/iXce/received_events",
"repos_url": "https://api.github.com/users/iXce/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iXce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iXce/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iXce",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting! A PR with a fix has just been merged."
] | 2023-07-27T20:08:22Z
| 2023-07-28T10:07:56Z
| 2023-07-28T10:07:03Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
fsspec.callbacks.TqdmCallback (used in https://github.com/huggingface/datasets/blob/73bed12ecda17d1573fd3bf73ed5db24d3622f86/src/datasets/utils/file_utils.py#L338) was first released in fsspec [2022.3.0](https://github.com/fsspec/filesystem_spec/releases/tag/2022.3.0) (commit where it was added: https://github.com/fsspec/filesystem_spec/commit/9577c8a482eb0a69092913b81580942a68d66a76#diff-906155c7e926a9ff58b9f23369bb513b09b445f5b0f41fa2a84015d0b471c68cR180); however, the dependency is set to 2021.11.1 in https://github.com/huggingface/datasets/blob/main/setup.py#L129
### Steps to reproduce the bug
1. Install fsspec==2021.11.1
2. Install latest datasets==2.14.1
3. Import `datasets`; the import fails due to the missing `fsspec.callbacks.TqdmCallback`
### Expected behavior
No import issue
### Environment info
N/A
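A quick diagnostic sketch to confirm the mismatch locally (not part of the fix itself):
```python
# Check whether the installed fsspec provides TqdmCallback (added in 2022.3.0).
import fsspec

print("fsspec version:", fsspec.__version__)
try:
    from fsspec.callbacks import TqdmCallback  # noqa: F401
    print("TqdmCallback available; `import datasets` should work")
except ImportError:
    print("TqdmCallback missing; `import datasets` will fail")
```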
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6087/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6087/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7117
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7117/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7117/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7117/events
|
https://github.com/huggingface/datasets/issues/7117
| 2,476,555,659
|
I_kwDODunzps6TnT2L
| 7,117
|
Audio dataset load everything in RAM and is very slow
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/64205064?v=4",
"events_url": "https://api.github.com/users/Jourdelune/events{/privacy}",
"followers_url": "https://api.github.com/users/Jourdelune/followers",
"following_url": "https://api.github.com/users/Jourdelune/following{/other_user}",
"gists_url": "https://api.github.com/users/Jourdelune/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Jourdelune",
"id": 64205064,
"login": "Jourdelune",
"node_id": "MDQ6VXNlcjY0MjA1MDY0",
"organizations_url": "https://api.github.com/users/Jourdelune/orgs",
"received_events_url": "https://api.github.com/users/Jourdelune/received_events",
"repos_url": "https://api.github.com/users/Jourdelune/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Jourdelune/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jourdelune/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Jourdelune",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! I think the issue comes from the fact that you return `row` entirely, and therefore the dataset has to re-encode the audio data in `row`.\r\n\r\nCan you try this instead ?\r\n\r\n```python\r\n# map the dataset\r\ndef transcribe_audio(row):\r\n audio = row[\"audio\"] # get the audio but do nothing with it\r\n return {\"transcribed\": True}\r\n```\r\n\r\nPS: no need to iter on the dataset to trigger the `map` function on a `Dataset` - `map` runs directly when it's called (contrary to `IterableDataset` taht you can get when streaming, which are lazy)",
"No, that doesn't change anything, I manage to solve this problem by setting with_indices=True in the map function and directly retrieving the audio corresponding to the index.\r\n```py\r\nfrom datasets import load_dataset\r\nimport time\r\n\r\nds = load_dataset(\"WaveGenAI/audios2\", split=\"train[:50]\")\r\n\r\n\r\n# map the dataset\r\ndef transcribe_audio(row, idx):\r\n audio = ds[idx][\"audio\"] # get the audio but do nothing with it\r\n row[\"transcribed\"] = True\r\n return row\r\n\r\n\r\ntime1 = time.time()\r\nds = ds.map(\r\n transcribe_audio, with_indices=True\r\n) # set low writer_batch_size to avoid memory issues\r\n\r\nfor row in ds:\r\n pass # do nothing, just iterate to trigger the map function\r\n\r\nprint(f\"Time taken: {time.time() - time1:.2f} seconds\")\r\n```",
"Hmm maybe accessing `row[\"audio\"]` makes `map()` reencode what's inside `row[\"audio\"]` in case there are in-place modifications"
] | 2024-08-20T21:18:12Z
| 2024-08-26T13:11:55Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Hello, I'm working with an audio dataset. I want to transcribe the audio that the dataset contains, and for that I use Whisper. My issue is that the dataset loads everything into RAM when I map it; obviously, when RAM usage is too high, the program crashes.
To fix this, I'm using `writer_batch_size`, which I set to 10, but then the mapping of the dataset is extremely slow.
To illustrate this on 50 examples: with `writer_batch_size` set to 10, it takes 123.24 seconds to process the dataset; without it, processing takes about ten seconds, but then the process remains blocked (I assume it is writing the dataset and therefore suffers from the same problem as with `writer_batch_size`).
### Steps to reproduce the bug
High RAM usage but fast (though actually slow when saving the dataset):
```py
from datasets import load_dataset
import time
ds = load_dataset("WaveGenAI/audios2", split="train[:50]")
# map the dataset
def transcribe_audio(row):
audio = row["audio"] # get the audio but do nothing with it
row["transcribed"] = True
return row
time1 = time.time()
ds = ds.map(
transcribe_audio
)
for row in ds:
pass # do nothing, just iterate to trigger the map function
print(f"Time taken: {time.time() - time1:.2f} seconds")
```
Low RAM usage but very, very slow:
```py
from datasets import load_dataset
import time
ds = load_dataset("WaveGenAI/audios2", split="train[:50]")
# map the dataset
def transcribe_audio(row):
audio = row["audio"] # get the audio but do nothing with it
row["transcribed"] = True
return row
time1 = time.time()
ds = ds.map(
transcribe_audio, writer_batch_size=10
) # set low writer_batch_size to avoid memory issues
for row in ds:
pass # do nothing, just iterate to trigger the map function
print(f"Time taken: {time.time() - time1:.2f} seconds")
```
### Expected behavior
I think the processing should be much faster; on only 50 audio examples, the mapping takes several minutes while nothing is done (just loading the audio).
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-6.10.5-arch1-1-x86_64-with-glibc2.40
- Python version: 3.10.4
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2024.6.1
# Extra
The dataset has been generated by using audio folder, so I don't think anything specific in my code is causing this problem.
```py
import argparse
from datasets import load_dataset
parser = argparse.ArgumentParser()
parser.add_argument("--folder", help="folder path", default="/media/works/test/")
args = parser.parse_args()
dataset = load_dataset("audiofolder", data_dir=args.folder)
# push the dataset to hub
dataset.push_to_hub("WaveGenAI/audios")
```
Also, it's the combination of `audio = row["audio"]` and `row["transcribed"] = True` that causes problems: `row["transcribed"] = True` alone does nothing, and `audio = row["audio"]` alone sometimes causes problems, sometimes not.
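Following the suggestion in the comments, here is a sketch that returns only the new column instead of the whole `row`, so that `map` does not have to re-encode the audio (behavior may vary across `datasets` versions):
```python
from datasets import load_dataset

ds = load_dataset("WaveGenAI/audios2", split="train[:50]")

def transcribe_audio(row):
    audio = row["audio"]  # read the audio but do not return it
    # Returning only the new column lets map() merge it with the existing
    # table instead of re-encoding the audio bytes on write.
    return {"transcribed": True}

ds = ds.map(transcribe_audio)
```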
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7117/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7117/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5767
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5767/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5767/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5767/events
|
https://github.com/huggingface/datasets/issues/5767
| 1,672,433,979
|
I_kwDODunzps5jr1E7
| 5,767
|
How to use Distill-BERT with different datasets?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4",
"events_url": "https://api.github.com/users/sauravtii/events{/privacy}",
"followers_url": "https://api.github.com/users/sauravtii/followers",
"following_url": "https://api.github.com/users/sauravtii/following{/other_user}",
"gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sauravtii",
"id": 109907638,
"login": "sauravtii",
"node_id": "U_kgDOBo0Otg",
"organizations_url": "https://api.github.com/users/sauravtii/orgs",
"received_events_url": "https://api.github.com/users/sauravtii/received_events",
"repos_url": "https://api.github.com/users/sauravtii/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sauravtii",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Closing this one in favor of the same issue opened in the `transformers` repo."
] | 2023-04-18T06:25:12Z
| 2023-04-20T16:52:05Z
| 2023-04-20T16:52:05Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
- `transformers` version: 4.11.3
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Steps to reproduce the bug
I recently read [this](https://huggingface.co/docs/transformers/quicktour#train-with-tensorflow:~:text=The%20most%20important%20thing%20to%20remember%20is%20you%20need%20to%20instantiate%20a%20tokenizer%20with%20the%20same%20model%20name%20to%20ensure%20you%E2%80%99re%20using%20the%20same%20tokenization%20rules%20a%20model%20was%20pretrained%20with.) and was wondering how to use DistilBERT (which is pre-trained with the IMDB dataset) with a different dataset (e.g. [this](https://huggingface.co/datasets/yhavinga/imdb_dutch) dataset)?
### Expected behavior
Distill-BERT should work with different datasets.
### Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 11.0.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5767/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5767/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5347
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5347/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5347/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5347/events
|
https://github.com/huggingface/datasets/pull/5347
| 1,486,920,261
|
PR_kwDODunzps5E6jb1
| 5,347
|
Force soundfile to return float32 instead of the default float64
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25608944?v=4",
"events_url": "https://api.github.com/users/qmeeus/events{/privacy}",
"followers_url": "https://api.github.com/users/qmeeus/followers",
"following_url": "https://api.github.com/users/qmeeus/following{/other_user}",
"gists_url": "https://api.github.com/users/qmeeus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qmeeus",
"id": 25608944,
"login": "qmeeus",
"node_id": "MDQ6VXNlcjI1NjA4OTQ0",
"organizations_url": "https://api.github.com/users/qmeeus/orgs",
"received_events_url": "https://api.github.com/users/qmeeus/received_events",
"repos_url": "https://api.github.com/users/qmeeus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qmeeus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qmeeus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qmeeus",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"cc @polinaeterna",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5347). All of your documentation changes will be reflected on that endpoint.",
"Cool ! Feel free to add a comment in the code to explain that and we can merge :)",
"I'm not sure if this is a good change since we plan to get rid of `torchaudio` in the next couple of months...",
"What do you think @polinaeterna @patrickvonplaten ? Models are usually using float32 (e.g. Wev2vec2 in `transformers`) IIRC",
"IMO we can safely assume that float32 is always good enough when using audio models in inference or training. Nevertheless there might be use cases for audio datasets in the future where float64 is needed. \r\n\r\n=> I would by default always cast to float32, but then possible allow the user to cast to float64 ",
"> I'm not sure if this is a good change since we plan to get rid of torchaudio in the next couple of months...\r\n\r\n@mariosasko I agree but who knows how long we will have to wait until we are really able to do so (https://github.com/bastibe/libsndfile-binaries/pull/17 is a draft. so as @patrickvonplaten is okay with float32, I'd merge.\r\n\r\n\r\n",
"@polinaeterna Can you comment on the linked PR to see why it's still a draft? Maybe we can help somehow to get this merged finally.\r\n\r\nI think it's weird to align `soundfile` with `torchaudio` when the latter is only used for MP3 (and prob for not much longer). "
] | 2022-12-09T15:10:24Z
| 2023-01-17T16:12:49Z
| null |
NONE
| null | null | null |
(Fixes issue #5345)
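For context, a minimal sketch of the cast in question (the file path is hypothetical; `soundfile.read` accepts a `dtype` argument):
```python
import soundfile as sf

# soundfile decodes to float64 by default; requesting float32 up front
# matches what most audio models (e.g. Wav2Vec2) consume:
array, sampling_rate = sf.read("example.wav", dtype="float32")
```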
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5347/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5347/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5347.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5347",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5347.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5347"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7205
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7205/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7205/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7205/events
|
https://github.com/huggingface/datasets/pull/7205
| 2,573,490,859
|
PR_kwDODunzps599w0I
| 7,205
|
fix ci benchmark
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7205). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-10-08T15:06:18Z
| 2024-10-08T15:25:28Z
| 2024-10-08T15:25:25Z
|
MEMBER
| null | null | null |
We're not using the benchmarks anymore, and they were not working anyway due to token permissions.
I'm keeping the code in case we ever want to re-run the benchmarks manually.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7205/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7205/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7205.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7205",
"merged_at": "2024-10-08T15:25:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7205.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7205"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5350
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5350/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5350/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5350/events
|
https://github.com/huggingface/datasets/pull/5350
| 1,487,559,904
|
PR_kwDODunzps5E8y2E
| 5,350
|
Clean up Loading methods docstrings
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-09T22:25:30Z
| 2022-12-12T17:27:20Z
| 2022-12-12T17:24:01Z
|
MEMBER
| null | null | null |
Clean up for the docstrings in Loading methods!
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5350/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5350/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5350.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5350",
"merged_at": "2022-12-12T17:24:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5350.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5350"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7083
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7083/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7083/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7083/events
|
https://github.com/huggingface/datasets/pull/7083
| 2,439,518,466
|
PR_kwDODunzps5292hC
| 7,083
|
fix streaming from arrow files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23191892?v=4",
"events_url": "https://api.github.com/users/fschlatt/events{/privacy}",
"followers_url": "https://api.github.com/users/fschlatt/followers",
"following_url": "https://api.github.com/users/fschlatt/following{/other_user}",
"gists_url": "https://api.github.com/users/fschlatt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fschlatt",
"id": 23191892,
"login": "fschlatt",
"node_id": "MDQ6VXNlcjIzMTkxODky",
"organizations_url": "https://api.github.com/users/fschlatt/orgs",
"received_events_url": "https://api.github.com/users/fschlatt/received_events",
"repos_url": "https://api.github.com/users/fschlatt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fschlatt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fschlatt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fschlatt",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2024-07-31T09:02:42Z
| 2024-08-30T15:17:03Z
| 2024-08-30T15:17:03Z
|
CONTRIBUTOR
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7083/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7083/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7083.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7083",
"merged_at": "2024-08-30T15:17:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7083.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7083"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7241
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7241/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7241/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7241/events
|
https://github.com/huggingface/datasets/issues/7241
| 2,599,899,156
|
I_kwDODunzps6a91AU
| 7,241
|
`push_to_hub` overwrite argument
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/60838378?v=4",
"events_url": "https://api.github.com/users/ceferisbarov/events{/privacy}",
"followers_url": "https://api.github.com/users/ceferisbarov/followers",
"following_url": "https://api.github.com/users/ceferisbarov/following{/other_user}",
"gists_url": "https://api.github.com/users/ceferisbarov/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ceferisbarov",
"id": 60838378,
"login": "ceferisbarov",
"node_id": "MDQ6VXNlcjYwODM4Mzc4",
"organizations_url": "https://api.github.com/users/ceferisbarov/orgs",
"received_events_url": "https://api.github.com/users/ceferisbarov/received_events",
"repos_url": "https://api.github.com/users/ceferisbarov/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ceferisbarov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ceferisbarov/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ceferisbarov",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"Hi ! Do you mean deleting all the files ? or erasing the repository git history before push_to_hub ?",
"Hi! I meant the latter.",
"I don't think there is a `huggingface_hub` utility to erase the git history, cc @Wauplin maybe ?",
"What is the goal exactly of deleting all the git history without deleting the repo? ",
"You can use [`super_squash_commit`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.super_squash_history) to squash all the commits into a single one, hence deleting the git history. This is not exactly what you asked for since it squashes the commits for a specific revision (example: \"all commits on main\"). This means that if other branches exists, they are kept the same. Also if some PRs are already opened on the repo, they will become unmergeable since the commits will have diverted.",
"So the solution is:\r\n\r\n```python\r\nfrom huggingface_hub import HfApi\r\nrepo_id = \"username/dataset_name\"\r\nds.push_to_hub(repo_id)\r\nHfApi().super_squash_commit(repo_id)\r\n```\r\n\r\nThis way you erase previous git history to end up with only 1 commit containing your dataset.\r\nStill, I'd be curious why it's important in your case. Is it to save storage space ? or to disallow loading old versions of the data ?",
"Thanks, everyone! I am building a new dataset and playing around with column names, splits, etc. Sometimes I push to the hub to share it with other teammates, I don't want those variations to be part of the repo. Deleting the repo from the website takes a little time, but it also loses repo settings that I have set, since I always set it to public with manually approved requests.\r\n\r\nBTW, I had to write `HfApi().super_squash_history(repo_id, repo_type=\"dataset\")`, but otherwise it works.",
"@ceferisbarov just to let you know, recreating a gated repo + granting access to your teammates is something that you can automate with something like this (not fully tested but should work):\r\n\r\n```py\r\nfrom huggingface_hub import HfApi\r\n\r\napi = HfApi()\r\napi.delete_repo(repo_id, repo_type=\"dataset\", missing_ok=True)\r\napi.create_repo(repo_id, repo_type=\"dataset\", private=False)\r\napi.update_repo_settings(repo_id, repo_type=\"dataset\", gated=\"manual\")\r\nfor user in [\"user1\", \"user2\"] # list of teammates\r\n api.grant_access(repo_id, user, repo_type=\"dataset\")\r\n```\r\n\r\nI think it'd be a better solution than squashing commits (which is more of a hack), typically if you are using the dataset viewer.",
"This is great, @Wauplin. If we can achieve this with HfApi, then we probably don't need to add another parameter to push_to_hub. I am closing the issue."
] | 2024-10-20T03:23:26Z
| 2024-10-24T17:39:08Z
| 2024-10-24T17:39:08Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Add an `overwrite` argument to the `push_to_hub` method.
### Motivation
I want to overwrite a repo without deleting it on Hugging Face. Is this possible? I couldn't find anything in the documentation or tutorials.
### Your contribution
I can create a PR.
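For illustration, the requested API could look like this (`overwrite` is the proposed argument; it does not exist in `datasets` at the time of writing):
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
# Hypothetical: push the dataset and erase the repo's previous state in one call.
ds.push_to_hub("username/dataset_name", overwrite=True)
```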
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/60838378?v=4",
"events_url": "https://api.github.com/users/ceferisbarov/events{/privacy}",
"followers_url": "https://api.github.com/users/ceferisbarov/followers",
"following_url": "https://api.github.com/users/ceferisbarov/following{/other_user}",
"gists_url": "https://api.github.com/users/ceferisbarov/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ceferisbarov",
"id": 60838378,
"login": "ceferisbarov",
"node_id": "MDQ6VXNlcjYwODM4Mzc4",
"organizations_url": "https://api.github.com/users/ceferisbarov/orgs",
"received_events_url": "https://api.github.com/users/ceferisbarov/received_events",
"repos_url": "https://api.github.com/users/ceferisbarov/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ceferisbarov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ceferisbarov/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ceferisbarov",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7241/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7241/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4590
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4590/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4590/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4590/events
|
https://github.com/huggingface/datasets/pull/4590
| 1,287,941,058
|
PR_kwDODunzps46htv0
| 4,590
|
Generalize meta_path json file creation in load.py [#4540]
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20517962?v=4",
"events_url": "https://api.github.com/users/VijayKalmath/events{/privacy}",
"followers_url": "https://api.github.com/users/VijayKalmath/followers",
"following_url": "https://api.github.com/users/VijayKalmath/following{/other_user}",
"gists_url": "https://api.github.com/users/VijayKalmath/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VijayKalmath",
"id": 20517962,
"login": "VijayKalmath",
"node_id": "MDQ6VXNlcjIwNTE3OTYy",
"organizations_url": "https://api.github.com/users/VijayKalmath/orgs",
"received_events_url": "https://api.github.com/users/VijayKalmath/received_events",
"repos_url": "https://api.github.com/users/VijayKalmath/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VijayKalmath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VijayKalmath/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VijayKalmath",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@albertvillanova, Can you please review this PR for Issue #4540 ",
"@lhoestq Thank you for merging the PR . Is there a slack channel for contributing to the datasets library. I would love to work on the library and make meaningful contributions.",
"Hi ! Sure feel free to join our discord ^^ \r\nhttps://discuss.huggingface.co/t/join-the-hugging-face-discord/11263 so that we can discuss together mor eeasily. Otherwise everything happens on github ;)"
] | 2022-06-28T21:48:06Z
| 2022-07-08T14:55:13Z
| 2022-07-07T13:17:45Z
|
CONTRIBUTOR
| null | null | null |
# What does this PR do?
## Summary
*In the function `_copy_script_and_other_resources_in_importable_dir`, using a string split when generating `meta_path` throws an error in the edge case raised in #4540.*
## Additions
-
## Changes
- Changed `meta_path` to use `os.path.splitext` instead of `str.split` to generalize the code.
## Deletions
-
## Issues Addressed :
Fixes #4540
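For illustration, a minimal sketch of the difference (the filename is hypothetical):
```python
import os.path

name = "my.dataset.py"  # hypothetical script name containing an extra dot

print(name.split(".")[0])         # 'my'         -- str.split cuts at the first dot
print(os.path.splitext(name)[0])  # 'my.dataset' -- splitext strips only the final extension
```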
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4590/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4590/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4590.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4590",
"merged_at": "2022-07-07T13:17:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4590.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4590"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6016
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6016/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6016/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6016/events
|
https://github.com/huggingface/datasets/pull/6016
| 1,798,968,033
|
PR_kwDODunzps5VNEvn
| 6,016
|
Dataset string representation enhancement
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/63643948?v=4",
"events_url": "https://api.github.com/users/Ganryuu/events{/privacy}",
"followers_url": "https://api.github.com/users/Ganryuu/followers",
"following_url": "https://api.github.com/users/Ganryuu/following{/other_user}",
"gists_url": "https://api.github.com/users/Ganryuu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Ganryuu",
"id": 63643948,
"login": "Ganryuu",
"node_id": "MDQ6VXNlcjYzNjQzOTQ4",
"organizations_url": "https://api.github.com/users/Ganryuu/orgs",
"received_events_url": "https://api.github.com/users/Ganryuu/received_events",
"repos_url": "https://api.github.com/users/Ganryuu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Ganryuu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ganryuu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Ganryuu",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6016). All of your documentation changes will be reflected on that endpoint.",
"It we could have something similar to Polars, that would be great.\r\n\r\nThis is what Polars outputs: \r\n* `__repr__`/`__str__` :\r\n```\r\nshape: (67_349, 3)\r\n┌───────┬───────────────────────────────────┬───────┐\r\n│ idx ┆ sentence ┆ label │\r\n│ --- ┆ --- ┆ --- │\r\n│ i32 ┆ str ┆ i64 │\r\n╞═══════╪═══════════════════════════════════╪═══════╡\r\n│ 0 ┆ hide new secretions from the par… ┆ 0 │\r\n│ 1 ┆ contains no wit , only labored g… ┆ 0 │\r\n│ 2 ┆ that loves its characters and co… ┆ 1 │\r\n│ 3 ┆ remains utterly satisfied to rem… ┆ 0 │\r\n│ … ┆ … ┆ … │\r\n│ 67345 ┆ anguish , anger and frustration ┆ 0 │\r\n│ 67346 ┆ at achieving the modest , crowd-… ┆ 1 │\r\n│ 67347 ┆ a patient viewer ┆ 1 │\r\n│ 67348 ┆ this new jangle of noise , mayhe… ┆ 0 │\r\n└───────┴───────────────────────────────────┴───────┘\r\n```\r\n\r\n* `_repr_html_`:\r\n<img width=\"251\" alt=\"Screenshot 2023-07-12 at 18 25 58\" src=\"https://github.com/huggingface/datasets/assets/47462742/5d04519d-f302-4411-9fbc-7445bdf53b23\">\r\n\r\n"
] | 2023-07-11T13:38:25Z
| 2023-07-16T10:26:18Z
| null |
NONE
| null | null | null |
My attempt at #6010.
Not sure if this is the right way to go about it; I will wait for your feedback.
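For illustration, a minimal sketch of a plain-text tabular preview in this spirit (this is not the PR's actual code; it assumes a `datasets.Dataset` with at least one row):
```python
def preview(ds, n: int = 4) -> str:
    """Render the first `n` rows of a datasets.Dataset as an aligned text table."""
    head = ds[:n]  # slicing a Dataset returns a dict: column name -> list of values
    cols = list(head)
    widths = {c: max(len(c), *(len(str(v)) for v in head[c])) for c in cols}
    header = " | ".join(c.ljust(widths[c]) for c in cols)
    rows = [
        " | ".join(str(v).ljust(widths[c]) for c, v in zip(cols, row))
        for row in zip(*(head[c] for c in cols))
    ]
    return "\n".join([header, "-" * len(header), *rows])
```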
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6016/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6016/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6016.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6016",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6016.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6016"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5381
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5381/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5381/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5381/events
|
https://github.com/huggingface/datasets/issues/5381
| 1,504,498,387
|
I_kwDODunzps5ZrNLT
| 5,381
|
Wrong URL for the_pile dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45738728?v=4",
"events_url": "https://api.github.com/users/LeoGrin/events{/privacy}",
"followers_url": "https://api.github.com/users/LeoGrin/followers",
"following_url": "https://api.github.com/users/LeoGrin/following{/other_user}",
"gists_url": "https://api.github.com/users/LeoGrin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LeoGrin",
"id": 45738728,
"login": "LeoGrin",
"node_id": "MDQ6VXNlcjQ1NzM4NzI4",
"organizations_url": "https://api.github.com/users/LeoGrin/orgs",
"received_events_url": "https://api.github.com/users/LeoGrin/received_events",
"repos_url": "https://api.github.com/users/LeoGrin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LeoGrin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeoGrin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LeoGrin",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! This error can happen if there is a local file/folder with the same name as the requested dataset. And to avoid it, rename the local file/folder.\r\n\r\nSoon, it will be possible to explicitly request a Hub dataset as follows:https://github.com/huggingface/datasets/issues/5228#issuecomment-1313494020"
] | 2022-12-20T12:40:14Z
| 2023-02-15T16:24:57Z
| 2023-02-15T16:24:57Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When trying to load the `the_pile` dataset from the library, I get a `FileNotFoundError`.
### Steps to reproduce the bug
Steps to reproduce:
Run:
```
from datasets import load_dataset
dataset = load_dataset("the_pile")
```
I get the output:
"name": "FileNotFoundError",
"message": "Unable to resolve any data file that matches '['**']' at /storage/store/work/lgrinszt/memorization/the_pile with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'BLP', 'BMP', 'DIB', 'BUFR', 'CUR', 'PCX', 'DCX', 'DDS', 'PS', 'EPS', 'FIT', 'FITS', 'FLI', 'FLC', 'FTC', 'FTU', 'GBR', 'GIF', 'GRIB', 'H5', 'HDF', 'PNG', 'APNG', 'JP2', 'J2K', 'JPC', 'JPF', 'JPX', 'J2C', 'ICNS', 'ICO', 'IM', 'IIM', 'TIF', 'TIFF', 'JFIF', 'JPE', 'JPG', 'JPEG', 'MPG', 'MPEG', 'MSP', 'PCD', 'PXR', 'PBM', 'PGM', 'PPM', 'PNM', 'PSD', 'BW', 'RGB', 'RGBA', 'SGI', 'RAS', 'TGA', 'ICB', 'VDA', 'VST', 'WEBP', 'WMF', 'EMF', 'XBM', 'XPM', 'aiff', 'au', 'avr', 'caf', 'flac', 'htk', 'svx', 'mat4', 'mat5', 'mpc2k', 'ogg', 'paf', 'pvf', 'raw', 'rf64', 'sd2', 'sds', 'ircam', 'voc', 'w64', 'wav', 'nist', 'wavex', 'wve', 'xi', 'mp3', 'opus', 'AIFF', 'AU', 'AVR', 'CAF', 'FLAC', 'HTK', 'SVX', 'MAT4', 'MAT5', 'MPC2K', 'OGG', 'PAF', 'PVF', 'RAW', 'RF64', 'SD2', 'SDS', 'IRCAM', 'VOC', 'W64', 'WAV', 'NIST', 'WAVEX', 'WVE', 'XI', 'MP3', 'OPUS', 'zip']"
### Expected behavior
The `the_pile` dataset should be downloaded.
### Environment info
- `datasets` version: 2.7.1
- Platform: Linux-4.15.0-112-generic-x86_64-with-glibc2.27
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
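For reference, a minimal sketch of the workaround suggested in the comments (the local path is hypothetical):
```python
import os
from datasets import load_dataset

# load_dataset resolves a local file/folder named "the_pile" before looking
# on the Hub, so make sure no such folder shadows the dataset:
assert not os.path.exists("the_pile"), "rename or move the local 'the_pile' folder first"
dataset = load_dataset("the_pile")
```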
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5381/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5381/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5527
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5527/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5527/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5527/events
|
https://github.com/huggingface/datasets/pull/5527
| 1,581,228,531
|
PR_kwDODunzps5JysSM
| 5,527
|
Fix benchmarks CI - pin protobuf
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011142 / 0.011353 (-0.000211) | 0.005885 / 0.011008 (-0.005123) | 0.115374 / 0.038508 (0.076866) | 0.041704 / 0.023109 (0.018594) | 0.356996 / 0.275898 (0.081098) | 0.395076 / 0.323480 (0.071596) | 0.008726 / 0.007986 (0.000740) | 0.005528 / 0.004328 (0.001199) | 0.087817 / 0.004250 (0.083566) | 0.049273 / 0.037052 (0.012221) | 0.363778 / 0.258489 (0.105289) | 0.408801 / 0.293841 (0.114960) | 0.045232 / 0.128546 (-0.083314) | 0.013788 / 0.075646 (-0.061859) | 0.395634 / 0.419271 (-0.023637) | 0.056583 / 0.043533 (0.013051) | 0.360779 / 0.255139 (0.105640) | 0.386843 / 0.283200 (0.103643) | 0.116632 / 0.141683 (-0.025051) | 1.830020 / 1.452155 (0.377865) | 1.808720 / 1.492716 (0.316003) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221029 / 0.018006 (0.203023) | 0.489463 / 0.000490 (0.488973) | 0.002104 / 0.000200 (0.001904) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032873 / 0.037411 (-0.004539) | 0.129526 / 0.014526 (0.115000) | 0.141446 / 0.176557 (-0.035111) | 0.189222 / 0.737135 (-0.547913) | 0.149329 / 0.296338 (-0.147010) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471389 / 0.215209 (0.256180) | 4.710174 / 2.077655 (2.632519) | 2.239122 / 1.504120 (0.735002) | 1.977789 / 1.541195 (0.436595) | 2.107336 / 1.468490 
(0.638846) | 0.816852 / 4.584777 (-3.767925) | 4.944056 / 3.745712 (1.198344) | 4.637939 / 5.269862 (-0.631922) | 2.355546 / 4.565676 (-2.210131) | 0.099324 / 0.424275 (-0.324951) | 0.014529 / 0.007607 (0.006922) | 0.596322 / 0.226044 (0.370277) | 5.972216 / 2.268929 (3.703287) | 2.697281 / 55.444624 (-52.747344) | 2.293836 / 6.876477 (-4.582641) | 2.380271 / 2.142072 (0.238199) | 1.001307 / 4.805227 (-3.803920) | 0.196981 / 6.500664 (-6.303683) | 0.074390 / 0.075469 (-0.001079) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.482915 / 1.841788 (-0.358872) | 18.739511 / 8.074308 (10.665202) | 16.768191 / 10.191392 (6.576799) | 0.203163 / 0.680424 (-0.477261) | 0.037514 / 0.534201 (-0.496687) | 0.529017 / 0.579283 (-0.050266) | 0.577591 / 0.434364 (0.143227) | 0.634057 / 0.540337 (0.093720) | 0.759812 / 1.386936 (-0.627124) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008815 / 0.011353 (-0.002537) | 0.005956 / 0.011008 (-0.005052) | 0.087912 / 0.038508 (0.049404) | 0.040291 / 0.023109 (0.017182) | 0.404079 / 0.275898 (0.128181) | 0.447309 / 0.323480 (0.123829) | 0.006515 / 0.007986 (-0.001471) | 0.005917 / 0.004328 (0.001588) | 0.085560 / 0.004250 (0.081310) | 0.057077 / 0.037052 (0.020025) | 0.403349 / 0.258489 (0.144860) | 0.465644 / 0.293841 (0.171803) | 0.043530 / 0.128546 (-0.085016) | 0.014234 / 0.075646 (-0.061412) | 0.102203 / 0.419271 (-0.317068) | 0.058335 / 0.043533 (0.014802) | 0.398488 / 0.255139 (0.143349) | 0.424127 / 0.283200 (0.140927) | 0.119058 / 0.141683 (-0.022625) | 1.748748 / 1.452155 (0.296593) | 1.822190 / 1.492716 (0.329474) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255782 / 0.018006 (0.237776) | 0.496665 / 0.000490 (0.496176) | 0.000471 / 0.000200 (0.000271) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034111 / 0.037411 (-0.003301) | 0.131442 / 0.014526 (0.116917) | 0.144660 / 0.176557 (-0.031897) | 0.188156 / 0.737135 (-0.548979) | 0.149875 / 0.296338 (-0.146463) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.502218 / 0.215209 (0.287009) | 5.004486 / 2.077655 (2.926832) | 2.420379 / 1.504120 (0.916259) | 2.194671 / 1.541195 (0.653476) | 2.306376 / 1.468490 (0.837886) | 0.856623 / 4.584777 (-3.728154) | 4.963211 / 3.745712 (1.217499) | 2.517965 / 5.269862 (-2.751896) | 1.743880 / 4.565676 (-2.821797) | 0.105270 / 0.424275 (-0.319005) | 0.014725 / 0.007607 (0.007118) | 0.621934 / 0.226044 (0.395890) | 6.183827 / 2.268929 (3.914898) | 2.945868 / 55.444624 (-52.498757) | 2.557676 / 6.876477 (-4.318801) | 2.622282 / 2.142072 (0.480210) | 1.011647 / 4.805227 (-3.793580) | 0.199573 / 6.500664 (-6.301091) | 0.076283 / 0.075469 (0.000814) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.518813 / 1.841788 (-0.322975) | 18.833017 / 8.074308 (10.758709) | 16.095249 / 10.191392 (5.903857) | 0.196667 / 0.680424 (-0.483757) | 0.022060 / 0.534201 (-0.512141) | 0.537802 / 0.579283 (-0.041481) | 0.523676 / 0.434364 (0.089312) | 0.629387 / 0.540337 (0.089049) | 0.738042 / 1.386936 (-0.648894) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008608 / 0.011353 (-0.002745) | 0.004553 / 0.011008 (-0.006455) | 0.100031 / 0.038508 (0.061523) | 0.029498 / 0.023109 (0.006389) | 0.306913 / 0.275898 (0.031015) | 0.367369 / 0.323480 (0.043889) | 0.006883 / 0.007986 (-0.001103) | 0.004768 / 0.004328 (0.000440) | 0.077424 / 0.004250 (0.073173) | 0.034005 / 0.037052 (-0.003047) | 0.317772 / 0.258489 (0.059283) | 0.356859 / 0.293841 (0.063018) | 0.033717 / 0.128546 (-0.094829) | 0.011386 / 0.075646 (-0.064260) | 0.322832 / 0.419271 (-0.096439) | 0.043930 / 0.043533 (0.000397) | 0.308087 / 0.255139 (0.052948) | 0.338349 / 0.283200 (0.055149) | 0.094780 / 0.141683 (-0.046903) | 1.463454 / 1.452155 (0.011300) | 1.495055 / 1.492716 (0.002338) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191039 / 0.018006 (0.173033) | 0.414650 / 0.000490 (0.414160) | 0.002435 / 0.000200 (0.002235) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023871 / 0.037411 (-0.013540) | 0.097140 / 0.014526 (0.082614) | 0.105914 / 0.176557 (-0.070643) | 0.147375 / 0.737135 (-0.589760) | 0.107985 / 0.296338 (-0.188354) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420174 / 0.215209 (0.204965) | 4.208354 / 2.077655 (2.130700) | 1.904568 / 1.504120 (0.400448) | 1.687406 / 1.541195 (0.146212) | 1.723901 / 1.468490 
(0.255411) | 0.693554 / 4.584777 (-3.891223) | 3.445474 / 3.745712 (-0.300238) | 1.904919 / 5.269862 (-3.364943) | 1.284378 / 4.565676 (-3.281298) | 0.082539 / 0.424275 (-0.341736) | 0.012490 / 0.007607 (0.004883) | 0.527778 / 0.226044 (0.301733) | 5.300766 / 2.268929 (3.031838) | 2.324666 / 55.444624 (-53.119958) | 1.977166 / 6.876477 (-4.899311) | 2.054396 / 2.142072 (-0.087677) | 0.820966 / 4.805227 (-3.984261) | 0.148584 / 6.500664 (-6.352080) | 0.063618 / 0.075469 (-0.011851) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188075 / 1.841788 (-0.653712) | 13.706950 / 8.074308 (5.632642) | 13.725122 / 10.191392 (3.533730) | 0.167379 / 0.680424 (-0.513045) | 0.028729 / 0.534201 (-0.505472) | 0.395373 / 0.579283 (-0.183910) | 0.403604 / 0.434364 (-0.030760) | 0.464290 / 0.540337 (-0.076047) | 0.553792 / 1.386936 (-0.833144) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006565 / 0.011353 (-0.004787) | 0.004588 / 0.011008 (-0.006420) | 0.077312 / 0.038508 (0.038804) | 0.027348 / 0.023109 (0.004239) | 0.367753 / 0.275898 (0.091855) | 0.403250 / 0.323480 (0.079770) | 0.005201 / 0.007986 (-0.002785) | 0.004695 / 0.004328 (0.000366) | 0.076203 / 0.004250 (0.071953) | 0.039388 / 0.037052 (0.002336) | 0.374418 / 0.258489 (0.115929) | 0.413623 / 0.293841 (0.119782) | 0.031731 / 0.128546 (-0.096815) | 0.011644 / 0.075646 (-0.064002) | 0.086339 / 0.419271 (-0.332932) | 0.048902 / 0.043533 (0.005369) | 0.352064 / 0.255139 (0.096925) | 0.386637 / 0.283200 (0.103437) | 0.093662 / 0.141683 (-0.048021) | 1.479863 / 1.452155 (0.027709) | 1.562475 / 1.492716 (0.069758) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231874 / 0.018006 (0.213867) | 0.402185 / 0.000490 (0.401695) | 0.005252 / 0.000200 (0.005052) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025402 / 0.037411 (-0.012010) | 0.099896 / 0.014526 (0.085370) | 0.106365 / 0.176557 (-0.070192) | 0.143309 / 0.737135 (-0.593827) | 0.112311 / 0.296338 (-0.184027) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.447637 / 0.215209 (0.232428) | 4.469337 / 2.077655 (2.391682) | 2.164332 / 1.504120 (0.660212) | 1.957826 / 1.541195 (0.416631) | 1.984580 / 1.468490 (0.516090) | 0.702909 / 4.584777 (-3.881868) | 3.361725 / 3.745712 (-0.383987) | 2.818102 / 5.269862 (-2.451760) | 1.589815 / 4.565676 (-2.975862) | 0.083647 / 0.424275 (-0.340628) | 0.012502 / 0.007607 (0.004895) | 0.545578 / 0.226044 (0.319534) | 5.480894 / 2.268929 (3.211966) | 2.605599 / 55.444624 (-52.839026) | 2.253444 / 6.876477 (-4.623032) | 2.289818 / 2.142072 (0.147746) | 0.803680 / 4.805227 (-4.001547) | 0.151870 / 6.500664 (-6.348794) | 0.066610 / 0.075469 (-0.008859) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.327390 / 1.841788 (-0.514398) | 14.046936 / 8.074308 (5.972628) | 13.643169 / 10.191392 (3.451777) | 0.128223 / 0.680424 (-0.552201) | 0.016941 / 0.534201 (-0.517260) | 0.383887 / 0.579283 (-0.195396) | 0.383891 / 0.434364 (-0.050473) | 0.440191 / 0.540337 (-0.100146) | 0.525357 / 1.386936 (-0.861579) |\n\n</details>\n</details>\n\n\n",
"Yea there must have been an update in another package that unconstrained the protobuf dependency - idk which one though",
"It is `tensorboard`. I have reported the issue to `tensorflow`:\r\n- https://github.com/tensorflow/tensorflow/issues/59665"
] | 2023-02-12T11:51:25Z
| 2023-02-13T10:29:03Z
| 2023-02-13T09:24:16Z
|
MEMBER
| null | null | null |
fix https://github.com/huggingface/datasets/actions/runs/4156059127/jobs/7189576331
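For context, the pin would take roughly this shape in the CI requirements (the exact version bound is an assumption, not stated in this thread):
```
protobuf<4.0.0  # hypothetical upper bound; see the comments: tensorboard no longer constrains protobuf
```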
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5527/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5527/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5527.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5527",
"merged_at": "2023-02-13T09:24:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5527.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5527"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6166
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6166/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6166/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6166/events
|
https://github.com/huggingface/datasets/pull/6166
| 1,861,259,055
|
PR_kwDODunzps5YfOt0
| 6,166
|
Document BUILDER_CONFIG_CLASS
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009036 / 0.011353 (-0.002317) | 0.004564 / 0.011008 (-0.006444) | 0.114958 / 0.038508 (0.076449) | 0.087329 / 0.023109 (0.064220) | 0.440111 / 0.275898 (0.164213) | 0.486056 / 0.323480 (0.162576) | 0.006580 / 0.007986 (-0.001406) | 0.004257 / 0.004328 (-0.000072) | 0.093458 / 0.004250 (0.089208) | 0.063380 / 0.037052 (0.026328) | 0.469455 / 0.258489 (0.210966) | 0.521630 / 0.293841 (0.227790) | 0.053496 / 0.128546 (-0.075050) | 0.013466 / 0.075646 (-0.062181) | 0.361629 / 0.419271 (-0.057642) | 0.068095 / 0.043533 (0.024562) | 0.472440 / 0.255139 (0.217301) | 0.508682 / 0.283200 (0.225483) | 0.034648 / 0.141683 (-0.107035) | 1.820117 / 1.452155 (0.367962) | 1.933448 / 1.492716 (0.440732) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.276543 / 0.018006 (0.258537) | 0.563380 / 0.000490 (0.562890) | 0.005345 / 0.000200 (0.005146) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029230 / 0.037411 (-0.008181) | 0.095613 / 0.014526 (0.081087) | 0.106178 / 0.176557 (-0.070378) | 0.181095 / 0.737135 (-0.556040) | 0.107789 / 0.296338 (-0.188550) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.612051 / 0.215209 (0.396842) | 6.065008 / 2.077655 (3.987353) | 2.720911 / 1.504120 (1.216791) | 2.495218 / 1.541195 (0.954023) | 2.423351 / 1.468490 
(0.954860) | 0.835571 / 4.584777 (-3.749205) | 5.438230 / 3.745712 (1.692518) | 4.550301 / 5.269862 (-0.719561) | 2.919889 / 4.565676 (-1.645788) | 0.097748 / 0.424275 (-0.326527) | 0.009285 / 0.007607 (0.001678) | 0.741968 / 0.226044 (0.515923) | 7.285394 / 2.268929 (5.016466) | 3.433634 / 55.444624 (-52.010991) | 2.680823 / 6.876477 (-4.195654) | 2.931149 / 2.142072 (0.789076) | 1.012852 / 4.805227 (-3.792375) | 0.224899 / 6.500664 (-6.275765) | 0.089411 / 0.075469 (0.013942) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.622759 / 1.841788 (-0.219029) | 23.690030 / 8.074308 (15.615721) | 21.034451 / 10.191392 (10.843059) | 0.241504 / 0.680424 (-0.438920) | 0.030109 / 0.534201 (-0.504092) | 0.472536 / 0.579283 (-0.106747) | 0.631396 / 0.434364 (0.197032) | 0.598997 / 0.540337 (0.058659) | 0.798680 / 1.386936 (-0.588256) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008696 / 0.011353 (-0.002657) | 0.005032 / 0.011008 (-0.005977) | 0.087369 / 0.038508 (0.048861) | 0.078105 / 0.023109 (0.054996) | 0.464861 / 0.275898 (0.188963) | 0.509620 / 0.323480 (0.186140) | 0.006399 / 0.007986 (-0.001587) | 0.004276 / 0.004328 (-0.000052) | 0.081643 / 0.004250 (0.077393) | 0.062560 / 0.037052 (0.025508) | 0.495377 / 0.258489 (0.236888) | 0.484885 / 0.293841 (0.191044) | 0.054354 / 0.128546 (-0.074193) | 0.013851 / 0.075646 (-0.061795) | 0.089531 / 0.419271 (-0.329740) | 0.068732 / 0.043533 (0.025199) | 0.455842 / 0.255139 (0.200703) | 0.528775 / 0.283200 (0.245575) | 0.039646 / 0.141683 (-0.102037) | 1.733600 / 1.452155 (0.281445) | 1.879074 / 1.492716 (0.386358) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.369616 / 0.018006 (0.351610) | 0.607426 / 0.000490 (0.606936) | 0.055540 / 0.000200 (0.055341) | 0.000543 / 0.000054 (0.000488) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036026 / 0.037411 (-0.001385) | 0.103968 / 0.014526 (0.089442) | 0.114852 / 0.176557 (-0.061705) | 0.187313 / 0.737135 (-0.549822) | 0.116839 / 0.296338 (-0.179500) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.614018 / 0.215209 (0.398809) | 6.139914 / 2.077655 (4.062259) | 2.826246 / 1.504120 (1.322126) | 2.524133 / 1.541195 (0.982938) | 2.606981 / 1.468490 (1.138491) | 0.844604 / 4.584777 (-3.740173) | 5.537178 / 3.745712 (1.791465) | 4.594624 / 5.269862 (-0.675237) | 3.032145 / 4.565676 (-1.533532) | 0.094771 / 0.424275 (-0.329504) | 0.008132 / 0.007607 (0.000525) | 0.714287 / 0.226044 (0.488242) | 7.296733 / 2.268929 (5.027804) | 3.698066 / 55.444624 (-51.746558) | 2.862781 / 6.876477 (-4.013696) | 3.114502 / 2.142072 (0.972429) | 0.986612 / 4.805227 (-3.818616) | 0.214438 / 6.500664 (-6.286226) | 0.076201 / 0.075469 (0.000732) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.747728 / 1.841788 (-0.094060) | 24.159845 / 8.074308 (16.085537) | 23.553485 / 10.191392 (13.362093) | 0.248387 / 0.680424 (-0.432037) | 0.029850 / 0.534201 (-0.504351) | 0.526416 / 0.579283 (-0.052867) | 0.625681 / 0.434364 (0.191317) | 0.619690 / 0.540337 (0.079352) | 0.827485 / 1.386936 (-0.559451) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006728 / 0.011353 (-0.004625) | 0.003960 / 0.011008 (-0.007048) | 0.085569 / 0.038508 (0.047061) | 0.077463 / 0.023109 (0.054354) | 0.343112 / 0.275898 (0.067214) | 0.379128 / 0.323480 (0.055648) | 0.004087 / 0.007986 (-0.003899) | 0.003357 / 0.004328 (-0.000972) | 0.065570 / 0.004250 (0.061320) | 0.056259 / 0.037052 (0.019207) | 0.368595 / 0.258489 (0.110106) | 0.402672 / 0.293841 (0.108831) | 0.030946 / 0.128546 (-0.097600) | 0.008509 / 0.075646 (-0.067137) | 0.288552 / 0.419271 (-0.130719) | 0.052134 / 0.043533 (0.008601) | 0.344653 / 0.255139 (0.089514) | 0.374199 / 0.283200 (0.090999) | 0.026251 / 0.141683 (-0.115432) | 1.488258 / 1.452155 (0.036103) | 1.567119 / 1.492716 (0.074402) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218740 / 0.018006 (0.200734) | 0.465483 / 0.000490 (0.464994) | 0.003959 / 0.000200 (0.003759) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029860 / 0.037411 (-0.007551) | 0.087968 / 0.014526 (0.073442) | 0.098257 / 0.176557 (-0.078299) | 0.155478 / 0.737135 (-0.581657) | 0.100696 / 0.296338 (-0.195642) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384642 / 0.215209 (0.169432) | 3.821692 / 2.077655 (1.744038) | 1.838012 / 1.504120 (0.333892) | 1.677554 / 1.541195 (0.136360) | 1.764284 / 1.468490 
(0.295794) | 0.487512 / 4.584777 (-4.097265) | 3.614572 / 3.745712 (-0.131141) | 3.300740 / 5.269862 (-1.969122) | 2.079044 / 4.565676 (-2.486632) | 0.057392 / 0.424275 (-0.366883) | 0.007642 / 0.007607 (0.000035) | 0.456161 / 0.226044 (0.230117) | 4.554124 / 2.268929 (2.285196) | 2.319288 / 55.444624 (-53.125336) | 1.972024 / 6.876477 (-4.904452) | 2.210598 / 2.142072 (0.068526) | 0.588442 / 4.805227 (-4.216785) | 0.134474 / 6.500664 (-6.366191) | 0.062682 / 0.075469 (-0.012787) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.243548 / 1.841788 (-0.598239) | 20.267230 / 8.074308 (12.192922) | 14.872096 / 10.191392 (4.680704) | 0.165164 / 0.680424 (-0.515260) | 0.018985 / 0.534201 (-0.515216) | 0.394526 / 0.579283 (-0.184757) | 0.413918 / 0.434364 (-0.020446) | 0.467130 / 0.540337 (-0.073208) | 0.627055 / 1.386936 (-0.759881) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006940 / 0.011353 (-0.004412) | 0.004203 / 0.011008 (-0.006805) | 0.065828 / 0.038508 (0.027320) | 0.076604 / 0.023109 (0.053495) | 0.401781 / 0.275898 (0.125883) | 0.434838 / 0.323480 (0.111358) | 0.005626 / 0.007986 (-0.002359) | 0.003409 / 0.004328 (-0.000920) | 0.064702 / 0.004250 (0.060452) | 0.057525 / 0.037052 (0.020473) | 0.405032 / 0.258489 (0.146543) | 0.440906 / 0.293841 (0.147065) | 0.032713 / 0.128546 (-0.095833) | 0.008723 / 0.075646 (-0.066923) | 0.071448 / 0.419271 (-0.347823) | 0.048186 / 0.043533 (0.004653) | 0.403950 / 0.255139 (0.148811) | 0.419506 / 0.283200 (0.136307) | 0.023532 / 0.141683 (-0.118150) | 1.496435 / 1.452155 (0.044280) | 1.567236 / 1.492716 (0.074519) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229194 / 0.018006 (0.211188) | 0.451363 / 0.000490 (0.450873) | 0.003651 / 0.000200 (0.003451) | 0.000108 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033674 / 0.037411 (-0.003737) | 0.097521 / 0.014526 (0.082995) | 0.108806 / 0.176557 (-0.067751) | 0.161002 / 0.737135 (-0.576133) | 0.108594 / 0.296338 (-0.187745) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436638 / 0.215209 (0.221429) | 4.348844 / 2.077655 (2.271189) | 2.341737 / 1.504120 (0.837617) | 2.195850 / 1.541195 (0.654656) | 2.332147 / 1.468490 (0.863657) | 0.496180 / 4.584777 (-4.088597) | 3.680987 / 3.745712 (-0.064725) | 3.332203 / 5.269862 (-1.937659) | 2.099541 / 4.565676 (-2.466136) | 0.058629 / 0.424275 (-0.365646) | 0.007363 / 0.007607 (-0.000245) | 0.517658 / 0.226044 (0.291614) | 5.175321 / 2.268929 (2.906392) | 2.858660 / 55.444624 (-52.585964) | 2.540557 / 6.876477 (-4.335920) | 2.755360 / 2.142072 (0.613288) | 0.595488 / 4.805227 (-4.209739) | 0.134265 / 6.500664 (-6.366399) | 0.062033 / 0.075469 (-0.013436) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.389950 / 1.841788 (-0.451838) | 20.800274 / 8.074308 (12.725966) | 15.314531 / 10.191392 (5.123139) | 0.166822 / 0.680424 (-0.513602) | 0.021099 / 0.534201 (-0.513102) | 0.400388 / 0.579283 (-0.178895) | 0.419981 / 0.434364 (-0.014383) | 0.474259 / 0.540337 (-0.066078) | 0.731678 / 1.386936 (-0.655258) |\n\n</details>\n</details>\n\n\n"
] | 2023-08-22T11:27:41Z
| 2023-08-23T14:01:25Z
| 2023-08-23T13:52:36Z
|
MEMBER
| null | null | null |
Related to https://github.com/huggingface/datasets/issues/6130
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6166/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6166/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6166.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6166",
"merged_at": "2023-08-23T13:52:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6166.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6166"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5447
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5447/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5447/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5447/events
|
https://github.com/huggingface/datasets/pull/5447
| 1,550,599,193
|
PR_kwDODunzps5IM0Nu
| 5,447
|
Fix CI by temporarily pinning fsspec < 2023.1.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011875 / 0.011353 (0.000522) | 0.008188 / 0.011008 (-0.002821) | 0.131137 / 0.038508 (0.092629) | 0.038127 / 0.023109 (0.015018) | 0.383864 / 0.275898 (0.107966) | 0.458617 / 0.323480 (0.135137) | 0.010989 / 0.007986 (0.003003) | 0.004892 / 0.004328 (0.000563) | 0.101955 / 0.004250 (0.097704) | 0.045081 / 0.037052 (0.008029) | 0.409768 / 0.258489 (0.151279) | 0.446597 / 0.293841 (0.152756) | 0.058588 / 0.128546 (-0.069958) | 0.020872 / 0.075646 (-0.054774) | 0.432982 / 0.419271 (0.013711) | 0.075875 / 0.043533 (0.032342) | 0.380923 / 0.255139 (0.125784) | 0.432994 / 0.283200 (0.149795) | 0.122678 / 0.141683 (-0.019005) | 1.857865 / 1.452155 (0.405710) | 1.927801 / 1.492716 (0.435085) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212941 / 0.018006 (0.194935) | 0.527977 / 0.000490 (0.527488) | 0.002996 / 0.000200 (0.002797) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030046 / 0.037411 (-0.007366) | 0.126384 / 0.014526 (0.111858) | 0.138307 / 0.176557 (-0.038250) | 0.185338 / 0.737135 (-0.551797) | 0.144733 / 0.296338 (-0.151606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.627096 / 0.215209 (0.411887) | 6.418014 / 2.077655 (4.340360) | 2.547675 / 1.504120 (1.043555) | 2.195552 / 1.541195 (0.654357) | 2.200377 / 1.468490 
(0.731887) | 1.289935 / 4.584777 (-3.294842) | 5.670839 / 3.745712 (1.925127) | 5.252597 / 5.269862 (-0.017265) | 2.878470 / 4.565676 (-1.687207) | 0.143754 / 0.424275 (-0.280521) | 0.014814 / 0.007607 (0.007207) | 0.810073 / 0.226044 (0.584028) | 8.183757 / 2.268929 (5.914829) | 3.375525 / 55.444624 (-52.069099) | 2.594048 / 6.876477 (-4.282428) | 2.598095 / 2.142072 (0.456023) | 1.554493 / 4.805227 (-3.250734) | 0.263159 / 6.500664 (-6.237505) | 0.089822 / 0.075469 (0.014353) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.660847 / 1.841788 (-0.180941) | 18.434283 / 8.074308 (10.359975) | 21.764887 / 10.191392 (11.573495) | 0.264524 / 0.680424 (-0.415900) | 0.048519 / 0.534201 (-0.485682) | 0.587468 / 0.579283 (0.008185) | 0.634142 / 0.434364 (0.199778) | 0.675374 / 0.540337 (0.135037) | 0.777510 / 1.386936 (-0.609426) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010021 / 0.011353 (-0.001332) | 0.006207 / 0.011008 (-0.004801) | 0.130490 / 0.038508 (0.091982) | 0.037957 / 0.023109 (0.014848) | 0.489381 / 0.275898 (0.213483) | 0.536522 / 0.323480 (0.213042) | 0.008611 / 0.007986 (0.000626) | 0.004894 / 0.004328 (0.000565) | 0.101617 / 0.004250 (0.097367) | 0.052629 / 0.037052 (0.015577) | 0.509211 / 0.258489 (0.250721) | 0.545023 / 0.293841 (0.251182) | 0.057468 / 0.128546 (-0.071078) | 0.023393 / 0.075646 (-0.052253) | 0.431408 / 0.419271 (0.012137) | 0.064967 / 0.043533 (0.021434) | 0.495261 / 0.255139 (0.240122) | 0.527098 / 0.283200 (0.243898) | 0.113172 / 0.141683 (-0.028511) | 1.937072 / 1.452155 (0.484918) | 2.048413 / 1.492716 (0.555697) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245406 / 0.018006 (0.227399) | 0.526772 / 0.000490 (0.526283) | 0.004379 / 0.000200 (0.004179) | 0.000114 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031785 / 0.037411 (-0.005626) | 0.130949 / 0.014526 (0.116424) | 0.145660 / 0.176557 (-0.030896) | 0.186991 / 0.737135 (-0.550144) | 0.151000 / 0.296338 (-0.145338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.708643 / 0.215209 (0.493434) | 7.179252 / 2.077655 (5.101597) | 3.143375 / 1.504120 (1.639255) | 2.714298 / 1.541195 (1.173103) | 2.773441 / 1.468490 (1.304951) | 1.312821 / 4.584777 (-3.271956) | 5.798396 / 3.745712 (2.052684) | 3.253215 / 5.269862 (-2.016646) | 2.147260 / 4.565676 (-2.418416) | 0.154673 / 0.424275 (-0.269602) | 0.014918 / 0.007607 (0.007311) | 0.860618 / 0.226044 (0.634573) | 8.774455 / 2.268929 (6.505527) | 3.925020 / 55.444624 (-51.519604) | 3.139361 / 6.876477 (-3.737115) | 3.208883 / 2.142072 (1.066810) | 1.547305 / 4.805227 (-3.257922) | 0.268814 / 6.500664 (-6.231850) | 0.084578 / 0.075469 (0.009109) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.694990 / 1.841788 (-0.146798) | 18.619183 / 8.074308 (10.544875) | 21.929886 / 10.191392 (11.738494) | 0.265763 / 0.680424 (-0.414661) | 0.028325 / 0.534201 (-0.505876) | 0.552910 / 0.579283 (-0.026373) | 0.616864 / 0.434364 (0.182500) | 0.637858 / 0.540337 (0.097521) | 0.744508 / 1.386936 (-0.642428) |\n\n</details>\n</details>\n\n\n"
] | 2023-01-20T10:11:02Z
| 2023-01-20T10:38:13Z
| 2023-01-20T10:28:43Z
|
MEMBER
| null | null | null |
Temporarily pin fsspec < 2023.1.0
Fix #5445.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5447/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5447/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5447.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5447",
"merged_at": "2023-01-20T10:28:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5447.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5447"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4925
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4925/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4925/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4925/events
|
https://github.com/huggingface/datasets/pull/4925
| 1,360,007,616
|
PR_kwDODunzps4-RbP5
| 4,925
|
Add note about loading image / audio files to docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4925). All of your documentation changes will be reflected on that endpoint.",
"Thanks for the feedback @polinaeterna ! I've reworded the docs a bit to integrate your comments and this should be ready for another review :)",
"> I've just realized that there is another PR about audio documentation open: #4872\r\n> and there the more detailed description on how to use `audiofolder` is moved to another section (\"Create an audio dataset\")\r\n\r\nAh yes, let's add a comment to #4872 - that will be simpler than the alternatives :)",
"@polinaeterna @lhoestq What do you think about adding support for the metadata format from Kaggle (one metadata file for each split with the name equal to the split name) to ImageFolder/AudioFolder? I also think we can relax some requirements a bit by:\r\n* allowing `filename` as the name of the main metadata column (currently, only `file_path` is allowed)\r\n* not requiring that the features of all the given metadata files are equal. Instead, we can have a soft check by using `_check_if_features_can_be_aligned` + `_align_features`. The rationale is that train/val metadata often has extra columns compared to test metadata.\r\n\r\nThese changes would allow us to load the Kaggle dataset linked in the forum thread without any \"interventions\".\r\n\r\nPS: this metadata format for ImageFolder was also proposed by @abhishekkrthakur initially.\r\n",
"Can you give more details about the Kaggle format ? I'm down to discuss it in a separate issue if you don't mind.\r\n\r\n> allowing filename as the name of the main metadata column (currently, only file_path is allowed)\r\n\r\n`filename` refers to the name of the file, so there's no logic about relative path or directories. If I recall correctly this is what we're doing right now so why not\r\n\r\n> not requiring that the features of all the given metadata files are equal. Instead, we can have a soft check by using _check_if_features_can_be_aligned + _align_features. The rationale is that train/val metadata often has extra columns compared to test metadata.\r\n\r\n+1 and we can set to None the missing features",
"I'm not sure if this is worth opening a new issue :).\r\n\r\nWhat I mean by the Kaggle format is the structure like this one (the name of a metadata file is equal to the directory it \"references\"):\r\n```\r\n- train\r\n - img1.jpeg\r\n - img2.jpeg\r\n - ...\r\n- test\r\n - img1.jpeg\r\n - img2.jpeg\r\n - ... \r\n- train.csv\r\n- test.csv\r\n```\r\n\r\n\r\n",
"Sounds nice !",
"@mariosasko +1 to allowing different features set and metadata filenames corresponding to split names\r\n\r\nConsidering filename column - right now it's even called `file_name` now, which is not nice because in fact it's a relative file path indeed, so I think it should be `file_path` (and I don't know why I haven't thought about it before the release...)",
"@lewtun don't you mind if I close this pull request as I've integrated your changes in https://github.com/huggingface/datasets/pull/4872 ? (it doesn't have a link to a kaggle example though)"
] | 2022-09-02T10:31:58Z
| 2022-09-26T12:21:30Z
| 2022-09-23T13:59:07Z
|
MEMBER
| null | null | null |
This PR adds a small note about how to load image / audio datasets that have multiple splits in their dataset structure.
Related forum thread: https://discuss.huggingface.co/t/loading-train-and-test-splits-with-audiofolder/22447
cc @NielsRogge
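For illustration, a minimal sketch of the loading pattern this note documents, assuming a local dataset root with `train/` and `test/` subdirectories (the path is hypothetical):

```python
from datasets import load_dataset

# The audiofolder / imagefolder loaders infer splits from top-level
# directory names such as train/ and test/.
dataset = load_dataset("audiofolder", data_dir="path/to/dataset_root")
print(dataset)  # DatasetDict with e.g. "train" and "test" splits
```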
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4925/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4925/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4925.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4925",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4925.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4925"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4559
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4559/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4559/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4559/events
|
https://github.com/huggingface/datasets/pull/4559
| 1,283,544,937
|
PR_kwDODunzps46TV7-
| 4,559
|
Add action names in schema_guided_dstc8 dataset card
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-24T10:00:01Z
| 2022-06-24T10:54:28Z
| 2022-06-24T10:43:47Z
|
MEMBER
| null | null | null |
As asked in https://huggingface.co/datasets/schema_guided_dstc8/discussions/1, I added the action names to the dataset card
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4559/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4559/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4559.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4559",
"merged_at": "2022-06-24T10:43:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4559.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4559"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4594
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4594/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4594/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4594/events
|
https://github.com/huggingface/datasets/issues/4594
| 1,288,070,023
|
I_kwDODunzps5MxmOH
| 4,594
|
load_from_disk suggests incorrect fix when used to load DatasetDict
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11157811?v=4",
"events_url": "https://api.github.com/users/dvsth/events{/privacy}",
"followers_url": "https://api.github.com/users/dvsth/followers",
"following_url": "https://api.github.com/users/dvsth/following{/other_user}",
"gists_url": "https://api.github.com/users/dvsth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dvsth",
"id": 11157811,
"login": "dvsth",
"node_id": "MDQ6VXNlcjExMTU3ODEx",
"organizations_url": "https://api.github.com/users/dvsth/orgs",
"received_events_url": "https://api.github.com/users/dvsth/received_events",
"repos_url": "https://api.github.com/users/dvsth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dvsth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dvsth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dvsth",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[] | 2022-06-29T01:40:01Z
| 2022-06-29T04:03:44Z
| 2022-06-29T04:03:44Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Edit: Please feel free to remove this issue. The problem was not the error message but the fact that `DatasetDict.load_from_disk` does not support loading nested splits, i.e. when one of the splits is itself a `DatasetDict`. If nesting splits is an antipattern, perhaps `load_from_disk` could emit a warning indicating that?
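For context, a minimal sketch of the nested-split pattern described above (the path is hypothetical; depending on the `datasets` version, the failure may occur at save or at load time):

```python
from datasets import Dataset, DatasetDict, load_from_disk

inner = DatasetDict({
    "a": Dataset.from_dict({"x": [1]}),
    "b": Dataset.from_dict({"x": [2]}),
})
# A DatasetDict whose "train" split is itself a DatasetDict:
nested = DatasetDict({"train": inner, "test": Dataset.from_dict({"x": [3]})})
nested.save_to_disk("nested_dataset")

# load_from_disk expects each split directory to hold a single Dataset,
# so reloading the nested layout fails (with a misleading suggestion).
reloaded = load_from_disk("nested_dataset")
```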
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11157811?v=4",
"events_url": "https://api.github.com/users/dvsth/events{/privacy}",
"followers_url": "https://api.github.com/users/dvsth/followers",
"following_url": "https://api.github.com/users/dvsth/following{/other_user}",
"gists_url": "https://api.github.com/users/dvsth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dvsth",
"id": 11157811,
"login": "dvsth",
"node_id": "MDQ6VXNlcjExMTU3ODEx",
"organizations_url": "https://api.github.com/users/dvsth/orgs",
"received_events_url": "https://api.github.com/users/dvsth/received_events",
"repos_url": "https://api.github.com/users/dvsth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dvsth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dvsth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dvsth",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4594/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4594/timeline
| null |
not_planned
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6744
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6744/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6744/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6744/events
|
https://github.com/huggingface/datasets/issues/6744
| 2,197,910,168
|
I_kwDODunzps6DAXKY
| 6,744
|
Option to disable file locking
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35767167?v=4",
"events_url": "https://api.github.com/users/VRehnberg/events{/privacy}",
"followers_url": "https://api.github.com/users/VRehnberg/followers",
"following_url": "https://api.github.com/users/VRehnberg/following{/other_user}",
"gists_url": "https://api.github.com/users/VRehnberg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VRehnberg",
"id": 35767167,
"login": "VRehnberg",
"node_id": "MDQ6VXNlcjM1NzY3MTY3",
"organizations_url": "https://api.github.com/users/VRehnberg/orgs",
"received_events_url": "https://api.github.com/users/VRehnberg/received_events",
"repos_url": "https://api.github.com/users/VRehnberg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VRehnberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VRehnberg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VRehnberg",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2024-03-20T15:59:45Z
| 2024-03-20T15:59:45Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Commands such as `load_dataset` create file locks with `filelock.FileLock`. It would be good if there were a way to disable this.
### Motivation
File locking doesn't work on all file systems (in my case, an NFS-mounted Weka). If the `cache_dir` only held small files, it would be possible to point it at local disk and the problem would be solved. However, as `cache_dir` is where both the small info files and the processed datasets are written, this isn't a feasible solution.
Considering https://github.com/huggingface/datasets/issues/6395, I still think this is something that belongs in HuggingFace. The ability to control packages separately is valuable: a user might have their dataset on a file system that doesn't support file locking while using file locking on local disk to control some other type of access.
### Your contribution
My suggested solution:
```diff
diff --git a/src/datasets/utils/_filelock.py b/src/datasets/utils/_filelock.py
index 19620e6e..58f41a02 100644
--- a/src/datasets/utils/_filelock.py
+++ b/src/datasets/utils/_filelock.py
@@ -18,11 +18,15 @@
import os
from filelock import FileLock as FileLock_
-from filelock import UnixFileLock
+from filelock import SoftFileLock, UnixFileLock
from filelock import __version__ as _filelock_version
from packaging import version
+if os.getenv('HF_USE_SOFTFILELOCK', 'false').lower() in ('true', '1'):
+ FileLock_ = SoftFileLock
+
+
class FileLock(FileLock_):
"""
A `filelock.FileLock` initializer that handles long paths.
```
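For illustration, how the proposed variable would be used if the patch above were applied; note that `HF_USE_SOFTFILELOCK` is the name introduced by this diff, not an existing `datasets` option:

```python
import os

# Must be set before importing datasets, since the proposed check runs
# when datasets.utils._filelock is first imported.
os.environ["HF_USE_SOFTFILELOCK"] = "true"

from datasets import load_dataset

# With SoftFileLock, locking is done via sentinel files, which works on
# file systems (e.g. some NFS mounts) where hard locks fail.
ds = load_dataset("imdb", cache_dir="/nfs/shared/hf_cache")
```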
| null |
{
"+1": 8,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 8,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6744/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6744/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6736
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6736/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6736/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6736/events
|
https://github.com/huggingface/datasets/issues/6736
| 2,190,181,422
|
I_kwDODunzps6Ci4Qu
| 6,736
|
Mosaic Streaming (MDS) Support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2498509?v=4",
"events_url": "https://api.github.com/users/siddk/events{/privacy}",
"followers_url": "https://api.github.com/users/siddk/followers",
"following_url": "https://api.github.com/users/siddk/following{/other_user}",
"gists_url": "https://api.github.com/users/siddk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/siddk",
"id": 2498509,
"login": "siddk",
"node_id": "MDQ6VXNlcjI0OTg1MDk=",
"organizations_url": "https://api.github.com/users/siddk/orgs",
"received_events_url": "https://api.github.com/users/siddk/received_events",
"repos_url": "https://api.github.com/users/siddk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/siddk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siddk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/siddk",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Hi ! that would be great :) Though note that `datasets` doesn't implement format-specific resuming when streaming, so in general I think it's better if users can use the mosaic-streaming library to read their MDS datasets. I wonder if they support `hf://` paths though...\r\n\r\nAnyway for those interested, the code for WebDataset is a single file here: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/webdataset/webdataset.py.\r\n\r\nIt implements `_split_generators` that downloads files and returns the lists of splits (train/validation/test) and `_split_generators` to generate examples (dicts) from the downloaded files. Streaming is automatically supported by making download steps lazy and by extending `open()` to work with remote URLs."
] | 2024-03-16T18:42:04Z
| 2024-03-18T15:13:34Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
I'm a huge fan of the current HF Datasets `webdataset` integration (especially the built-in streaming support). However, I'd love to upload some robotics and multimodal datasets I've processed for use with [Mosaic Streaming](https://docs.mosaicml.com/projects/streaming/en/stable/), specifically their [MDS Format](https://docs.mosaicml.com/projects/streaming/en/stable/fundamentals/dataset_format.html#mds).
Because the shard files have similar semantics to WebDataset, I'm hoping that adding such support won't be too much trouble?
### Motivation
One of the downsides with WebDataset is a lack of out-of-the-box determinism (especially for large-scale training and reproducibility), easy job resumption, and the ability to quickly debug / visualize individual examples.
Mosaic Streaming provides a [great interface for this out of the box](https://docs.mosaicml.com/projects/streaming/en/stable/#key-features), so I'd love to see it supported in HF Datasets.
### Your contribution
Happy to help test things / provide example data. Can potentially submit a PR if maintainers could point me to the necessary WebDataset logic / steps for adding a new streaming format!
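For readers following the pointer to the WebDataset module in the comment above, a rough, hypothetical skeleton of what an MDS builder could look like under the same `_split_generators` / `_generate_examples` pattern; the class name and `parse_mds_shard` are placeholders, not part of any existing `datasets` module:

```python
import datasets


class MosaicStreaming(datasets.GeneratorBasedBuilder):
    """Hypothetical MDS packaged module, mirroring the WebDataset one."""

    def _info(self):
        # In practice, features would be inferred from the shard metadata.
        return datasets.DatasetInfo()

    def _split_generators(self, dl_manager):
        # self.config.data_files maps split names to lists of .mds shards;
        # dl_manager.download is lazy when streaming.
        return [
            datasets.SplitGenerator(
                name=split, gen_kwargs={"shards": dl_manager.download(files)}
            )
            for split, files in self.config.data_files.items()
        ]

    def _generate_examples(self, shards):
        key = 0
        for shard in shards:
            # parse_mds_shard is a placeholder for real MDS decoding logic.
            for sample in parse_mds_shard(shard):
                yield key, sample
                key += 1
```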
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6736/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6736/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5210
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5210/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5210/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5210/events
|
https://github.com/huggingface/datasets/pull/5210
| 1,438,492,507
|
PR_kwDODunzps5CVUzx
| 5,210
|
Tweak readme
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Nit: We should also update the `Disclaimers` section to let the dataset owners know they should use Hub discussions rather than GH issues for removal requests/updates",
"Updated the disclaimers section, thanks !\r\n\r\nDoes it sound good to you @albertvillanova ?"
] | 2022-11-07T14:51:23Z
| 2022-11-24T11:35:07Z
| 2022-11-24T11:26:16Z
|
MEMBER
| null | null | null |
Tweaked some paragraphs mentioning the modalities we support + added a paragraph on security
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5210/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5210/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5210.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5210",
"merged_at": "2022-11-24T11:26:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5210.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5210"
}
|