## Schema

One row per issue or pull request in the `huggingface/datasets` repository. Field types and observed value ranges:

| field | type | lengths / values |
|---|---|---|
| url | string | length 61 |
| repository_url | string | 1 value |
| labels_url | string | length 75 |
| comments_url | string | length 70 |
| events_url | string | length 68 |
| html_url | string | length 49 to 51 |
| id | int64 | 1.18B to 2.34B |
| node_id | string | length 18 to 19 |
| number | int64 | 3.98k to 6.96k |
| title | string | length 1 to 290 |
| user | dict | |
| labels | list | 0 to 4 items |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | 0 to 3 items |
| milestone | dict | |
| comments | sequence | 0 to 30 items |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | string | 4 values |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | length 1 to 33.9k |
| reactions | dict | |
| timeline_url | string | length 70 |
| performed_via_github_app | null | |
| state_reason | string | 3 values |
| is_pull_request | bool | 2 classes |
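For orientation, here is a minimal sketch of loading and filtering rows with this schema using the `datasets` library; the repository id below is a hypothetical placeholder for wherever this dump is hosted:

```python
from datasets import load_dataset

# Hypothetical repo id for this GitHub-issues dump; substitute the real dataset name.
issues = load_dataset("my-org/hf-datasets-github-issues", split="train")

# `state`, `is_pull_request`, and `comments` are fields from the schema above.
open_prs = issues.filter(lambda row: row["is_pull_request"] and row["state"] == "open")
print(len(open_prs), "open pull requests")
print(open_prs[0]["title"], "with", len(open_prs[0]["comments"]), "comments")
```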
## #6964: Fix resuming arrow format

url: https://api.github.com/repos/huggingface/datasets/issues/6964
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/6964/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/6964/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/6964/events
html_url: https://github.com/huggingface/datasets/pull/6964
id: 2,344,973,229
node_id: PR_kwDODunzps5yCNGa
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6964). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
created_at: "2024-06-10T22:40:33"
updated_at: "2024-06-11T11:54:19"
closed_at: null
author_association: MEMBER
active_lock_reason: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6964", "html_url": "https://github.com/huggingface/datasets/pull/6964", "diff_url": "https://github.com/huggingface/datasets/pull/6964.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6964.patch", "merged_at": null }
body: following https://github.com/huggingface/datasets/pull/6658
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6964/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6964/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
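The PR above follows #6658, which introduced checkpointing for iterable datasets via `state_dict()` / `load_state_dict()`, and fixes resuming when the dataset is iterated in arrow format. A minimal sketch of the resuming pattern, with toy data and assuming a `datasets` version that ships iterable-dataset checkpointing:

```python
from datasets import Dataset

# Toy iterable dataset; .with_format("arrow") makes iteration yield pyarrow tables,
# which is the case this PR fixes.
ds = Dataset.from_dict({"n": list(range(8))}).to_iterable_dataset(num_shards=2)
ds = ds.with_format("arrow")

state = None
for idx, batch in enumerate(ds):
    if idx == 1:
        state = ds.state_dict()  # checkpoint mid-iteration
        break

ds.load_state_dict(state)  # later: restore the checkpoint...
for batch in ds:
    pass                   # ...and iteration resumes where it left off
```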
## #6963: [Streaming] retry on requests errors

url: https://api.github.com/repos/huggingface/datasets/issues/6963
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/6963/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/6963/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/6963/events
html_url: https://github.com/huggingface/datasets/pull/6963
id: 2,344,269,477
node_id: PR_kwDODunzps5x_yu-
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6963). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
created_at: "2024-06-10T15:51:56"
updated_at: "2024-06-11T07:37:21"
closed_at: null
author_association: MEMBER
active_lock_reason: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6963", "html_url": "https://github.com/huggingface/datasets/pull/6963", "diff_url": "https://github.com/huggingface/datasets/pull/6963.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6963.patch", "merged_at": null }
body:
Reported in https://discuss.huggingface.co/t/speeding-up-streaming-of-large-datasets-fineweb/90714/6 when training with a streaming dataloader.

cc @Wauplin: it looks like the retries from `hfh` are not always enough. In this PR I let `datasets` do additional retries (which users can configure in `datasets.config`), since I couldn't find an easy way to increase the max retries for `hfh` users in general.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6963/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6963/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
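The knobs the PR body refers to live in `datasets.config`. A minimal sketch of raising them before streaming, assuming a recent `datasets` version where the module-level settings `STREAMING_READ_MAX_RETRIES` and `STREAMING_READ_RETRY_INTERVAL` exist (exact names and the set of errors they cover depend on the version; this PR is what extends the retries to `requests` errors):

```python
import datasets
from datasets import load_dataset

# More aggressive retries for flaky connections while streaming.
datasets.config.STREAMING_READ_MAX_RETRIES = 50     # number of retry attempts
datasets.config.STREAMING_READ_RETRY_INTERVAL = 10  # seconds between attempts

# FineWeb is the dataset from the forum thread linked in the PR body.
ds = load_dataset("HuggingFaceFW/fineweb", streaming=True, split="train")
for example in ds.take(3):
    print(example["url"])
```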
## #6962: fix(ci): remove unnecessary permissions

url: https://api.github.com/repos/huggingface/datasets/issues/6962
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/6962/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/6962/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/6962/events
html_url: https://github.com/huggingface/datasets/pull/6962
id: 2,343,394,378
node_id: PR_kwDODunzps5x8yHt
{ "login": "McPatate", "id": 9112841, "node_id": "MDQ6VXNlcjkxMTI4NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/9112841?v=4", "gravatar_id": "", "url": "https://api.github.com/users/McPatate", "html_url": "https://github.com/McPatate", "followers_url": "https://api.github.com/users/McPatate/followers", "following_url": "https://api.github.com/users/McPatate/following{/other_user}", "gists_url": "https://api.github.com/users/McPatate/gists{/gist_id}", "starred_url": "https://api.github.com/users/McPatate/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/McPatate/subscriptions", "organizations_url": "https://api.github.com/users/McPatate/orgs", "repos_url": "https://api.github.com/users/McPatate/repos", "events_url": "https://api.github.com/users/McPatate/events{/privacy}", "received_events_url": "https://api.github.com/users/McPatate/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6962). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005520 / 0.011353 (-0.005833) | 0.003989 / 0.011008 (-0.007019) | 0.064786 / 0.038508 (0.026278) | 0.031075 / 0.023109 (0.007966) | 0.241619 / 0.275898 (-0.034279) | 0.275341 / 0.323480 (-0.048139) | 0.003139 / 0.007986 (-0.004847) | 0.002820 / 0.004328 (-0.001508) | 0.049766 / 0.004250 (0.045515) | 0.045047 / 0.037052 (0.007995) | 0.251906 / 0.258489 (-0.006583) | 0.285889 / 0.293841 (-0.007952) | 0.028297 / 0.128546 (-0.100249) | 0.010683 / 0.075646 (-0.064963) | 0.206467 / 0.419271 (-0.212805) | 0.036267 / 0.043533 (-0.007266) | 0.250720 / 0.255139 (-0.004419) | 0.268565 / 0.283200 (-0.014635) | 0.020394 / 0.141683 (-0.121289) | 1.114283 / 1.452155 (-0.337872) | 1.163884 / 1.492716 (-0.328833) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.112698 / 0.018006 (0.094692) | 0.302740 / 0.000490 (0.302251) | 0.000209 / 0.000200 (0.000009) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019337 / 0.037411 (-0.018075) | 0.062854 / 0.014526 (0.048328) | 0.077088 / 0.176557 (-0.099468) | 0.120926 / 0.737135 (-0.616209) | 0.075594 / 0.296338 (-0.220744) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290787 / 0.215209 (0.075578) | 2.867894 / 2.077655 (0.790239) | 1.490043 / 1.504120 (-0.014076) | 1.356383 / 1.541195 (-0.184812) | 1.400229 / 1.468490 (-0.068261) | 0.582076 / 4.584777 (-4.002701) | 2.398270 / 3.745712 (-1.347442) | 2.856459 / 5.269862 (-2.413403) | 1.815545 / 4.565676 (-2.750131) | 0.063259 / 0.424275 (-0.361016) | 0.005056 / 0.007607 (-0.002551) | 0.347699 / 0.226044 (0.121655) | 3.466511 / 2.268929 (1.197582) | 1.862096 / 55.444624 (-53.582528) | 1.532324 / 6.876477 (-5.344152) | 1.599411 / 2.142072 (-0.542661) | 0.657350 / 4.805227 (-4.147878) | 0.118981 / 6.500664 (-6.381683) | 0.042224 / 0.075469 (-0.033245) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965649 / 1.841788 (-0.876139) | 11.896501 / 8.074308 (3.822193) | 9.873923 / 10.191392 (-0.317469) | 0.141165 / 0.680424 (-0.539258) | 0.013885 / 0.534201 (-0.520316) | 0.291464 / 0.579283 (-0.287819) | 0.273153 / 0.434364 (-0.161211) | 0.324395 / 0.540337 (-0.215942) | 0.422040 / 1.386936 (-0.964897) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005640 / 0.011353 (-0.005713) | 0.004035 / 0.011008 (-0.006973) | 0.050831 / 0.038508 (0.012323) | 0.032841 / 0.023109 (0.009732) | 0.272226 / 0.275898 (-0.003672) | 0.297880 / 0.323480 (-0.025599) | 0.004397 / 0.007986 (-0.003588) | 0.002762 / 0.004328 (-0.001566) | 0.049887 / 0.004250 (0.045637) | 0.040372 / 0.037052 (0.003320) | 0.286337 / 0.258489 (0.027848) | 0.320015 / 0.293841 (0.026174) | 0.029992 / 0.128546 (-0.098554) | 0.010781 / 0.075646 (-0.064865) | 0.059391 / 0.419271 (-0.359880) | 0.034410 / 0.043533 (-0.009123) | 0.273024 / 0.255139 (0.017885) | 0.288953 / 0.283200 (0.005754) | 0.018072 / 0.141683 (-0.123611) | 1.125742 / 1.452155 (-0.326413) | 1.175233 / 1.492716 (-0.317483) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093470 / 0.018006 (0.075463) | 0.313248 / 0.000490 (0.312758) | 0.000324 / 0.000200 (0.000124) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023529 / 0.037411 (-0.013882) | 0.077305 / 0.014526 (0.062779) | 0.088916 / 0.176557 (-0.087640) | 0.128792 / 0.737135 (-0.608344) | 0.090141 / 0.296338 (-0.206197) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291110 / 0.215209 (0.075901) | 2.848118 / 2.077655 (0.770464) | 1.581664 / 1.504120 (0.077544) | 1.446390 / 1.541195 (-0.094804) | 1.452594 / 1.468490 (-0.015896) | 0.571213 / 4.584777 (-4.013564) | 0.976382 / 3.745712 (-2.769330) | 2.756192 / 5.269862 (-2.513670) | 1.770274 / 4.565676 (-2.795403) | 0.064513 / 0.424275 (-0.359763) | 0.005334 / 0.007607 (-0.002273) | 0.347380 / 0.226044 (0.121335) | 3.424800 / 2.268929 (1.155871) | 1.942374 / 55.444624 (-53.502250) | 1.636069 / 6.876477 (-5.240407) | 1.795327 / 2.142072 (-0.346745) | 0.658942 / 4.805227 (-4.146285) | 0.119542 / 6.500664 (-6.381123) | 0.041826 / 0.075469 (-0.033643) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007230 / 1.841788 (-0.834558) | 12.293084 / 8.074308 (4.218776) | 10.618104 / 10.191392 (0.426712) | 0.133691 / 0.680424 (-0.546733) | 0.015725 / 0.534201 (-0.518476) | 0.288860 / 0.579283 (-0.290423) | 0.130546 / 0.434364 (-0.303818) | 0.327279 / 0.540337 (-0.213059) | 0.428768 / 1.386936 (-0.958168) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#af3acfdfcf76bb980dbac871540e30c2cade0cf9 \"CML watermark\")\n" ]
"2024-06-10T09:28:02"
"2024-06-11T08:31:52"
"2024-06-11T08:25:47"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6962", "html_url": "https://github.com/huggingface/datasets/pull/6962", "diff_url": "https://github.com/huggingface/datasets/pull/6962.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6962.patch", "merged_at": "2024-06-11T08:25:47" }
body:
### What does this PR do?
Remove unnecessary permissions granted to the actions workflow. Sorry for the mishap.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6962/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
## #6961: Manual downloads should count as downloads

url: https://api.github.com/repos/huggingface/datasets/issues/6961
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/6961/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/6961/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/6961/events
html_url: https://github.com/huggingface/datasets/issues/6961
id: 2,342,022,418
node_id: I_kwDODunzps6LmG0S
{ "login": "umarbutler", "id": 8473183, "node_id": "MDQ6VXNlcjg0NzMxODM=", "avatar_url": "https://avatars.githubusercontent.com/u/8473183?v=4", "gravatar_id": "", "url": "https://api.github.com/users/umarbutler", "html_url": "https://github.com/umarbutler", "followers_url": "https://api.github.com/users/umarbutler/followers", "following_url": "https://api.github.com/users/umarbutler/following{/other_user}", "gists_url": "https://api.github.com/users/umarbutler/gists{/gist_id}", "starred_url": "https://api.github.com/users/umarbutler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/umarbutler/subscriptions", "organizations_url": "https://api.github.com/users/umarbutler/orgs", "repos_url": "https://api.github.com/users/umarbutler/repos", "events_url": "https://api.github.com/users/umarbutler/events{/privacy}", "received_events_url": "https://api.github.com/users/umarbutler/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: "2024-06-09T04:52:06"
updated_at: "2024-06-09T04:52:06"
closed_at: null
author_association: NONE
active_lock_reason: null
draft: null
pull_request: null
body:
### Feature request
I would like to request that manual downloads of data files from Hugging Face dataset repositories count as downloads of a dataset. According to the documentation for the Hugging Face Hub, that is currently not the case: https://huggingface.co/docs/hub/en/datasets-download-stats

### Motivation
This would ensure that downloads are accurately reported to end users.

### Your contribution
N/A
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6961/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6961/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false
## #6960: feat(ci): add trufflehog secrets detection

url: https://api.github.com/repos/huggingface/datasets/issues/6960
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/6960/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/6960/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/6960/events
html_url: https://github.com/huggingface/datasets/pull/6960
id: 2,340,791,685
node_id: PR_kwDODunzps5x0R3T
{ "login": "McPatate", "id": 9112841, "node_id": "MDQ6VXNlcjkxMTI4NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/9112841?v=4", "gravatar_id": "", "url": "https://api.github.com/users/McPatate", "html_url": "https://github.com/McPatate", "followers_url": "https://api.github.com/users/McPatate/followers", "following_url": "https://api.github.com/users/McPatate/following{/other_user}", "gists_url": "https://api.github.com/users/McPatate/gists{/gist_id}", "starred_url": "https://api.github.com/users/McPatate/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/McPatate/subscriptions", "organizations_url": "https://api.github.com/users/McPatate/orgs", "repos_url": "https://api.github.com/users/McPatate/repos", "events_url": "https://api.github.com/users/McPatate/events{/privacy}", "received_events_url": "https://api.github.com/users/McPatate/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6960). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Yes!", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005007 / 0.011353 (-0.006346) | 0.003603 / 0.011008 (-0.007405) | 0.062719 / 0.038508 (0.024211) | 0.029327 / 0.023109 (0.006217) | 0.250360 / 0.275898 (-0.025538) | 0.265095 / 0.323480 (-0.058385) | 0.004205 / 0.007986 (-0.003781) | 0.002713 / 0.004328 (-0.001616) | 0.049209 / 0.004250 (0.044958) | 0.045162 / 0.037052 (0.008110) | 0.260439 / 0.258489 (0.001950) | 0.287778 / 0.293841 (-0.006063) | 0.027458 / 0.128546 (-0.101088) | 0.010169 / 0.075646 (-0.065477) | 0.199487 / 0.419271 (-0.219784) | 0.036584 / 0.043533 (-0.006949) | 0.254523 / 0.255139 (-0.000616) | 0.269902 / 0.283200 (-0.013298) | 0.017138 / 0.141683 (-0.124545) | 1.099285 / 1.452155 (-0.352869) | 1.150878 / 1.492716 (-0.341839) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092868 / 0.018006 (0.074862) | 0.300421 / 0.000490 (0.299932) | 0.000213 / 0.000200 (0.000013) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018810 / 0.037411 (-0.018601) | 0.062341 / 0.014526 (0.047815) | 0.074779 / 0.176557 (-0.101777) | 0.120641 / 0.737135 (-0.616494) | 0.075020 / 0.296338 (-0.221318) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277782 / 0.215209 (0.062573) | 2.716427 / 2.077655 (0.638772) | 1.434204 / 1.504120 (-0.069916) | 1.335990 / 1.541195 (-0.205205) | 1.336636 / 1.468490 (-0.131854) | 0.557562 / 4.584777 (-4.027215) | 2.323517 / 3.745712 (-1.422196) | 2.647937 / 5.269862 (-2.621925) | 1.728735 / 4.565676 (-2.836941) | 0.061888 / 0.424275 (-0.362387) | 0.004981 / 0.007607 (-0.002627) | 0.329429 / 0.226044 (0.103385) | 3.324708 / 2.268929 (1.055779) | 1.832641 / 55.444624 (-53.611983) | 1.514386 / 6.876477 (-5.362091) | 1.656912 / 2.142072 (-0.485160) | 0.630706 / 4.805227 (-4.174521) | 0.116250 / 6.500664 (-6.384414) | 0.042598 / 0.075469 (-0.032871) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969217 / 1.841788 (-0.872570) | 11.232580 / 8.074308 (3.158272) | 9.541306 / 10.191392 (-0.650086) | 0.139544 / 0.680424 (-0.540880) | 0.014441 / 0.534201 (-0.519760) | 0.285834 / 0.579283 (-0.293449) | 0.261950 / 0.434364 (-0.172414) | 0.325449 / 0.540337 (-0.214889) | 0.415501 / 1.386936 (-0.971435) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005422 / 0.011353 (-0.005931) | 0.003528 / 0.011008 (-0.007480) | 0.049582 / 0.038508 (0.011074) | 0.032683 / 0.023109 (0.009574) | 0.277309 / 0.275898 (0.001411) | 0.298598 / 0.323480 (-0.024882) | 0.004325 / 0.007986 (-0.003661) | 0.002741 / 0.004328 (-0.001588) | 0.047933 / 0.004250 (0.043683) | 0.040778 / 0.037052 (0.003726) | 0.287492 / 0.258489 (0.029003) | 0.311408 / 0.293841 (0.017567) | 0.029482 / 0.128546 (-0.099064) | 0.010630 / 0.075646 (-0.065016) | 0.057745 / 0.419271 (-0.361526) | 0.033501 / 0.043533 (-0.010031) | 0.279880 / 0.255139 (0.024741) | 0.297421 / 0.283200 (0.014221) | 0.017907 / 0.141683 (-0.123776) | 1.152221 / 1.452155 (-0.299934) | 1.189332 / 1.492716 (-0.303385) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.094464 / 0.018006 (0.076457) | 0.300769 / 0.000490 (0.300279) | 0.000196 / 0.000200 (-0.000004) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022232 / 0.037411 (-0.015179) | 0.076626 / 0.014526 (0.062100) | 0.087807 / 0.176557 (-0.088750) | 0.128847 / 0.737135 (-0.608288) | 0.092135 / 0.296338 (-0.204203) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299013 / 0.215209 (0.083804) | 2.929788 / 2.077655 (0.852133) | 1.614185 / 1.504120 (0.110065) | 1.486720 / 1.541195 (-0.054475) | 1.492473 / 1.468490 (0.023983) | 0.563699 / 4.584777 (-4.021078) | 0.928820 / 3.745712 (-2.816892) | 2.597271 / 5.269862 (-2.672590) | 1.716534 / 4.565676 (-2.849142) | 0.062568 / 0.424275 (-0.361707) | 0.005168 / 0.007607 (-0.002439) | 0.353781 / 0.226044 (0.127737) | 3.493732 / 2.268929 (1.224803) | 2.018343 / 55.444624 (-53.426282) | 1.694516 / 6.876477 (-5.181961) | 1.796950 / 2.142072 (-0.345123) | 0.634846 / 4.805227 (-4.170382) | 0.115230 / 6.500664 (-6.385434) | 0.040816 / 0.075469 (-0.034654) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.986212 / 1.841788 (-0.855575) | 11.954392 / 8.074308 (3.880084) | 10.299670 / 10.191392 (0.108278) | 0.128358 / 0.680424 (-0.552066) | 0.016313 / 0.534201 (-0.517888) | 0.289621 / 0.579283 (-0.289662) | 0.124708 / 0.434364 (-0.309656) | 0.325269 / 0.540337 (-0.215068) | 0.415133 / 1.386936 (-0.971803) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#97513be330114a8aa07e5199ec252ac662aeb76d \"CML watermark\")\n" ]
"2024-06-07T16:18:23"
"2024-06-08T14:58:27"
"2024-06-08T14:52:18"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6960", "html_url": "https://github.com/huggingface/datasets/pull/6960", "diff_url": "https://github.com/huggingface/datasets/pull/6960.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6960.patch", "merged_at": "2024-06-08T14:52:18" }
body:
### What does this PR do?
Adding a GH action to scan for leaked secrets on each commit.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6960/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6960/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
## #6959: Better error handling in `dataset_module_factory`

url: https://api.github.com/repos/huggingface/datasets/issues/6959
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/6959/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/6959/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/6959/events
html_url: https://github.com/huggingface/datasets/pull/6959
id: 2,340,229,908
node_id: PR_kwDODunzps5xyVt6
{ "login": "Wauplin", "id": 11801849, "node_id": "MDQ6VXNlcjExODAxODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Wauplin", "html_url": "https://github.com/Wauplin", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "repos_url": "https://api.github.com/users/Wauplin/repos", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6959). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Test should be fixed by https://github.com/huggingface/datasets/pull/6959/commits/ef8f7cee79ffb070d9b5190f21128fc523b3d3ee (tested locally). Let's see what CI says :crossed_fingers: ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005678 / 0.011353 (-0.005675) | 0.004119 / 0.011008 (-0.006889) | 0.063901 / 0.038508 (0.025393) | 0.032071 / 0.023109 (0.008961) | 0.243182 / 0.275898 (-0.032716) | 0.280709 / 0.323480 (-0.042770) | 0.004195 / 0.007986 (-0.003791) | 0.002810 / 0.004328 (-0.001518) | 0.048722 / 0.004250 (0.044472) | 0.049381 / 0.037052 (0.012328) | 0.257816 / 0.258489 (-0.000673) | 0.288460 / 0.293841 (-0.005381) | 0.028518 / 0.128546 (-0.100029) | 0.010775 / 0.075646 (-0.064871) | 0.203149 / 0.419271 (-0.216122) | 0.038792 / 0.043533 (-0.004741) | 0.248502 / 0.255139 (-0.006637) | 0.268251 / 0.283200 (-0.014949) | 0.019536 / 0.141683 (-0.122147) | 1.133935 / 1.452155 (-0.318220) | 1.182855 / 1.492716 (-0.309862) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097531 / 0.018006 (0.079525) | 0.303612 / 0.000490 (0.303122) | 0.000222 / 0.000200 (0.000022) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019670 / 0.037411 (-0.017741) | 0.063439 / 0.014526 (0.048913) | 0.075119 / 0.176557 (-0.101438) | 0.122419 / 0.737135 (-0.614717) | 0.076965 / 0.296338 (-0.219374) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | 
shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286780 / 0.215209 (0.071571) | 2.811860 / 2.077655 (0.734206) | 1.485165 / 1.504120 (-0.018954) | 1.373296 / 1.541195 (-0.167898) | 1.412700 / 1.468490 (-0.055790) | 0.566442 / 4.584777 (-4.018335) | 2.382616 / 3.745712 (-1.363096) | 2.677214 / 5.269862 (-2.592647) | 1.760073 / 4.565676 (-2.805603) | 0.062673 / 0.424275 (-0.361602) | 0.005050 / 0.007607 (-0.002557) | 0.341701 / 0.226044 (0.115657) | 3.321182 / 2.268929 (1.052253) | 1.811715 / 55.444624 (-53.632909) | 1.554986 / 6.876477 (-5.321491) | 1.727448 / 2.142072 (-0.414624) | 0.642193 / 4.805227 (-4.163034) | 0.117878 / 6.500664 (-6.382786) | 0.042814 / 0.075469 (-0.032655) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985894 / 1.841788 (-0.855894) | 12.195975 / 8.074308 (4.121667) | 9.890180 / 10.191392 (-0.301212) | 0.142638 / 0.680424 (-0.537786) | 0.015207 / 0.534201 (-0.518994) | 0.283140 / 0.579283 (-0.296143) | 0.266016 / 0.434364 (-0.168348) | 0.325518 / 0.540337 (-0.214820) | 0.418994 / 1.386936 (-0.967942) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005978 / 0.011353 (-0.005374) | 0.003915 / 0.011008 (-0.007093) | 0.051592 / 0.038508 (0.013084) | 0.033338 / 0.023109 (0.010229) | 0.267925 / 0.275898 (-0.007973) | 0.296011 / 0.323480 (-0.027469) | 0.004503 / 0.007986 (-0.003483) | 0.002854 / 0.004328 (-0.001475) | 0.049958 / 0.004250 (0.045707) | 0.041708 / 0.037052 (0.004656) | 0.287185 / 0.258489 (0.028696) | 0.322715 / 0.293841 (0.028874) | 0.030088 / 0.128546 (-0.098458) | 0.010709 / 0.075646 (-0.064938) | 0.059736 / 0.419271 (-0.359536) | 0.034294 / 0.043533 (-0.009239) | 0.264316 / 0.255139 (0.009177) | 0.285471 / 0.283200 (0.002272) | 0.019197 / 0.141683 (-0.122486) | 1.135571 / 1.452155 (-0.316583) | 1.190019 / 1.492716 (-0.302698) |\n\n### Benchmark: 
benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099251 / 0.018006 (0.081245) | 0.305357 / 0.000490 (0.304867) | 0.000215 / 0.000200 (0.000015) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023206 / 0.037411 (-0.014205) | 0.077835 / 0.014526 (0.063310) | 0.090242 / 0.176557 (-0.086315) | 0.131208 / 0.737135 (-0.605928) | 0.091726 / 0.296338 (-0.204612) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292487 / 0.215209 (0.077278) | 2.837044 / 2.077655 (0.759389) | 1.553155 / 1.504120 (0.049035) | 1.433645 / 1.541195 (-0.107550) | 1.476702 / 1.468490 (0.008212) | 0.561926 / 4.584777 (-4.022851) | 0.954630 / 3.745712 (-2.791082) | 2.752286 / 5.269862 (-2.517575) | 1.782746 / 4.565676 (-2.782931) | 0.062984 / 0.424275 (-0.361291) | 0.005056 / 0.007607 (-0.002551) | 0.341700 / 0.226044 (0.115656) | 3.343726 / 2.268929 (1.074798) | 1.953390 / 55.444624 (-53.491234) | 1.616989 / 6.876477 (-5.259488) | 1.785104 / 2.142072 (-0.356969) | 0.643465 / 4.805227 (-4.161763) | 0.115905 / 6.500664 (-6.384759) | 0.041678 / 0.075469 (-0.033791) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.000237 / 1.841788 (-0.841550) | 12.633517 / 8.074308 (4.559208) | 10.553485 / 10.191392 (0.362092) | 0.143188 / 0.680424 (-0.537236) | 0.016020 / 0.534201 (-0.518181) | 0.286739 / 0.579283 (-0.292544) | 0.128488 / 0.434364 (-0.305876) | 0.321932 / 0.540337 (-0.218405) | 0.418635 / 1.386936 (-0.968301) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9510252f03fded02b8cc87ca6dfa3195d17594ba \"CML watermark\")\n" ]
"2024-06-07T11:24:15"
"2024-06-10T07:33:53"
"2024-06-10T07:27:43"
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6959", "html_url": "https://github.com/huggingface/datasets/pull/6959", "diff_url": "https://github.com/huggingface/datasets/pull/6959.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6959.patch", "merged_at": "2024-06-10T07:27:43" }
body:
cc @cakiki, who reported it on [slack](https://huggingface.slack.com/archives/C039P47V1L5/p1717754405578539) (private link).

This PR updates how errors are handled in `dataset_module_factory` when the `dataset_info` cannot be accessed:
1. Use multiple `except ... as e` clauses instead of `isinstance(e, ...)` checks.
2. Always raise `DatasetNotFoundError` with `from e` so that the initial error is explicitly logged in the stack trace.
3. Differentiate the `RepoNotFoundError` / `GatedRepoError` / `RevisionNotFoundError` cases.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6959/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6959/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
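A simplified sketch of the error-handling style described in the body above (not the actual `dataset_module_factory` code): one `except ... as e` clause per Hub error, always re-raised with `from e`. Note that in `huggingface_hub` the repo-not-found class is spelled `RepositoryNotFoundError`, and `GatedRepoError` must be caught first since it subclasses it:

```python
from huggingface_hub import HfApi
from huggingface_hub.utils import (
    GatedRepoError,
    RepositoryNotFoundError,
    RevisionNotFoundError,
)
from datasets.exceptions import DatasetNotFoundError

def check_dataset_on_hub(repo_id: str, revision: str = "main") -> None:
    # Differentiate the gated / missing-revision / missing-repo cases and keep
    # the original error visible in the stack trace via `from e`.
    try:
        HfApi().dataset_info(repo_id, revision=revision)
    except GatedRepoError as e:
        raise DatasetNotFoundError(f"Dataset '{repo_id}' is a gated dataset on the Hub.") from e
    except RevisionNotFoundError as e:
        raise DatasetNotFoundError(f"Revision '{revision}' doesn't exist for dataset '{repo_id}' on the Hub.") from e
    except RepositoryNotFoundError as e:
        raise DatasetNotFoundError(f"Dataset '{repo_id}' doesn't exist on the Hub or cannot be accessed.") from e
```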
## #6958: My Private Dataset doesn't exist on the Hub or cannot be accessed

url: https://api.github.com/repos/huggingface/datasets/issues/6958
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/6958/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/6958/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/6958/events
html_url: https://github.com/huggingface/datasets/issues/6958
id: 2,337,476,383
node_id: I_kwDODunzps6LUw8f
{ "login": "wangguan1995", "id": 39621324, "node_id": "MDQ6VXNlcjM5NjIxMzI0", "avatar_url": "https://avatars.githubusercontent.com/u/39621324?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wangguan1995", "html_url": "https://github.com/wangguan1995", "followers_url": "https://api.github.com/users/wangguan1995/followers", "following_url": "https://api.github.com/users/wangguan1995/following{/other_user}", "gists_url": "https://api.github.com/users/wangguan1995/gists{/gist_id}", "starred_url": "https://api.github.com/users/wangguan1995/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wangguan1995/subscriptions", "organizations_url": "https://api.github.com/users/wangguan1995/orgs", "repos_url": "https://api.github.com/users/wangguan1995/repos", "events_url": "https://api.github.com/users/wangguan1995/events{/privacy}", "received_events_url": "https://api.github.com/users/wangguan1995/received_events", "type": "User", "site_admin": false }
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments:
- "I can load public dataset, but for my private dataset it fails"
- "https://huggingface.co/docs/datasets/upload_dataset"
- "I have checked the API HTTP link. Repository Not Found for url: https://huggingface.co/api/datasets/xxx/xxx. ![image](https://github.com/huggingface/datasets/assets/39621324/4aceef59-0c65-4161-9665-676d25d73225) It just works fine."
- "It seems that everything is in a mess huh.... ![image](https://github.com/huggingface/datasets/assets/39621324/fb2fe12c-4f0a-4bf6-9656-63ba50347b10)"
- "https://huggingface.co/datasets/rajpurkar/squad/blob/main/squad.py fails again"
- "https://github.com/huggingface/datasets/blob/main/templates/new_dataset_script.py#L81 can not use this, too complex. I just need a def to load my file to a dict"
created_at: "2024-06-06T06:52:19"
updated_at: "2024-06-06T07:52:03"
closed_at: null
author_association: NONE
active_lock_reason: null
draft: null
pull_request: null
body:
### Describe the bug
```
File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 1852, in dataset_module_factory
    raise DatasetNotFoundError(msg + f" at revision '{revision}'" if revision else msg)
datasets.exceptions.DatasetNotFoundError: Dataset 'xxx' doesn't exist on the Hub or cannot be accessed

>>> dataset = load_dataset("xxxx", token=True)
404 error
404 Client Error. (Request ID: Root=xxxx)
Repository Not Found for url: https://huggingface.co/api/datasets/xxx/xxx. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 2593, in load_dataset
    builder_instance = load_dataset_builder(
  File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 2265, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 1910, in dataset_module_factory
    raise e1 from None
  File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 1852, in dataset_module_factory
    raise DatasetNotFoundError(msg + f" at revision '{revision}'" if revision else msg)
datasets.exceptions.DatasetNotFoundError: Dataset 'xxx' doesn't exist on the Hub or cannot be accessed
```
### Steps to reproduce the bug
123
### Expected behavior
123
### Environment info
123
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6958/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6958/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false
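For the usual causes of the error in this issue, the fix is to authenticate before loading; a minimal sketch (the repo id and token are placeholders):

```python
from huggingface_hub import login
from datasets import load_dataset

# Authenticate once; `datasets` then picks up the stored token automatically.
login(token="hf_...")  # placeholder token; or run `huggingface-cli login` in a terminal

# Placeholder repo id. An unauthenticated request to a private repo returns
# 404 "Repository Not Found", which surfaces as DatasetNotFoundError.
ds = load_dataset("your-username/your-private-dataset", split="train")
```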
## #6957: Fix typos in docs

url: https://api.github.com/repos/huggingface/datasets/issues/6957
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/6957/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/6957/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/6957/events
html_url: https://github.com/huggingface/datasets/pull/6957
id: 2,335,559,400
node_id: PR_kwDODunzps5xiTwJ
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6957). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005371 / 0.011353 (-0.005982) | 0.003834 / 0.011008 (-0.007174) | 0.063032 / 0.038508 (0.024524) | 0.031623 / 0.023109 (0.008514) | 0.250008 / 0.275898 (-0.025890) | 0.273998 / 0.323480 (-0.049482) | 0.004114 / 0.007986 (-0.003871) | 0.002821 / 0.004328 (-0.001508) | 0.049470 / 0.004250 (0.045220) | 0.046586 / 0.037052 (0.009534) | 0.276807 / 0.258489 (0.018318) | 0.288607 / 0.293841 (-0.005234) | 0.027427 / 0.128546 (-0.101119) | 0.010634 / 0.075646 (-0.065012) | 0.202451 / 0.419271 (-0.216821) | 0.036346 / 0.043533 (-0.007187) | 0.250426 / 0.255139 (-0.004713) | 0.274104 / 0.283200 (-0.009096) | 0.018461 / 0.141683 (-0.123222) | 1.120326 / 1.452155 (-0.331829) | 1.157635 / 1.492716 (-0.335081) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102287 / 0.018006 (0.084281) | 0.313145 / 0.000490 (0.312655) | 0.000255 / 0.000200 (0.000055) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019494 / 0.037411 (-0.017917) | 0.063252 / 0.014526 (0.048727) | 0.075318 / 0.176557 (-0.101239) | 0.122194 / 0.737135 (-0.614942) | 0.076837 / 0.296338 (-0.219501) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284098 / 0.215209 (0.068889) | 2.822301 / 2.077655 (0.744647) | 1.490185 / 1.504120 (-0.013935) | 1.366723 / 1.541195 (-0.174472) | 1.398832 / 1.468490 (-0.069658) | 0.563661 / 4.584777 (-4.021116) | 2.385129 / 3.745712 (-1.360583) | 2.689823 / 5.269862 (-2.580039) | 1.731271 / 4.565676 (-2.834405) | 0.063351 / 0.424275 (-0.360924) | 0.004974 / 0.007607 (-0.002633) | 0.332163 / 0.226044 (0.106119) | 3.314906 / 2.268929 (1.045977) | 1.811331 / 55.444624 (-53.633294) | 1.513357 / 6.876477 (-5.363120) | 1.718454 / 2.142072 (-0.423618) | 0.639663 / 4.805227 (-4.165564) | 0.120377 / 6.500664 (-6.380287) | 0.043254 / 0.075469 (-0.032215) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.978534 / 1.841788 (-0.863253) | 11.622313 / 8.074308 (3.548005) | 9.608732 / 10.191392 (-0.582660) | 0.131339 / 0.680424 (-0.549085) | 0.015226 / 0.534201 (-0.518975) | 0.287317 / 0.579283 (-0.291966) | 0.266647 / 0.434364 (-0.167717) | 0.324243 / 0.540337 (-0.216094) | 0.442025 / 1.386936 (-0.944911) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005673 / 0.011353 (-0.005680) | 0.003722 / 0.011008 (-0.007286) | 0.049483 / 0.038508 (0.010975) | 0.033308 / 0.023109 (0.010199) | 0.261912 / 0.275898 (-0.013986) | 0.291151 / 0.323480 (-0.032329) | 0.004389 / 0.007986 (-0.003596) | 0.002762 / 0.004328 (-0.001567) | 0.048970 / 0.004250 (0.044719) | 0.041509 / 0.037052 (0.004457) | 0.273288 / 0.258489 (0.014798) | 0.308351 / 0.293841 (0.014510) | 0.029958 / 0.128546 (-0.098589) | 0.010500 / 0.075646 (-0.065146) | 0.058253 / 0.419271 (-0.361019) | 0.033820 / 0.043533 (-0.009713) | 0.261089 / 0.255139 (0.005950) | 0.282179 / 0.283200 (-0.001021) | 0.018543 / 0.141683 (-0.123140) | 1.121303 / 1.452155 (-0.330852) | 1.166141 / 1.492716 (-0.326575) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.099209 / 0.018006 (0.081203) | 0.316920 / 0.000490 (0.316430) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023339 / 0.037411 (-0.014072) | 0.077127 / 0.014526 (0.062602) | 0.088160 / 0.176557 (-0.088396) | 0.129449 / 0.737135 (-0.607686) | 0.093159 / 0.296338 (-0.203180) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281262 / 0.215209 (0.066053) | 2.797504 / 2.077655 (0.719850) | 1.513354 / 1.504120 (0.009234) | 1.383034 / 1.541195 (-0.158161) | 1.395202 / 1.468490 (-0.073288) | 0.563180 / 4.584777 (-4.021597) | 0.979330 / 3.745712 (-2.766383) | 2.674008 / 5.269862 (-2.595853) | 1.762174 / 4.565676 (-2.803502) | 0.062333 / 0.424275 (-0.361942) | 0.004991 / 0.007607 (-0.002616) | 0.336043 / 0.226044 (0.109999) | 3.313500 / 2.268929 (1.044571) | 1.848083 / 55.444624 (-53.596541) | 1.554723 / 6.876477 (-5.321754) | 1.743485 / 2.142072 (-0.398587) | 0.657117 / 4.805227 (-4.148111) | 0.115736 / 6.500664 (-6.384928) | 0.040527 / 0.075469 (-0.034942) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005876 / 1.841788 (-0.835911) | 12.525895 / 8.074308 (4.451587) | 10.492961 / 10.191392 (0.301569) | 0.143443 / 0.680424 (-0.536981) | 0.016652 / 0.534201 (-0.517548) | 0.288236 / 0.579283 (-0.291047) | 0.131401 / 0.434364 (-0.302963) | 0.322885 / 0.540337 (-0.217452) | 0.416048 / 1.386936 (-0.970888) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6548e0e282aeeda7bfb18beafbc65ebecd780c63 \"CML watermark\")\n" ]
"2024-06-05T10:46:47"
"2024-06-05T13:01:07"
"2024-06-05T12:43:26"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6957", "html_url": "https://github.com/huggingface/datasets/pull/6957", "diff_url": "https://github.com/huggingface/datasets/pull/6957.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6957.patch", "merged_at": "2024-06-05T12:43:26" }
Fix typos in docs introduced by: - #6956 Typos: - `comparisions` => `comparisons` - two consecutive sentences both ending in colon - split one sentence into two Sorry, I did not have time to review that PR. CC: @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6957/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6957/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6956
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6956/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6956/comments
https://api.github.com/repos/huggingface/datasets/issues/6956/events
https://github.com/huggingface/datasets/pull/6956
2,333,940,021
PR_kwDODunzps5xcwXz
6,956
update docs on N-dim arrays
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6956). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005348 / 0.011353 (-0.006005) | 0.003785 / 0.011008 (-0.007223) | 0.061674 / 0.038508 (0.023166) | 0.032127 / 0.023109 (0.009017) | 0.247095 / 0.275898 (-0.028803) | 0.276466 / 0.323480 (-0.047014) | 0.004197 / 0.007986 (-0.003789) | 0.002734 / 0.004328 (-0.001594) | 0.049604 / 0.004250 (0.045354) | 0.048553 / 0.037052 (0.011500) | 0.253230 / 0.258489 (-0.005259) | 0.286954 / 0.293841 (-0.006887) | 0.028181 / 0.128546 (-0.100365) | 0.010602 / 0.075646 (-0.065044) | 0.200719 / 0.419271 (-0.218552) | 0.037278 / 0.043533 (-0.006254) | 0.251565 / 0.255139 (-0.003574) | 0.269026 / 0.283200 (-0.014174) | 0.017632 / 0.141683 (-0.124050) | 1.136216 / 1.452155 (-0.315939) | 1.181158 / 1.492716 (-0.311559) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004892 / 0.018006 (-0.013114) | 0.312921 / 0.000490 (0.312431) | 0.000247 / 0.000200 (0.000047) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019303 / 0.037411 (-0.018108) | 0.062699 / 0.014526 (0.048174) | 0.075227 / 0.176557 (-0.101329) | 0.122919 / 0.737135 (-0.614217) | 0.076506 / 0.296338 (-0.219833) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277299 / 0.215209 (0.062090) | 2.754771 / 2.077655 (0.677116) | 1.457164 / 1.504120 (-0.046956) | 1.318878 / 1.541195 (-0.222317) | 1.374245 / 1.468490 (-0.094245) | 0.566253 / 4.584777 (-4.018524) | 2.352589 / 3.745712 (-1.393123) | 2.764263 / 5.269862 (-2.505599) | 1.843141 / 4.565676 (-2.722535) | 0.063996 / 0.424275 (-0.360279) | 0.005045 / 0.007607 (-0.002562) | 0.336703 / 0.226044 (0.110658) | 3.342538 / 2.268929 (1.073609) | 1.836664 / 55.444624 (-53.607960) | 1.528901 / 6.876477 (-5.347576) | 1.769562 / 2.142072 (-0.372511) | 0.674192 / 4.805227 (-4.131035) | 0.122421 / 6.500664 (-6.378243) | 0.043714 / 0.075469 (-0.031756) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.989432 / 1.841788 (-0.852356) | 12.178341 / 8.074308 (4.104033) | 9.730838 / 10.191392 (-0.460554) | 0.146751 / 0.680424 (-0.533673) | 0.014720 / 0.534201 (-0.519481) | 0.285821 / 0.579283 (-0.293462) | 0.266474 / 0.434364 (-0.167889) | 0.327886 / 0.540337 (-0.212451) | 0.455672 / 1.386936 (-0.931264) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005691 / 0.011353 (-0.005662) | 0.004089 / 0.011008 (-0.006919) | 0.049878 / 0.038508 (0.011370) | 0.033578 / 0.023109 (0.010469) | 0.268295 / 0.275898 (-0.007603) | 0.288918 / 0.323480 (-0.034561) | 0.005092 / 0.007986 (-0.002894) | 0.002916 / 0.004328 (-0.001412) | 0.049489 / 0.004250 (0.045239) | 0.042495 / 0.037052 (0.005442) | 0.276253 / 0.258489 (0.017764) | 0.313321 / 0.293841 (0.019480) | 0.029386 / 0.128546 (-0.099160) | 0.010926 / 0.075646 (-0.064720) | 0.071747 / 0.419271 (-0.347525) | 0.033642 / 0.043533 (-0.009891) | 0.264950 / 0.255139 (0.009811) | 0.282962 / 0.283200 (-0.000238) | 0.018878 / 0.141683 (-0.122805) | 1.170685 / 1.452155 (-0.281470) | 1.198321 / 1.492716 (-0.294396) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.100422 / 0.018006 (0.082415) | 0.311750 / 0.000490 (0.311260) | 0.000235 / 0.000200 (0.000035) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023093 / 0.037411 (-0.014318) | 0.076934 / 0.014526 (0.062408) | 0.088959 / 0.176557 (-0.087598) | 0.129511 / 0.737135 (-0.607624) | 0.090151 / 0.296338 (-0.206187) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301646 / 0.215209 (0.086437) | 2.961780 / 2.077655 (0.884126) | 1.656051 / 1.504120 (0.151931) | 1.533154 / 1.541195 (-0.008041) | 1.585152 / 1.468490 (0.116662) | 0.582157 / 4.584777 (-4.002620) | 0.954881 / 3.745712 (-2.790831) | 2.813174 / 5.269862 (-2.456688) | 1.842840 / 4.565676 (-2.722837) | 0.065598 / 0.424275 (-0.358677) | 0.005306 / 0.007607 (-0.002301) | 0.359610 / 0.226044 (0.133565) | 3.575320 / 2.268929 (1.306391) | 2.015327 / 55.444624 (-53.429297) | 1.734086 / 6.876477 (-5.142391) | 1.919081 / 2.142072 (-0.222991) | 0.671178 / 4.805227 (-4.134049) | 0.120109 / 6.500664 (-6.380555) | 0.042353 / 0.075469 (-0.033116) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.011726 / 1.841788 (-0.830062) | 13.007806 / 8.074308 (4.933498) | 10.632486 / 10.191392 (0.441094) | 0.148535 / 0.680424 (-0.531889) | 0.015988 / 0.534201 (-0.518213) | 0.290023 / 0.579283 (-0.289260) | 0.130685 / 0.434364 (-0.303679) | 0.322912 / 0.540337 (-0.217425) | 0.420596 / 1.386936 (-0.966340) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#336512dcba4fdb4c349d5ecb632b6ced80e038d5 \"CML watermark\")\n" ]
"2024-06-04T16:32:19"
"2024-06-04T16:46:34"
"2024-06-04T16:40:27"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6956", "html_url": "https://github.com/huggingface/datasets/pull/6956", "diff_url": "https://github.com/huggingface/datasets/pull/6956.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6956.patch", "merged_at": "2024-06-04T16:40:27" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6956/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6956/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6955
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6955/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6955/comments
https://api.github.com/repos/huggingface/datasets/issues/6955/events
https://github.com/huggingface/datasets/pull/6955
2,333,802,815
PR_kwDODunzps5xcSYm
6,955
Fix small typo
{ "login": "marcenacp", "id": 17081356, "node_id": "MDQ6VXNlcjE3MDgxMzU2", "avatar_url": "https://avatars.githubusercontent.com/u/17081356?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marcenacp", "html_url": "https://github.com/marcenacp", "followers_url": "https://api.github.com/users/marcenacp/followers", "following_url": "https://api.github.com/users/marcenacp/following{/other_user}", "gists_url": "https://api.github.com/users/marcenacp/gists{/gist_id}", "starred_url": "https://api.github.com/users/marcenacp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marcenacp/subscriptions", "organizations_url": "https://api.github.com/users/marcenacp/orgs", "repos_url": "https://api.github.com/users/marcenacp/repos", "events_url": "https://api.github.com/users/marcenacp/events{/privacy}", "received_events_url": "https://api.github.com/users/marcenacp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005507 / 0.011353 (-0.005845) | 0.003757 / 0.011008 (-0.007251) | 0.063274 / 0.038508 (0.024766) | 0.029720 / 0.023109 (0.006610) | 0.247974 / 0.275898 (-0.027924) | 0.272283 / 0.323480 (-0.051197) | 0.004186 / 0.007986 (-0.003799) | 0.002820 / 0.004328 (-0.001508) | 0.049070 / 0.004250 (0.044820) | 0.050026 / 0.037052 (0.012973) | 0.256501 / 0.258489 (-0.001988) | 0.297082 / 0.293841 (0.003241) | 0.028549 / 0.128546 (-0.099997) | 0.010361 / 0.075646 (-0.065285) | 0.213202 / 0.419271 (-0.206070) | 0.038117 / 0.043533 (-0.005416) | 0.258878 / 0.255139 (0.003739) | 0.282980 / 0.283200 (-0.000220) | 0.018911 / 0.141683 (-0.122772) | 1.118857 / 1.452155 (-0.333298) | 1.157763 / 1.492716 (-0.334953) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004499 / 0.018006 (-0.013507) | 0.310445 / 0.000490 (0.309956) | 0.000218 / 0.000200 (0.000018) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019275 / 0.037411 (-0.018137) | 0.063257 / 0.014526 (0.048731) | 0.075833 / 0.176557 (-0.100724) | 0.122323 / 0.737135 (-0.614812) | 0.079046 / 0.296338 (-0.217292) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292811 / 0.215209 (0.077602) | 2.903501 / 2.077655 (0.825846) | 1.592434 / 1.504120 (0.088314) | 1.450833 / 1.541195 (-0.090362) | 1.481285 / 
1.468490 (0.012795) | 0.570150 / 4.584777 (-4.014627) | 2.388618 / 3.745712 (-1.357094) | 2.699322 / 5.269862 (-2.570540) | 1.781405 / 4.565676 (-2.784272) | 0.063451 / 0.424275 (-0.360824) | 0.004979 / 0.007607 (-0.002628) | 0.353346 / 0.226044 (0.127302) | 3.541217 / 2.268929 (1.272289) | 1.972335 / 55.444624 (-53.472289) | 1.634780 / 6.876477 (-5.241697) | 1.815944 / 2.142072 (-0.326128) | 0.651559 / 4.805227 (-4.153669) | 0.118398 / 6.500664 (-6.382266) | 0.041962 / 0.075469 (-0.033507) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971435 / 1.841788 (-0.870352) | 11.843740 / 8.074308 (3.769431) | 9.716333 / 10.191392 (-0.475059) | 0.145923 / 0.680424 (-0.534501) | 0.015073 / 0.534201 (-0.519128) | 0.293307 / 0.579283 (-0.285976) | 0.265505 / 0.434364 (-0.168859) | 0.327578 / 0.540337 (-0.212760) | 0.436409 / 1.386936 (-0.950527) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005647 / 0.011353 (-0.005706) | 0.003669 / 0.011008 (-0.007339) | 0.050234 / 0.038508 (0.011726) | 0.033033 / 0.023109 (0.009924) | 0.269303 / 0.275898 (-0.006595) | 0.282472 / 0.323480 (-0.041008) | 0.004283 / 0.007986 (-0.003703) | 0.002821 / 0.004328 (-0.001507) | 0.050887 / 0.004250 (0.046637) | 0.041618 / 0.037052 (0.004565) | 0.277628 / 0.258489 (0.019139) | 0.310539 / 0.293841 (0.016698) | 0.030036 / 0.128546 (-0.098511) | 0.010401 / 0.075646 (-0.065245) | 0.058845 / 0.419271 (-0.360427) | 0.033676 / 0.043533 (-0.009857) | 0.261148 / 0.255139 (0.006009) | 0.295232 / 0.283200 (0.012032) | 0.018603 / 0.141683 (-0.123080) | 1.132182 / 1.452155 (-0.319972) | 1.173763 / 1.492716 (-0.318953) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100594 / 0.018006 (0.082588) | 0.308101 / 0.000490 (0.307611) | 0.000217 / 0.000200 (0.000017) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023040 / 0.037411 (-0.014371) | 0.080676 / 0.014526 (0.066150) | 0.094687 / 0.176557 (-0.081870) | 0.129780 / 0.737135 (-0.607356) | 0.092241 / 0.296338 (-0.204097) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294799 / 0.215209 (0.079590) | 2.957570 / 2.077655 (0.879915) | 1.576795 / 1.504120 (0.072675) | 1.446869 / 1.541195 (-0.094326) | 1.463133 / 1.468490 (-0.005357) | 0.568511 / 4.584777 (-4.016266) | 1.011502 / 3.745712 (-2.734211) | 2.759571 / 5.269862 (-2.510291) | 1.771738 / 4.565676 (-2.793939) | 0.064104 / 0.424275 (-0.360171) | 0.005160 / 0.007607 (-0.002448) | 0.347554 / 0.226044 (0.121510) | 3.463905 / 2.268929 (1.194976) | 1.931843 / 55.444624 (-53.512781) | 1.622765 / 6.876477 (-5.253712) | 1.809146 / 2.142072 (-0.332926) | 0.653388 / 4.805227 (-4.151839) | 0.122703 / 6.500664 (-6.377961) | 0.041680 / 0.075469 (-0.033790) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.000428 / 1.841788 (-0.841359) | 12.503003 / 8.074308 (4.428695) | 10.434802 / 10.191392 (0.243410) | 0.144684 / 0.680424 (-0.535740) | 0.015988 / 0.534201 (-0.518213) | 0.287179 / 0.579283 (-0.292104) | 0.124811 / 0.434364 (-0.309553) | 0.327855 / 0.540337 (-0.212482) | 0.425144 / 1.386936 (-0.961792) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f7170067f819222153fcd45682db61279bdfe673 \"CML watermark\")\n" ]
"2024-06-04T15:19:02"
"2024-06-05T10:18:56"
"2024-06-04T15:20:55"
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6955", "html_url": "https://github.com/huggingface/datasets/pull/6955", "diff_url": "https://github.com/huggingface/datasets/pull/6955.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6955.patch", "merged_at": "2024-06-04T15:20:55" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6955/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6955/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6954
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6954/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6954/comments
https://api.github.com/repos/huggingface/datasets/issues/6954/events
https://github.com/huggingface/datasets/pull/6954
2,333,530,558
PR_kwDODunzps5xbWtU
6,954
Remove default `trust_remote_code=True`
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6954). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "yay! 🎉 ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004881 / 0.011353 (-0.006472) | 0.003246 / 0.011008 (-0.007762) | 0.062496 / 0.038508 (0.023988) | 0.030760 / 0.023109 (0.007651) | 0.241500 / 0.275898 (-0.034398) | 0.272073 / 0.323480 (-0.051407) | 0.004123 / 0.007986 (-0.003863) | 0.002796 / 0.004328 (-0.001533) | 0.049015 / 0.004250 (0.044764) | 0.047095 / 0.037052 (0.010043) | 0.257002 / 0.258489 (-0.001487) | 0.287602 / 0.293841 (-0.006239) | 0.027281 / 0.128546 (-0.101265) | 0.010132 / 0.075646 (-0.065514) | 0.203699 / 0.419271 (-0.215572) | 0.036553 / 0.043533 (-0.006980) | 0.246221 / 0.255139 (-0.008918) | 0.268137 / 0.283200 (-0.015062) | 0.017260 / 0.141683 (-0.124423) | 1.100677 / 1.452155 (-0.351478) | 1.148367 / 1.492716 (-0.344349) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102519 / 0.018006 (0.084513) | 0.301929 / 0.000490 (0.301439) | 0.000223 / 0.000200 (0.000023) | 0.000046 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018590 / 0.037411 (-0.018821) | 0.061615 / 0.014526 (0.047089) | 0.074579 / 0.176557 (-0.101978) | 0.121415 / 0.737135 (-0.615720) | 0.075696 / 0.296338 (-0.220642) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283842 / 0.215209 (0.068633) | 2.788321 / 2.077655 (0.710666) | 1.481376 / 1.504120 (-0.022743) | 1.356064 / 1.541195 (-0.185131) | 1.380592 / 1.468490 (-0.087898) | 0.575577 / 4.584777 (-4.009199) | 2.471858 / 3.745712 (-1.273854) | 2.760769 / 5.269862 (-2.509093) | 1.808638 / 4.565676 (-2.757038) | 0.064930 / 0.424275 (-0.359345) | 0.005056 / 0.007607 (-0.002551) | 0.337794 / 0.226044 (0.111750) | 3.359444 / 2.268929 (1.090515) | 1.829540 / 55.444624 (-53.615084) | 1.518660 / 6.876477 (-5.357817) | 1.671612 / 2.142072 (-0.470460) | 0.664286 / 4.805227 (-4.140941) | 0.119593 / 6.500664 (-6.381071) | 0.042519 / 0.075469 (-0.032950) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993152 / 1.841788 (-0.848636) | 11.733054 / 8.074308 (3.658746) | 9.746734 / 10.191392 (-0.444658) | 0.143026 / 0.680424 (-0.537398) | 0.014900 / 0.534201 (-0.519301) | 0.292243 / 0.579283 (-0.287040) | 0.261301 / 0.434364 (-0.173063) | 0.330838 / 0.540337 (-0.209500) | 0.523719 / 1.386936 (-0.863217) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005707 / 0.011353 (-0.005646) | 0.003523 / 0.011008 (-0.007485) | 0.052265 / 0.038508 (0.013757) | 0.034296 / 0.023109 (0.011187) | 0.266589 / 0.275898 (-0.009309) | 0.288441 / 0.323480 (-0.035039) | 0.004507 / 0.007986 (-0.003478) | 0.002745 / 0.004328 (-0.001583) | 0.049417 / 0.004250 (0.045167) | 0.042679 / 0.037052 (0.005627) | 0.278518 / 0.258489 (0.020029) | 0.328751 / 0.293841 (0.034911) | 0.029530 / 0.128546 (-0.099016) | 0.010373 / 0.075646 (-0.065274) | 0.058207 / 0.419271 (-0.361064) | 0.033434 / 0.043533 (-0.010099) | 0.267902 / 0.255139 (0.012763) | 0.288192 / 0.283200 (0.004993) | 0.018866 / 0.141683 (-0.122817) | 1.132734 / 1.452155 (-0.319421) | 1.172879 / 1.492716 (-0.319837) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.097787 / 0.018006 (0.079780) | 0.305509 / 0.000490 (0.305019) | 0.000268 / 0.000200 (0.000068) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023230 / 0.037411 (-0.014181) | 0.076637 / 0.014526 (0.062111) | 0.088386 / 0.176557 (-0.088171) | 0.131079 / 0.737135 (-0.606057) | 0.091142 / 0.296338 (-0.205197) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295586 / 0.215209 (0.080377) | 2.872090 / 2.077655 (0.794435) | 1.538152 / 1.504120 (0.034032) | 1.405695 / 1.541195 (-0.135500) | 1.421058 / 1.468490 (-0.047432) | 0.561179 / 4.584777 (-4.023598) | 0.943954 / 3.745712 (-2.801758) | 2.684381 / 5.269862 (-2.585481) | 1.757457 / 4.565676 (-2.808220) | 0.062903 / 0.424275 (-0.361372) | 0.004998 / 0.007607 (-0.002610) | 0.370290 / 0.226044 (0.144245) | 3.374988 / 2.268929 (1.106059) | 1.899282 / 55.444624 (-53.545342) | 1.598787 / 6.876477 (-5.277690) | 1.735371 / 2.142072 (-0.406702) | 0.647367 / 4.805227 (-4.157860) | 0.116975 / 6.500664 (-6.383689) | 0.040811 / 0.075469 (-0.034658) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996380 / 1.841788 (-0.845408) | 12.225657 / 8.074308 (4.151349) | 10.291221 / 10.191392 (0.099829) | 0.142791 / 0.680424 (-0.537633) | 0.016087 / 0.534201 (-0.518114) | 0.299978 / 0.579283 (-0.279305) | 0.149444 / 0.434364 (-0.284920) | 0.321354 / 0.540337 (-0.218984) | 0.414492 / 1.386936 (-0.972444) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a2dc287cbef5311cf1a32ad4e3685f4052db227c \"CML watermark\")\n" ]
"2024-06-04T13:22:56"
"2024-06-07T12:26:37"
"2024-06-07T12:20:29"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6954", "html_url": "https://github.com/huggingface/datasets/pull/6954", "diff_url": "https://github.com/huggingface/datasets/pull/6954.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6954.patch", "merged_at": "2024-06-07T12:20:29" }
TODO: - [x] fix tests
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6954/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6954/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6953
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6953/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6953/comments
https://api.github.com/repos/huggingface/datasets/issues/6953/events
https://github.com/huggingface/datasets/issues/6953
2,333,366,120
I_kwDODunzps6LFFdo
6,953
Remove canonical datasets from docs
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
null
[]
null
[]
"2024-06-04T12:09:03"
"2024-06-04T12:09:03"
null
MEMBER
null
null
null
Remove canonical datasets from docs, now that we no longer have canonical datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6953/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6953/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6952
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6952/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6952/comments
https://api.github.com/repos/huggingface/datasets/issues/6952/events
https://github.com/huggingface/datasets/pull/6952
2,333,320,411
PR_kwDODunzps5xaosH
6,952
Move info_utils errors to exceptions module
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6952). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005232 / 0.011353 (-0.006121) | 0.003744 / 0.011008 (-0.007264) | 0.064089 / 0.038508 (0.025581) | 0.032409 / 0.023109 (0.009300) | 0.255886 / 0.275898 (-0.020013) | 0.276033 / 0.323480 (-0.047447) | 0.004165 / 0.007986 (-0.003821) | 0.002741 / 0.004328 (-0.001588) | 0.052145 / 0.004250 (0.047894) | 0.043863 / 0.037052 (0.006811) | 0.258844 / 0.258489 (0.000355) | 0.290108 / 0.293841 (-0.003733) | 0.027390 / 0.128546 (-0.101156) | 0.010543 / 0.075646 (-0.065103) | 0.206936 / 0.419271 (-0.212335) | 0.036778 / 0.043533 (-0.006755) | 0.254331 / 0.255139 (-0.000808) | 0.279037 / 0.283200 (-0.004163) | 0.018564 / 0.141683 (-0.123119) | 1.112765 / 1.452155 (-0.339390) | 1.160099 / 1.492716 (-0.332617) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092148 / 0.018006 (0.074142) | 0.297156 / 0.000490 (0.296667) | 0.000211 / 0.000200 (0.000011) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018797 / 0.037411 (-0.018615) | 0.062992 / 0.014526 (0.048466) | 0.076361 / 0.176557 (-0.100195) | 0.121168 / 0.737135 (-0.615968) | 0.075845 / 0.296338 (-0.220494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293842 / 0.215209 (0.078633) | 2.880720 / 2.077655 (0.803065) | 1.477779 / 1.504120 (-0.026341) | 1.345136 / 1.541195 (-0.196059) | 1.352153 / 1.468490 (-0.116337) | 0.574722 / 4.584777 (-4.010055) | 2.373925 / 3.745712 (-1.371787) | 2.750704 / 5.269862 (-2.519157) | 1.725979 / 4.565676 (-2.839697) | 0.063006 / 0.424275 (-0.361269) | 0.005019 / 0.007607 (-0.002588) | 0.341228 / 0.226044 (0.115184) | 3.352576 / 2.268929 (1.083647) | 1.821363 / 55.444624 (-53.623261) | 1.529441 / 6.876477 (-5.347036) | 1.543401 / 2.142072 (-0.598671) | 0.634282 / 4.805227 (-4.170945) | 0.115565 / 6.500664 (-6.385099) | 0.042514 / 0.075469 (-0.032956) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.987532 / 1.841788 (-0.854255) | 11.483853 / 8.074308 (3.409545) | 9.565657 / 10.191392 (-0.625735) | 0.141247 / 0.680424 (-0.539176) | 0.015026 / 0.534201 (-0.519175) | 0.299905 / 0.579283 (-0.279378) | 0.267667 / 0.434364 (-0.166697) | 0.320661 / 0.540337 (-0.219676) | 0.427368 / 1.386936 (-0.959568) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005448 / 0.011353 (-0.005905) | 0.003726 / 0.011008 (-0.007283) | 0.049776 / 0.038508 (0.011268) | 0.032733 / 0.023109 (0.009624) | 0.261387 / 0.275898 (-0.014511) | 0.280087 / 0.323480 (-0.043393) | 0.004351 / 0.007986 (-0.003634) | 0.002842 / 0.004328 (-0.001487) | 0.049440 / 0.004250 (0.045190) | 0.039585 / 0.037052 (0.002533) | 0.266331 / 0.258489 (0.007842) | 0.299643 / 0.293841 (0.005802) | 0.029649 / 0.128546 (-0.098897) | 0.010381 / 0.075646 (-0.065265) | 0.058596 / 0.419271 (-0.360676) | 0.033271 / 0.043533 (-0.010262) | 0.251070 / 0.255139 (-0.004069) | 0.272850 / 0.283200 (-0.010349) | 0.016728 / 0.141683 (-0.124955) | 1.146952 / 1.452155 (-0.305202) | 1.182602 / 1.492716 (-0.310114) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row 
|\n|--------|---|---|---|---|\n| new / old (diff) | 0.091673 / 0.018006 (0.073667) | 0.297228 / 0.000490 (0.296738) | 0.000197 / 0.000200 (-0.000003) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023174 / 0.037411 (-0.014237) | 0.078866 / 0.014526 (0.064341) | 0.088436 / 0.176557 (-0.088121) | 0.129650 / 0.737135 (-0.607485) | 0.091100 / 0.296338 (-0.205238) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293882 / 0.215209 (0.078673) | 2.882667 / 2.077655 (0.805012) | 1.562949 / 1.504120 (0.058829) | 1.435104 / 1.541195 (-0.106090) | 1.450815 / 1.468490 (-0.017675) | 0.584090 / 4.584777 (-4.000687) | 0.984176 / 3.745712 (-2.761536) | 2.668740 / 5.269862 (-2.601121) | 1.766993 / 4.565676 (-2.798683) | 0.064710 / 0.424275 (-0.359565) | 0.005329 / 0.007607 (-0.002278) | 0.346008 / 0.226044 (0.119964) | 3.414576 / 2.268929 (1.145647) | 1.911388 / 55.444624 (-53.533236) | 1.660357 / 6.876477 (-5.216120) | 1.818628 / 2.142072 (-0.323444) | 0.659585 / 4.805227 (-4.145643) | 0.116980 / 6.500664 (-6.383684) | 0.041364 / 0.075469 (-0.034105) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005659 / 1.841788 (-0.836129) | 12.023761 / 8.074308 (3.949453) | 10.351086 / 10.191392 (0.159694) | 0.143261 / 0.680424 (-0.537162) | 0.016143 / 0.534201 (-0.518058) | 0.287793 / 0.579283 (-0.291490) | 0.123698 / 0.434364 (-0.310666) | 0.325241 / 0.540337 (-0.215097) | 0.418772 / 1.386936 (-0.968164) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#37a603679f451826cfafd8aae00738b01dcb9d58 \"CML watermark\")\n" ]
"2024-06-04T11:48:32"
"2024-06-10T14:09:59"
"2024-06-10T14:03:55"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6952", "html_url": "https://github.com/huggingface/datasets/pull/6952", "diff_url": "https://github.com/huggingface/datasets/pull/6952.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6952.patch", "merged_at": "2024-06-10T14:03:55" }
Move `info_utils` errors to `exceptions` module. Additionally rename some of them, deprecate the former ones, and make the deprecation backward compatible (by making the new errors inherit from the former ones).
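A rough sketch of the backward-compatible deprecation pattern described above (the class names below are hypothetical, not the actual ones moved in this PR): the renamed error inherits from the deprecated one, so existing `except` clauses written against the old name still catch the new error:

```python
class OldInfoError(Exception):
    """Hypothetical former error name, kept so old `except` clauses still work."""


class NewInfoError(OldInfoError):
    """Hypothetical new error name; subclasses the former one for backward compatibility."""


try:
    raise NewInfoError("dataset info mismatch")
except OldInfoError:
    pass  # reached: code catching the old name also catches the new error
```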
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6952/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6952/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6951
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6951/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6951/comments
https://api.github.com/repos/huggingface/datasets/issues/6951/events
https://github.com/huggingface/datasets/issues/6951
2,333,231,042
I_kwDODunzps6LEkfC
6,951
load_dataset() should load all subsets if no specific subset is specified
{ "login": "windmaple", "id": 5577741, "node_id": "MDQ6VXNlcjU1Nzc3NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/5577741?v=4", "gravatar_id": "", "url": "https://api.github.com/users/windmaple", "html_url": "https://github.com/windmaple", "followers_url": "https://api.github.com/users/windmaple/followers", "following_url": "https://api.github.com/users/windmaple/following{/other_user}", "gists_url": "https://api.github.com/users/windmaple/gists{/gist_id}", "starred_url": "https://api.github.com/users/windmaple/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/windmaple/subscriptions", "organizations_url": "https://api.github.com/users/windmaple/orgs", "repos_url": "https://api.github.com/users/windmaple/repos", "events_url": "https://api.github.com/users/windmaple/events{/privacy}", "received_events_url": "https://api.github.com/users/windmaple/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "@xianbaoqian " ]
"2024-06-04T11:02:33"
"2024-06-04T11:02:49"
null
NONE
null
null
null
### Feature request Currently `load_dataset()` forces users to specify a subset. Example: `from datasets import load_dataset; dataset = load_dataset("m-a-p/COIG-CQIA")` ```--------------------------------------------------------------------------- ValueError Traceback (most recent call last) [<ipython-input-10-c0cb49385da6>](https://localhost:8080/#) in <cell line: 2>() 1 from datasets import load_dataset ----> 2 dataset = load_dataset("m-a-p/COIG-CQIA") 3 frames [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _create_builder_config(self, config_name, custom_features, **config_kwargs) 582 if not config_kwargs: 583 example_of_usage = f"load_dataset('{self.dataset_name}', '{self.BUILDER_CONFIGS[0].name}')" --> 584 raise ValueError( 585 "Config name is missing." 586 f"\nPlease pick one among the available configs: {list(self.builder_configs.keys())}" ValueError: Config name is missing. Please pick one among the available configs: ['chinese_traditional', 'coig_pc', 'exam', 'finance', 'douban', 'human_value', 'logi_qa', 'ruozhiba', 'segmentfault', 'wiki', 'wikihow', 'xhs', 'zhihu'] Example of usage: `load_dataset('coig-cqia', 'chinese_traditional')` ``` This means all the subsets of a dataset cannot be loaded at the same time. I guess one workaround is to manually specify the subset files as shown [here](https://huggingface.co/datasets/m-a-p/COIG-CQIA/discussions/1#658698b44bb41498f75c5622), which is clumsy. ### Motivation Ideally, if no subset is specified, the API should just try to load all subsets. This makes it much easier to handle datasets with subsets. ### Your contribution Not sure, since I'm not familiar with the library source.
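A minimal sketch of the requested behavior using today's public API, enumerating the configs with `get_dataset_config_names` and loading each one (the repo id is taken from the issue; the loop itself is illustrative, not part of any proposed API):

```python
from datasets import get_dataset_config_names, load_dataset

# List every config (subset) of the repo, then load them one by one.
configs = get_dataset_config_names("m-a-p/COIG-CQIA")
subsets = {name: load_dataset("m-a-p/COIG-CQIA", name) for name in configs}
```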
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6951/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6951/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6950
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6950/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6950/comments
https://api.github.com/repos/huggingface/datasets/issues/6950/events
https://github.com/huggingface/datasets/issues/6950
2,333,005,974
I_kwDODunzps6LDtiW
6,950
`Dataset.with_format` behaves inconsistently with documentation
{ "login": "iansheng", "id": 42494185, "node_id": "MDQ6VXNlcjQyNDk0MTg1", "avatar_url": "https://avatars.githubusercontent.com/u/42494185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iansheng", "html_url": "https://github.com/iansheng", "followers_url": "https://api.github.com/users/iansheng/followers", "following_url": "https://api.github.com/users/iansheng/following{/other_user}", "gists_url": "https://api.github.com/users/iansheng/gists{/gist_id}", "starred_url": "https://api.github.com/users/iansheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iansheng/subscriptions", "organizations_url": "https://api.github.com/users/iansheng/orgs", "repos_url": "https://api.github.com/users/iansheng/repos", "events_url": "https://api.github.com/users/iansheng/events{/privacy}", "received_events_url": "https://api.github.com/users/iansheng/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
null
[]
null
[ "Hi ! It seems the documentation was outdated in this paragraph\r\n\r\nI fixed it here: https://github.com/huggingface/datasets/pull/6956" ]
"2024-06-04T09:18:32"
"2024-06-05T10:19:56"
null
NONE
null
null
null
### Describe the bug The actual behavior of the interface `Dataset.with_format` is inconsistent with the documentation. https://huggingface.co/docs/datasets/use_with_pytorch#n-dimensional-arrays https://huggingface.co/docs/datasets/v2.19.0/en/use_with_tensorflow#n-dimensional-arrays > If your dataset consists of N-dimensional arrays, you will see that by default they are considered as nested lists. > In particular, a PyTorch formatted dataset outputs nested lists instead of a single tensor. > A TensorFlow formatted dataset outputs a RaggedTensor instead of a single tensor. But I get a single tensor by default, which is inconsistent with the description. Actually the current behavior seems more reasonable to me. Therefore, the document needs to be modified. ### Steps to reproduce the bug ```python >>> from datasets import Dataset >>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]] >>> ds = Dataset.from_dict({"data": data}) >>> ds = ds.with_format("torch") >>> ds[0] {'data': tensor([[1, 2], [3, 4]])} >>> ds = ds.with_format("tf") >>> ds[0] {'data': <tf.Tensor: shape=(2, 2), dtype=int64, numpy= array([[1, 2], [3, 4]])>} ``` ### Expected behavior ```python >>> from datasets import Dataset >>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]] >>> ds = Dataset.from_dict({"data": data}) >>> ds = ds.with_format("torch") >>> ds[0] {'data': [tensor([1, 2]), tensor([3, 4])]} >>> ds = ds.with_format("tf") >>> ds[0] {'data': <tf.RaggedTensor [[1, 2], [3, 4]]>} ``` ### Environment info datasets==2.19.1 torch==2.1.0 tensorflow==2.13.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6950/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6950/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6949
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6949/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6949/comments
https://api.github.com/repos/huggingface/datasets/issues/6949/events
https://github.com/huggingface/datasets/issues/6949
2,332,336,573
I_kwDODunzps6LBKG9
6,949
load_dataset error
{ "login": "lion-ops", "id": 27952522, "node_id": "MDQ6VXNlcjI3OTUyNTIy", "avatar_url": "https://avatars.githubusercontent.com/u/27952522?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lion-ops", "html_url": "https://github.com/lion-ops", "followers_url": "https://api.github.com/users/lion-ops/followers", "following_url": "https://api.github.com/users/lion-ops/following{/other_user}", "gists_url": "https://api.github.com/users/lion-ops/gists{/gist_id}", "starred_url": "https://api.github.com/users/lion-ops/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lion-ops/subscriptions", "organizations_url": "https://api.github.com/users/lion-ops/orgs", "repos_url": "https://api.github.com/users/lion-ops/repos", "events_url": "https://api.github.com/users/lion-ops/events{/privacy}", "received_events_url": "https://api.github.com/users/lion-ops/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi, @lion-ops.\r\n\r\nIn our Continuous Integration we have many tests on loading JSON files and all of them work properly.\r\n\r\nCould you please share your \"train.json\" file, so that we can try to reproduce the issue you have? ", "> Hi, @lion-ops.\r\n> \r\n> In our Continuous Integration we have many tests on loading JSON files and all of them work properly.\r\n> \r\n> Could you please share your \"train.json\" file, so that we can try to reproduce the issue you have?\r\n\r\nThank you for your reply. I can load it normally in another server. Is it possible that the disk of my server is a network disk in the LAN, so it will be downloaded from the LAN and get stuck?" ]
"2024-06-04T01:24:45"
"2024-06-04T05:54:54"
null
NONE
null
null
null
### Describe the bug Why does the program get stuck when I use the load_dataset method, and why is it still stuck after loading for several hours? In fact, my JSON file is only 21 MB, and I can read it in one go using open('', 'r'). ### Steps to reproduce the bug 1. pip install datasets==2.19.2 2. from datasets import Dataset, DatasetDict, NamedSplit, Split, load_dataset 3. data = load_dataset('json', data_files='train.json') ### Expected behavior It should load my JSON file correctly. ### Environment info datasets==2.19.2
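Two hedged diagnostics for a hang like this (untested sketches; `train.json` is the file name from the report). Streaming avoids writing an Arrow cache to the possibly networked disk, and `keep_in_memory=True` bypasses the cache directory entirely:

```python
from datasets import Dataset, load_dataset

# 1) Stream instead of caching: if this iterates fine, the hang is likely
#    in the Arrow cache write, not in reading the JSON itself.
streamed = load_dataset("json", data_files="train.json", streaming=True)
print(next(iter(streamed["train"])))

# 2) Build the dataset fully in memory, skipping the on-disk cache.
ds = Dataset.from_json("train.json", keep_in_memory=True)
print(ds)
```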
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6949/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6949/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6948
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6948/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6948/comments
https://api.github.com/repos/huggingface/datasets/issues/6948/events
https://github.com/huggingface/datasets/issues/6948
2,331,758,300
I_kwDODunzps6K-87c
6,948
to_tf_dataset: Visible devices cannot be modified after being initialized
{ "login": "logasja", "id": 7151661, "node_id": "MDQ6VXNlcjcxNTE2NjE=", "avatar_url": "https://avatars.githubusercontent.com/u/7151661?v=4", "gravatar_id": "", "url": "https://api.github.com/users/logasja", "html_url": "https://github.com/logasja", "followers_url": "https://api.github.com/users/logasja/followers", "following_url": "https://api.github.com/users/logasja/following{/other_user}", "gists_url": "https://api.github.com/users/logasja/gists{/gist_id}", "starred_url": "https://api.github.com/users/logasja/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/logasja/subscriptions", "organizations_url": "https://api.github.com/users/logasja/orgs", "repos_url": "https://api.github.com/users/logasja/repos", "events_url": "https://api.github.com/users/logasja/events{/privacy}", "received_events_url": "https://api.github.com/users/logasja/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-06-03T18:10:57"
"2024-06-03T18:10:57"
null
NONE
null
null
null
### Describe the bug When trying to use to_tf_dataset with a custom data_loader collate_fn and parallelism, I am met with the following error as many times as there are workers in ``num_workers``. File "/opt/miniconda/envs/env/lib/python3.11/site-packages/multiprocess/process.py", line 314, in _bootstrap self.run() File "/opt/miniconda/envs/env/lib/python3.11/site-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/opt/miniconda/envs/env/lib/python3.11/site-packages/datasets/utils/tf_utils.py", line 438, in worker_loop tf.config.set_visible_devices([], "GPU") # Make sure workers don't try to allocate GPU memory ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/miniconda/envs/env/lib/python3.11/site-packages/tensorflow/python/framework/config.py", line 566, in set_visible_devices context.context().set_visible_devices(devices, device_type) File "/opt/miniconda/envs/env/lib/python3.11/site-packages/tensorflow/python/eager/context.py", line 1737, in set_visible_devices raise RuntimeError( RuntimeError: Visible devices cannot be modified after being initialized ### Steps to reproduce the bug 1. Download a dataset using the Hugging Face load_dataset function 2. Define a function that transforms the data in some way to be used in the collate_fn argument 3. Provide a ``batch_size`` and ``num_workers`` value in the ``to_tf_dataset`` function 4. Either retrieve directly or use tfds benchmark to test the dataset ``` python from datasets import load_dataset import tensorflow_datasets as tfds from keras_cv.layers import Resizing def data_loader(examples): # Resizing is a layer: construct it, then call it on the image batch x = Resizing(256, 256, crop_to_aspect_ratio=True)(examples[0]['image']) return {"image": x} ds = load_dataset("logasja/FDF", split="test") ds = ds.to_tf_dataset(collate_fn=data_loader, batch_size=16, num_workers=2) tfds.benchmark(ds) ``` ### Expected behavior Use multiple processes to apply transformations from the collate_fn to the tf dataset on the CPU. ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-6.5.0-1023-oracle-x86_64-with-glibc2.35 - Python version: 3.11.8 - `huggingface_hub` version: 0.22.2 - PyArrow version: 15.0.2 - Pandas version: 2.2.1 - `fsspec` version: 2024.2.0
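A hedged workaround sketch for the error above (untested; the collate function is a placeholder and the "label" column is an assumption). Passing ``num_workers=0`` sidesteps the worker processes whose `set_visible_devices` call races with an already-initialized TensorFlow runtime; hiding GPUs through the environment before TensorFlow is imported is an alternative when CPU-only loading is acceptable:

```python
import os

# Assumption: hiding GPUs before TensorFlow initializes makes the workers'
# tf.config.set_visible_devices([], "GPU") call harmless.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import numpy as np
from datasets import load_dataset

def collate_fn(examples):
    # Placeholder collate: assumes a "label" column exists; substitute the
    # real preprocessing from the report here.
    return {"label": np.array([ex["label"] for ex in examples])}

ds = load_dataset("logasja/FDF", split="test")
# Most robust fallback: num_workers=0 avoids the multiprocessing path entirely.
tf_ds = ds.to_tf_dataset(collate_fn=collate_fn, batch_size=16, num_workers=0)
```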
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6948/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6948/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6947
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6947/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6947/comments
https://api.github.com/repos/huggingface/datasets/issues/6947/events
https://github.com/huggingface/datasets/issues/6947
2,331,114,055
I_kwDODunzps6K8fpH
6,947
FileNotFoundError:error when loading C4 dataset
{ "login": "W-215", "id": 62374585, "node_id": "MDQ6VXNlcjYyMzc0NTg1", "avatar_url": "https://avatars.githubusercontent.com/u/62374585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/W-215", "html_url": "https://github.com/W-215", "followers_url": "https://api.github.com/users/W-215/followers", "following_url": "https://api.github.com/users/W-215/following{/other_user}", "gists_url": "https://api.github.com/users/W-215/gists{/gist_id}", "starred_url": "https://api.github.com/users/W-215/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/W-215/subscriptions", "organizations_url": "https://api.github.com/users/W-215/orgs", "repos_url": "https://api.github.com/users/W-215/repos", "events_url": "https://api.github.com/users/W-215/events{/privacy}", "received_events_url": "https://api.github.com/users/W-215/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "same problem here", "Hello,\r\n\r\nAre you sure you are really using datasets version 2.19.2? We just made the patch release yesterday specifically to fix this issue:\r\n- #6925\r\n\r\nI can't reproduce the error:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation')\r\nDownloading readme: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 41.1k/41.1k [00:00<00:00, 596kB/s]\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 40.7M/40.7M [00:04<00:00, 8.50MB/s]\r\nGenerating validation split: 45576 examples [00:01, 44956.75 examples/s]\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDataset({\r\n features: ['text', 'timestamp', 'url'],\r\n num_rows: 45576\r\n})\r\n```", "> Hello,\r\n> \r\n> Are you sure you are really using datasets version 2.19.2? We just made the patch release yesterday specifically to fix this issue:\r\n> \r\n> * [Fix NonMatchingSplitsSizesError/ExpectedMoreSplits when passing data_dir/data_files in no-code Hub datasets #6925](https://github.com/huggingface/datasets/pull/6925)\r\n> \r\n> I can't reproduce the error:\r\n> \r\n> ```python\r\n> In [1]: from datasets import load_dataset\r\n> \r\n> In [2]: ds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation')\r\n> Downloading readme: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 41.1k/41.1k [00:00<00:00, 596kB/s]\r\n> Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 40.7M/40.7M [00:04<00:00, 8.50MB/s]\r\n> Generating validation split: 45576 examples [00:01, 44956.75 examples/s]\r\n> \r\n> In [3]: ds\r\n> Out[3]: \r\n> Dataset({\r\n> features: ['text', 'timestamp', 'url'],\r\n> num_rows: 45576\r\n> })\r\n> ```\r\nThank you for your reply,ExpectedMoreSplits was encountered in datasets version 2.12.2. 
After I updated the version, that is, datasets version 2.19.2, I encountered the FileNotFoundError problem mentioned above.", "That might be due to a corrupted cache.\r\n\r\nPlease, retry loading the dataset passing: `download_mode=\"force_redownload\"`\r\n```python\r\nds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation', download_mode=\"force_redownload\")\r\n```\r\n\r\nIt the above command does not fix the issue, then you will need to fix the cache manually, by removing the corresponding directory inside `~/.cache/huggingface/`.\r\n", "> That might be due to a corrupted cache.\r\n> \r\n> Please, retry loading the dataset passing: `download_mode=\"force_redownload\"`\r\n> \r\n> ```python\r\n> ds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation', download_mode=\"force_redownload\")\r\n> ```\r\n> \r\n> It the above command does not fix the issue, then you will need to fix the cache manually, by removing the corresponding directory inside `~/.cache/huggingface/`.\r\n\r\nThe two methods you mentioned above can not solve this problem, but the command line interface shows Downloading readme: 41.1kB [00:00, 281kB/s], and then FileNotFoundError appears. It is worth noting that I have no problem loading other datasets with the initial method, such as wikitext datasets" ]
"2024-06-03T13:06:33"
"2024-06-04T12:48:40"
null
NONE
null
null
null
### Describe the bug Can't load the C4 dataset. When I downgrade the datasets package to 2.12.2, I get: datasets.utils.info_utils.ExpectedMoreSplits: {'train'} How can I fix this? ### Steps to reproduce the bug 1. from datasets import load_dataset 2. dataset = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation') 3. raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at local_path/c4_val/allenai/c4/c4.py or any data file in the same directory. Couldn't find 'allenai/c4' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/allenai/c4@1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-validation.00003-of-00008.json.gz' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip'] ### Expected behavior The data is imported successfully. ### Environment info python version 3.9 datasets version 2.19.2
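Before blaming the cache, one can check from Python that the remote file actually resolves (a hedged diagnostic sketch using `HfFileSystem` from huggingface_hub; note that Hub dataset paths are prefixed with `datasets/`):

```python
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
path = "datasets/allenai/c4/en/c4-validation.00003-of-00008.json.gz"
print(fs.exists(path))        # True means the file resolves on the Hub
print(fs.info(path)["size"])  # simple metadata read as a connectivity check
```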
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6947/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6947/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6946
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6946/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6946/comments
https://api.github.com/repos/huggingface/datasets/issues/6946/events
https://github.com/huggingface/datasets/pull/6946
2,330,276,848
PR_kwDODunzps5xQNao
6,946
Re-enable import sorting disabled by flake8:noqa directive when using ruff linter
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6946). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004847 / 0.011353 (-0.006506) | 0.003199 / 0.011008 (-0.007810) | 0.060677 / 0.038508 (0.022169) | 0.030544 / 0.023109 (0.007435) | 0.240870 / 0.275898 (-0.035028) | 0.261320 / 0.323480 (-0.062160) | 0.002816 / 0.007986 (-0.005170) | 0.002483 / 0.004328 (-0.001845) | 0.048527 / 0.004250 (0.044277) | 0.045496 / 0.037052 (0.008444) | 0.251296 / 0.258489 (-0.007193) | 0.285746 / 0.293841 (-0.008095) | 0.025076 / 0.128546 (-0.103470) | 0.009417 / 0.075646 (-0.066229) | 0.191361 / 0.419271 (-0.227911) | 0.033778 / 0.043533 (-0.009755) | 0.235581 / 0.255139 (-0.019558) | 0.261069 / 0.283200 (-0.022131) | 0.018255 / 0.141683 (-0.123428) | 1.098437 / 1.452155 (-0.353718) | 1.127124 / 1.492716 (-0.365592) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004479 / 0.018006 (-0.013527) | 0.283706 / 0.000490 (0.283216) | 0.000214 / 0.000200 (0.000014) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018364 / 0.037411 (-0.019048) | 0.058398 / 0.014526 (0.043872) | 0.073056 / 0.176557 (-0.103501) | 0.117147 / 0.737135 (-0.619989) | 0.073683 / 0.296338 (-0.222656) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.265121 / 0.215209 (0.049912) | 2.636981 / 2.077655 (0.559327) | 1.380192 / 1.504120 (-0.123928) | 1.270779 / 1.541195 (-0.270416) | 1.295729 / 1.468490 (-0.172762) | 0.523768 / 4.584777 (-4.061009) | 2.295720 / 3.745712 (-1.449992) | 2.519211 / 5.269862 (-2.750650) | 1.618712 / 4.565676 (-2.946965) | 0.058321 / 0.424275 (-0.365954) | 0.004492 / 0.007607 (-0.003115) | 0.316101 / 0.226044 (0.090057) | 3.169913 / 2.268929 (0.900984) | 1.793412 / 55.444624 (-53.651213) | 1.473784 / 6.876477 (-5.402693) | 1.565325 / 2.142072 (-0.576748) | 0.592734 / 4.805227 (-4.212493) | 0.109333 / 6.500664 (-6.391331) | 0.039063 / 0.075469 (-0.036406) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.935504 / 1.841788 (-0.906284) | 10.865520 / 8.074308 (2.791212) | 9.219337 / 10.191392 (-0.972055) | 0.135284 / 0.680424 (-0.545140) | 0.013664 / 0.534201 (-0.520537) | 0.271601 / 0.579283 (-0.307682) | 0.260456 / 0.434364 (-0.173908) | 0.302931 / 0.540337 (-0.237406) | 0.414643 / 1.386936 (-0.972293) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004801 / 0.011353 (-0.006552) | 0.003092 / 0.011008 (-0.007917) | 0.046471 / 0.038508 (0.007963) | 0.031337 / 0.023109 (0.008228) | 0.258920 / 0.275898 (-0.016978) | 0.269842 / 0.323480 (-0.053638) | 0.003976 / 0.007986 (-0.004009) | 0.002661 / 0.004328 (-0.001668) | 0.045676 / 0.004250 (0.041426) | 0.038199 / 0.037052 (0.001146) | 0.277382 / 0.258489 (0.018893) | 0.289351 / 0.293841 (-0.004490) | 0.028452 / 0.128546 (-0.100094) | 0.009737 / 0.075646 (-0.065910) | 0.055201 / 0.419271 (-0.364071) | 0.032686 / 0.043533 (-0.010847) | 0.259617 / 0.255139 (0.004478) | 0.277163 / 0.283200 (-0.006037) | 0.017825 / 0.141683 (-0.123858) | 1.102797 / 1.452155 (-0.349357) | 1.105018 / 1.492716 (-0.387699) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row 
|\n|--------|---|---|---|---|\n| new / old (diff) | 0.094844 / 0.018006 (0.076838) | 0.290519 / 0.000490 (0.290029) | 0.000211 / 0.000200 (0.000012) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021917 / 0.037411 (-0.015494) | 0.075278 / 0.014526 (0.060753) | 0.085971 / 0.176557 (-0.090586) | 0.127072 / 0.737135 (-0.610063) | 0.088244 / 0.296338 (-0.208095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276704 / 0.215209 (0.061495) | 2.736960 / 2.077655 (0.659305) | 1.519634 / 1.504120 (0.015514) | 1.403026 / 1.541195 (-0.138168) | 1.418465 / 1.468490 (-0.050025) | 0.552425 / 4.584777 (-4.032352) | 0.955244 / 3.745712 (-2.790468) | 2.556563 / 5.269862 (-2.713298) | 1.705095 / 4.565676 (-2.860582) | 0.061212 / 0.424275 (-0.363063) | 0.004707 / 0.007607 (-0.002900) | 0.326284 / 0.226044 (0.100239) | 3.253911 / 2.268929 (0.984983) | 1.868649 / 55.444624 (-53.575976) | 1.598697 / 6.876477 (-5.277780) | 1.682617 / 2.142072 (-0.459455) | 0.606379 / 4.805227 (-4.198848) | 0.114126 / 6.500664 (-6.386538) | 0.038869 / 0.075469 (-0.036601) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.966354 / 1.841788 (-0.875433) | 11.575918 / 8.074308 (3.501609) | 9.816597 / 10.191392 (-0.374795) | 0.141492 / 0.680424 (-0.538932) | 0.015375 / 0.534201 (-0.518826) | 0.276027 / 0.579283 (-0.303256) | 0.118979 / 0.434364 (-0.315385) | 0.313467 / 0.540337 (-0.226870) | 0.403539 / 1.386936 (-0.983397) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1b59c75856d765e60b66a5216062102d001c6612 \"CML watermark\")\n" ]
"2024-06-03T06:24:47"
"2024-06-04T10:00:08"
"2024-06-04T09:54:23"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6946", "html_url": "https://github.com/huggingface/datasets/pull/6946", "diff_url": "https://github.com/huggingface/datasets/pull/6946.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6946.patch", "merged_at": "2024-06-04T09:54:23" }
Re-enable import sorting that was wrongly disabled by `flake8: noqa` directive after switching to `ruff` linter in datasets-2.10.0 PR: - #5519 Note that after the linter switch, we wrongly replaced `flake8: noqa` with `ruff: noqa` in datasets-2.17.0 PR: - #6619 That replacement was wrong because we kept the `isort: skip` directives although they were indeed disabled by `flake8: noqa` first and by `ruff: noqa` afterwards. See for example `__init__.py` file after the linter switch: - We kept the `flake8: noqa` directive https://github.com/huggingface/datasets/blob/06ae3f678651bfbb3ca7dd3274ee2f38e0e0237e/src/datasets/__init__.py#L1 - Whereas we also kept the `isort: skip` directives (that were disabled) https://github.com/huggingface/datasets/blob/06ae3f678651bfbb3ca7dd3274ee2f38e0e0237e/src/datasets/__init__.py#L82-L84 Fix #6942.
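A toy illustration (not code from the repo) of the interaction this PR fixes: a file-level noqa, whether `flake8: noqa` or `ruff: noqa`, silences every rule in the file, including ruff's import-sorting rule I001, so any `isort: skip` markers kept alongside it are dead weight:

```python
# ruff: noqa
# While the blanket directive above is present, ruff skips all checks in this
# file, so import sorting (I001) is disabled and the marker below is inert.

import sys
import os  # isort: skip
```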
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6946/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6946/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6945
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6945/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6945/comments
https://api.github.com/repos/huggingface/datasets/issues/6945/events
https://github.com/huggingface/datasets/pull/6945
2,330,224,869
PR_kwDODunzps5xQCCx
6,945
Update yanked version of minimum requests requirement
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6945). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005725 / 0.011353 (-0.005627) | 0.003788 / 0.011008 (-0.007220) | 0.063059 / 0.038508 (0.024551) | 0.031364 / 0.023109 (0.008255) | 0.259209 / 0.275898 (-0.016689) | 0.278805 / 0.323480 (-0.044675) | 0.003032 / 0.007986 (-0.004953) | 0.002633 / 0.004328 (-0.001696) | 0.049804 / 0.004250 (0.045554) | 0.046717 / 0.037052 (0.009665) | 0.267246 / 0.258489 (0.008757) | 0.299271 / 0.293841 (0.005430) | 0.027687 / 0.128546 (-0.100860) | 0.010524 / 0.075646 (-0.065123) | 0.201736 / 0.419271 (-0.217536) | 0.036192 / 0.043533 (-0.007341) | 0.264492 / 0.255139 (0.009353) | 0.280809 / 0.283200 (-0.002391) | 0.018187 / 0.141683 (-0.123496) | 1.170751 / 1.452155 (-0.281404) | 1.223450 / 1.492716 (-0.269266) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096610 / 0.018006 (0.078604) | 0.297122 / 0.000490 (0.296632) | 0.000211 / 0.000200 (0.000011) | 0.000046 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018380 / 0.037411 (-0.019031) | 0.062214 / 0.014526 (0.047688) | 0.075833 / 0.176557 (-0.100723) | 0.121825 / 0.737135 (-0.615310) | 0.075475 / 0.296338 (-0.220864) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275601 / 0.215209 (0.060392) | 2.698014 / 2.077655 (0.620359) | 1.434043 / 1.504120 (-0.070077) | 1.313217 / 1.541195 (-0.227978) | 1.339014 / 1.468490 (-0.129476) | 0.566703 / 4.584777 (-4.018074) | 2.367794 / 3.745712 (-1.377918) | 2.660787 / 5.269862 (-2.609074) | 1.738503 / 4.565676 (-2.827174) | 0.061693 / 0.424275 (-0.362582) | 0.004978 / 0.007607 (-0.002629) | 0.334719 / 0.226044 (0.108675) | 3.300889 / 2.268929 (1.031960) | 1.764493 / 55.444624 (-53.680131) | 1.475956 / 6.876477 (-5.400521) | 1.635988 / 2.142072 (-0.506084) | 0.643906 / 4.805227 (-4.161321) | 0.118002 / 6.500664 (-6.382662) | 0.042593 / 0.075469 (-0.032876) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.953511 / 1.841788 (-0.888276) | 11.489727 / 8.074308 (3.415419) | 9.775017 / 10.191392 (-0.416375) | 0.139864 / 0.680424 (-0.540560) | 0.014219 / 0.534201 (-0.519982) | 0.284389 / 0.579283 (-0.294894) | 0.264250 / 0.434364 (-0.170113) | 0.323471 / 0.540337 (-0.216866) | 0.415189 / 1.386936 (-0.971747) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005437 / 0.011353 (-0.005916) | 0.003710 / 0.011008 (-0.007298) | 0.049940 / 0.038508 (0.011432) | 0.032565 / 0.023109 (0.009456) | 0.266374 / 0.275898 (-0.009524) | 0.288069 / 0.323480 (-0.035411) | 0.004140 / 0.007986 (-0.003845) | 0.002669 / 0.004328 (-0.001660) | 0.049646 / 0.004250 (0.045395) | 0.040926 / 0.037052 (0.003874) | 0.278805 / 0.258489 (0.020316) | 0.311396 / 0.293841 (0.017555) | 0.029363 / 0.128546 (-0.099183) | 0.010260 / 0.075646 (-0.065386) | 0.058222 / 0.419271 (-0.361049) | 0.033063 / 0.043533 (-0.010470) | 0.266798 / 0.255139 (0.011659) | 0.283091 / 0.283200 (-0.000109) | 0.017904 / 0.141683 (-0.123779) | 1.139531 / 1.452155 (-0.312624) | 1.163909 / 1.492716 (-0.328808) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.089063 / 0.018006 (0.071057) | 0.296757 / 0.000490 (0.296268) | 0.000202 / 0.000200 (0.000002) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022843 / 0.037411 (-0.014568) | 0.076032 / 0.014526 (0.061507) | 0.087545 / 0.176557 (-0.089012) | 0.128870 / 0.737135 (-0.608266) | 0.089359 / 0.296338 (-0.206980) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285213 / 0.215209 (0.070004) | 2.854950 / 2.077655 (0.777295) | 1.539311 / 1.504120 (0.035191) | 1.413753 / 1.541195 (-0.127442) | 1.440819 / 1.468490 (-0.027671) | 0.564734 / 4.584777 (-4.020043) | 0.944924 / 3.745712 (-2.800788) | 2.703612 / 5.269862 (-2.566249) | 1.749429 / 4.565676 (-2.816247) | 0.063239 / 0.424275 (-0.361036) | 0.005024 / 0.007607 (-0.002583) | 0.340866 / 0.226044 (0.114821) | 3.359511 / 2.268929 (1.090582) | 1.895794 / 55.444624 (-53.548831) | 1.606613 / 6.876477 (-5.269864) | 1.756539 / 2.142072 (-0.385533) | 0.646553 / 4.805227 (-4.158675) | 0.121278 / 6.500664 (-6.379386) | 0.041066 / 0.075469 (-0.034403) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005548 / 1.841788 (-0.836240) | 12.080103 / 8.074308 (4.005794) | 10.444822 / 10.191392 (0.253430) | 0.145024 / 0.680424 (-0.535400) | 0.015287 / 0.534201 (-0.518914) | 0.288567 / 0.579283 (-0.290716) | 0.118034 / 0.434364 (-0.316330) | 0.333474 / 0.540337 (-0.206864) | 0.421716 / 1.386936 (-0.965220) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3d95159dbd918009e1ff710dba0cd15d96d4264e \"CML watermark\")\n" ]
"2024-06-03T05:45:50"
"2024-06-03T06:15:48"
"2024-06-03T06:09:43"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6945", "html_url": "https://github.com/huggingface/datasets/pull/6945", "diff_url": "https://github.com/huggingface/datasets/pull/6945.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6945.patch", "merged_at": "2024-06-03T06:09:43" }
Update yanked version of minimum requests requirement. Version 2.32.1 was yanked: https://pypi.org/project/requests/2.32.1/
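For context, an illustrative `setup.py` excerpt (hypothetical, not the repo's actual file) showing how a minimum pin is moved past a yanked release:

```python
from setuptools import setup

setup(
    name="example-package",  # hypothetical package name
    install_requires=[
        # requests 2.32.1 was yanked from PyPI, so the floor moves to 2.32.2.
        "requests>=2.32.2",
    ],
)
```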
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6945/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6945/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6944
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6944/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6944/comments
https://api.github.com/repos/huggingface/datasets/issues/6944/events
https://github.com/huggingface/datasets/pull/6944
2,330,207,120
PR_kwDODunzps5xP-KD
6,944
Set dev version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6944). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005150 / 0.011353 (-0.006203) | 0.003663 / 0.011008 (-0.007346) | 0.062832 / 0.038508 (0.024324) | 0.031928 / 0.023109 (0.008819) | 0.246455 / 0.275898 (-0.029443) | 0.272121 / 0.323480 (-0.051359) | 0.004220 / 0.007986 (-0.003765) | 0.002756 / 0.004328 (-0.001573) | 0.050071 / 0.004250 (0.045821) | 0.046074 / 0.037052 (0.009022) | 0.259676 / 0.258489 (0.001187) | 0.290674 / 0.293841 (-0.003167) | 0.027822 / 0.128546 (-0.100724) | 0.010791 / 0.075646 (-0.064855) | 0.202827 / 0.419271 (-0.216445) | 0.037057 / 0.043533 (-0.006476) | 0.256128 / 0.255139 (0.000989) | 0.269422 / 0.283200 (-0.013777) | 0.017395 / 0.141683 (-0.124288) | 1.125919 / 1.452155 (-0.326236) | 1.177708 / 1.492716 (-0.315008) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098466 / 0.018006 (0.080460) | 0.305508 / 0.000490 (0.305018) | 0.000232 / 0.000200 (0.000032) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018866 / 0.037411 (-0.018545) | 0.062079 / 0.014526 (0.047553) | 0.074670 / 0.176557 (-0.101886) | 0.121025 / 0.737135 (-0.616111) | 0.075883 / 0.296338 (-0.220455) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291880 / 0.215209 (0.076671) | 2.874064 / 2.077655 (0.796409) | 1.477040 / 1.504120 (-0.027080) | 1.356198 / 1.541195 (-0.184997) | 1.354676 / 1.468490 (-0.113814) | 0.559731 / 4.584777 (-4.025046) | 2.362746 / 3.745712 (-1.382966) | 2.678838 / 5.269862 (-2.591024) | 1.752633 / 4.565676 (-2.813044) | 0.064023 / 0.424275 (-0.360252) | 0.005035 / 0.007607 (-0.002572) | 0.354807 / 0.226044 (0.128762) | 3.424463 / 2.268929 (1.155534) | 1.810476 / 55.444624 (-53.634149) | 1.519031 / 6.876477 (-5.357446) | 1.693957 / 2.142072 (-0.448116) | 0.647987 / 4.805227 (-4.157240) | 0.118993 / 6.500664 (-6.381671) | 0.042186 / 0.075469 (-0.033283) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982565 / 1.841788 (-0.859223) | 11.645075 / 8.074308 (3.570767) | 9.588360 / 10.191392 (-0.603032) | 0.142369 / 0.680424 (-0.538055) | 0.014025 / 0.534201 (-0.520176) | 0.285668 / 0.579283 (-0.293616) | 0.265825 / 0.434364 (-0.168539) | 0.323371 / 0.540337 (-0.216966) | 0.421227 / 1.386936 (-0.965709) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005587 / 0.011353 (-0.005766) | 0.003664 / 0.011008 (-0.007345) | 0.050411 / 0.038508 (0.011903) | 0.033268 / 0.023109 (0.010159) | 0.266631 / 0.275898 (-0.009267) | 0.291135 / 0.323480 (-0.032345) | 0.004275 / 0.007986 (-0.003710) | 0.002822 / 0.004328 (-0.001506) | 0.049349 / 0.004250 (0.045099) | 0.040653 / 0.037052 (0.003601) | 0.282641 / 0.258489 (0.024152) | 0.315460 / 0.293841 (0.021619) | 0.029343 / 0.128546 (-0.099203) | 0.010606 / 0.075646 (-0.065040) | 0.058783 / 0.419271 (-0.360489) | 0.033205 / 0.043533 (-0.010327) | 0.266805 / 0.255139 (0.011666) | 0.288907 / 0.283200 (0.005707) | 0.017817 / 0.141683 (-0.123866) | 1.128132 / 1.452155 (-0.324023) | 1.175120 / 1.492716 (-0.317597) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095653 / 0.018006 (0.077647) | 0.304825 / 0.000490 (0.304335) | 0.000212 / 0.000200 (0.000012) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022766 / 0.037411 (-0.014645) | 0.076598 / 0.014526 (0.062072) | 0.088314 / 0.176557 (-0.088242) | 0.127888 / 0.737135 (-0.609247) | 0.090391 / 0.296338 (-0.205947) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293384 / 0.215209 (0.078175) | 2.883742 / 2.077655 (0.806087) | 1.533868 / 1.504120 (0.029748) | 1.391964 / 1.541195 (-0.149231) | 1.423732 / 1.468490 (-0.044759) | 0.575457 / 4.584777 (-4.009320) | 0.970860 / 3.745712 (-2.774852) | 2.711405 / 5.269862 (-2.558457) | 1.774468 / 4.565676 (-2.791208) | 0.064611 / 0.424275 (-0.359664) | 0.005120 / 0.007607 (-0.002487) | 0.343892 / 0.226044 (0.117847) | 3.362579 / 2.268929 (1.093650) | 1.880200 / 55.444624 (-53.564424) | 1.587435 / 6.876477 (-5.289042) | 1.756464 / 2.142072 (-0.385609) | 0.661469 / 4.805227 (-4.143759) | 0.119030 / 6.500664 (-6.381634) | 0.041704 / 0.075469 (-0.033765) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.025008 / 1.841788 (-0.816780) | 12.146244 / 8.074308 (4.071936) | 10.397267 / 10.191392 (0.205875) | 0.145917 / 0.680424 (-0.534507) | 0.015779 / 0.534201 (-0.518422) | 0.287122 / 0.579283 (-0.292161) | 0.125464 / 0.434364 (-0.308900) | 0.323315 / 0.540337 (-0.217023) | 0.416761 / 1.386936 (-0.970175) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e2d15a6b1871f3998986853298e4338d72891491 \"CML watermark\")\n" ]
"2024-06-03T05:29:59"
"2024-06-03T05:37:51"
"2024-06-03T05:31:47"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6944", "html_url": "https://github.com/huggingface/datasets/pull/6944", "diff_url": "https://github.com/huggingface/datasets/pull/6944.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6944.patch", "merged_at": "2024-06-03T05:31:46" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6944/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6943
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6943/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6943/comments
https://api.github.com/repos/huggingface/datasets/issues/6943/events
https://github.com/huggingface/datasets/pull/6943
2,330,176,890
PR_kwDODunzps5xP3jp
6,943
Release 2.19.2
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6943). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
"2024-06-03T05:01:50"
"2024-06-03T05:17:41"
"2024-06-03T05:17:40"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6943", "html_url": "https://github.com/huggingface/datasets/pull/6943", "diff_url": "https://github.com/huggingface/datasets/pull/6943.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6943.patch", "merged_at": "2024-06-03T05:17:40" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6943/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6943/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6942
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6942/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6942/comments
https://api.github.com/repos/huggingface/datasets/issues/6942/events
https://github.com/huggingface/datasets/issues/6942
2,329,562,382
I_kwDODunzps6K2k0O
6,942
Import sorting is disabled by flake8 noqa directive after switching to ruff linter
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2024-06-02T09:43:34"
"2024-06-04T09:54:24"
"2024-06-04T09:54:24"
MEMBER
null
null
null
When we switched to the `ruff` linter in PR: - #5519 import sorting was disabled in all files containing the `# flake8: noqa` directive, because a file-level `noqa` suppresses every rule for the file, import sorting included: - https://github.com/astral-sh/ruff/issues/11679 We should re-enable import sorting on those files.
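A minimal sketch of the kind of change this calls for, assuming the affected files only need specific rules silenced (the file path and rule codes below are illustrative, not taken from the repo): replacing the blanket `# flake8: noqa` with a rule-scoped `# ruff: noqa` directive leaves ruff's import-sorting rule (`I001`) active. ``` # Hypothetical file: src/datasets/some_module.py # Before: a file-level "# flake8: noqa" suppresses *all* rules for this file, # so ruff never applies import sorting (I001) here. # flake8: noqa # After: suppress only the codes that actually need it; I001 stays enabled, # so `ruff check --select I --fix` can sort the imports again. # ruff: noqa: F401, E402 ```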
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6942/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6941
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6941/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6941/comments
https://api.github.com/repos/huggingface/datasets/issues/6941/events
https://github.com/huggingface/datasets/issues/6941
2,328,930,165
I_kwDODunzps6K0Kd1
6,941
Supporting FFCV: Fast Forward Computer Vision
{ "login": "Luciennnnnnn", "id": 20135317, "node_id": "MDQ6VXNlcjIwMTM1MzE3", "avatar_url": "https://avatars.githubusercontent.com/u/20135317?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Luciennnnnnn", "html_url": "https://github.com/Luciennnnnnn", "followers_url": "https://api.github.com/users/Luciennnnnnn/followers", "following_url": "https://api.github.com/users/Luciennnnnnn/following{/other_user}", "gists_url": "https://api.github.com/users/Luciennnnnnn/gists{/gist_id}", "starred_url": "https://api.github.com/users/Luciennnnnnn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Luciennnnnnn/subscriptions", "organizations_url": "https://api.github.com/users/Luciennnnnnn/orgs", "repos_url": "https://api.github.com/users/Luciennnnnnn/repos", "events_url": "https://api.github.com/users/Luciennnnnnn/events{/privacy}", "received_events_url": "https://api.github.com/users/Luciennnnnnn/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
"2024-06-01T05:34:52"
"2024-06-01T05:34:52"
null
NONE
null
null
null
### Feature request Supporting FFCV, https://github.com/libffcv/ffcv ### Motivation According to its benchmarks, FFCV seems to be the fastest image-loading method. ### Your contribution no
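For context, a rough sketch of what interop could look like, based on FFCV's documented writer API (`DatasetWriter`, `RGBImageField`, `IntField`, `from_indexed_dataset`); none of this is a supported `datasets` feature, and the column names assume a CIFAR-10-style dataset: ``` from datasets import load_dataset from ffcv.fields import IntField, RGBImageField from ffcv.writer import DatasetWriter hf_ds = load_dataset("cifar10", split="train") class TupleView: """FFCV's writer expects ds[i] to return a tuple of field values.""" def __init__(self, ds): self.ds = ds def __len__(self): return len(self.ds) def __getitem__(self, i): ex = self.ds[i] return ex["img"], ex["label"]  # column names depend on the dataset writer = DatasetWriter("train.beton", { "image": RGBImageField(max_resolution=256), "label": IntField(), }) writer.from_indexed_dataset(TupleView(hf_ds)) ```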
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6941/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6941/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6940
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6940/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6940/comments
https://api.github.com/repos/huggingface/datasets/issues/6940/events
https://github.com/huggingface/datasets/issues/6940
2,328,637,831
I_kwDODunzps6KzDGH
6,940
Enable Sharding to Equal Sized Shards
{ "login": "yuvalkirstain", "id": 57996478, "node_id": "MDQ6VXNlcjU3OTk2NDc4", "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuvalkirstain", "html_url": "https://github.com/yuvalkirstain", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
"2024-05-31T21:55:50"
"2024-06-01T07:34:12"
null
NONE
null
null
null
### Feature request Add an option, when sharding a dataset, to make all shards the same size. It would be good to support both strategies: padding by duplication and truncation. ### Motivation Currently the behavior of sharding is "If n % i == l, then the first l shards will have length (n // i) + 1, and the remaining shards will have length (n // i).". However, when using FSDP we want the shards to have the same size. This forces the user to handle the situation manually, but it would be nice to have an option to shard the dataset into equally sized shards. ### Your contribution For now just a PR. I can also add code that does what is needed, but probably not efficiently. Shard to equal size by duplication: ``` remainder = len(dataset) % num_shards num_missing_examples = num_shards - remainder duplicated = dataset.select(list(range(num_missing_examples))) dataset = concatenate_datasets([dataset, duplicated]) shard = dataset.shard(num_shards, shard_idx) ``` Or by truncation: ``` shard = dataset.shard(num_shards, shard_idx) num_examples_per_shard = len(dataset) // num_shards shard = shard.select(list(range(num_examples_per_shard))) ```
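A minimal sketch of such a helper built from the snippets above (`shard_equal` and its `strategy` argument are hypothetical names, not library API); it also guards the `remainder == 0` case, where the duplication snippet above would otherwise pad a full extra shard: ``` from datasets import Dataset, concatenate_datasets def shard_equal(dataset: Dataset, num_shards: int, index: int, strategy: str = "duplicate") -> Dataset: """Return shard `index` out of `num_shards`, with all shards equally sized.""" remainder = len(dataset) % num_shards if remainder and strategy == "duplicate": # Pad by re-selecting the first rows until the length divides evenly. missing = num_shards - remainder dataset = concatenate_datasets([dataset, dataset.select(range(missing))]) elif remainder and strategy == "truncate": # Drop the trailing remainder instead. dataset = dataset.select(range(len(dataset) - remainder)) return dataset.shard(num_shards, index) ```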
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6940/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6940/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6939
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6939/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6939/comments
https://api.github.com/repos/huggingface/datasets/issues/6939/events
https://github.com/huggingface/datasets/issues/6939
2,328,059,386
I_kwDODunzps6Kw136
6,939
ExpectedMoreSplits error when using data_dir
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2024-05-31T15:08:42"
"2024-05-31T17:10:39"
"2024-05-31T17:10:39"
MEMBER
null
null
null
As reported by @regisss, an `ExpectedMoreSplits` error is raised when passing `data_dir`: ```python from datasets import load_dataset dataset = load_dataset( "lvwerra/stack-exchange-paired", split="train", cache_dir=None, data_dir="data/rl", ) ``` ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2609, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1027, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1140, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/usr/local/lib/python3.10/dist-packages/datasets/utils/info_utils.py", line 92, in verify_splits raise ExpectedMoreSplits(str(set(expected_splits) - set(recorded_splits))) datasets.utils.info_utils.ExpectedMoreSplits: {'test'} ```
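Until the fix lands, one possible workaround (a sketch, with the caveat that it also disables the other integrity checks) is to skip split verification via the existing `verification_mode` argument: ``` from datasets import load_dataset # "no_checks" skips split/checksum verification entirely, so the # ExpectedMoreSplits check is never run. Use with care. dataset = load_dataset( "lvwerra/stack-exchange-paired", split="train", data_dir="data/rl", verification_mode="no_checks", ) ```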
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6939/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6939/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6938
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6938/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6938/comments
https://api.github.com/repos/huggingface/datasets/issues/6938/events
https://github.com/huggingface/datasets/pull/6938
2,327,568,281
PR_kwDODunzps5xHNKm
6,938
Fix expected splits when passing data_files or dir
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6938). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "fix is included in https://github.com/huggingface/datasets/pull/6925" ]
"2024-05-31T11:04:22"
"2024-05-31T15:28:03"
"2024-05-31T15:28:02"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6938", "html_url": "https://github.com/huggingface/datasets/pull/6938", "diff_url": "https://github.com/huggingface/datasets/pull/6938.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6938.patch", "merged_at": null }
reported on slack: The following code snippet gives an error with v2.19 but not with v2.18: ``` from datasets import load_dataset dataset = load_dataset( "lvwerra/stack-exchange-paired", split="train", cache_dir=None, data_dir="data/rl", ) ``` and the error is: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2609, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1027, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1140, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/usr/local/lib/python3.10/dist-packages/datasets/utils/info_utils.py", line 92, in verify_splits raise ExpectedMoreSplits(str(set(expected_splits) - set(recorded_splits))) datasets.utils.info_utils.ExpectedMoreSplits: {'test'} ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6938/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6938/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6937/comments
https://api.github.com/repos/huggingface/datasets/issues/6937/events
https://github.com/huggingface/datasets/issues/6937
2,327,212,611
I_kwDODunzps6KtnJD
6,937
JSON loader implicitly coerces floats to integers
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2024-05-31T08:09:12"
"2024-05-31T08:11:57"
null
MEMBER
null
null
null
The JSON loader implicitly coerces floats to integers. The column values `[0.0, 1.0, 2.0]` are coerced to `[0, 1, 2]`. See CI error in dataset-viewer: https://github.com/huggingface/dataset-viewer/actions/runs/9290164936/job/25576926446 ``` =================================== FAILURES =================================== ___________________________ test_statistics_endpoint ___________________________ normal_user_public_json_dataset = 'DVUser/tmp-dataset-17170199043860' def test_statistics_endpoint(normal_user_public_json_dataset: str) -> None: dataset = normal_user_public_json_dataset config, split = get_default_config_split() statistics_response = poll_until_ready_and_assert( relative_url=f"/statistics?dataset={dataset}&config={config}&split={split}", check_x_revision=True, dataset=dataset, ) content = statistics_response.json() assert len(content) == 3 assert sorted(content) == ["num_examples", "partial", "statistics"], statistics_response statistics = content["statistics"] num_examples = content["num_examples"] partial = content["partial"] assert isinstance(statistics, list), statistics assert len(statistics) == 6 assert num_examples == 4 assert partial is False string_label_column = statistics[0] assert "column_name" in string_label_column assert "column_statistics" in string_label_column assert "column_type" in string_label_column assert string_label_column["column_name"] == "col_1" assert string_label_column["column_type"] == "string_label" # 4 unique values -> label assert isinstance(string_label_column["column_statistics"], dict) assert string_label_column["column_statistics"] == { "nan_count": 0, "nan_proportion": 0.0, "no_label_count": 0, "no_label_proportion": 0.0, "n_unique": 4, "frequencies": { "There goes another one.": 1, "Vader turns round and round in circles as his ship spins into space.": 1, "We count thirty Rebel ships, Lord Vader.": 1, "The wingman spots the pirateship coming at him and warns the Dark Lord": 1, }, } int_column = statistics[1] assert "column_name" in int_column assert "column_statistics" in int_column assert "column_type" in int_column assert int_column["column_name"] == "col_2" assert int_column["column_type"] == "int" assert isinstance(int_column["column_statistics"], dict) assert int_column["column_statistics"] == { "histogram": {"bin_edges": [0, 1, 2, 3, 3], "hist": [1, 1, 1, 1]}, "max": 3, "mean": 1.5, "median": 1.5, "min": 0, "nan_count": 0, "nan_proportion": 0.0, "std": 1.29099, } float_column = statistics[2] assert "column_name" in float_column assert "column_statistics" in float_column assert "column_type" in float_column assert float_column["column_name"] == "col_3" > assert float_column["column_type"] == "float" E AssertionError: assert 'int' == 'float' E - float E + int tests/test_14_statistics.py:72: AssertionError =========================== short test summary info ============================ FAILED tests/test_14_statistics.py::test_statistics_endpoint - AssertionError: assert 'int' == 'float' - float + int ``` This bug was introduced after: - #6914 We have reported the issue to pandas: - https://github.com/pandas-dev/pandas/issues/58866
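A possible workaround sketch while the upstream pandas issue is open: pin the schema explicitly with `features`, so type inference never gets the chance to downcast whole-number floats (the column names below mirror the failing test): ``` from datasets import Features, Value, load_dataset features = Features({ "col_1": Value("string"), "col_2": Value("int64"), "col_3": Value("float64"),  # keeps [0.0, 1.0, 2.0] as floats }) ds = load_dataset("json", data_files="data.json", features=features) ```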
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6937/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6937/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6936
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6936/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6936/comments
https://api.github.com/repos/huggingface/datasets/issues/6936/events
https://github.com/huggingface/datasets/issues/6936
2,326,119,853
I_kwDODunzps6KpcWt
6,936
save_to_disk() freezes when saving on s3 bucket with multiprocessing
{ "login": "ycattan", "id": 54974949, "node_id": "MDQ6VXNlcjU0OTc0OTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/54974949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ycattan", "html_url": "https://github.com/ycattan", "followers_url": "https://api.github.com/users/ycattan/followers", "following_url": "https://api.github.com/users/ycattan/following{/other_user}", "gists_url": "https://api.github.com/users/ycattan/gists{/gist_id}", "starred_url": "https://api.github.com/users/ycattan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ycattan/subscriptions", "organizations_url": "https://api.github.com/users/ycattan/orgs", "repos_url": "https://api.github.com/users/ycattan/repos", "events_url": "https://api.github.com/users/ycattan/events{/privacy}", "received_events_url": "https://api.github.com/users/ycattan/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-05-30T16:48:39"
"2024-05-30T16:49:05"
null
NONE
null
null
null
### Describe the bug I'm trying to save a `Dataset` using the `save_to_disk()` function with: - `num_proc > 1` - `dataset_path` being an s3 bucket path e.g. "s3://{bucket_name}/{dataset_folder}/" The HF progress bar shows up but the saving does not seem to start. When using one processor only (`num_proc=1`), everything works fine. When saving the dataset on local disk (as opposed to an s3 bucket) with `num_proc > 1`, everything works fine. Thank you for your help! :) ### Steps to reproduce the bug I tried without any storage options: ``` from datasets import load_dataset sandbox_ds = load_dataset("openai_humaneval") sandbox_ds["test"].save_to_disk( "s3://bucket-name/test_multiprocessing_saving/", num_proc=4, ) ``` and with the specific s3fs storage options: ``` from datasets import load_dataset from s3fs import S3FileSystem def get_s3fs(): return S3FileSystem() sandbox_ds = load_dataset("openai_humaneval") sandbox_ds["test"].save_to_disk( "s3://bucket-name/test_multiprocessing_saving/", num_proc=4, storage_options=get_s3fs().storage_options, # also tried: storage_options=S3FileSystem().storage_options ) ``` I'm guessing I might be using the `storage_options` parameter wrongly, but I didn't find anything online that made it work. **NB**: Behavior is the same when trying to save the whole `DatasetDict`. ### Expected behavior The progress bar fills in and saving is carried out. ### Environment info `datasets==2.18.0`
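Until the root cause is found, a workaround sketch (the bucket name and paths are placeholders): do the multiprocessed save on local disk, where it works, and copy the result to S3 in a single pass with `s3fs`: ``` import s3fs from datasets import load_dataset ds = load_dataset("openai_humaneval") ds["test"].save_to_disk("/tmp/humaneval_test", num_proc=4)  # local multiprocessed save works # One-shot recursive upload of the saved dataset directory. fs = s3fs.S3FileSystem() fs.put("/tmp/humaneval_test", "s3://bucket-name/test_multiprocessing_saving/", recursive=True) ```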
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6936/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6936/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6935
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6935/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6935/comments
https://api.github.com/repos/huggingface/datasets/issues/6935/events
https://github.com/huggingface/datasets/issues/6935
2,325,612,022
I_kwDODunzps6KngX2
6,935
Support for pathlib.Path in datasets 2.19.0
{ "login": "lamyiowce", "id": 12202811, "node_id": "MDQ6VXNlcjEyMjAyODEx", "avatar_url": "https://avatars.githubusercontent.com/u/12202811?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lamyiowce", "html_url": "https://github.com/lamyiowce", "followers_url": "https://api.github.com/users/lamyiowce/followers", "following_url": "https://api.github.com/users/lamyiowce/following{/other_user}", "gists_url": "https://api.github.com/users/lamyiowce/gists{/gist_id}", "starred_url": "https://api.github.com/users/lamyiowce/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lamyiowce/subscriptions", "organizations_url": "https://api.github.com/users/lamyiowce/orgs", "repos_url": "https://api.github.com/users/lamyiowce/repos", "events_url": "https://api.github.com/users/lamyiowce/events{/privacy}", "received_events_url": "https://api.github.com/users/lamyiowce/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-05-30T12:53:36"
"2024-05-30T12:53:36"
null
NONE
null
null
null
### Describe the bug After the recent update of `datasets`, Dataset.save_to_disk does not accept a pathlib.Path anymore. It was supported in 2.18.0 and previous versions. Is this intentional? Was it supported before only because of a Python duck-typing miracle? ### Steps to reproduce the bug ``` from datasets import Dataset import pathlib path = pathlib.Path("./my_out_path") Dataset.from_dict( {"text": ["hello world"], "label": [777], "split": ["train"]} ).save_to_disk(path) ``` This results in an error when using datasets 2.19: ``` Traceback (most recent call last): File "<stdin>", line 3, in <module> File "/Users/jb/scratch/venv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 1515, in save_to_disk fs, _ = url_to_fs(dataset_path, **(storage_options or {})) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/jb/scratch/venv/lib/python3.11/site-packages/fsspec/core.py", line 383, in url_to_fs chain = _un_chain(url, kwargs) ^^^^^^^^^^^^^^^^^^^^^^ File "/Users/jb/scratch/venv/lib/python3.11/site-packages/fsspec/core.py", line 323, in _un_chain if "::" in path ^^^^^^^^^^^^ TypeError: argument of type 'PosixPath' is not iterable ``` Converting to str works, however. ``` Dataset.from_dict( {"text": ["hello world"], "label": [777], "split": ["train"]} ).save_to_disk(str(path)) ``` ### Expected behavior My dataset gets saved to disk without an error. ### Environment info aiohttp==3.9.5 aiosignal==1.3.1 attrs==23.2.0 certifi==2024.2.2 charset-normalizer==3.3.2 datasets==2.19.0 dill==0.3.8 filelock==3.14.0 frozenlist==1.4.1 fsspec==2024.3.1 huggingface-hub==0.23.2 idna==3.7 multidict==6.0.5 multiprocess==0.70.16 numpy==1.26.4 packaging==24.0 pandas==2.2.2 pyarrow==16.1.0 pyarrow-hotfix==0.6 python-dateutil==2.9.0.post0 pytz==2024.1 PyYAML==6.0.1 requests==2.32.3 six==1.16.0 tqdm==4.66.4 typing_extensions==4.12.0 tzdata==2024.1 urllib3==2.2.1 xxhash==3.4.1 yarl==1.9.4
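Besides `str(path)`, a slightly more general workaround sketch is `os.fspath`, which converts any `os.PathLike` object to a string and is a no-op on strings already: ``` import os import pathlib from datasets import Dataset path = pathlib.Path("./my_out_path") ds = Dataset.from_dict({"text": ["hello world"], "label": [777]}) ds.save_to_disk(os.fspath(path))  # os.fspath(PosixPath("my_out_path")) -> "my_out_path" ```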
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6935/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6935/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6934
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6934/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6934/comments
https://api.github.com/repos/huggingface/datasets/issues/6934/events
https://github.com/huggingface/datasets/pull/6934
2,325,341,717
PR_kwDODunzps5w_laB
6,934
Revert ci user
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6934). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005218 / 0.011353 (-0.006135) | 0.003313 / 0.011008 (-0.007695) | 0.062992 / 0.038508 (0.024484) | 0.029621 / 0.023109 (0.006512) | 0.244421 / 0.275898 (-0.031477) | 0.267178 / 0.323480 (-0.056302) | 0.002986 / 0.007986 (-0.005000) | 0.002607 / 0.004328 (-0.001721) | 0.049149 / 0.004250 (0.044898) | 0.045362 / 0.037052 (0.008310) | 0.252862 / 0.258489 (-0.005627) | 0.286326 / 0.293841 (-0.007515) | 0.027888 / 0.128546 (-0.100658) | 0.010295 / 0.075646 (-0.065352) | 0.205525 / 0.419271 (-0.213746) | 0.036696 / 0.043533 (-0.006837) | 0.248716 / 0.255139 (-0.006423) | 0.263803 / 0.283200 (-0.019397) | 0.016926 / 0.141683 (-0.124757) | 1.123093 / 1.452155 (-0.329062) | 1.155434 / 1.492716 (-0.337282) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092349 / 0.018006 (0.074343) | 0.298154 / 0.000490 (0.297664) | 0.000213 / 0.000200 (0.000013) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018496 / 0.037411 (-0.018915) | 0.061983 / 0.014526 (0.047457) | 0.075043 / 0.176557 (-0.101514) | 0.120678 / 0.737135 (-0.616457) | 0.074917 / 0.296338 (-0.221422) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290558 / 0.215209 (0.075349) | 2.842635 / 2.077655 (0.764981) | 1.485761 / 1.504120 (-0.018359) | 1.346948 / 1.541195 (-0.194247) | 1.352424 / 1.468490 (-0.116066) | 0.564567 / 4.584777 (-4.020210) | 2.393583 / 3.745712 (-1.352129) | 2.654061 / 5.269862 (-2.615800) | 1.729154 / 4.565676 (-2.836523) | 0.064652 / 0.424275 (-0.359623) | 0.004973 / 0.007607 (-0.002634) | 0.334924 / 0.226044 (0.108879) | 3.330518 / 2.268929 (1.061590) | 1.773848 / 55.444624 (-53.670776) | 1.513796 / 6.876477 (-5.362681) | 1.676492 / 2.142072 (-0.465580) | 0.650551 / 4.805227 (-4.154677) | 0.118423 / 6.500664 (-6.382241) | 0.042700 / 0.075469 (-0.032769) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.943394 / 1.841788 (-0.898394) | 11.235766 / 8.074308 (3.161458) | 9.896586 / 10.191392 (-0.294806) | 0.130174 / 0.680424 (-0.550249) | 0.014148 / 0.534201 (-0.520053) | 0.284002 / 0.579283 (-0.295281) | 0.261354 / 0.434364 (-0.173010) | 0.320839 / 0.540337 (-0.219499) | 0.422399 / 1.386936 (-0.964537) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005496 / 0.011353 (-0.005857) | 0.003603 / 0.011008 (-0.007406) | 0.050104 / 0.038508 (0.011596) | 0.032939 / 0.023109 (0.009830) | 0.265643 / 0.275898 (-0.010255) | 0.291819 / 0.323480 (-0.031661) | 0.004273 / 0.007986 (-0.003713) | 0.002715 / 0.004328 (-0.001613) | 0.049191 / 0.004250 (0.044941) | 0.040782 / 0.037052 (0.003730) | 0.276562 / 0.258489 (0.018072) | 0.314307 / 0.293841 (0.020466) | 0.029878 / 0.128546 (-0.098669) | 0.010134 / 0.075646 (-0.065513) | 0.058686 / 0.419271 (-0.360585) | 0.033562 / 0.043533 (-0.009971) | 0.265961 / 0.255139 (0.010822) | 0.282009 / 0.283200 (-0.001191) | 0.018956 / 0.141683 (-0.122727) | 1.149668 / 1.452155 (-0.302487) | 1.192242 / 1.492716 (-0.300474) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.089449 / 0.018006 (0.071443) | 0.300346 / 0.000490 (0.299856) | 0.000198 / 0.000200 (-0.000001) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022094 / 0.037411 (-0.015317) | 0.075987 / 0.014526 (0.061461) | 0.088191 / 0.176557 (-0.088365) | 0.127698 / 0.737135 (-0.609437) | 0.089642 / 0.296338 (-0.206696) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299127 / 0.215209 (0.083918) | 2.961219 / 2.077655 (0.883565) | 1.589108 / 1.504120 (0.084988) | 1.464060 / 1.541195 (-0.077135) | 1.475249 / 1.468490 (0.006759) | 0.569041 / 4.584777 (-4.015736) | 0.966965 / 3.745712 (-2.778747) | 2.653049 / 5.269862 (-2.616813) | 1.733650 / 4.565676 (-2.832026) | 0.062537 / 0.424275 (-0.361738) | 0.005003 / 0.007607 (-0.002605) | 0.353345 / 0.226044 (0.127301) | 3.432888 / 2.268929 (1.163960) | 1.953217 / 55.444624 (-53.491407) | 1.651995 / 6.876477 (-5.224482) | 1.764549 / 2.142072 (-0.377523) | 0.647255 / 4.805227 (-4.157973) | 0.116827 / 6.500664 (-6.383837) | 0.040765 / 0.075469 (-0.034704) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985490 / 1.841788 (-0.856298) | 11.965147 / 8.074308 (3.890839) | 10.488286 / 10.191392 (0.296894) | 0.142134 / 0.680424 (-0.538290) | 0.015415 / 0.534201 (-0.518786) | 0.289864 / 0.579283 (-0.289419) | 0.122778 / 0.434364 (-0.311586) | 0.328691 / 0.540337 (-0.211647) | 0.422677 / 1.386936 (-0.964259) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#456f790d2c2e9181bc305ab3d54fe2ca58742b9b \"CML watermark\")\n", "There was an incident in hub-ci that invalidated our token. It's been fixed so I reverted this change" ]
"2024-05-30T10:45:26"
"2024-05-31T10:25:08"
"2024-05-30T10:45:37"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6934", "html_url": "https://github.com/huggingface/datasets/pull/6934", "diff_url": "https://github.com/huggingface/datasets/pull/6934.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6934.patch", "merged_at": "2024-05-30T10:45:37" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6934/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6933
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6933/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6933/comments
https://api.github.com/repos/huggingface/datasets/issues/6933/events
https://github.com/huggingface/datasets/pull/6933
2,325,300,800
PR_kwDODunzps5w_cW4
6,933
update ci user
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6933). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004937 / 0.011353 (-0.006416) | 0.003706 / 0.011008 (-0.007302) | 0.062627 / 0.038508 (0.024119) | 0.031372 / 0.023109 (0.008263) | 0.246616 / 0.275898 (-0.029282) | 0.272196 / 0.323480 (-0.051284) | 0.004129 / 0.007986 (-0.003856) | 0.002766 / 0.004328 (-0.001562) | 0.049975 / 0.004250 (0.045725) | 0.045098 / 0.037052 (0.008046) | 0.261802 / 0.258489 (0.003313) | 0.290088 / 0.293841 (-0.003753) | 0.027082 / 0.128546 (-0.101465) | 0.010442 / 0.075646 (-0.065205) | 0.201795 / 0.419271 (-0.217477) | 0.037081 / 0.043533 (-0.006452) | 0.249500 / 0.255139 (-0.005639) | 0.268800 / 0.283200 (-0.014399) | 0.017556 / 0.141683 (-0.124127) | 1.137201 / 1.452155 (-0.314953) | 1.186993 / 1.492716 (-0.305723) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097426 / 0.018006 (0.079419) | 0.303653 / 0.000490 (0.303163) | 0.000235 / 0.000200 (0.000035) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020206 / 0.037411 (-0.017206) | 0.063673 / 0.014526 (0.049147) | 0.076173 / 0.176557 (-0.100383) | 0.122459 / 0.737135 (-0.614676) | 0.076958 / 0.296338 (-0.219380) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282146 / 0.215209 (0.066937) | 2.785682 / 2.077655 (0.708027) | 1.468847 / 1.504120 (-0.035273) | 1.346731 / 1.541195 (-0.194464) | 1.378459 / 1.468490 (-0.090031) | 0.564961 / 4.584777 (-4.019816) | 2.400095 / 3.745712 (-1.345617) | 2.658285 / 5.269862 (-2.611577) | 1.747873 / 4.565676 (-2.817803) | 0.063763 / 0.424275 (-0.360512) | 0.004969 / 0.007607 (-0.002638) | 0.337764 / 0.226044 (0.111720) | 3.309568 / 2.268929 (1.040639) | 1.812516 / 55.444624 (-53.632109) | 1.521519 / 6.876477 (-5.354957) | 1.690091 / 2.142072 (-0.451982) | 0.640922 / 4.805227 (-4.164305) | 0.119291 / 6.500664 (-6.381373) | 0.042195 / 0.075469 (-0.033274) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965327 / 1.841788 (-0.876461) | 11.538832 / 8.074308 (3.464523) | 9.594644 / 10.191392 (-0.596748) | 0.144687 / 0.680424 (-0.535737) | 0.014049 / 0.534201 (-0.520152) | 0.296873 / 0.579283 (-0.282410) | 0.269281 / 0.434364 (-0.165083) | 0.325091 / 0.540337 (-0.215246) | 0.420917 / 1.386936 (-0.966019) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005239 / 0.011353 (-0.006114) | 0.003168 / 0.011008 (-0.007840) | 0.049301 / 0.038508 (0.010793) | 0.032248 / 0.023109 (0.009139) | 0.266463 / 0.275898 (-0.009435) | 0.293311 / 0.323480 (-0.030168) | 0.004185 / 0.007986 (-0.003800) | 0.002681 / 0.004328 (-0.001647) | 0.048644 / 0.004250 (0.044393) | 0.040366 / 0.037052 (0.003314) | 0.280345 / 0.258489 (0.021856) | 0.312745 / 0.293841 (0.018904) | 0.029616 / 0.128546 (-0.098930) | 0.010001 / 0.075646 (-0.065646) | 0.057365 / 0.419271 (-0.361906) | 0.033189 / 0.043533 (-0.010344) | 0.267601 / 0.255139 (0.012462) | 0.285647 / 0.283200 (0.002448) | 0.017119 / 0.141683 (-0.124564) | 1.139776 / 1.452155 (-0.312378) | 1.172451 / 1.492716 (-0.320266) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095462 / 0.018006 (0.077455) | 0.303009 / 0.000490 (0.302519) | 0.000227 / 0.000200 (0.000027) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023026 / 0.037411 (-0.014385) | 0.077905 / 0.014526 (0.063380) | 0.087275 / 0.176557 (-0.089282) | 0.127355 / 0.737135 (-0.609780) | 0.088940 / 0.296338 (-0.207399) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298267 / 0.215209 (0.083058) | 2.894679 / 2.077655 (0.817024) | 1.568663 / 1.504120 (0.064543) | 1.438342 / 1.541195 (-0.102853) | 1.456110 / 1.468490 (-0.012380) | 0.556337 / 4.584777 (-4.028440) | 0.969795 / 3.745712 (-2.775917) | 2.667348 / 5.269862 (-2.602513) | 1.767169 / 4.565676 (-2.798507) | 0.060969 / 0.424275 (-0.363306) | 0.005009 / 0.007607 (-0.002598) | 0.343299 / 0.226044 (0.117255) | 3.396529 / 2.268929 (1.127601) | 1.889816 / 55.444624 (-53.554808) | 1.635077 / 6.876477 (-5.241400) | 1.795238 / 2.142072 (-0.346835) | 0.631876 / 4.805227 (-4.173352) | 0.115483 / 6.500664 (-6.385181) | 0.041772 / 0.075469 (-0.033697) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.008423 / 1.841788 (-0.833364) | 12.432488 / 8.074308 (4.358180) | 10.418002 / 10.191392 (0.226610) | 0.142395 / 0.680424 (-0.538029) | 0.015718 / 0.534201 (-0.518483) | 0.281917 / 0.579283 (-0.297366) | 0.132619 / 0.434364 (-0.301745) | 0.318500 / 0.540337 (-0.221838) | 0.410798 / 1.386936 (-0.976138) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3d6cd158d2e3bb9030fea7c5a9580b9d34d721ac \"CML watermark\")\n" ]
"2024-05-30T10:23:02"
"2024-05-30T10:30:54"
"2024-05-30T10:23:12"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6933", "html_url": "https://github.com/huggingface/datasets/pull/6933", "diff_url": "https://github.com/huggingface/datasets/pull/6933.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6933.patch", "merged_at": "2024-05-30T10:23:12" }
The token is OK to be public since it's only for the hub-ci.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6933/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6933/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6932
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6932/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6932/comments
https://api.github.com/repos/huggingface/datasets/issues/6932/events
https://github.com/huggingface/datasets/pull/6932
2,324,729,267
PR_kwDODunzps5w9d7w
6,932
Update dataset_dict.py
{ "login": "Arunprakash-A", "id": 20263729, "node_id": "MDQ6VXNlcjIwMjYzNzI5", "avatar_url": "https://avatars.githubusercontent.com/u/20263729?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Arunprakash-A", "html_url": "https://github.com/Arunprakash-A", "followers_url": "https://api.github.com/users/Arunprakash-A/followers", "following_url": "https://api.github.com/users/Arunprakash-A/following{/other_user}", "gists_url": "https://api.github.com/users/Arunprakash-A/gists{/gist_id}", "starred_url": "https://api.github.com/users/Arunprakash-A/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Arunprakash-A/subscriptions", "organizations_url": "https://api.github.com/users/Arunprakash-A/orgs", "repos_url": "https://api.github.com/users/Arunprakash-A/repos", "events_url": "https://api.github.com/users/Arunprakash-A/events{/privacy}", "received_events_url": "https://api.github.com/users/Arunprakash-A/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "thanks !", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005050 / 0.011353 (-0.006303) | 0.003786 / 0.011008 (-0.007222) | 0.062406 / 0.038508 (0.023898) | 0.029459 / 0.023109 (0.006349) | 0.262388 / 0.275898 (-0.013510) | 0.274119 / 0.323480 (-0.049361) | 0.004085 / 0.007986 (-0.003901) | 0.002754 / 0.004328 (-0.001574) | 0.048779 / 0.004250 (0.044529) | 0.046187 / 0.037052 (0.009135) | 0.263513 / 0.258489 (0.005024) | 0.294260 / 0.293841 (0.000419) | 0.027391 / 0.128546 (-0.101155) | 0.010567 / 0.075646 (-0.065080) | 0.200225 / 0.419271 (-0.219046) | 0.036165 / 0.043533 (-0.007367) | 0.251757 / 0.255139 (-0.003382) | 0.268271 / 0.283200 (-0.014928) | 0.018446 / 0.141683 (-0.123237) | 1.125787 / 1.452155 (-0.326368) | 1.163172 / 1.492716 (-0.329544) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004428 / 0.018006 (-0.013578) | 0.301730 / 0.000490 (0.301241) | 0.000215 / 0.000200 (0.000015) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019424 / 0.037411 (-0.017987) | 0.062269 / 0.014526 (0.047743) | 0.074289 / 0.176557 (-0.102268) | 0.121069 / 0.737135 (-0.616067) | 0.076485 / 0.296338 (-0.219853) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277315 / 0.215209 (0.062106) | 2.742027 / 2.077655 (0.664372) | 1.472970 / 1.504120 (-0.031150) | 1.350065 / 1.541195 (-0.191130) 
| 1.378806 / 1.468490 (-0.089684) | 0.567742 / 4.584777 (-4.017035) | 2.376752 / 3.745712 (-1.368960) | 2.662459 / 5.269862 (-2.607402) | 1.750396 / 4.565676 (-2.815280) | 0.063589 / 0.424275 (-0.360686) | 0.004987 / 0.007607 (-0.002620) | 0.326441 / 0.226044 (0.100397) | 3.224125 / 2.268929 (0.955197) | 1.801623 / 55.444624 (-53.643001) | 1.534712 / 6.876477 (-5.341765) | 1.652365 / 2.142072 (-0.489708) | 0.647624 / 4.805227 (-4.157603) | 0.117161 / 6.500664 (-6.383504) | 0.041908 / 0.075469 (-0.033561) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.954879 / 1.841788 (-0.886909) | 11.571875 / 8.074308 (3.497567) | 9.489146 / 10.191392 (-0.702246) | 0.141630 / 0.680424 (-0.538794) | 0.014764 / 0.534201 (-0.519437) | 0.285003 / 0.579283 (-0.294280) | 0.266138 / 0.434364 (-0.168226) | 0.323527 / 0.540337 (-0.216810) | 0.419658 / 1.386936 (-0.967278) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005359 / 0.011353 (-0.005994) | 0.003615 / 0.011008 (-0.007393) | 0.050692 / 0.038508 (0.012184) | 0.033632 / 0.023109 (0.010522) | 0.273614 / 0.275898 (-0.002284) | 0.303780 / 0.323480 (-0.019700) | 0.004171 / 0.007986 (-0.003814) | 0.002687 / 0.004328 (-0.001642) | 0.050002 / 0.004250 (0.045751) | 0.040824 / 0.037052 (0.003772) | 0.287759 / 0.258489 (0.029270) | 0.324144 / 0.293841 (0.030303) | 0.029101 / 0.128546 (-0.099445) | 0.010244 / 0.075646 (-0.065402) | 0.059599 / 0.419271 (-0.359672) | 0.033146 / 0.043533 (-0.010387) | 0.276592 / 0.255139 (0.021453) | 0.293670 / 0.283200 (0.010470) | 0.018270 / 0.141683 (-0.123413) | 1.126216 / 1.452155 (-0.325939) | 1.155658 / 1.492716 (-0.337058) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093537 / 0.018006 (0.075530) | 0.302706 / 0.000490 (0.302216) | 0.000216 / 0.000200 (0.000016) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | 
sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023118 / 0.037411 (-0.014293) | 0.076995 / 0.014526 (0.062469) | 0.089476 / 0.176557 (-0.087080) | 0.130705 / 0.737135 (-0.606430) | 0.090258 / 0.296338 (-0.206081) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285920 / 0.215209 (0.070710) | 2.830581 / 2.077655 (0.752927) | 1.561695 / 1.504120 (0.057575) | 1.522791 / 1.541195 (-0.018403) | 1.429875 / 1.468490 (-0.038615) | 0.566683 / 4.584777 (-4.018094) | 0.957157 / 3.745712 (-2.788555) | 2.663718 / 5.269862 (-2.606143) | 1.748885 / 4.565676 (-2.816791) | 0.063697 / 0.424275 (-0.360578) | 0.004996 / 0.007607 (-0.002611) | 0.340042 / 0.226044 (0.113998) | 3.352792 / 2.268929 (1.083863) | 1.907189 / 55.444624 (-53.537435) | 1.608177 / 6.876477 (-5.268300) | 1.775438 / 2.142072 (-0.366634) | 0.645264 / 4.805227 (-4.159963) | 0.116441 / 6.500664 (-6.384223) | 0.040671 / 0.075469 (-0.034798) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005050 / 1.841788 (-0.836738) | 12.040057 / 8.074308 (3.965749) | 10.213560 / 10.191392 (0.022168) | 0.138383 / 0.680424 (-0.542041) | 0.015409 / 0.534201 (-0.518792) | 0.283509 / 0.579283 (-0.295774) | 0.125501 / 0.434364 (-0.308863) | 0.318816 / 0.540337 (-0.221521) | 0.415454 / 1.386936 (-0.971482) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cbb29cea0e21dc0eb8f7de01d0c6ed5718d6ce4e \"CML watermark\")\n" ]
"2024-05-30T05:22:35"
"2024-06-04T12:56:20"
"2024-06-04T12:50:13"
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6932", "html_url": "https://github.com/huggingface/datasets/pull/6932", "diff_url": "https://github.com/huggingface/datasets/pull/6932.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6932.patch", "merged_at": "2024-06-04T12:50:13" }
`shape` returns (number of rows, number of columns).
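To illustrate the corrected docstring, here is a minimal sketch of `shape` on a `DatasetDict`; the `rotten_tomatoes` dataset and the exact row counts are purely illustrative:

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes")  # a DatasetDict with train/validation/test splits

# DatasetDict.shape maps each split name to (number of rows, number of columns)
print(ds.shape)
# e.g. {'train': (8530, 2), 'validation': (1066, 2), 'test': (1066, 2)}

print(ds["train"].shape)  # a single Dataset returns one (rows, columns) tuple
```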
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6932/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6932/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6931
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6931/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6931/comments
https://api.github.com/repos/huggingface/datasets/issues/6931/events
https://github.com/huggingface/datasets/pull/6931
2,323,457,525
PR_kwDODunzps5w5I-Y
6,931
[WebDataset] Support compressed files
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6931). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005362 / 0.011353 (-0.005991) | 0.003969 / 0.011008 (-0.007039) | 0.063390 / 0.038508 (0.024882) | 0.030814 / 0.023109 (0.007705) | 0.246891 / 0.275898 (-0.029007) | 0.271047 / 0.323480 (-0.052432) | 0.004036 / 0.007986 (-0.003950) | 0.002732 / 0.004328 (-0.001597) | 0.049466 / 0.004250 (0.045216) | 0.047227 / 0.037052 (0.010175) | 0.255978 / 0.258489 (-0.002511) | 0.297956 / 0.293841 (0.004115) | 0.028641 / 0.128546 (-0.099905) | 0.010510 / 0.075646 (-0.065136) | 0.204268 / 0.419271 (-0.215004) | 0.037093 / 0.043533 (-0.006440) | 0.247287 / 0.255139 (-0.007852) | 0.263830 / 0.283200 (-0.019370) | 0.018335 / 0.141683 (-0.123348) | 1.116074 / 1.452155 (-0.336081) | 1.182589 / 1.492716 (-0.310128) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094435 / 0.018006 (0.076429) | 0.310422 / 0.000490 (0.309932) | 0.000215 / 0.000200 (0.000015) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019220 / 0.037411 (-0.018192) | 0.062090 / 0.014526 (0.047564) | 0.074511 / 0.176557 (-0.102046) | 0.121825 / 0.737135 (-0.615310) | 0.075406 / 0.296338 (-0.220933) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281185 / 0.215209 (0.065976) | 2.770157 / 2.077655 (0.692502) | 1.472095 / 1.504120 (-0.032025) | 1.339342 / 1.541195 (-0.201853) | 1.374621 / 1.468490 (-0.093869) | 0.566607 / 4.584777 (-4.018170) | 2.357642 / 3.745712 (-1.388070) | 2.735034 / 5.269862 (-2.534827) | 1.782779 / 4.565676 (-2.782897) | 0.063046 / 0.424275 (-0.361229) | 0.005015 / 0.007607 (-0.002592) | 0.336690 / 0.226044 (0.110646) | 3.360955 / 2.268929 (1.092027) | 1.804424 / 55.444624 (-53.640200) | 1.517334 / 6.876477 (-5.359143) | 1.665254 / 2.142072 (-0.476818) | 0.627185 / 4.805227 (-4.178042) | 0.114388 / 6.500664 (-6.386276) | 0.041788 / 0.075469 (-0.033681) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975270 / 1.841788 (-0.866517) | 11.647633 / 8.074308 (3.573325) | 9.872873 / 10.191392 (-0.318519) | 0.141744 / 0.680424 (-0.538680) | 0.014524 / 0.534201 (-0.519677) | 0.286697 / 0.579283 (-0.292586) | 0.266837 / 0.434364 (-0.167527) | 0.328513 / 0.540337 (-0.211825) | 0.424676 / 1.386936 (-0.962260) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005654 / 0.011353 (-0.005699) | 0.004058 / 0.011008 (-0.006950) | 0.051030 / 0.038508 (0.012522) | 0.033085 / 0.023109 (0.009976) | 0.307532 / 0.275898 (0.031634) | 0.335672 / 0.323480 (0.012192) | 0.004244 / 0.007986 (-0.003742) | 0.002842 / 0.004328 (-0.001487) | 0.050131 / 0.004250 (0.045880) | 0.040709 / 0.037052 (0.003656) | 0.319514 / 0.258489 (0.061025) | 0.357153 / 0.293841 (0.063312) | 0.029014 / 0.128546 (-0.099532) | 0.010999 / 0.075646 (-0.064648) | 0.058789 / 0.419271 (-0.360482) | 0.033284 / 0.043533 (-0.010249) | 0.310783 / 0.255139 (0.055644) | 0.331466 / 0.283200 (0.048266) | 0.018998 / 0.141683 (-0.122685) | 1.138822 / 1.452155 (-0.313332) | 1.180731 / 1.492716 (-0.311985) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095725 / 0.018006 (0.077719) | 0.302788 / 0.000490 (0.302298) | 0.000206 / 0.000200 (0.000006) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023247 / 0.037411 (-0.014164) | 0.077619 / 0.014526 (0.063093) | 0.090489 / 0.176557 (-0.086067) | 0.132033 / 0.737135 (-0.605102) | 0.090964 / 0.296338 (-0.205374) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297912 / 0.215209 (0.082703) | 2.954107 / 2.077655 (0.876452) | 1.591155 / 1.504120 (0.087035) | 1.469217 / 1.541195 (-0.071978) | 1.513315 / 1.468490 (0.044825) | 0.562728 / 4.584777 (-4.022049) | 0.960093 / 3.745712 (-2.785620) | 2.852106 / 5.269862 (-2.417756) | 1.861668 / 4.565676 (-2.704009) | 0.063530 / 0.424275 (-0.360745) | 0.005194 / 0.007607 (-0.002413) | 0.351116 / 0.226044 (0.125072) | 3.498787 / 2.268929 (1.229859) | 1.952223 / 55.444624 (-53.492401) | 1.696208 / 6.876477 (-5.180269) | 1.861650 / 2.142072 (-0.280422) | 0.653494 / 4.805227 (-4.151733) | 0.123797 / 6.500664 (-6.376868) | 0.042696 / 0.075469 (-0.032773) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006657 / 1.841788 (-0.835131) | 12.659771 / 8.074308 (4.585463) | 10.672140 / 10.191392 (0.480748) | 0.143726 / 0.680424 (-0.536698) | 0.015895 / 0.534201 (-0.518306) | 0.285952 / 0.579283 (-0.293331) | 0.126078 / 0.434364 (-0.308286) | 0.325943 / 0.540337 (-0.214395) | 0.410774 / 1.386936 (-0.976162) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#88d53d1ae762bec6736fffb000e6540e52bf1998 \"CML watermark\")\n" ]
"2024-05-29T14:19:06"
"2024-05-29T16:33:18"
"2024-05-29T16:24:21"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6931", "html_url": "https://github.com/huggingface/datasets/pull/6931", "diff_url": "https://github.com/huggingface/datasets/pull/6931.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6931.patch", "merged_at": "2024-05-29T16:24:21" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6931/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6931/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6930
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6930/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6930/comments
https://api.github.com/repos/huggingface/datasets/issues/6930/events
https://github.com/huggingface/datasets/issues/6930
2,323,225,922
I_kwDODunzps6KeZ1C
6,930
ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})}
{ "login": "CLL112", "id": 41767521, "node_id": "MDQ6VXNlcjQxNzY3NTIx", "avatar_url": "https://avatars.githubusercontent.com/u/41767521?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CLL112", "html_url": "https://github.com/CLL112", "followers_url": "https://api.github.com/users/CLL112/followers", "following_url": "https://api.github.com/users/CLL112/following{/other_user}", "gists_url": "https://api.github.com/users/CLL112/gists{/gist_id}", "starred_url": "https://api.github.com/users/CLL112/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CLL112/subscriptions", "organizations_url": "https://api.github.com/users/CLL112/orgs", "repos_url": "https://api.github.com/users/CLL112/repos", "events_url": "https://api.github.com/users/CLL112/events{/privacy}", "received_events_url": "https://api.github.com/users/CLL112/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-05-29T12:40:05"
"2024-05-29T12:40:05"
null
NONE
null
null
null
### Describe the bug

When I run `en = load_dataset("allenai/c4", "en", streaming=True)`, I encounter an error:

```
raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}")
ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})}
```

However, running `dataset = load_dataset('allenai/c4', streaming=True, data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation')` works fine. What is the issue here?

### Steps to reproduce the bug

Run:

```python
import os
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'

from datasets import load_dataset

en = load_dataset("allenai/c4", "en", streaming=True)
```

### Expected behavior

The dataset loads successfully.

### Environment info

- `datasets` version: 2.18.0
- Platform: Linux-6.5.0-28-generic-x86_64-with-glibc2.17
- Python version: 3.8.19
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.0.3
- `fsspec` version: 2024.2.0
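A hedged sketch of the workaround hinted at in the report: passing explicit `data_files` for every split side-steps the cross-split format inference that fails above. The shard names below are assumptions based on the c4 file layout, not values confirmed by the report:

```python
from datasets import load_dataset

# Spell out the data files per split so the loader doesn't have to infer
# a common format across all splits (shard names are illustrative).
c4 = load_dataset(
    "allenai/c4",
    streaming=True,
    data_files={
        "train": "en/c4-train.00000-of-01024.json.gz",
        "validation": "en/c4-validation.00000-of-00008.json.gz",
    },
)
row = next(iter(c4["train"]))  # streaming datasets are consumed iteratively
```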
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6930/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6930/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6929
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6929/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6929/comments
https://api.github.com/repos/huggingface/datasets/issues/6929/events
https://github.com/huggingface/datasets/issues/6929
2,322,980,077
I_kwDODunzps6Kddzt
6,929
Avoid downloading the whole dataset when only README.md has been touched on the Hub.
{ "login": "zinc75", "id": 73740254, "node_id": "MDQ6VXNlcjczNzQwMjU0", "avatar_url": "https://avatars.githubusercontent.com/u/73740254?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zinc75", "html_url": "https://github.com/zinc75", "followers_url": "https://api.github.com/users/zinc75/followers", "following_url": "https://api.github.com/users/zinc75/following{/other_user}", "gists_url": "https://api.github.com/users/zinc75/gists{/gist_id}", "starred_url": "https://api.github.com/users/zinc75/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zinc75/subscriptions", "organizations_url": "https://api.github.com/users/zinc75/orgs", "repos_url": "https://api.github.com/users/zinc75/repos", "events_url": "https://api.github.com/users/zinc75/events{/privacy}", "received_events_url": "https://api.github.com/users/zinc75/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "you're right, we're tackling this here: https://github.com/huggingface/dataset-viewer/issues/2757", "@severo : great !" ]
"2024-05-29T10:36:06"
"2024-05-29T20:51:56"
null
NONE
null
null
null
### Feature request

`datasets.load_dataset()` triggers a new download of the **whole dataset** when the README.md file has been touched on the Hugging Face Hub, even if the data files / Parquet files are exactly the same. I think the current behaviour is triggered whenever the hash of the latest commit on the Hub changes, but is there a clever way to download the dataset again **if and only if** the data itself has been modified?

### Motivation

The current behaviour is a waste of network bandwidth, disk space, and research time.

### Your contribution

I don't have time to submit a PR, but I hope a simple solution will emerge from this issue!
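As a stopgap for the behaviour requested above, one hedged sketch is to lean on `huggingface_hub`'s per-file, etag-based cache rather than re-resolving the whole dataset: unchanged data files are served from the local cache even when README.md was touched. The repo id and the `.parquet` filter are placeholders:

```python
from huggingface_hub import HfApi, hf_hub_download

repo_id = "user/some-dataset"  # placeholder repo id

api = HfApi()
for filename in api.list_repo_files(repo_id, repo_type="dataset"):
    if filename.endswith(".parquet"):
        # hf_hub_download skips the transfer when the cached file's etag still
        # matches the one on the Hub, so a README-only commit does not trigger
        # a re-download of the data files themselves.
        local_path = hf_hub_download(repo_id, filename, repo_type="dataset")
```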
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6929/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/6929/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6928
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6928/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6928/comments
https://api.github.com/repos/huggingface/datasets/issues/6928/events
https://github.com/huggingface/datasets/pull/6928
2,322,267,727
PR_kwDODunzps5w1ECb
6,928
Update process.mdx: Code Listings Fixes
{ "login": "FadyMorris", "id": 16918280, "node_id": "MDQ6VXNlcjE2OTE4Mjgw", "avatar_url": "https://avatars.githubusercontent.com/u/16918280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FadyMorris", "html_url": "https://github.com/FadyMorris", "followers_url": "https://api.github.com/users/FadyMorris/followers", "following_url": "https://api.github.com/users/FadyMorris/following{/other_user}", "gists_url": "https://api.github.com/users/FadyMorris/gists{/gist_id}", "starred_url": "https://api.github.com/users/FadyMorris/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FadyMorris/subscriptions", "organizations_url": "https://api.github.com/users/FadyMorris/orgs", "repos_url": "https://api.github.com/users/FadyMorris/repos", "events_url": "https://api.github.com/users/FadyMorris/events{/privacy}", "received_events_url": "https://api.github.com/users/FadyMorris/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005062 / 0.011353 (-0.006291) | 0.003410 / 0.011008 (-0.007598) | 0.062241 / 0.038508 (0.023733) | 0.030294 / 0.023109 (0.007185) | 0.249249 / 0.275898 (-0.026649) | 0.267718 / 0.323480 (-0.055761) | 0.003047 / 0.007986 (-0.004938) | 0.002661 / 0.004328 (-0.001668) | 0.049142 / 0.004250 (0.044892) | 0.047929 / 0.037052 (0.010877) | 0.255262 / 0.258489 (-0.003227) | 0.286241 / 0.293841 (-0.007600) | 0.027064 / 0.128546 (-0.101482) | 0.010374 / 0.075646 (-0.065273) | 0.201454 / 0.419271 (-0.217818) | 0.036586 / 0.043533 (-0.006947) | 0.255200 / 0.255139 (0.000061) | 0.267660 / 0.283200 (-0.015539) | 0.018621 / 0.141683 (-0.123062) | 1.159821 / 1.452155 (-0.292334) | 1.171597 / 1.492716 (-0.321120) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004752 / 0.018006 (-0.013254) | 0.295427 / 0.000490 (0.294937) | 0.000225 / 0.000200 (0.000025) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018914 / 0.037411 (-0.018497) | 0.061180 / 0.014526 (0.046654) | 0.073649 / 0.176557 (-0.102907) | 0.120142 / 0.737135 (-0.616993) | 0.074754 / 0.296338 (-0.221585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286637 / 0.215209 (0.071428) | 2.807941 / 2.077655 (0.730287) | 1.473577 / 1.504120 (-0.030542) | 1.353112 / 1.541195 (-0.188083) | 1.363020 
/ 1.468490 (-0.105470) | 0.567745 / 4.584777 (-4.017032) | 2.384887 / 3.745712 (-1.360826) | 2.685132 / 5.269862 (-2.584730) | 1.755922 / 4.565676 (-2.809755) | 0.062296 / 0.424275 (-0.361979) | 0.004941 / 0.007607 (-0.002666) | 0.346752 / 0.226044 (0.120707) | 3.378623 / 2.268929 (1.109694) | 1.809070 / 55.444624 (-53.635555) | 1.531490 / 6.876477 (-5.344986) | 1.687954 / 2.142072 (-0.454119) | 0.639917 / 4.805227 (-4.165310) | 0.118455 / 6.500664 (-6.382209) | 0.043072 / 0.075469 (-0.032397) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977154 / 1.841788 (-0.864634) | 11.380127 / 8.074308 (3.305819) | 9.621632 / 10.191392 (-0.569760) | 0.141768 / 0.680424 (-0.538655) | 0.014120 / 0.534201 (-0.520081) | 0.285073 / 0.579283 (-0.294210) | 0.264801 / 0.434364 (-0.169563) | 0.322357 / 0.540337 (-0.217981) | 0.431192 / 1.386936 (-0.955744) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005162 / 0.011353 (-0.006191) | 0.003499 / 0.011008 (-0.007509) | 0.049667 / 0.038508 (0.011159) | 0.032473 / 0.023109 (0.009363) | 0.259988 / 0.275898 (-0.015910) | 0.285723 / 0.323480 (-0.037757) | 0.004197 / 0.007986 (-0.003789) | 0.002710 / 0.004328 (-0.001618) | 0.049235 / 0.004250 (0.044984) | 0.040440 / 0.037052 (0.003387) | 0.276791 / 0.258489 (0.018302) | 0.311990 / 0.293841 (0.018149) | 0.029217 / 0.128546 (-0.099329) | 0.010217 / 0.075646 (-0.065429) | 0.057844 / 0.419271 (-0.361427) | 0.032799 / 0.043533 (-0.010734) | 0.260705 / 0.255139 (0.005566) | 0.280439 / 0.283200 (-0.002761) | 0.018682 / 0.141683 (-0.123001) | 1.135946 / 1.452155 (-0.316208) | 1.163144 / 1.492716 (-0.329572) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097968 / 0.018006 (0.079961) | 0.309276 / 0.000490 (0.308786) | 0.000214 / 0.000200 (0.000014) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022623 / 0.037411 (-0.014788) | 0.075471 / 0.014526 (0.060945) | 0.087928 / 0.176557 (-0.088629) | 0.129537 / 0.737135 (-0.607599) | 0.089376 / 0.296338 (-0.206963) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298223 / 0.215209 (0.083014) | 2.940462 / 2.077655 (0.862807) | 1.586024 / 1.504120 (0.081904) | 1.451161 / 1.541195 (-0.090034) | 1.457707 / 1.468490 (-0.010783) | 0.571172 / 4.584777 (-4.013604) | 0.961591 / 3.745712 (-2.784121) | 2.661258 / 5.269862 (-2.608604) | 1.755172 / 4.565676 (-2.810504) | 0.063430 / 0.424275 (-0.360845) | 0.005034 / 0.007607 (-0.002573) | 0.352356 / 0.226044 (0.126312) | 3.454986 / 2.268929 (1.186057) | 1.967375 / 55.444624 (-53.477249) | 1.638465 / 6.876477 (-5.238012) | 1.774098 / 2.142072 (-0.367975) | 0.650094 / 4.805227 (-4.155134) | 0.117377 / 6.500664 (-6.383287) | 0.041229 / 0.075469 (-0.034240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.014356 / 1.841788 (-0.827432) | 12.175823 / 8.074308 (4.101515) | 10.657486 / 10.191392 (0.466094) | 0.145080 / 0.680424 (-0.535344) | 0.015563 / 0.534201 (-0.518638) | 0.287093 / 0.579283 (-0.292190) | 0.127164 / 0.434364 (-0.307200) | 0.318518 / 0.540337 (-0.221820) | 0.415333 / 1.386936 (-0.971603) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#372078f617d9210c7f073c22f5f6f4fbee52c67f \"CML watermark\")\n" ]
"2024-05-29T03:17:07"
"2024-06-04T13:08:19"
"2024-06-04T12:55:00"
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6928", "html_url": "https://github.com/huggingface/datasets/pull/6928", "diff_url": "https://github.com/huggingface/datasets/pull/6928.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6928.patch", "merged_at": "2024-06-04T12:55:00" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6928/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6928/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6927
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6927/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6927/comments
https://api.github.com/repos/huggingface/datasets/issues/6927/events
https://github.com/huggingface/datasets/pull/6927
2,322,260,725
PR_kwDODunzps5w1CmF
6,927
Update process.mdx: Minor Code Listings Updates and Fixes
{ "login": "FadyMorris", "id": 16918280, "node_id": "MDQ6VXNlcjE2OTE4Mjgw", "avatar_url": "https://avatars.githubusercontent.com/u/16918280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FadyMorris", "html_url": "https://github.com/FadyMorris", "followers_url": "https://api.github.com/users/FadyMorris/followers", "following_url": "https://api.github.com/users/FadyMorris/following{/other_user}", "gists_url": "https://api.github.com/users/FadyMorris/gists{/gist_id}", "starred_url": "https://api.github.com/users/FadyMorris/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FadyMorris/subscriptions", "organizations_url": "https://api.github.com/users/FadyMorris/orgs", "repos_url": "https://api.github.com/users/FadyMorris/repos", "events_url": "https://api.github.com/users/FadyMorris/events{/privacy}", "received_events_url": "https://api.github.com/users/FadyMorris/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2024-05-29T03:09:01"
"2024-05-29T03:12:46"
"2024-05-29T03:12:46"
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6927", "html_url": "https://github.com/huggingface/datasets/pull/6927", "diff_url": "https://github.com/huggingface/datasets/pull/6927.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6927.patch", "merged_at": null }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6927/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6926
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6926/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6926/comments
https://api.github.com/repos/huggingface/datasets/issues/6926/events
https://github.com/huggingface/datasets/pull/6926
2,322,164,287
PR_kwDODunzps5w0uII
6,926
Update process.mdx: Fix code listing in Shard section
{ "login": "FadyMorris", "id": 16918280, "node_id": "MDQ6VXNlcjE2OTE4Mjgw", "avatar_url": "https://avatars.githubusercontent.com/u/16918280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FadyMorris", "html_url": "https://github.com/FadyMorris", "followers_url": "https://api.github.com/users/FadyMorris/followers", "following_url": "https://api.github.com/users/FadyMorris/following{/other_user}", "gists_url": "https://api.github.com/users/FadyMorris/gists{/gist_id}", "starred_url": "https://api.github.com/users/FadyMorris/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FadyMorris/subscriptions", "organizations_url": "https://api.github.com/users/FadyMorris/orgs", "repos_url": "https://api.github.com/users/FadyMorris/repos", "events_url": "https://api.github.com/users/FadyMorris/events{/privacy}", "received_events_url": "https://api.github.com/users/FadyMorris/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2024-05-29T01:25:55"
"2024-05-29T03:11:20"
"2024-05-29T03:11:08"
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6926", "html_url": "https://github.com/huggingface/datasets/pull/6926", "diff_url": "https://github.com/huggingface/datasets/pull/6926.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6926.patch", "merged_at": null }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6926/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6926/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6925
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6925/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6925/comments
https://api.github.com/repos/huggingface/datasets/issues/6925/events
https://github.com/huggingface/datasets/pull/6925
2,321,084,967
PR_kwDODunzps5wxDRE
6,925
Fix NonMatchingSplitsSizesError/ExpectedMoreSplits when passing data_dir/data_files in no-code Hub datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6925). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Do you think this is worth making a patch release for?\r\nCC: @huggingface/datasets", "I will add some regression tests before merging.\r\n\r\nAnd I will make a patch release afterwards.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004959 / 0.011353 (-0.006394) | 0.003654 / 0.011008 (-0.007354) | 0.064087 / 0.038508 (0.025579) | 0.031942 / 0.023109 (0.008833) | 0.236830 / 0.275898 (-0.039068) | 0.265359 / 0.323480 (-0.058121) | 0.003108 / 0.007986 (-0.004878) | 0.002824 / 0.004328 (-0.001504) | 0.049102 / 0.004250 (0.044852) | 0.046070 / 0.037052 (0.009017) | 0.248830 / 0.258489 (-0.009659) | 0.283900 / 0.293841 (-0.009941) | 0.027799 / 0.128546 (-0.100747) | 0.010572 / 0.075646 (-0.065074) | 0.223595 / 0.419271 (-0.195677) | 0.036951 / 0.043533 (-0.006582) | 0.238813 / 0.255139 (-0.016326) | 0.253841 / 0.283200 (-0.029359) | 0.018471 / 0.141683 (-0.123212) | 1.131969 / 1.452155 (-0.320186) | 1.173763 / 1.492716 (-0.318954) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095504 / 0.018006 (0.077498) | 0.301469 / 0.000490 (0.300979) | 0.000212 / 0.000200 (0.000012) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019194 / 0.037411 (-0.018217) | 0.062313 / 0.014526 (0.047787) | 0.075852 / 0.176557 (-0.100704) | 0.121996 / 0.737135 (-0.615140) | 0.076416 / 0.296338 (-0.219923) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | 
shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292465 / 0.215209 (0.077256) | 2.910234 / 2.077655 (0.832579) | 1.479672 / 1.504120 (-0.024448) | 1.332281 / 1.541195 (-0.208913) | 1.354095 / 1.468490 (-0.114395) | 0.573438 / 4.584777 (-4.011339) | 2.382406 / 3.745712 (-1.363307) | 2.708289 / 5.269862 (-2.561572) | 1.739665 / 4.565676 (-2.826011) | 0.063514 / 0.424275 (-0.360761) | 0.005008 / 0.007607 (-0.002599) | 0.350070 / 0.226044 (0.124025) | 3.475837 / 2.268929 (1.206909) | 1.804639 / 55.444624 (-53.639985) | 1.520472 / 6.876477 (-5.356005) | 1.658061 / 2.142072 (-0.484011) | 0.648495 / 4.805227 (-4.156732) | 0.118394 / 6.500664 (-6.382270) | 0.042557 / 0.075469 (-0.032912) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.960772 / 1.841788 (-0.881016) | 11.451629 / 8.074308 (3.377321) | 9.613331 / 10.191392 (-0.578061) | 0.130259 / 0.680424 (-0.550164) | 0.015828 / 0.534201 (-0.518373) | 0.287581 / 0.579283 (-0.291702) | 0.266517 / 0.434364 (-0.167847) | 0.327334 / 0.540337 (-0.213003) | 0.427881 / 1.386936 (-0.959055) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005364 / 0.011353 (-0.005989) | 0.003723 / 0.011008 (-0.007285) | 0.049990 / 0.038508 (0.011482) | 0.032023 / 0.023109 (0.008913) | 0.258609 / 0.275898 (-0.017289) | 0.281250 / 0.323480 (-0.042230) | 0.004222 / 0.007986 (-0.003764) | 0.002799 / 0.004328 (-0.001529) | 0.049546 / 0.004250 (0.045296) | 0.040298 / 0.037052 (0.003246) | 0.273552 / 0.258489 (0.015063) | 0.304042 / 0.293841 (0.010201) | 0.030116 / 0.128546 (-0.098430) | 0.010792 / 0.075646 (-0.064855) | 0.058427 / 0.419271 (-0.360845) | 0.033415 / 0.043533 (-0.010118) | 0.258794 / 0.255139 (0.003655) | 0.275304 / 0.283200 (-0.007896) | 0.017944 / 0.141683 (-0.123739) | 1.109291 / 1.452155 (-0.342864) | 1.156627 / 1.492716 (-0.336090) |\n\n### Benchmark: 
benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096700 / 0.018006 (0.078693) | 0.301108 / 0.000490 (0.300618) | 0.000208 / 0.000200 (0.000008) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022632 / 0.037411 (-0.014779) | 0.075813 / 0.014526 (0.061287) | 0.090302 / 0.176557 (-0.086254) | 0.130375 / 0.737135 (-0.606760) | 0.089710 / 0.296338 (-0.206629) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297091 / 0.215209 (0.081882) | 2.910379 / 2.077655 (0.832725) | 1.570460 / 1.504120 (0.066340) | 1.441619 / 1.541195 (-0.099576) | 1.442417 / 1.468490 (-0.026073) | 0.570034 / 4.584777 (-4.014743) | 0.952613 / 3.745712 (-2.793099) | 2.659274 / 5.269862 (-2.610588) | 1.751013 / 4.565676 (-2.814663) | 0.064639 / 0.424275 (-0.359636) | 0.005145 / 0.007607 (-0.002462) | 0.347478 / 0.226044 (0.121434) | 3.443862 / 2.268929 (1.174933) | 1.897246 / 55.444624 (-53.547379) | 1.609267 / 6.876477 (-5.267210) | 1.755116 / 2.142072 (-0.386956) | 0.658982 / 4.805227 (-4.146245) | 0.117000 / 6.500664 (-6.383664) | 0.041453 / 0.075469 (-0.034016) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005843 / 1.841788 (-0.835944) | 12.101306 / 8.074308 (4.026998) | 10.370706 / 10.191392 (0.179314) | 0.139374 / 0.680424 (-0.541050) | 0.015605 / 0.534201 (-0.518596) | 0.286978 / 0.579283 (-0.292305) | 0.122951 / 0.434364 (-0.311413) | 0.331729 / 0.540337 (-0.208609) | 0.422088 / 1.386936 (-0.964848) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#157585f964b1c7f675860af0d21712555b34aabc \"CML watermark\")\n" ]
"2024-05-28T13:33:38"
"2024-06-02T14:11:13"
"2024-05-31T17:10:37"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6925", "html_url": "https://github.com/huggingface/datasets/pull/6925", "diff_url": "https://github.com/huggingface/datasets/pull/6925.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6925.patch", "merged_at": "2024-05-31T17:10:37" }
Fix `NonMatchingSplitsSizesError` or `ExpectedMoreSplits` error for no-code Hub datasets if the user passes:
- `data_dir`
- `data_files`

The proposed solution is to avoid using the exported dataset info (from the Parquet exports) in these cases, and additionally when the user passes a `revision` other than "main" (so that no network requests are made); a schematic of this guard is sketched below.

This PR fixes a bug introduced by:
- #6714

Fix #6918, fix #6939.
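A schematic of the guard described above. The names below are illustrative only, not the actual `datasets` internals:

```python
# Schematic only: function and argument names are hypothetical.
# The idea is to trust the exported dataset info (from the Parquet export)
# only when the user has not narrowed or redirected the data selection.
def may_use_exported_dataset_info(data_dir, data_files, revision) -> bool:
    return (
        data_dir is None                 # no subdirectory selected
        and data_files is None           # no explicit file selection
        and revision in (None, "main")   # exported info matches the default branch only
    )
```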
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6925/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6925/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6924/comments
https://api.github.com/repos/huggingface/datasets/issues/6924/events
https://github.com/huggingface/datasets/issues/6924
2,320,531,015
I_kwDODunzps6KUH5H
6,924
Caching map result of DatasetDict.
{ "login": "MostHumble", "id": 56939432, "node_id": "MDQ6VXNlcjU2OTM5NDMy", "avatar_url": "https://avatars.githubusercontent.com/u/56939432?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MostHumble", "html_url": "https://github.com/MostHumble", "followers_url": "https://api.github.com/users/MostHumble/followers", "following_url": "https://api.github.com/users/MostHumble/following{/other_user}", "gists_url": "https://api.github.com/users/MostHumble/gists{/gist_id}", "starred_url": "https://api.github.com/users/MostHumble/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MostHumble/subscriptions", "organizations_url": "https://api.github.com/users/MostHumble/orgs", "repos_url": "https://api.github.com/users/MostHumble/repos", "events_url": "https://api.github.com/users/MostHumble/events{/privacy}", "received_events_url": "https://api.github.com/users/MostHumble/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-05-28T09:07:41"
"2024-05-28T09:07:41"
null
NONE
null
null
null
Hi! I'm currently using the map function to tokenize a somewhat large dataset, so I need to use the cache to save ~25 mins. Changing num_proc induces recomputation of the map, and I'm not sure why or whether this is expected behavior.

Here it says that cached files are loaded sequentially:
https://github.com/huggingface/datasets/blob/bb2664cf540d5ce4b066365e7c8b26e7f1ca4743/src/datasets/arrow_dataset.py#L3005-L3006

It seems like I can pass in a fingerprint and load it directly:
https://github.com/huggingface/datasets/blob/bb2664cf540d5ce4b066365e7c8b26e7f1ca4743/src/datasets/arrow_dataset.py#L3108-L3125

**Environment Setup:**
- Python 3.11.9
- datasets 2.19.1 conda-forge
- Linux 6.1.83-1.el9.elrepo.x86_64

**MRE**

```python
# fixed raw_datasets
# fixed tokenize_function

tokenized_datasets = raw_datasets.map(
    tokenize_function,
    batched=True,
    num_proc=9,
    remove_columns=['text'],
    load_from_cache_file=True,
    desc="Running tokenizer on dataset line_by_line",
)

tokenized_datasets = raw_datasets.map(
    tokenize_function,
    batched=True,
    num_proc=5,
    remove_columns=['text'],
    load_from_cache_file=True,
    desc="Running tokenizer on dataset line_by_line",
)
```
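For anyone hitting the same recomputation, here is a minimal workaround sketch reusing the names from the MRE above. It assumes that `datasets.fingerprint.Hasher` and the `new_fingerprint` parameter of `Dataset.map` behave as documented, and whether this composes with `num_proc > 1` is untested here:

```python
from datasets.fingerprint import Hasher

# Derive a cache key from the inputs that matter (the transform and its
# arguments), deliberately leaving num_proc out, so the same cache files
# are reused regardless of the process count.
fingerprint = Hasher.hash((tokenize_function, "line_by_line", True))

tokenized_train = raw_datasets["train"].map(
    tokenize_function,
    batched=True,
    remove_columns=["text"],
    load_from_cache_file=True,
    new_fingerprint=fingerprint,  # overrides the auto-computed cache key
    desc="Running tokenizer on dataset line_by_line",
)
```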
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6924/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6924/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6923
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6923/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6923/comments
https://api.github.com/repos/huggingface/datasets/issues/6923/events
https://github.com/huggingface/datasets/issues/6923
2,319,292,872
I_kwDODunzps6KPZnI
6,923
Export Parquet Table Audio-Set is null bytes in Arrow
{ "login": "anioji", "id": 140120605, "node_id": "U_kgDOCFoSHQ", "avatar_url": "https://avatars.githubusercontent.com/u/140120605?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anioji", "html_url": "https://github.com/anioji", "followers_url": "https://api.github.com/users/anioji/followers", "following_url": "https://api.github.com/users/anioji/following{/other_user}", "gists_url": "https://api.github.com/users/anioji/gists{/gist_id}", "starred_url": "https://api.github.com/users/anioji/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anioji/subscriptions", "organizations_url": "https://api.github.com/users/anioji/orgs", "repos_url": "https://api.github.com/users/anioji/repos", "events_url": "https://api.github.com/users/anioji/events{/privacy}", "received_events_url": "https://api.github.com/users/anioji/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-05-27T14:27:57"
"2024-05-27T14:27:57"
null
NONE
null
null
null
### Describe the bug

Exporting the processed audio inside the table with the `dataset.to_parquet` function produces a pyarrow object `{bytes: null, path: "Some/Path"}`. At the same time, the same dataset uploaded to the Hub has byte arrays.

![Screenshot from 2024-05-27 19-14-49](https://github.com/huggingface/datasets/assets/140120605/ddfba089-426f-4659-9df4-7a634c948b9e)
![Screenshot from 2024-05-27 19-12-51](https://github.com/huggingface/datasets/assets/140120605/4cf8c0a1-650e-491b-86c8-b475c284a021)

### Steps to reproduce the bug

1. Get the dataset from audio and cast it
2. Export and push the dataset
3. Compare the two: the locally saved dataset differs from the one uploaded to the Hub

```py
from datasets import Dataset, Audio

df = Dataset.from_csv("./datasets.csv")
df = df.cast_column("audio", Audio(16000))
df.to_parquet("./datasets.parquet")
df.push_to_hub(repo_id="************", token="**********************")
```

You can use the attached reproduction package for this:
[replicate_packet.zip](https://github.com/huggingface/datasets/files/15457114/replicate_packet.zip)

### Expected behavior

Two Parquet tables identical in content.

### Environment info

Python 3.11+ (also tried 3.12 with the same result)
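To make the comparison concrete, a small diagnostic sketch (file name as in the report) that checks whether the `audio` struct in the locally exported file actually carries bytes:

```python
import pyarrow.parquet as pq

table = pq.read_table("./datasets.parquet")
audio = table.column("audio")

# Each row is a struct {"bytes": ..., "path": ...}; if "bytes" comes back
# as None, the audio payload was not embedded in the local export.
first = audio[0].as_py()
print(type(first.get("bytes")), first.get("path"))
```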
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6923/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6923/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6922
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6922/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6922/comments
https://api.github.com/repos/huggingface/datasets/issues/6922/events
https://github.com/huggingface/datasets/pull/6922
2,318,602,059
PR_kwDODunzps5wolTm
6,922
Remove torchaudio remnants from code
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6922). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005525 / 0.011353 (-0.005828) | 0.004013 / 0.011008 (-0.006996) | 0.063931 / 0.038508 (0.025423) | 0.033857 / 0.023109 (0.010748) | 0.250910 / 0.275898 (-0.024988) | 0.278289 / 0.323480 (-0.045191) | 0.004289 / 0.007986 (-0.003697) | 0.002800 / 0.004328 (-0.001529) | 0.050127 / 0.004250 (0.045877) | 0.048901 / 0.037052 (0.011848) | 0.260628 / 0.258489 (0.002139) | 0.293904 / 0.293841 (0.000063) | 0.028339 / 0.128546 (-0.100207) | 0.010879 / 0.075646 (-0.064767) | 0.203618 / 0.419271 (-0.215654) | 0.036241 / 0.043533 (-0.007292) | 0.250481 / 0.255139 (-0.004657) | 0.274274 / 0.283200 (-0.008926) | 0.018912 / 0.141683 (-0.122771) | 1.146785 / 1.452155 (-0.305370) | 1.199795 / 1.492716 (-0.292921) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095571 / 0.018006 (0.077564) | 0.302961 / 0.000490 (0.302471) | 0.000217 / 0.000200 (0.000017) | 0.000109 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020121 / 0.037411 (-0.017290) | 0.063231 / 0.014526 (0.048705) | 0.075434 / 0.176557 (-0.101122) | 0.123994 / 0.737135 (-0.613141) | 0.076479 / 0.296338 (-0.219860) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277816 / 0.215209 (0.062607) | 2.775481 / 2.077655 (0.697826) | 1.454881 / 1.504120 (-0.049239) | 1.339055 / 1.541195 (-0.202140) | 1.347810 / 1.468490 (-0.120681) | 0.572802 / 4.584777 (-4.011975) | 2.357490 / 3.745712 (-1.388222) | 2.822548 / 5.269862 (-2.447313) | 1.746538 / 4.565676 (-2.819138) | 0.066159 / 0.424275 (-0.358116) | 0.005037 / 0.007607 (-0.002570) | 0.329256 / 0.226044 (0.103212) | 3.277511 / 2.268929 (1.008582) | 1.807855 / 55.444624 (-53.636769) | 1.505507 / 6.876477 (-5.370970) | 1.634237 / 2.142072 (-0.507835) | 0.643999 / 4.805227 (-4.161229) | 0.117494 / 6.500664 (-6.383170) | 0.042634 / 0.075469 (-0.032835) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977689 / 1.841788 (-0.864098) | 12.261836 / 8.074308 (4.187528) | 9.871541 / 10.191392 (-0.319851) | 0.147293 / 0.680424 (-0.533130) | 0.015134 / 0.534201 (-0.519067) | 0.287677 / 0.579283 (-0.291606) | 0.264622 / 0.434364 (-0.169742) | 0.330511 / 0.540337 (-0.209826) | 0.467618 / 1.386936 (-0.919318) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005690 / 0.011353 (-0.005663) | 0.003801 / 0.011008 (-0.007207) | 0.051817 / 0.038508 (0.013309) | 0.033355 / 0.023109 (0.010246) | 0.264416 / 0.275898 (-0.011482) | 0.288494 / 0.323480 (-0.034986) | 0.004246 / 0.007986 (-0.003740) | 0.002814 / 0.004328 (-0.001515) | 0.050547 / 0.004250 (0.046297) | 0.042977 / 0.037052 (0.005925) | 0.276884 / 0.258489 (0.018395) | 0.303758 / 0.293841 (0.009917) | 0.029412 / 0.128546 (-0.099134) | 0.010697 / 0.075646 (-0.064949) | 0.059497 / 0.419271 (-0.359775) | 0.033670 / 0.043533 (-0.009862) | 0.261311 / 0.255139 (0.006172) | 0.286478 / 0.283200 (0.003278) | 0.019386 / 0.141683 (-0.122297) | 1.155943 / 1.452155 (-0.296211) | 1.198512 / 1.492716 (-0.294205) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092954 / 0.018006 (0.074948) | 0.294144 / 0.000490 (0.293655) | 0.000213 / 0.000200 (0.000013) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023013 / 0.037411 (-0.014398) | 0.077161 / 0.014526 (0.062635) | 0.089957 / 0.176557 (-0.086600) | 0.129305 / 0.737135 (-0.607831) | 0.091006 / 0.296338 (-0.205333) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294091 / 0.215209 (0.078882) | 2.885395 / 2.077655 (0.807741) | 1.555658 / 1.504120 (0.051538) | 1.423276 / 1.541195 (-0.117919) | 1.476485 / 1.468490 (0.007995) | 0.569507 / 4.584777 (-4.015270) | 0.979221 / 3.745712 (-2.766491) | 2.818503 / 5.269862 (-2.451358) | 1.871938 / 4.565676 (-2.693739) | 0.064342 / 0.424275 (-0.359933) | 0.005495 / 0.007607 (-0.002112) | 0.351451 / 0.226044 (0.125407) | 3.516078 / 2.268929 (1.247149) | 1.928351 / 55.444624 (-53.516273) | 1.625362 / 6.876477 (-5.251115) | 1.813756 / 2.142072 (-0.328317) | 0.657642 / 4.805227 (-4.147585) | 0.117893 / 6.500664 (-6.382771) | 0.042009 / 0.075469 (-0.033460) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.032893 / 1.841788 (-0.808894) | 12.983400 / 8.074308 (4.909092) | 10.747204 / 10.191392 (0.555812) | 0.133163 / 0.680424 (-0.547261) | 0.015875 / 0.534201 (-0.518326) | 0.312592 / 0.579283 (-0.266691) | 0.124780 / 0.434364 (-0.309584) | 0.350735 / 0.540337 (-0.189603) | 0.447130 / 1.386936 (-0.939806) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#048c789607af0370c1f2337248897956f7a91617 \"CML watermark\")\n" ]
"2024-05-27T08:45:07"
"2024-05-27T09:08:19"
"2024-05-27T08:59:21"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6922", "html_url": "https://github.com/huggingface/datasets/pull/6922", "diff_url": "https://github.com/huggingface/datasets/pull/6922.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6922.patch", "merged_at": "2024-05-27T08:59:21" }
Remove torchaudio remnants from code. Follow-up on: - #5573
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6922/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6921
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6921/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6921/comments
https://api.github.com/repos/huggingface/datasets/issues/6921/events
https://github.com/huggingface/datasets/pull/6921
2,318,394,398
PR_kwDODunzps5wn4Dz
6,921
Support fsspec 2024.5.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6921). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005252 / 0.011353 (-0.006100) | 0.003752 / 0.011008 (-0.007257) | 0.064034 / 0.038508 (0.025526) | 0.031205 / 0.023109 (0.008096) | 0.248903 / 0.275898 (-0.026995) | 0.275808 / 0.323480 (-0.047671) | 0.003135 / 0.007986 (-0.004851) | 0.002635 / 0.004328 (-0.001693) | 0.049869 / 0.004250 (0.045619) | 0.047602 / 0.037052 (0.010549) | 0.259738 / 0.258489 (0.001249) | 0.296131 / 0.293841 (0.002290) | 0.027467 / 0.128546 (-0.101080) | 0.010449 / 0.075646 (-0.065197) | 0.201369 / 0.419271 (-0.217903) | 0.036317 / 0.043533 (-0.007216) | 0.244347 / 0.255139 (-0.010792) | 0.267597 / 0.283200 (-0.015602) | 0.019930 / 0.141683 (-0.121753) | 1.149012 / 1.452155 (-0.303143) | 1.188083 / 1.492716 (-0.304633) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095190 / 0.018006 (0.077184) | 0.300705 / 0.000490 (0.300215) | 0.000222 / 0.000200 (0.000022) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019297 / 0.037411 (-0.018115) | 0.063183 / 0.014526 (0.048657) | 0.075094 / 0.176557 (-0.101463) | 0.123556 / 0.737135 (-0.613579) | 0.076721 / 0.296338 (-0.219618) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284136 / 0.215209 (0.068927) | 2.814041 / 2.077655 (0.736387) | 1.471038 / 1.504120 (-0.033082) | 1.344002 / 1.541195 (-0.197193) | 1.353875 / 1.468490 (-0.114615) | 0.599495 / 4.584777 (-3.985282) | 2.394491 / 3.745712 (-1.351221) | 2.781734 / 5.269862 (-2.488128) | 1.729829 / 4.565676 (-2.835848) | 0.064194 / 0.424275 (-0.360081) | 0.005022 / 0.007607 (-0.002585) | 0.343384 / 0.226044 (0.117340) | 3.357067 / 2.268929 (1.088139) | 1.816323 / 55.444624 (-53.628301) | 1.549405 / 6.876477 (-5.327072) | 1.594394 / 2.142072 (-0.547679) | 0.660650 / 4.805227 (-4.144578) | 0.120271 / 6.500664 (-6.380393) | 0.042422 / 0.075469 (-0.033047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975776 / 1.841788 (-0.866011) | 11.828093 / 8.074308 (3.753784) | 9.384164 / 10.191392 (-0.807228) | 0.140761 / 0.680424 (-0.539663) | 0.014038 / 0.534201 (-0.520163) | 0.284904 / 0.579283 (-0.294379) | 0.263430 / 0.434364 (-0.170934) | 0.320856 / 0.540337 (-0.219482) | 0.419199 / 1.386936 (-0.967737) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005672 / 0.011353 (-0.005681) | 0.003667 / 0.011008 (-0.007341) | 0.049989 / 0.038508 (0.011481) | 0.033115 / 0.023109 (0.010006) | 0.269808 / 0.275898 (-0.006090) | 0.293286 / 0.323480 (-0.030193) | 0.004238 / 0.007986 (-0.003748) | 0.002722 / 0.004328 (-0.001606) | 0.049516 / 0.004250 (0.045265) | 0.042076 / 0.037052 (0.005024) | 0.282182 / 0.258489 (0.023693) | 0.310817 / 0.293841 (0.016976) | 0.029824 / 0.128546 (-0.098722) | 0.010516 / 0.075646 (-0.065130) | 0.058223 / 0.419271 (-0.361049) | 0.033263 / 0.043533 (-0.010270) | 0.268769 / 0.255139 (0.013630) | 0.288308 / 0.283200 (0.005108) | 0.018531 / 0.141683 (-0.123151) | 1.136806 / 1.452155 (-0.315349) | 1.192636 / 1.492716 (-0.300080) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.096583 / 0.018006 (0.078577) | 0.303678 / 0.000490 (0.303188) | 0.000211 / 0.000200 (0.000011) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022741 / 0.037411 (-0.014670) | 0.075799 / 0.014526 (0.061273) | 0.089930 / 0.176557 (-0.086626) | 0.129093 / 0.737135 (-0.608042) | 0.089672 / 0.296338 (-0.206666) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292789 / 0.215209 (0.077580) | 2.860137 / 2.077655 (0.782483) | 1.566678 / 1.504120 (0.062558) | 1.437756 / 1.541195 (-0.103439) | 1.472347 / 1.468490 (0.003857) | 0.566814 / 4.584777 (-4.017963) | 0.963918 / 3.745712 (-2.781794) | 2.717199 / 5.269862 (-2.552663) | 1.763612 / 4.565676 (-2.802064) | 0.063601 / 0.424275 (-0.360674) | 0.005308 / 0.007607 (-0.002299) | 0.363111 / 0.226044 (0.137066) | 3.458222 / 2.268929 (1.189293) | 1.939185 / 55.444624 (-53.505440) | 1.659552 / 6.876477 (-5.216925) | 1.801006 / 2.142072 (-0.341067) | 0.648884 / 4.805227 (-4.156343) | 0.116259 / 6.500664 (-6.384405) | 0.041384 / 0.075469 (-0.034085) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001594 / 1.841788 (-0.840194) | 12.371125 / 8.074308 (4.296817) | 10.489763 / 10.191392 (0.298371) | 0.132500 / 0.680424 (-0.547924) | 0.014742 / 0.534201 (-0.519459) | 0.282258 / 0.579283 (-0.297026) | 0.122755 / 0.434364 (-0.311608) | 0.346068 / 0.540337 (-0.194269) | 0.424943 / 1.386936 (-0.961994) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#df445c20346a34c08e7e039e4ec1a302eef3a69c \"CML watermark\")\n" ]
"2024-05-27T07:00:59"
"2024-05-27T08:07:16"
"2024-05-27T08:01:08"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6921", "html_url": "https://github.com/huggingface/datasets/pull/6921", "diff_url": "https://github.com/huggingface/datasets/pull/6921.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6921.patch", "merged_at": "2024-05-27T08:01:08" }
Support fsspec 2024.5.0.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6921/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6920/comments
https://api.github.com/repos/huggingface/datasets/issues/6920/events
https://github.com/huggingface/datasets/pull/6920
2,317,648,021
PR_kwDODunzps5wlchX
6,920
[WebDataset] Add `.pth` support for torch tensors
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6920). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005643 / 0.011353 (-0.005710) | 0.003810 / 0.011008 (-0.007198) | 0.065896 / 0.038508 (0.027388) | 0.031692 / 0.023109 (0.008583) | 0.258297 / 0.275898 (-0.017601) | 0.294555 / 0.323480 (-0.028925) | 0.004403 / 0.007986 (-0.003583) | 0.002857 / 0.004328 (-0.001472) | 0.049848 / 0.004250 (0.045597) | 0.049719 / 0.037052 (0.012666) | 0.266393 / 0.258489 (0.007904) | 0.306214 / 0.293841 (0.012373) | 0.028283 / 0.128546 (-0.100264) | 0.010450 / 0.075646 (-0.065196) | 0.203064 / 0.419271 (-0.216208) | 0.036535 / 0.043533 (-0.006998) | 0.247839 / 0.255139 (-0.007300) | 0.270538 / 0.283200 (-0.012661) | 0.018748 / 0.141683 (-0.122935) | 1.117478 / 1.452155 (-0.334677) | 1.162575 / 1.492716 (-0.330141) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.101074 / 0.018006 (0.083068) | 0.304321 / 0.000490 (0.303831) | 0.000270 / 0.000200 (0.000070) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019036 / 0.037411 (-0.018376) | 0.064496 / 0.014526 (0.049970) | 0.076848 / 0.176557 (-0.099709) | 0.122979 / 0.737135 (-0.614156) | 0.078008 / 0.296338 (-0.218330) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287009 / 0.215209 (0.071800) | 2.839084 / 2.077655 (0.761429) | 1.495977 / 1.504120 (-0.008143) | 1.379147 / 1.541195 (-0.162047) | 1.413170 / 1.468490 (-0.055320) | 0.616408 / 4.584777 (-3.968369) | 2.419183 / 3.745712 (-1.326529) | 2.905720 / 5.269862 (-2.364142) | 1.801634 / 4.565676 (-2.764043) | 0.064034 / 0.424275 (-0.360241) | 0.005098 / 0.007607 (-0.002509) | 0.341732 / 0.226044 (0.115688) | 3.365262 / 2.268929 (1.096334) | 1.844335 / 55.444624 (-53.600289) | 1.561450 / 6.876477 (-5.315027) | 1.646254 / 2.142072 (-0.495819) | 0.654993 / 4.805227 (-4.150234) | 0.119837 / 6.500664 (-6.380827) | 0.043375 / 0.075469 (-0.032094) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.000352 / 1.841788 (-0.841435) | 12.765122 / 8.074308 (4.690813) | 9.818879 / 10.191392 (-0.372513) | 0.133986 / 0.680424 (-0.546438) | 0.014065 / 0.534201 (-0.520136) | 0.295859 / 0.579283 (-0.283424) | 0.268497 / 0.434364 (-0.165867) | 0.330909 / 0.540337 (-0.209429) | 0.449218 / 1.386936 (-0.937718) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005646 / 0.011353 (-0.005707) | 0.003926 / 0.011008 (-0.007082) | 0.050437 / 0.038508 (0.011929) | 0.031828 / 0.023109 (0.008719) | 0.268218 / 0.275898 (-0.007680) | 0.292987 / 0.323480 (-0.030493) | 0.004353 / 0.007986 (-0.003633) | 0.002933 / 0.004328 (-0.001395) | 0.050357 / 0.004250 (0.046107) | 0.042988 / 0.037052 (0.005935) | 0.281627 / 0.258489 (0.023138) | 0.305664 / 0.293841 (0.011824) | 0.030162 / 0.128546 (-0.098385) | 0.010856 / 0.075646 (-0.064790) | 0.059528 / 0.419271 (-0.359744) | 0.033800 / 0.043533 (-0.009733) | 0.268200 / 0.255139 (0.013061) | 0.284982 / 0.283200 (0.001782) | 0.019105 / 0.141683 (-0.122578) | 1.171714 / 1.452155 (-0.280441) | 1.205690 / 1.492716 (-0.287026) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.100979 / 0.018006 (0.082973) | 0.314691 / 0.000490 (0.314201) | 0.000217 / 0.000200 (0.000017) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023816 / 0.037411 (-0.013596) | 0.081749 / 0.014526 (0.067223) | 0.090118 / 0.176557 (-0.086438) | 0.131615 / 0.737135 (-0.605520) | 0.091821 / 0.296338 (-0.204517) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301222 / 0.215209 (0.086013) | 2.835310 / 2.077655 (0.757655) | 1.562396 / 1.504120 (0.058276) | 1.432365 / 1.541195 (-0.108830) | 1.468358 / 1.468490 (-0.000132) | 0.561300 / 4.584777 (-4.023477) | 0.962294 / 3.745712 (-2.783419) | 2.799705 / 5.269862 (-2.470157) | 1.803035 / 4.565676 (-2.762642) | 0.064104 / 0.424275 (-0.360171) | 0.005480 / 0.007607 (-0.002127) | 0.342519 / 0.226044 (0.116475) | 3.406286 / 2.268929 (1.137357) | 1.966962 / 55.444624 (-53.477663) | 1.654664 / 6.876477 (-5.221813) | 1.829303 / 2.142072 (-0.312769) | 0.650932 / 4.805227 (-4.154295) | 0.119211 / 6.500664 (-6.381453) | 0.043739 / 0.075469 (-0.031730) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006657 / 1.841788 (-0.835130) | 12.915348 / 8.074308 (4.841040) | 10.808156 / 10.191392 (0.616764) | 0.132664 / 0.680424 (-0.547760) | 0.015574 / 0.534201 (-0.518627) | 0.284525 / 0.579283 (-0.294758) | 0.122322 / 0.434364 (-0.312042) | 0.326826 / 0.540337 (-0.213511) | 0.416593 / 1.386936 (-0.970343) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#15ffefe5be194790a50af88ae1236a51b0ac95e6 \"CML watermark\")\n" ]
"2024-05-26T11:12:07"
"2024-05-27T09:11:17"
"2024-05-27T09:04:54"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6920", "html_url": "https://github.com/huggingface/datasets/pull/6920", "diff_url": "https://github.com/huggingface/datasets/pull/6920.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6920.patch", "merged_at": "2024-05-27T09:04:54" }
In this PR I add support for `.pth` files, but with `weights_only=True` to disallow the use of pickle.
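For context, the safe-loading pattern this refers to. A minimal sketch only; the actual decoding hook in the WebDataset builder may be wired differently:

```python
import io

import torch

def decode_pth(data: bytes):
    # weights_only=True restricts unpickling to tensors and plain containers,
    # so a malicious .pth file cannot execute arbitrary pickled code on load.
    return torch.load(io.BytesIO(data), weights_only=True)
```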
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6920/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6919/comments
https://api.github.com/repos/huggingface/datasets/issues/6919/events
https://github.com/huggingface/datasets/issues/6919
2,315,618,993
I_kwDODunzps6KBYqx
6,919
Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple>
{ "login": "juanqui", "id": 67964, "node_id": "MDQ6VXNlcjY3OTY0", "avatar_url": "https://avatars.githubusercontent.com/u/67964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/juanqui", "html_url": "https://github.com/juanqui", "followers_url": "https://api.github.com/users/juanqui/followers", "following_url": "https://api.github.com/users/juanqui/following{/other_user}", "gists_url": "https://api.github.com/users/juanqui/gists{/gist_id}", "starred_url": "https://api.github.com/users/juanqui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/juanqui/subscriptions", "organizations_url": "https://api.github.com/users/juanqui/orgs", "repos_url": "https://api.github.com/users/juanqui/repos", "events_url": "https://api.github.com/users/juanqui/events{/privacy}", "received_events_url": "https://api.github.com/users/juanqui/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-05-24T14:59:45"
"2024-05-24T14:59:45"
null
NONE
null
null
null
### Describe the bug

I wrote a notebook to load an existing dataset, process it, and upload it as a private dataset using `dataset.push_to_hub(...)` at the end. The push to hub is failing with:

```
ValueError: Invalid metadata in README.md.
- Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple> (50:11)

 47 |       - 4
 48 |       - 4
 49 |       - 8
 50 |       - !!binary |
----------------^
 51 |         TwAAAA==
 52 |   '1': !!python/object/apply:nump ...
```

My dataset has a `train` and a `validation` split. These are the features:

```
{'c1': Value(dtype='string', id=None),
 'c2': Value(dtype='string', id=None),
 'c3': [{'value': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}],
 'c4': Value(dtype='string', id=None),
 'c5': Value(dtype='string', id=None),
 'c6': Value(dtype='string', id=None),
 'c7': Value(dtype='string', id=None),
 'c8': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None),
 'c9': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),
 'c10': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),
 'labels': Sequence(feature=ClassLabel(names=['O', 'B-ABC', 'I-ABC', ...], id=None), length=-1, id=None),
 'c12': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
```

This used to work until I decided to cast the `labels` feature to a `Sequence(ClassLabel(...))` type with:

```
ds['train'] = ds['train'].cast_column("labels", Sequence(ClassLabel(names=list(labels))))
ds['validation'] = ds['validation'].cast_column("labels", Sequence(ClassLabel(names=list(labels))))
```

### Steps to reproduce the bug

1. Start with any token classification dataset.
2. Add a `labels` column with data such as `[0,0,0,12,13,13,13,0,0]`.
3. Cast the label column from `Sequence` to `Sequence(ClassLabel)` with:
```
labels = ['O', 'B-TEST', 'I-TEST']
ds = ds.cast_column("labels", Sequence(ClassLabel(names=labels)))
```
4. Push to hub with `ds.push_to_hub("me/awesome-stuff-dataset")`

### Expected behavior

I expected `push_to_hub` to successfully push my dataset to the hub without error.

### Environment info

Python 3.11.9
datasets==2.19.1
transformers==4.41.1
PyYAML==6.0.1
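The `!!binary` / `!!python/object/apply:nump...` tags in the error indicate that non-native Python objects (likely numpy scalars) leaked into the auto-generated README metadata. One plausible workaround sketch, assuming the `labels` iterable contains numpy values:

```python
from datasets import ClassLabel, Sequence

# Coerce potential numpy scalars (e.g. numpy.str_) to plain Python strings
# so the feature metadata serializes to standard YAML in the README.
names = [str(label) for label in labels]
ds["train"] = ds["train"].cast_column("labels", Sequence(ClassLabel(names=names)))
ds["validation"] = ds["validation"].cast_column("labels", Sequence(ClassLabel(names=names)))
```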
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6919/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6918/comments
https://api.github.com/repos/huggingface/datasets/issues/6918/events
https://github.com/huggingface/datasets/issues/6918
2,315,322,738
I_kwDODunzps6KAQVy
6,918
NonMatchingSplitsSizesError when using data_dir
{ "login": "srehaag", "id": 86664538, "node_id": "MDQ6VXNlcjg2NjY0NTM4", "avatar_url": "https://avatars.githubusercontent.com/u/86664538?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srehaag", "html_url": "https://github.com/srehaag", "followers_url": "https://api.github.com/users/srehaag/followers", "following_url": "https://api.github.com/users/srehaag/following{/other_user}", "gists_url": "https://api.github.com/users/srehaag/gists{/gist_id}", "starred_url": "https://api.github.com/users/srehaag/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srehaag/subscriptions", "organizations_url": "https://api.github.com/users/srehaag/orgs", "repos_url": "https://api.github.com/users/srehaag/repos", "events_url": "https://api.github.com/users/srehaag/events{/privacy}", "received_events_url": "https://api.github.com/users/srehaag/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @srehaag.\r\n\r\nWe are investigating this issue.", "I confirm there is a bug for data-based Hub datasets when the user passes `data_dir`, which was introduced by PR:\r\n- #6714" ]
"2024-05-24T12:43:39"
"2024-05-31T17:10:38"
"2024-05-31T17:10:38"
NONE
null
null
null
### Describe the bug

Loading a dataset with a `data_dir` argument generates a `NonMatchingSplitsSizesError` if there are multiple directories in the dataset. This appears to happen because the expected split is calculated based on the data in all the directories, whereas the recorded split is calculated based on the data in the directory specified with the `data_dir` argument. This is recent behavior: until the past few weeks, loading with the `data_dir` argument worked without any issue.

### Steps to reproduce the bug

Simple test dataset available here: https://huggingface.co/datasets/srehaag/hf-bug-temp

The dataset contains two directories, "data1" and "data2", each with a file called "train.parquet" containing a 2 x 5 table.

```python
from datasets import load_dataset
dataset = load_dataset("srehaag/hf-bug-temp", data_dir = "data1")
```

Generates:

```
---------------------------------------------------------------------------
NonMatchingSplitsSizesError               Traceback (most recent call last)
Cell In[3], line 2
      1 from datasets import load_dataset
----> 2 dataset = load_dataset("srehaag/hf-bug-temp", data_dir = "data1")

File ~/.python/current/lib/python3.10/site-packages/datasets/load.py:2609, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
   2606     return builder_instance.as_streaming_dataset(split=split)
   2608 # Download and prepare data
-> 2609 builder_instance.download_and_prepare(
   2610     download_config=download_config,
   2611     download_mode=download_mode,
   2612     verification_mode=verification_mode,
   2613     num_proc=num_proc,
   2614     storage_options=storage_options,
   2615 )
   2617 # Build dataset for splits
   2618 keep_in_memory = (
   2619     keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
   2620 )

File ~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1027, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
   1025 if num_proc is not None:
   1026     prepare_split_kwargs["num_proc"] = num_proc
-> 1027 self._download_and_prepare(
   1028     dl_manager=dl_manager,
   1029     verification_mode=verification_mode,
   1030     **prepare_split_kwargs,
   1031     **download_and_prepare_kwargs,
   1032 )
   1033 # Sync info
   1034 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())

File ~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1140, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
   1137 dl_manager.manage_extracted_files()
   1139 if verification_mode == VerificationMode.BASIC_CHECKS or verification_mode == VerificationMode.ALL_CHECKS:
-> 1140     verify_splits(self.info.splits, split_dict)
   1142 # Update the info object with the splits.
   1143 self.info.splits = split_dict

File ~/.python/current/lib/python3.10/site-packages/datasets/utils/info_utils.py:101, in verify_splits(expected_splits, recorded_splits)
     95 bad_splits = [
     96     {"expected": expected_splits[name], "recorded": recorded_splits[name]}
     97     for name in expected_splits
     98     if expected_splits[name].num_examples != recorded_splits[name].num_examples
     99 ]
    100 if len(bad_splits) > 0:
--> 101     raise NonMatchingSplitsSizesError(str(bad_splits))
    102 logger.info("All the splits matched successfully.")

NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=212, num_examples=10, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=106, num_examples=5, shard_lengths=None, dataset_name='hf-bug-temp')}]
```

By contrast, this loads the data from both data1/train.parquet and data2/train.parquet without any error message:

```python
from datasets import load_dataset
dataset = load_dataset("srehaag/hf-bug-temp")
```

### Expected behavior

Should load the 5 x 2 table from data1/train.parquet without an error message.

### Environment info

Used Codespaces to simplify the environment (see details below), but the bug is present across various configurations.

- `datasets` version: 2.19.1
- Platform: Linux-6.5.0-1021-azure-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.1
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
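Until the fix from #6925 is available, one way to sidestep the failure is the documented `verification_mode` parameter. Note this skips the split-size verification rather than fixing the underlying mismatch:

```python
from datasets import load_dataset

dataset = load_dataset(
    "srehaag/hf-bug-temp",
    data_dir="data1",
    verification_mode="no_checks",  # bypasses NonMatchingSplitsSizesError
)
```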
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6918/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6918/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6917/comments
https://api.github.com/repos/huggingface/datasets/issues/6917/events
https://github.com/huggingface/datasets/issues/6917
2,314,683,663
I_kwDODunzps6J90UP
6,917
WinError 32 The process cannot access the file during load_dataset
{ "login": "elwe-2808", "id": 56682168, "node_id": "MDQ6VXNlcjU2NjgyMTY4", "avatar_url": "https://avatars.githubusercontent.com/u/56682168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elwe-2808", "html_url": "https://github.com/elwe-2808", "followers_url": "https://api.github.com/users/elwe-2808/followers", "following_url": "https://api.github.com/users/elwe-2808/following{/other_user}", "gists_url": "https://api.github.com/users/elwe-2808/gists{/gist_id}", "starred_url": "https://api.github.com/users/elwe-2808/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elwe-2808/subscriptions", "organizations_url": "https://api.github.com/users/elwe-2808/orgs", "repos_url": "https://api.github.com/users/elwe-2808/repos", "events_url": "https://api.github.com/users/elwe-2808/events{/privacy}", "received_events_url": "https://api.github.com/users/elwe-2808/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-05-24T07:54:51"
"2024-05-24T07:54:51"
null
NONE
null
null
null
### Describe the bug When I try to load the opus_books dataset from Hugging Face (following the [guide on the website](https://huggingface.co/docs/transformers/main/en/tasks/translation)) ```python from datasets import load_dataset, Dataset dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "translation"]) ``` I get an error: `PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:/Users/Me/.cache/huggingface/datasets/Helsinki-NLP___parquet/ca-de-a39f1ef185b9b73b/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec.incomplete\\parquet-train-00000-00000-of-NNNNN.arrow'` <details><summary>Full stacktrace</summary> <p> ```python AttributeError Traceback (most recent call last) File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\builder.py:1858, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1857 _time = time.time() -> 1858 for _, table in generator: 1859 if max_shard_size is not None and writer._num_bytes > max_shard_size: File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\packaged_modules\parquet\parquet.py:59, in Parquet._generate_tables(self, files) 58 def _generate_tables(self, files): ---> 59 schema = self.config.features.arrow_schema if self.config.features is not None else None 60 if self.config.features is not None and self.config.columns is not None: AttributeError: 'list' object has no attribute 'arrow_schema' During handling of the above exception, another exception occurred: AttributeError Traceback (most recent call last) File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\builder.py:1882, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1881 num_shards = shard_id + 1 -> 1882 num_examples, num_bytes = writer.finalize() 1883 writer.close() File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\arrow_writer.py:584, in ArrowWriter.finalize(self, close_stream) 583 # If schema is known, infer features even if no examples were written --> 584 if self.pa_writer is None and self.schema: ... File c:\Users\Me\.conda\envs\ia\lib\shutil.py:627 --> 627 os.unlink(fullname) 628 except OSError: 629 onerror(os.unlink, fullname, sys.exc_info()) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:/Users/Me/.cache/huggingface/datasets/Helsinki-NLP___parquet/ca-de-a39f1ef185b9b73b/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec.incomplete\\parquet-train-00000-00000-of-NNNNN.arrow' ``` </p> </details> ### Steps to reproduce the bug Steps to reproduce: Just execute these lines ```python from datasets import load_dataset, Dataset dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "translation"]) ``` ### Expected behavior I expect the dataset to be loaded without any errors. ### Environment info | Package| Version| |--------|--------| | transformers| 4.37.2| | python| 3.9.19| | pytorch| 2.3.0| | datasets|2.12.0 | | arrow | 1.2.3| I am using Conda on Windows 11.
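Editor's note, a hedged reading of the traceback above: the first `AttributeError` (`'list' object has no attribute 'arrow_schema'`) suggests the underlying bug is that `features=["id", "translation"]` passes a plain list where `load_dataset` expects a `Features` mapping; the `WinError 32` then occurs while the failed `.incomplete` cache directory is being cleaned up. A sketch of the corrected call, assuming that diagnosis:

```python
from datasets import Features, Translation, Value, load_dataset

# features must be a Features mapping, not a list of column names
features = Features({
    "id": Value("string"),
    "translation": Translation(languages=["en", "fr"]),
})
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=features)
```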
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6917/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6917/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6916/comments
https://api.github.com/repos/huggingface/datasets/issues/6916/events
https://github.com/huggingface/datasets/issues/6916
2,311,675,564
I_kwDODunzps6JyV6s
6,916
```push_to_hub()``` - Prevent Automatic Generation of Splits
{ "login": "jetlime", "id": 29337128, "node_id": "MDQ6VXNlcjI5MzM3MTI4", "avatar_url": "https://avatars.githubusercontent.com/u/29337128?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jetlime", "html_url": "https://github.com/jetlime", "followers_url": "https://api.github.com/users/jetlime/followers", "following_url": "https://api.github.com/users/jetlime/following{/other_user}", "gists_url": "https://api.github.com/users/jetlime/gists{/gist_id}", "starred_url": "https://api.github.com/users/jetlime/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jetlime/subscriptions", "organizations_url": "https://api.github.com/users/jetlime/orgs", "repos_url": "https://api.github.com/users/jetlime/repos", "events_url": "https://api.github.com/users/jetlime/events{/privacy}", "received_events_url": "https://api.github.com/users/jetlime/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2024-05-22T23:52:15"
"2024-05-23T00:07:53"
"2024-05-23T00:07:53"
NONE
null
null
null
### Describe the bug I currently have a dataset which has not been split. When pushing the dataset to my Hugging Face dataset repository, it is split into a testing and a training set. How can I prevent the split from happening? ### Steps to reproduce the bug 1. Have an unsplit dataset ```python Dataset({ features: ['input', 'output', 'Attack', '__index_level_0__'], num_rows: 944685 }) ``` 2. Push it to the Hugging Face Hub ```python dataset.push_to_hub(dataset_name) ``` 3. On the Hugging Face dataset repo, the dataset then appears to be split: ![image](https://github.com/huggingface/datasets/assets/29337128/b4fbc141-42b0-4f49-98df-dd479648fe09) 4. Indeed, when loading the dataset from this repo, the dataset is split into a training and a testing set. ```python from datasets import load_dataset, Dataset dataset = load_dataset("Jetlime/NF-CSE-CIC-IDS2018-v2", streaming=True) dataset ``` output: ``` IterableDatasetDict({ train: IterableDataset({ features: ['input', 'output', 'Attack', '__index_level_0__'], n_shards: 2 }) test: IterableDataset({ features: ['input', 'output', 'Attack', '__index_level_0__'], n_shards: 1 }) ``` ### Expected behavior The dataset should not be split, since no split was requested. ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.0 - PyArrow version: 15.0.2 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
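Editor's note, a hedged sketch: `push_to_hub` itself uploads a single split (named after `dataset.split`, or "train" by default), so the extra test split presumably came from files or split configs already present in the repo. Making the split explicit on both ends looks roughly like this (`dataset` is the unsplit `Dataset` from the report):

```python
from datasets import load_dataset

# push the unsplit data as an explicit "train" split
dataset.push_to_hub("Jetlime/NF-CSE-CIC-IDS2018-v2", split="train")

# load just that one split back instead of the full IterableDatasetDict
train = load_dataset("Jetlime/NF-CSE-CIC-IDS2018-v2",
                     split="train", streaming=True)
```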
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6916/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6916/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6915/comments
https://api.github.com/repos/huggingface/datasets/issues/6915/events
https://github.com/huggingface/datasets/pull/6915
2,310,564,961
PR_kwDODunzps5wNIUh
6,915
Validate config name and data_files in packaged modules
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6915). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I pushed a change that fixes 2.15 cache reloading (I fixed the packaged module hash), feel free to merge if this change is fine for you", "Something weird happened in GitHub: I just merged this PR to main, See: https://github.com/huggingface/datasets/commit/5bbbf1b19766e31a6905f3e82bf3aa3f9f84a982\r\n\r\nHowever this PR still appears as Open...\r\n\r\nIf I retry to merge this PR, an error appears: \"Merge attempt failed: Merge already in progress\"\r\n![Screenshot from 2024-06-06 06-29-22](https://github.com/huggingface/datasets/assets/8515462/5fe87442-cc5d-4e9b-b60e-fdfbab830c81)\r\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005543 / 0.011353 (-0.005810) | 0.004059 / 0.011008 (-0.006949) | 0.064678 / 0.038508 (0.026170) | 0.032615 / 0.023109 (0.009506) | 0.245883 / 0.275898 (-0.030015) | 0.273545 / 0.323480 (-0.049935) | 0.004268 / 0.007986 (-0.003718) | 0.003160 / 0.004328 (-0.001168) | 0.051982 / 0.004250 (0.047731) | 0.051186 / 0.037052 (0.014134) | 0.254009 / 0.258489 (-0.004480) | 0.289594 / 0.293841 (-0.004247) | 0.028459 / 0.128546 (-0.100087) | 0.011061 / 0.075646 (-0.064585) | 0.203571 / 0.419271 (-0.215700) | 0.038049 / 0.043533 (-0.005484) | 0.243700 / 0.255139 (-0.011439) | 0.264816 / 0.283200 (-0.018383) | 0.019556 / 0.141683 (-0.122127) | 1.114395 / 1.452155 (-0.337759) | 1.168915 / 1.492716 (-0.323802) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098814 / 0.018006 (0.080808) | 0.308218 / 0.000490 (0.307728) | 0.000221 / 0.000200 (0.000022) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019660 / 0.037411 (-0.017752) | 0.070542 / 0.014526 (0.056017) | 0.078906 / 0.176557 (-0.097650) | 0.126658 / 0.737135 (-0.610477) | 0.080427 / 0.296338 (-0.215911) |\n\n### Benchmark: benchmark_iterating.json\n\n| 
metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280686 / 0.215209 (0.065477) | 2.767480 / 2.077655 (0.689825) | 1.455325 / 1.504120 (-0.048795) | 1.336677 / 1.541195 (-0.204518) | 1.380359 / 1.468490 (-0.088131) | 0.576310 / 4.584777 (-4.008467) | 2.431829 / 3.745712 (-1.313883) | 2.815266 / 5.269862 (-2.454595) | 1.908962 / 4.565676 (-2.656714) | 0.065306 / 0.424275 (-0.358969) | 0.005229 / 0.007607 (-0.002378) | 0.336018 / 0.226044 (0.109973) | 3.349283 / 2.268929 (1.080355) | 1.814696 / 55.444624 (-53.629929) | 1.520969 / 6.876477 (-5.355508) | 1.735322 / 2.142072 (-0.406751) | 0.661513 / 4.805227 (-4.143714) | 0.121465 / 6.500664 (-6.379199) | 0.044505 / 0.075469 (-0.030964) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.989204 / 1.841788 (-0.852584) | 12.608414 / 8.074308 (4.534106) | 10.133358 / 10.191392 (-0.058034) | 0.133986 / 0.680424 (-0.546438) | 0.014332 / 0.534201 (-0.519869) | 0.293207 / 0.579283 (-0.286076) | 0.265657 / 0.434364 (-0.168707) | 0.325972 / 0.540337 (-0.214365) | 0.478103 / 1.386936 (-0.908833) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006070 / 0.011353 (-0.005283) | 0.004122 / 0.011008 (-0.006886) | 0.050572 / 0.038508 (0.012064) | 0.033732 / 0.023109 (0.010623) | 0.271282 / 0.275898 (-0.004616) | 0.296247 / 0.323480 (-0.027233) | 0.004400 / 0.007986 (-0.003585) | 0.002914 / 0.004328 (-0.001415) | 0.049332 / 0.004250 (0.045082) | 0.042213 / 0.037052 
(0.005161) | 0.281230 / 0.258489 (0.022741) | 0.315514 / 0.293841 (0.021673) | 0.030864 / 0.128546 (-0.097682) | 0.011185 / 0.075646 (-0.064461) | 0.059227 / 0.419271 (-0.360044) | 0.034006 / 0.043533 (-0.009527) | 0.270059 / 0.255139 (0.014920) | 0.284014 / 0.283200 (0.000814) | 0.019502 / 0.141683 (-0.122181) | 1.143650 / 1.452155 (-0.308505) | 1.190968 / 1.492716 (-0.301749) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100502 / 0.018006 (0.082496) | 0.307863 / 0.000490 (0.307373) | 0.000212 / 0.000200 (0.000012) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023442 / 0.037411 (-0.013969) | 0.080185 / 0.014526 (0.065659) | 0.089372 / 0.176557 (-0.087185) | 0.131030 / 0.737135 (-0.606105) | 0.091174 / 0.296338 (-0.205165) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304187 / 0.215209 (0.088978) | 3.043055 / 2.077655 (0.965400) | 1.629578 / 1.504120 (0.125459) | 1.533762 / 1.541195 (-0.007432) | 1.546134 / 1.468490 (0.077643) | 0.577739 / 4.584777 (-4.007038) | 0.986310 / 3.745712 (-2.759402) | 2.791650 / 5.269862 (-2.478212) | 1.841190 / 4.565676 (-2.724487) | 0.064943 / 0.424275 (-0.359333) | 0.005251 / 0.007607 (-0.002356) | 0.355009 / 0.226044 (0.128965) | 3.560935 / 2.268929 (1.292007) | 1.991995 / 55.444624 (-53.452629) | 1.708796 / 6.876477 (-5.167681) | 1.917721 / 2.142072 (-0.224351) | 0.667667 / 4.805227 (-4.137561) | 0.119956 / 6.500664 (-6.380708) | 0.042069 / 0.075469 (-0.033400) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006242 / 1.841788 (-0.835546) | 13.321644 / 8.074308 (5.247336) | 10.712409 / 10.191392 (0.521017) | 0.134036 / 0.680424 (-0.546388) | 0.017645 / 0.534201 (-0.516555) | 0.289077 / 0.579283 (-0.290206) | 0.131356 / 0.434364 (-0.303007) | 0.333062 / 0.540337 (-0.207275) | 0.425327 / 1.386936 (-0.961609) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#09ebf5190afbd017f3ca24ef444be2d933411eed \"CML watermark\")\n", "Indeed, the merge commit is: https://github.com/huggingface/datasets/commit/5bbbf1b19766e31a6905f3e82bf3aa3f9f84a982\r\n\r\nThe following commit is just empty: https://github.com/huggingface/datasets/commit/09ebf5190afbd017f3ca24ef444be2d933411eed" ]
"2024-05-22T13:36:33"
"2024-06-06T09:32:10"
"2024-06-06T09:24:35"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6915", "html_url": "https://github.com/huggingface/datasets/pull/6915", "diff_url": "https://github.com/huggingface/datasets/pull/6915.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6915.patch", "merged_at": "2024-06-06T09:24:35" }
Validate the config attributes `name` and `data_files` in packaged modules by making the derived classes call their parent `__post_init__` method. Note that their parent `BuilderConfig` validates its attributes `name` and `data_files` in its `__post_init__` method: https://github.com/huggingface/datasets/blob/60d21efbc01e15d0b596ac1072750cbecd91548a/src/datasets/builder.py#L128-L137 This PR makes each derived config class call `super().__post_init__()` so that this validation also runs for packaged modules.
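A minimal sketch of the pattern the PR describes; the class and extra field below are illustrative, not the actual packaged-module code:

```python
from dataclasses import dataclass

from datasets.builder import BuilderConfig


@dataclass
class MyPackagedConfig(BuilderConfig):  # illustrative name
    encoding: str = "utf-8"

    def __post_init__(self):
        # run BuilderConfig's validation of `name` and `data_files`
        super().__post_init__()
```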
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6915/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6914/comments
https://api.github.com/repos/huggingface/datasets/issues/6914/events
https://github.com/huggingface/datasets/pull/6914
2,310,107,326
PR_kwDODunzps5wLi3e
6,914
Preserve JSON column order and support list of strings field
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6914). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005492 / 0.011353 (-0.005861) | 0.004087 / 0.011008 (-0.006921) | 0.065334 / 0.038508 (0.026826) | 0.032282 / 0.023109 (0.009173) | 0.246441 / 0.275898 (-0.029457) | 0.278807 / 0.323480 (-0.044673) | 0.003245 / 0.007986 (-0.004741) | 0.003795 / 0.004328 (-0.000534) | 0.050082 / 0.004250 (0.045832) | 0.050613 / 0.037052 (0.013561) | 0.258885 / 0.258489 (0.000396) | 0.297257 / 0.293841 (0.003416) | 0.028847 / 0.128546 (-0.099699) | 0.011377 / 0.075646 (-0.064270) | 0.206089 / 0.419271 (-0.213182) | 0.037354 / 0.043533 (-0.006178) | 0.257319 / 0.255139 (0.002180) | 0.275134 / 0.283200 (-0.008066) | 0.018064 / 0.141683 (-0.123619) | 1.112371 / 1.452155 (-0.339783) | 1.160909 / 1.492716 (-0.331807) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.101893 / 0.018006 (0.083887) | 0.311084 / 0.000490 (0.310594) | 0.000208 / 0.000200 (0.000008) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019548 / 0.037411 (-0.017863) | 0.064396 / 0.014526 (0.049870) | 0.074900 / 0.176557 (-0.101656) | 0.122750 / 0.737135 (-0.614385) | 0.076693 / 0.296338 (-0.219646) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288609 / 0.215209 (0.073400) | 2.831354 / 2.077655 (0.753699) | 1.453961 / 1.504120 (-0.050159) | 1.327702 / 1.541195 (-0.213493) | 1.382140 / 1.468490 (-0.086351) | 0.568465 / 4.584777 (-4.016312) | 2.427199 / 3.745712 (-1.318513) | 2.810586 / 5.269862 (-2.459275) | 1.839227 / 4.565676 (-2.726449) | 0.063219 / 0.424275 (-0.361056) | 0.005111 / 0.007607 (-0.002496) | 0.341447 / 0.226044 (0.115403) | 3.357429 / 2.268929 (1.088501) | 1.806501 / 55.444624 (-53.638123) | 1.541696 / 6.876477 (-5.334781) | 1.755400 / 2.142072 (-0.386673) | 0.661442 / 4.805227 (-4.143785) | 0.120203 / 6.500664 (-6.380461) | 0.044429 / 0.075469 (-0.031040) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.987810 / 1.841788 (-0.853978) | 12.765467 / 8.074308 (4.691159) | 10.497788 / 10.191392 (0.306396) | 0.132723 / 0.680424 (-0.547701) | 0.014484 / 0.534201 (-0.519717) | 0.285763 / 0.579283 (-0.293520) | 0.264377 / 0.434364 (-0.169987) | 0.326971 / 0.540337 (-0.213367) | 0.429432 / 1.386936 (-0.957504) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005996 / 0.011353 (-0.005357) | 0.004092 / 0.011008 (-0.006916) | 0.051660 / 0.038508 (0.013152) | 0.036661 / 0.023109 (0.013552) | 0.271133 / 0.275898 (-0.004765) | 0.295728 / 0.323480 (-0.027752) | 0.004452 / 0.007986 (-0.003534) | 0.002915 / 0.004328 (-0.001413) | 0.050669 / 0.004250 (0.046418) | 0.044431 / 0.037052 (0.007378) | 0.284683 / 0.258489 (0.026194) | 0.318799 / 0.293841 (0.024958) | 0.031094 / 0.128546 (-0.097452) | 0.010810 / 0.075646 (-0.064836) | 0.059740 / 0.419271 (-0.359531) | 0.034912 / 0.043533 (-0.008621) | 0.268779 / 0.255139 (0.013640) | 0.291294 / 0.283200 (0.008095) | 0.019769 / 0.141683 (-0.121914) | 1.124833 / 1.452155 (-0.327322) | 1.168301 / 1.492716 (-0.324416) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.097080 / 0.018006 (0.079074) | 0.304636 / 0.000490 (0.304146) | 0.000232 / 0.000200 (0.000032) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023186 / 0.037411 (-0.014225) | 0.082232 / 0.014526 (0.067706) | 0.089427 / 0.176557 (-0.087130) | 0.132715 / 0.737135 (-0.604421) | 0.092820 / 0.296338 (-0.203518) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300672 / 0.215209 (0.085463) | 2.969603 / 2.077655 (0.891948) | 1.577827 / 1.504120 (0.073707) | 1.440768 / 1.541195 (-0.100427) | 1.494526 / 1.468490 (0.026035) | 0.574599 / 4.584777 (-4.010178) | 0.963300 / 3.745712 (-2.782412) | 2.847854 / 5.269862 (-2.422008) | 1.841248 / 4.565676 (-2.724428) | 0.062321 / 0.424275 (-0.361954) | 0.005389 / 0.007607 (-0.002218) | 0.350853 / 0.226044 (0.124808) | 3.463514 / 2.268929 (1.194586) | 1.937661 / 55.444624 (-53.506964) | 1.665320 / 6.876477 (-5.211157) | 1.849028 / 2.142072 (-0.293044) | 0.655333 / 4.805227 (-4.149894) | 0.119062 / 6.500664 (-6.381602) | 0.043387 / 0.075469 (-0.032082) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004118 / 1.841788 (-0.837670) | 13.350894 / 8.074308 (5.276585) | 11.179363 / 10.191392 (0.987971) | 0.135169 / 0.680424 (-0.545255) | 0.016298 / 0.534201 (-0.517903) | 0.288467 / 0.579283 (-0.290816) | 0.132712 / 0.434364 (-0.301651) | 0.325436 / 0.540337 (-0.214901) | 0.413406 / 1.386936 (-0.973530) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#670e1cf31606f397ae0f858b568b1b4ed50c1843 \"CML watermark\")\n" ]
"2024-05-22T09:58:54"
"2024-05-29T13:18:47"
"2024-05-29T13:12:23"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6914", "html_url": "https://github.com/huggingface/datasets/pull/6914", "diff_url": "https://github.com/huggingface/datasets/pull/6914.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6914.patch", "merged_at": "2024-05-29T13:12:23" }
Preserve column order when loading from a JSON file with a list of dicts (or with a field containing a list of dicts). Additionally, support JSON files with a list-of-strings field. Fix #6913.
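A small illustration of the column-order behavior, with a hypothetical file name; the printed order is the claim being tested:

```python
import json

from datasets import load_dataset

# a list of objects whose keys arrive in a fixed order
rows = [
    {"ID": "1", "Language": "en", "Topic": "news"},
    {"ID": "2", "Language": "fr", "Topic": "sports"},
]
with open("objects.json", "w") as f:
    json.dump(rows, f)

ds = load_dataset("json", data_files="objects.json")["train"]
print(ds.column_names)  # expected after the fix: ['ID', 'Language', 'Topic']

# a field holding a list of strings should also load, e.g. {"tags": ["a", "b"]}
```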
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6914/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6913/comments
https://api.github.com/repos/huggingface/datasets/issues/6913/events
https://github.com/huggingface/datasets/issues/6913
2,309,605,889
I_kwDODunzps6JqcoB
6,913
Column order is nondeterministic when loading from JSON
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2024-05-22T05:30:14"
"2024-05-29T13:12:24"
"2024-05-29T13:12:24"
MEMBER
null
null
null
As reported by @meg-huggingface, the order of the JSON object keys is not preserved while loading a dataset from a JSON file with a list of objects. For example, when loading a JSON file with a list of objects, each with the following ordered keys: - [ID, Language, Topic], the resulting dataset may have columns: - [ID, Topic, Language], or - [Topic, Language, ID], or - [Topic, ID, Language],... This issue is caused by the use of a Python set (which does not preserve the order): https://github.com/huggingface/datasets/blob/60d21efbc01e15d0b596ac1072750cbecd91548a/src/datasets/packaged_modules/json/json.py#L168 introduced in - #5772
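The mechanism in one line: a Python `set` iterates in an arbitrary order, so any column list built from one can change between runs. An order-preserving alternative, shown for comparison (illustrative, not the merged fix):

```python
rows = [{"ID": 1, "Language": "en", "Topic": "news"}]

# buggy pattern: iteration order of a set is not the insertion order
keys = set().union(*(row.keys() for row in rows))

# order-preserving: dict keys keep first-seen order (Python 3.7+)
ordered_keys = list({key: None for row in rows for key in row})
print(ordered_keys)  # always ['ID', 'Language', 'Topic']
```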
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6913/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6913/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6912/comments
https://api.github.com/repos/huggingface/datasets/issues/6912/events
https://github.com/huggingface/datasets/issues/6912
2,309,365,961
I_kwDODunzps6JpiDJ
6,912
Add MedImg for streaming
{ "login": "lhallee", "id": 72926928, "node_id": "MDQ6VXNlcjcyOTI2OTI4", "avatar_url": "https://avatars.githubusercontent.com/u/72926928?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhallee", "html_url": "https://github.com/lhallee", "followers_url": "https://api.github.com/users/lhallee/followers", "following_url": "https://api.github.com/users/lhallee/following{/other_user}", "gists_url": "https://api.github.com/users/lhallee/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhallee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhallee/subscriptions", "organizations_url": "https://api.github.com/users/lhallee/orgs", "repos_url": "https://api.github.com/users/lhallee/repos", "events_url": "https://api.github.com/users/lhallee/events{/privacy}", "received_events_url": "https://api.github.com/users/lhallee/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "@mariosasko, @lhoestq, @albertvillanova\r\nHello! Can anyone help? or can you guys suggest who can help with this?", "Hi ! Feel free to download the dataset and create a `Dataset` object with it.\r\n\r\nThen your'll be able to use `push_to_hub()` to upload the dataset to HF in Parquet format and make it streamable :)", "> Hi ! Feel free to download the dataset and create a `Dataset` object with it.\r\n> \r\n> Then your'll be able to use `push_to_hub()` to upload the dataset to HF in Parquet format and make it streamable :)\r\n\r\nThe dataset is several TB in total, which I do not have the resources to handle." ]
"2024-05-22T00:55:30"
"2024-06-03T14:40:10"
null
NONE
null
null
null
### Feature request Host the MedImg dataset (similar to ImageNet, but for biomedical images). ### Motivation There is a clear need for biomedical image foundation models and large-scale biomedical datasets that are easily streamable. This would be an excellent tool for the biomedical community. ### Your contribution MedImg can be found [here](https://www.cuilab.cn/medimg/#).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6912/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6912/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6911/comments
https://api.github.com/repos/huggingface/datasets/issues/6911/events
https://github.com/huggingface/datasets/pull/6911
2,308,152,711
PR_kwDODunzps5wE2ah
6,911
Remove dead code for non-dict data_files from packaged modules
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6911). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005136 / 0.011353 (-0.006217) | 0.003136 / 0.011008 (-0.007872) | 0.063752 / 0.038508 (0.025244) | 0.031060 / 0.023109 (0.007950) | 0.249848 / 0.275898 (-0.026050) | 0.275918 / 0.323480 (-0.047561) | 0.004047 / 0.007986 (-0.003938) | 0.002696 / 0.004328 (-0.001632) | 0.049884 / 0.004250 (0.045634) | 0.044646 / 0.037052 (0.007593) | 0.264769 / 0.258489 (0.006280) | 0.299874 / 0.293841 (0.006033) | 0.027530 / 0.128546 (-0.101016) | 0.010026 / 0.075646 (-0.065620) | 0.204007 / 0.419271 (-0.215265) | 0.035982 / 0.043533 (-0.007550) | 0.253560 / 0.255139 (-0.001579) | 0.276206 / 0.283200 (-0.006993) | 0.017770 / 0.141683 (-0.123913) | 1.156008 / 1.452155 (-0.296146) | 1.197265 / 1.492716 (-0.295451) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092960 / 0.018006 (0.074954) | 0.302876 / 0.000490 (0.302386) | 0.000214 / 0.000200 (0.000014) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019060 / 0.037411 (-0.018351) | 0.062262 / 0.014526 (0.047737) | 0.073836 / 0.176557 (-0.102721) | 0.122327 / 0.737135 (-0.614809) | 0.076050 / 0.296338 (-0.220289) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282489 / 0.215209 (0.067280) | 2.745084 / 2.077655 (0.667429) | 1.453044 / 1.504120 (-0.051076) | 1.339065 / 1.541195 (-0.202130) | 1.341395 / 1.468490 (-0.127095) | 0.586497 / 4.584777 (-3.998280) | 2.342198 / 3.745712 (-1.403514) | 2.684984 / 5.269862 (-2.584878) | 1.703738 / 4.565676 (-2.861939) | 0.062489 / 0.424275 (-0.361786) | 0.004906 / 0.007607 (-0.002701) | 0.332325 / 0.226044 (0.106280) | 3.255381 / 2.268929 (0.986452) | 1.797045 / 55.444624 (-53.647579) | 1.515197 / 6.876477 (-5.361280) | 1.508317 / 2.142072 (-0.633756) | 0.635973 / 4.805227 (-4.169254) | 0.117292 / 6.500664 (-6.383372) | 0.041456 / 0.075469 (-0.034013) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.973934 / 1.841788 (-0.867853) | 11.288665 / 8.074308 (3.214356) | 9.269404 / 10.191392 (-0.921988) | 0.143190 / 0.680424 (-0.537234) | 0.014366 / 0.534201 (-0.519835) | 0.285936 / 0.579283 (-0.293347) | 0.261632 / 0.434364 (-0.172732) | 0.327191 / 0.540337 (-0.213146) | 0.418900 / 1.386936 (-0.968036) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005131 / 0.011353 (-0.006222) | 0.003181 / 0.011008 (-0.007827) | 0.049697 / 0.038508 (0.011189) | 0.032754 / 0.023109 (0.009645) | 0.263954 / 0.275898 (-0.011944) | 0.285110 / 0.323480 (-0.038370) | 0.004133 / 0.007986 (-0.003852) | 0.002713 / 0.004328 (-0.001615) | 0.051684 / 0.004250 (0.047433) | 0.040607 / 0.037052 (0.003554) | 0.277919 / 0.258489 (0.019429) | 0.304773 / 0.293841 (0.010932) | 0.029530 / 0.128546 (-0.099016) | 0.010176 / 0.075646 (-0.065470) | 0.058501 / 0.419271 (-0.360771) | 0.033436 / 0.043533 (-0.010097) | 0.269899 / 0.255139 (0.014760) | 0.284490 / 0.283200 (0.001290) | 0.017092 / 0.141683 (-0.124591) | 1.132399 / 1.452155 (-0.319756) | 1.167290 / 1.492716 (-0.325427) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.094460 / 0.018006 (0.076454) | 0.301462 / 0.000490 (0.300972) | 0.000202 / 0.000200 (0.000002) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022767 / 0.037411 (-0.014645) | 0.075993 / 0.014526 (0.061467) | 0.087729 / 0.176557 (-0.088827) | 0.127599 / 0.737135 (-0.609536) | 0.088873 / 0.296338 (-0.207465) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286420 / 0.215209 (0.071211) | 2.811376 / 2.077655 (0.733722) | 1.558645 / 1.504120 (0.054525) | 1.426371 / 1.541195 (-0.114824) | 1.422347 / 1.468490 (-0.046143) | 0.567181 / 4.584777 (-4.017596) | 0.936731 / 3.745712 (-2.808982) | 2.643566 / 5.269862 (-2.626296) | 1.727843 / 4.565676 (-2.837834) | 0.062748 / 0.424275 (-0.361527) | 0.005033 / 0.007607 (-0.002574) | 0.339708 / 0.226044 (0.113663) | 3.354119 / 2.268929 (1.085190) | 1.877594 / 55.444624 (-53.567030) | 1.589202 / 6.876477 (-5.287274) | 1.707780 / 2.142072 (-0.434292) | 0.644520 / 4.805227 (-4.160708) | 0.115226 / 6.500664 (-6.385438) | 0.040004 / 0.075469 (-0.035465) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.002774 / 1.841788 (-0.839014) | 11.812647 / 8.074308 (3.738339) | 10.384198 / 10.191392 (0.192806) | 0.131120 / 0.680424 (-0.549304) | 0.014862 / 0.534201 (-0.519339) | 0.282873 / 0.579283 (-0.296410) | 0.120415 / 0.434364 (-0.313949) | 0.321995 / 0.540337 (-0.218343) | 0.441987 / 1.386936 (-0.944949) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b12a2c5016499cc1d110798c6815f0245f61010e \"CML watermark\")\n" ]
"2024-05-21T12:10:24"
"2024-05-23T08:05:58"
"2024-05-23T07:59:57"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6911", "html_url": "https://github.com/huggingface/datasets/pull/6911", "diff_url": "https://github.com/huggingface/datasets/pull/6911.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6911.patch", "merged_at": "2024-05-23T07:59:57" }
Remove dead code for non-dict data_files from packaged modules. Since the merge of this PR: - #2986 the builders' attribute `self.config.data_files` is always a dict, which makes the isinstance check on (str, list, tuple) dead code.
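A simplified sketch of the kind of branch being removed (not the exact module code):

```python
def split_generators(data_files):
    # dead since #2986: data_files is always a dict by the time it gets here
    if isinstance(data_files, (str, list, tuple)):  # <- the removed branch
        files = [data_files] if isinstance(data_files, str) else data_files
        return {"train": files}
    # the only live path: a dict mapping split names to lists of files
    return {split: files for split, files in data_files.items()}
```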
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6911/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6911/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6910/comments
https://api.github.com/repos/huggingface/datasets/issues/6910/events
https://github.com/huggingface/datasets/pull/6910
2,307,570,084
PR_kwDODunzps5wC2An
6,910
Fix wrong type hints in data_files
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6910). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005135 / 0.011353 (-0.006218) | 0.003757 / 0.011008 (-0.007251) | 0.063122 / 0.038508 (0.024614) | 0.029837 / 0.023109 (0.006727) | 0.246120 / 0.275898 (-0.029778) | 0.268529 / 0.323480 (-0.054951) | 0.004136 / 0.007986 (-0.003849) | 0.002650 / 0.004328 (-0.001678) | 0.048749 / 0.004250 (0.044499) | 0.045279 / 0.037052 (0.008226) | 0.257970 / 0.258489 (-0.000519) | 0.285993 / 0.293841 (-0.007848) | 0.027612 / 0.128546 (-0.100935) | 0.010175 / 0.075646 (-0.065471) | 0.207373 / 0.419271 (-0.211899) | 0.037672 / 0.043533 (-0.005861) | 0.249603 / 0.255139 (-0.005536) | 0.271081 / 0.283200 (-0.012119) | 0.018174 / 0.141683 (-0.123509) | 1.116703 / 1.452155 (-0.335452) | 1.169261 / 1.492716 (-0.323455) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095161 / 0.018006 (0.077155) | 0.301112 / 0.000490 (0.300623) | 0.000221 / 0.000200 (0.000021) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023218 / 0.037411 (-0.014193) | 0.063125 / 0.014526 (0.048599) | 0.075857 / 0.176557 (-0.100699) | 0.137922 / 0.737135 (-0.599213) | 0.076989 / 0.296338 (-0.219349) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279272 / 0.215209 (0.064063) | 2.776463 / 2.077655 (0.698809) | 1.472220 / 1.504120 (-0.031900) | 1.347105 / 1.541195 (-0.194090) | 1.361014 / 1.468490 (-0.107476) | 0.589233 / 4.584777 (-3.995544) | 2.395212 / 3.745712 (-1.350500) | 2.794855 / 5.269862 (-2.475007) | 1.698350 / 4.565676 (-2.867327) | 0.063328 / 0.424275 (-0.360947) | 0.005020 / 0.007607 (-0.002588) | 0.335872 / 0.226044 (0.109828) | 3.293486 / 2.268929 (1.024558) | 1.837270 / 55.444624 (-53.607354) | 1.535694 / 6.876477 (-5.340782) | 1.559696 / 2.142072 (-0.582376) | 0.639302 / 4.805227 (-4.165925) | 0.116554 / 6.500664 (-6.384110) | 0.042305 / 0.075469 (-0.033164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971562 / 1.841788 (-0.870226) | 11.710500 / 8.074308 (3.636192) | 9.505935 / 10.191392 (-0.685457) | 0.139161 / 0.680424 (-0.541263) | 0.014351 / 0.534201 (-0.519850) | 0.285790 / 0.579283 (-0.293493) | 0.265718 / 0.434364 (-0.168646) | 0.323558 / 0.540337 (-0.216780) | 0.412635 / 1.386936 (-0.974301) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005987 / 0.011353 (-0.005366) | 0.003787 / 0.011008 (-0.007221) | 0.049839 / 0.038508 (0.011331) | 0.032817 / 0.023109 (0.009708) | 0.268304 / 0.275898 (-0.007594) | 0.303409 / 0.323480 (-0.020071) | 0.004924 / 0.007986 (-0.003061) | 0.002740 / 0.004328 (-0.001589) | 0.048906 / 0.004250 (0.044655) | 0.044266 / 0.037052 (0.007213) | 0.290506 / 0.258489 (0.032017) | 0.314124 / 0.293841 (0.020283) | 0.030242 / 0.128546 (-0.098304) | 0.010555 / 0.075646 (-0.065091) | 0.058849 / 0.419271 (-0.360423) | 0.033540 / 0.043533 (-0.009993) | 0.267833 / 0.255139 (0.012694) | 0.291056 / 0.283200 (0.007857) | 0.018611 / 0.141683 (-0.123072) | 1.137620 / 1.452155 (-0.314534) | 1.199554 / 1.492716 (-0.293162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.096716 / 0.018006 (0.078709) | 0.302033 / 0.000490 (0.301543) | 0.000217 / 0.000200 (0.000017) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023208 / 0.037411 (-0.014203) | 0.076231 / 0.014526 (0.061705) | 0.088672 / 0.176557 (-0.087884) | 0.129033 / 0.737135 (-0.608103) | 0.090709 / 0.296338 (-0.205630) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297033 / 0.215209 (0.081824) | 2.951181 / 2.077655 (0.873526) | 1.567690 / 1.504120 (0.063570) | 1.436809 / 1.541195 (-0.104385) | 1.469696 / 1.468490 (0.001206) | 0.567963 / 4.584777 (-4.016813) | 0.954168 / 3.745712 (-2.791544) | 2.700473 / 5.269862 (-2.569389) | 1.742144 / 4.565676 (-2.823532) | 0.065027 / 0.424275 (-0.359248) | 0.005319 / 0.007607 (-0.002288) | 0.346459 / 0.226044 (0.120415) | 3.446117 / 2.268929 (1.177189) | 1.953142 / 55.444624 (-53.491483) | 1.639131 / 6.876477 (-5.237346) | 1.830664 / 2.142072 (-0.311409) | 0.657807 / 4.805227 (-4.147420) | 0.117987 / 6.500664 (-6.382678) | 0.040726 / 0.075469 (-0.034744) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.992666 / 1.841788 (-0.849122) | 12.305377 / 8.074308 (4.231069) | 10.274829 / 10.191392 (0.083437) | 0.141731 / 0.680424 (-0.538692) | 0.015100 / 0.534201 (-0.519101) | 0.282298 / 0.579283 (-0.296985) | 0.124301 / 0.434364 (-0.310063) | 0.320914 / 0.540337 (-0.219424) | 0.445855 / 1.386936 (-0.941081) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3b66daa02b3307079a90fbfd13856e9bec0fc1ab \"CML watermark\")\n" ]
"2024-05-21T07:41:09"
"2024-05-23T06:04:05"
"2024-05-23T05:58:05"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6910", "html_url": "https://github.com/huggingface/datasets/pull/6910", "diff_url": "https://github.com/huggingface/datasets/pull/6910.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6910.patch", "merged_at": "2024-05-23T05:58:05" }
Fix wrong type hints in data_files introduced in: - #6493
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6910/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6910/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6909
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6909/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6909/comments
https://api.github.com/repos/huggingface/datasets/issues/6909/events
https://github.com/huggingface/datasets/pull/6909
2,307,508,120
PR_kwDODunzps5wCoiE
6,909
Update requests >=2.32.1 to fix vulnerability
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6909). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005375 / 0.011353 (-0.005978) | 0.004005 / 0.011008 (-0.007003) | 0.062407 / 0.038508 (0.023899) | 0.032241 / 0.023109 (0.009131) | 0.256092 / 0.275898 (-0.019806) | 0.285740 / 0.323480 (-0.037740) | 0.004146 / 0.007986 (-0.003839) | 0.002831 / 0.004328 (-0.001497) | 0.049179 / 0.004250 (0.044928) | 0.048303 / 0.037052 (0.011251) | 0.270841 / 0.258489 (0.012352) | 0.303209 / 0.293841 (0.009368) | 0.027642 / 0.128546 (-0.100905) | 0.010661 / 0.075646 (-0.064985) | 0.201999 / 0.419271 (-0.217272) | 0.036532 / 0.043533 (-0.007001) | 0.262441 / 0.255139 (0.007302) | 0.280944 / 0.283200 (-0.002256) | 0.018369 / 0.141683 (-0.123314) | 1.122249 / 1.452155 (-0.329906) | 1.171352 / 1.492716 (-0.321364) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096433 / 0.018006 (0.078427) | 0.297272 / 0.000490 (0.296782) | 0.000222 / 0.000200 (0.000023) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019645 / 0.037411 (-0.017766) | 0.062744 / 0.014526 (0.048219) | 0.076096 / 0.176557 (-0.100460) | 0.121882 / 0.737135 (-0.615253) | 0.076267 / 0.296338 (-0.220072) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.274159 / 0.215209 (0.058950) | 2.729371 / 2.077655 (0.651716) | 1.454328 / 1.504120 (-0.049792) | 1.330517 / 1.541195 (-0.210678) | 1.338832 / 1.468490 (-0.129658) | 0.600252 / 4.584777 (-3.984525) | 2.388658 / 3.745712 (-1.357054) | 2.837717 / 5.269862 (-2.432145) | 1.747329 / 4.565676 (-2.818347) | 0.064620 / 0.424275 (-0.359655) | 0.004955 / 0.007607 (-0.002653) | 0.340253 / 0.226044 (0.114209) | 3.351559 / 2.268929 (1.082630) | 1.822718 / 55.444624 (-53.621907) | 1.518663 / 6.876477 (-5.357814) | 1.548066 / 2.142072 (-0.594006) | 0.663525 / 4.805227 (-4.141702) | 0.118334 / 6.500664 (-6.382331) | 0.042060 / 0.075469 (-0.033410) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976509 / 1.841788 (-0.865278) | 11.703321 / 8.074308 (3.629013) | 9.305605 / 10.191392 (-0.885787) | 0.131016 / 0.680424 (-0.549408) | 0.014299 / 0.534201 (-0.519902) | 0.293963 / 0.579283 (-0.285320) | 0.264018 / 0.434364 (-0.170345) | 0.330265 / 0.540337 (-0.210073) | 0.427239 / 1.386936 (-0.959697) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005437 / 0.011353 (-0.005916) | 0.003774 / 0.011008 (-0.007234) | 0.049927 / 0.038508 (0.011419) | 0.032246 / 0.023109 (0.009137) | 0.271808 / 0.275898 (-0.004090) | 0.295652 / 0.323480 (-0.027828) | 0.004220 / 0.007986 (-0.003766) | 0.002803 / 0.004328 (-0.001525) | 0.049656 / 0.004250 (0.045406) | 0.041938 / 0.037052 (0.004885) | 0.282199 / 0.258489 (0.023710) | 0.310206 / 0.293841 (0.016365) | 0.030389 / 0.128546 (-0.098157) | 0.010593 / 0.075646 (-0.065054) | 0.057862 / 0.419271 (-0.361409) | 0.033937 / 0.043533 (-0.009596) | 0.268920 / 0.255139 (0.013781) | 0.286000 / 0.283200 (0.002800) | 0.018766 / 0.141683 (-0.122917) | 1.118556 / 1.452155 (-0.333599) | 1.175083 / 1.492716 (-0.317633) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095135 / 0.018006 (0.077129) | 0.304735 / 0.000490 (0.304245) | 0.000210 / 0.000200 (0.000010) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022971 / 0.037411 (-0.014441) | 0.076204 / 0.014526 (0.061678) | 0.090801 / 0.176557 (-0.085756) | 0.130149 / 0.737135 (-0.606987) | 0.090986 / 0.296338 (-0.205352) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298535 / 0.215209 (0.083326) | 2.882959 / 2.077655 (0.805304) | 1.574018 / 1.504120 (0.069899) | 1.445251 / 1.541195 (-0.095944) | 1.483651 / 1.468490 (0.015160) | 0.572012 / 4.584777 (-4.012765) | 0.972223 / 3.745712 (-2.773489) | 2.745776 / 5.269862 (-2.524085) | 1.783980 / 4.565676 (-2.781697) | 0.063910 / 0.424275 (-0.360365) | 0.005397 / 0.007607 (-0.002210) | 0.349104 / 0.226044 (0.123059) | 3.433303 / 2.268929 (1.164374) | 1.961506 / 55.444624 (-53.483119) | 1.665905 / 6.876477 (-5.210571) | 1.800977 / 2.142072 (-0.341095) | 0.655843 / 4.805227 (-4.149384) | 0.118320 / 6.500664 (-6.382345) | 0.041748 / 0.075469 (-0.033722) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006835 / 1.841788 (-0.834952) | 12.506123 / 8.074308 (4.431815) | 10.564310 / 10.191392 (0.372918) | 0.143121 / 0.680424 (-0.537303) | 0.016340 / 0.534201 (-0.517861) | 0.284181 / 0.579283 (-0.295102) | 0.125975 / 0.434364 (-0.308389) | 0.324369 / 0.540337 (-0.215969) | 0.443713 / 1.386936 (-0.943223) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#60d21efbc01e15d0b596ac1072750cbecd91548a \"CML watermark\")\n" ]
"2024-05-21T07:11:20"
"2024-05-21T07:45:58"
"2024-05-21T07:38:25"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6909", "html_url": "https://github.com/huggingface/datasets/pull/6909", "diff_url": "https://github.com/huggingface/datasets/pull/6909.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6909.patch", "merged_at": "2024-05-21T07:38:25" }
Update requests >=2.32.1 to fix vulnerability.
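For reference, a minimal sketch of the dependency pin this PR applies; the file and variable below are assumptions for illustration (the real change lives in the project's setup configuration), and only the version specifier is taken from the PR:

```python
# Illustrative setup.py fragment (assumed location, not the actual diff).
install_requires = [
    "requests>=2.32.1",  # raise the floor so the patched release is pulled in
]
```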
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6909/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6909/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6908
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6908/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6908/comments
https://api.github.com/repos/huggingface/datasets/issues/6908/events
https://github.com/huggingface/datasets/issues/6908
2,304,958,116
I_kwDODunzps6JYt6k
6,908
Fail to load "stas/c4-en-10k" dataset since 2.16 version
{ "login": "guch8017", "id": 38173059, "node_id": "MDQ6VXNlcjM4MTczMDU5", "avatar_url": "https://avatars.githubusercontent.com/u/38173059?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guch8017", "html_url": "https://github.com/guch8017", "followers_url": "https://api.github.com/users/guch8017/followers", "following_url": "https://api.github.com/users/guch8017/following{/other_user}", "gists_url": "https://api.github.com/users/guch8017/gists{/gist_id}", "starred_url": "https://api.github.com/users/guch8017/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guch8017/subscriptions", "organizations_url": "https://api.github.com/users/guch8017/orgs", "repos_url": "https://api.github.com/users/guch8017/repos", "events_url": "https://api.github.com/users/guch8017/events{/privacy}", "received_events_url": "https://api.github.com/users/guch8017/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "I am not able to reproduce the error with datasets 2.19.1:\r\n```python\r\nIn [1]: from datasets import load_dataset; ds = load_dataset(\"stas/c4-en-10k\", streaming=True); item = next(iter(ds[\"train\"])); item\r\nOut[1]: {'text': 'Beginners BBQ Class Taking Place in Missoula!\\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.'}\r\n\r\nIn [2]: from datasets import load_dataset; ds = load_dataset(\"stas/c4-en-10k\", download_mode=\"force_redownload\"); ds\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13.3M/13.3M [00:00<00:00, 18.7MB/s]\r\nGenerating train split: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10000/10000 [00:00<00:00, 78548.55 examples/s]\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text'],\r\n num_rows: 10000\r\n })\r\n})\r\n```\r\n\r\nLooking at your error traceback, I notice that the code line numbers do not correspond to the ones of datasets 2.19.1.\r\n\r\nAdditionally, I can't reproduce the issue with `HfFileSystem`:\r\n```python\r\nIn [1]: from huggingface_hub import HfFileSystem\r\n\r\nIn [2]: fs = HfFileSystem()\r\n\r\nIn [3]: with fs.open(\"datasets/stas/c4-en-10k/c4-en-10k.py\", \"rb\") as f:\r\n ...: data = f.read()\r\n ...: \r\n\r\nIn [4]: data[:20]\r\nOut[4]: b'# coding=utf-8\\n# Cop'\r\n```\r\n\r\nCould you please verify the `datasets` and `huggingface_hub` versions you are indeed using?\r\n```python\r\nimport datasets; print(datasets.__version__)\r\n\r\nimport huggingface_hub; print(huggingface_hub.__version__)\r\n```", "Thanks for your reply! After I update the datasets version from 2.15.0 back to 2.19.1 again, it seems everything work well. Sorry for bordering you!" ]
"2024-05-20T02:43:59"
"2024-05-24T10:58:09"
"2024-05-24T10:58:09"
NONE
null
null
null
### Describe the bug After updating the datasets library to version 2.16+ (I tested 2.16, 2.19.0 and 2.19.1), loading the stas/c4-en-10k dataset with the following code ```python from datasets import load_dataset, Dataset dataset = load_dataset('stas/c4-en-10k') ``` raises a UnicodeDecodeError like ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 2523, in load_dataset builder_instance = load_dataset_builder( File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 2195, in load_dataset_builder dataset_module = dataset_module_factory( File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 1846, in dataset_module_factory raise e1 from None File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 1798, in dataset_module_factory can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read() File "/home/*/conda3/envs/watermark/lib/python3.10/codecs.py", line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte ``` I found that fs.open loads a gzip file, which is then parsed as plain text with the utf-8 decoder. ```python fs = HfFileSystem('https://huggingface.co') f = fs.open("datasets/stas/c4-en-10k/c4-en-10k.py", "rb") data = f.read() # data is gzip bytes beginning with b'\x1f\x8b\x08\x00\x00\tn\x88\x00...' data2 = unzip_gzip_bytes(data) # unzip_gzip_bytes is my own helper; data2 is what we want: '# coding=utf-8\n# Copyright 2020 The HuggingFace Datasets...' ``` ### Steps to reproduce the bug 1. Install datasets between version 2.16 and 2.19 2. Use the `datasets.load_dataset` method to load the `stas/c4-en-10k` dataset. ### Expected behavior The dataset loads normally. ### Environment info Platform = Linux-5.4.0-159-generic-x86_64-with-glibc2.35 Python = 3.10.14 Datasets = 2.19
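A self-contained sketch of the failure mode described above, using only the standard library; the compressed payload is fabricated here for illustration:

```python
import gzip

# Every gzip stream starts with the magic bytes 0x1f 0x8b; the 0x8b is the
# "invalid start byte" at position 1 in the traceback above.
raw = gzip.compress(b"# coding=utf-8\n# Copyright 2020 The HuggingFace Datasets Authors")

try:
    raw.decode("utf-8")  # decoding compressed bytes as text fails
except UnicodeDecodeError as err:
    print(err)  # 'utf-8' codec can't decode byte 0x8b in position 1 ...

# Decompressing first recovers the plain-text script.
print(gzip.decompress(raw)[:20])  # b'# coding=utf-8\n# Cop'
```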
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6908/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6908/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6907/comments
https://api.github.com/repos/huggingface/datasets/issues/6907/events
https://github.com/huggingface/datasets/issues/6907
2,303,855,833
I_kwDODunzps6JUgzZ
6,907
Support the deserialization of json lines files comprised of lists
{ "login": "umarbutler", "id": 8473183, "node_id": "MDQ6VXNlcjg0NzMxODM=", "avatar_url": "https://avatars.githubusercontent.com/u/8473183?v=4", "gravatar_id": "", "url": "https://api.github.com/users/umarbutler", "html_url": "https://github.com/umarbutler", "followers_url": "https://api.github.com/users/umarbutler/followers", "following_url": "https://api.github.com/users/umarbutler/following{/other_user}", "gists_url": "https://api.github.com/users/umarbutler/gists{/gist_id}", "starred_url": "https://api.github.com/users/umarbutler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/umarbutler/subscriptions", "organizations_url": "https://api.github.com/users/umarbutler/orgs", "repos_url": "https://api.github.com/users/umarbutler/repos", "events_url": "https://api.github.com/users/umarbutler/events{/privacy}", "received_events_url": "https://api.github.com/users/umarbutler/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Update: I ended up deciding to go back to use lines of dictionaries instead of arrays, not because of this issue as my users would be capable of downloading my corpus without `datasets`, but the speed and storage savings are not currently worth breaking my API and harming the backwards compatibility of each new revision.\r\n\r\nWith that said, for a static dataset that is not regularly updated like mine, and particularly for extremely large datasets with millions or billions of rows, using arrays could have a meaningful impact, and so there is probably still value in supporting this structure, provided the effort is not too much." ]
"2024-05-18T05:07:23"
"2024-05-18T08:53:28"
null
NONE
null
null
null
### Feature request I manage a somewhat large and popular Hugging Face dataset known as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus). I recently updated my corpus to be stored in a json lines file where each line is an array and each element represents a value at a particular column. Previously, my corpus was stored as a json lines file where each line was a dictionary and the keys were the fields. Essentially, a line in my json lines file used to look like this: ```json {"version_id":"","type":"","jurisdiction":"","source":"","citation":"","url":"","when_scraped":"","text":""} ``` And now it looks like this: ```json ["","","","","","","",""] ``` This saves 65 bytes per document and allows me to serialise and deserialise documents very quickly via `msgspec`. After making this change, I found that `datasets` was incapable of deserialising my corpus without a custom loading script, even if I ensured that the `dataset_info` field in my dataset card contained the desired names of my features. I would like to request that functionality be added to support this format, which is more memory-efficient and faster than using dictionaries. ### Motivation The [documentation](https://huggingface.co/docs/datasets/en/dataset_script) for creating dataset loading scripts asserts that: > In the next major release, the new safety features of 🤗 Datasets will disable running dataset loading scripts by default, and you will have to pass trust_remote_code=True to load datasets that require running a dataset script. I would rather not require my users to pass `trust_remote_code=True`, which means that I will need built-in support for this format. ### Your contribution I would be happy to submit a PR for this if this is something you would incorporate into `datasets` and if I can be pointed to where the code would need to go.
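As a rough illustration of what built-in support would need to do, here is one way such array-style JSON lines can be loaded today without a loading script; the file name and the use of `Dataset.from_generator` are my own choices for this sketch, not part of the request:

```python
import json
from datasets import Dataset

# Field names taken from the record layout quoted above.
COLUMNS = ["version_id", "type", "jurisdiction", "source",
           "citation", "url", "when_scraped", "text"]

def rows(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            # Each line is a JSON array; zip it back onto the column names.
            yield dict(zip(COLUMNS, json.loads(line)))

# Hypothetical local file; materializes the corpus into an Arrow-backed Dataset.
ds = Dataset.from_generator(rows, gen_kwargs={"path": "corpus.jsonl"})
```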
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6907/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6907/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6906/comments
https://api.github.com/repos/huggingface/datasets/issues/6906/events
https://github.com/huggingface/datasets/issues/6906
2,303,679,119
I_kwDODunzps6JT1qP
6,906
irc_disentangle - Issue with splitting data
{ "login": "eor51355", "id": 114260604, "node_id": "U_kgDOBs96fA", "avatar_url": "https://avatars.githubusercontent.com/u/114260604?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eor51355", "html_url": "https://github.com/eor51355", "followers_url": "https://api.github.com/users/eor51355/followers", "following_url": "https://api.github.com/users/eor51355/following{/other_user}", "gists_url": "https://api.github.com/users/eor51355/gists{/gist_id}", "starred_url": "https://api.github.com/users/eor51355/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eor51355/subscriptions", "organizations_url": "https://api.github.com/users/eor51355/orgs", "repos_url": "https://api.github.com/users/eor51355/repos", "events_url": "https://api.github.com/users/eor51355/events{/privacy}", "received_events_url": "https://api.github.com/users/eor51355/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Thank you I will try this out!\r\n\r\nOn Tue, Jun 11, 2024 at 3:55 AM Vincent Lau ***@***.***>\r\nwrote:\r\n\r\n> I add a \"streaming=True\" after the name of the dataset, and it\r\n> works.....hope it can help you\r\n>\r\n> And if you install the version datasets==2.15.0, this bug will not happen.\r\n> I don't know why, but all of them works\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6906#issuecomment-2160041812>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/A3HXU7AMBT2MNO34SC3Z5G3ZG2UOXAVCNFSM6AAAAABH45CNPWVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDCNRQGA2DCOBRGI>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "I still find out that there are some strange bug in v2.15.0 of datasets. it seems like that the *.arrow file cannot be established. it may be an index of the subsets. well I still try to debug it. but, one of the most efficient way may be using the google colab to build this index in the ~/huggingface/datasets, and than download them to replace the local file.....lol......it works!" ]
"2024-05-17T23:19:37"
"2024-06-12T02:33:02"
null
NONE
null
null
null
### Describe the bug I am trying to access your dataset through python using "datasets.load_dataset("irc_disentangle")" and I am getting this error message: ValueError: Instruction "train" corresponds to no data! ### Steps to reproduce the bug import datasets ds = datasets.load_dataset('irc_disentangle') ds ### Expected behavior The data is supposed to load into ds and be accessible as such: ds['train'][1050], ds['train'][1055] ### Environment info I tried Python 3.12 and 3.10
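A minimal sketch of the workaround reported in the comments below (streaming avoids materializing the splits); untested against this dataset here:

```python
import datasets

# Streaming mode sidesteps the split-instruction error reported above.
ds = datasets.load_dataset("irc_disentangle", streaming=True)
example = next(iter(ds["train"]))
```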
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6906/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6906/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6905
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6905/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6905/comments
https://api.github.com/repos/huggingface/datasets/issues/6905/events
https://github.com/huggingface/datasets/issues/6905
2,303,098,587
I_kwDODunzps6JRn7b
6,905
Extraction protocol for arrow files is not defined
{ "login": "radulescupetru", "id": 26553095, "node_id": "MDQ6VXNlcjI2NTUzMDk1", "avatar_url": "https://avatars.githubusercontent.com/u/26553095?v=4", "gravatar_id": "", "url": "https://api.github.com/users/radulescupetru", "html_url": "https://github.com/radulescupetru", "followers_url": "https://api.github.com/users/radulescupetru/followers", "following_url": "https://api.github.com/users/radulescupetru/following{/other_user}", "gists_url": "https://api.github.com/users/radulescupetru/gists{/gist_id}", "starred_url": "https://api.github.com/users/radulescupetru/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/radulescupetru/subscriptions", "organizations_url": "https://api.github.com/users/radulescupetru/orgs", "repos_url": "https://api.github.com/users/radulescupetru/repos", "events_url": "https://api.github.com/users/radulescupetru/events{/privacy}", "received_events_url": "https://api.github.com/users/radulescupetru/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-05-17T16:01:41"
"2024-05-17T16:01:41"
null
NONE
null
null
null
### Describe the bug Passing files with the `.arrow` extension into the data_files argument, at least when `streaming=True`, is very slow. ### Steps to reproduce the bug Basically it goes through the `_get_extraction_protocol` method located [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L820). The method first checks some known base extensions, where `arrow` is not defined, so it proceeds to determine the compression with the magic-number method, which is slow when dealing with a lot of files stored in S3. Looking at the predefined list below, I don't see `arrow` in there either, so in the end it returns None: ``` MAGIC_NUMBER_TO_COMPRESSION_PROTOCOL = { bytes.fromhex("504B0304"): "zip", bytes.fromhex("504B0506"): "zip", # empty archive bytes.fromhex("504B0708"): "zip", # spanned archive bytes.fromhex("425A68"): "bz2", bytes.fromhex("1F8B"): "gzip", bytes.fromhex("FD377A585A00"): "xz", bytes.fromhex("04224D18"): "lz4", bytes.fromhex("28B52FFD"): "zstd", } ``` ### Expected behavior My expectation is that `arrow` would be in the known-extensions list, so the method would return None without going through the magic-number check. ### Environment info datasets 2.19.0
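A sketch of the short-circuit the report asks for; the function and constant names are illustrative, not the actual `datasets` internals:

```python
KNOWN_UNCOMPRESSED_EXTENSIONS = {"arrow", "csv", "json", "jsonl", "parquet", "txt"}

def get_extraction_protocol(urlpath: str):
    extension = urlpath.rsplit(".", 1)[-1].lower()
    if extension in KNOWN_UNCOMPRESSED_EXTENSIONS:
        # Return early: no need to fetch the file's first bytes from S3
        # just to compare them against the magic-number table.
        return None
    ...  # fall back to magic-number sniffing for compressed formats
```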
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6905/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6905/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6904/comments
https://api.github.com/repos/huggingface/datasets/issues/6904/events
https://github.com/huggingface/datasets/pull/6904
2,302,912,179
PR_kwDODunzps5vzRlD
6,904
Fix decoding multi part extension
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6904). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "takign the liberty to merge this for the viewer and a new dataset being released", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005004 / 0.011353 (-0.006349) | 0.003352 / 0.011008 (-0.007657) | 0.063035 / 0.038508 (0.024527) | 0.032031 / 0.023109 (0.008922) | 0.244801 / 0.275898 (-0.031097) | 0.270622 / 0.323480 (-0.052857) | 0.003110 / 0.007986 (-0.004876) | 0.002629 / 0.004328 (-0.001700) | 0.048784 / 0.004250 (0.044534) | 0.045779 / 0.037052 (0.008726) | 0.258642 / 0.258489 (0.000153) | 0.291606 / 0.293841 (-0.002235) | 0.028237 / 0.128546 (-0.100310) | 0.010184 / 0.075646 (-0.065463) | 0.202455 / 0.419271 (-0.216816) | 0.036012 / 0.043533 (-0.007521) | 0.248209 / 0.255139 (-0.006930) | 0.267315 / 0.283200 (-0.015884) | 0.019249 / 0.141683 (-0.122434) | 1.120420 / 1.452155 (-0.331735) | 1.169515 / 1.492716 (-0.323201) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095193 / 0.018006 (0.077187) | 0.300544 / 0.000490 (0.300055) | 0.000214 / 0.000200 (0.000014) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019001 / 0.037411 (-0.018411) | 0.061857 / 0.014526 (0.047331) | 0.073379 / 0.176557 (-0.103178) | 0.121293 / 0.737135 (-0.615843) | 0.075665 / 0.296338 (-0.220673) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 
5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285153 / 0.215209 (0.069944) | 2.875527 / 2.077655 (0.797873) | 1.479851 / 1.504120 (-0.024269) | 1.360691 / 1.541195 (-0.180504) | 1.385581 / 1.468490 (-0.082909) | 0.566312 / 4.584777 (-4.018465) | 2.400202 / 3.745712 (-1.345510) | 2.719241 / 5.269862 (-2.550620) | 1.706469 / 4.565676 (-2.859208) | 0.062129 / 0.424275 (-0.362146) | 0.005291 / 0.007607 (-0.002316) | 0.334585 / 0.226044 (0.108540) | 3.293347 / 2.268929 (1.024419) | 1.790490 / 55.444624 (-53.654134) | 1.505519 / 6.876477 (-5.370958) | 1.527730 / 2.142072 (-0.614343) | 0.644554 / 4.805227 (-4.160673) | 0.119775 / 6.500664 (-6.380889) | 0.056912 / 0.075469 (-0.018557) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977512 / 1.841788 (-0.864275) | 11.293883 / 8.074308 (3.219575) | 9.669439 / 10.191392 (-0.521953) | 0.129910 / 0.680424 (-0.550514) | 0.014322 / 0.534201 (-0.519879) | 0.284967 / 0.579283 (-0.294316) | 0.265355 / 0.434364 (-0.169008) | 0.321965 / 0.540337 (-0.218372) | 0.415254 / 1.386936 (-0.971682) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005138 / 0.011353 (-0.006215) | 0.003321 / 0.011008 (-0.007687) | 0.049731 / 0.038508 (0.011223) | 0.032307 / 0.023109 (0.009198) | 0.266331 / 0.275898 (-0.009567) | 0.290863 / 0.323480 (-0.032617) | 0.004151 / 0.007986 (-0.003835) | 0.002684 / 0.004328 (-0.001644) | 0.048760 / 0.004250 (0.044510) | 0.042251 / 0.037052 (0.005199) | 0.280414 / 0.258489 (0.021925) | 0.305089 / 0.293841 (0.011248) | 0.029118 / 0.128546 (-0.099428) | 0.010276 / 0.075646 (-0.065370) | 0.057790 / 0.419271 (-0.361482) | 0.033290 / 0.043533 (-0.010243) | 0.267250 / 0.255139 (0.012111) | 0.285233 / 0.283200 (0.002034) | 0.018587 / 0.141683 (-0.123096) | 1.136198 / 1.452155 (-0.315957) | 1.185274 / 1.492716 (-0.307442) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | 
get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096355 / 0.018006 (0.078349) | 0.301827 / 0.000490 (0.301337) | 0.000216 / 0.000200 (0.000016) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022607 / 0.037411 (-0.014805) | 0.075724 / 0.014526 (0.061198) | 0.088197 / 0.176557 (-0.088359) | 0.127864 / 0.737135 (-0.609271) | 0.089294 / 0.296338 (-0.207044) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289321 / 0.215209 (0.074112) | 2.832456 / 2.077655 (0.754802) | 1.559208 / 1.504120 (0.055088) | 1.426229 / 1.541195 (-0.114966) | 1.424564 / 1.468490 (-0.043926) | 0.557754 / 4.584777 (-4.027023) | 0.940179 / 3.745712 (-2.805533) | 2.713640 / 5.269862 (-2.556222) | 1.697583 / 4.565676 (-2.868093) | 0.062024 / 0.424275 (-0.362251) | 0.005270 / 0.007607 (-0.002337) | 0.339450 / 0.226044 (0.113406) | 3.333024 / 2.268929 (1.064096) | 1.946087 / 55.444624 (-53.498537) | 1.601057 / 6.876477 (-5.275420) | 1.599862 / 2.142072 (-0.542210) | 0.642838 / 4.805227 (-4.162390) | 0.120470 / 6.500664 (-6.380194) | 0.040815 / 0.075469 (-0.034654) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.012904 / 1.841788 (-0.828884) | 11.917035 / 8.074308 (3.842727) | 9.717822 / 10.191392 (-0.473570) | 0.141730 / 0.680424 (-0.538694) | 0.015750 / 0.534201 (-0.518451) | 0.284470 / 0.579283 (-0.294813) | 0.125662 / 0.434364 (-0.308702) | 0.380740 / 0.540337 (-0.159598) | 0.418119 / 1.386936 (-0.968817) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b3f772468b2bbf77a7510e265f9d41e9eb77d53f \"CML watermark\")\n" ]
"2024-05-17T14:32:57"
"2024-05-17T14:52:56"
"2024-05-17T14:46:54"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6904", "html_url": "https://github.com/huggingface/datasets/pull/6904", "diff_url": "https://github.com/huggingface/datasets/pull/6904.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6904.patch", "merged_at": "2024-05-17T14:46:54" }
e.g. a field named `url.txt` should be treated as text. I also included a small fix to support .npz correctly.
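An illustrative reduction of the idea, not the actual patch: with a multi-part field name, only the final suffix should decide how the value is decoded:

```python
def modality_for(field_name: str) -> str:
    # "url.txt" ends in "txt", so the field holds plain text, not a URL type.
    suffix = field_name.rsplit(".", 1)[-1].lower()
    if suffix == "txt":
        return "text"
    if suffix == "npz":
        return "numpy"
    return "unknown"

assert modality_for("url.txt") == "text"
assert modality_for("weights.npz") == "numpy"
```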
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6904/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6903/comments
https://api.github.com/repos/huggingface/datasets/issues/6903/events
https://github.com/huggingface/datasets/issues/6903
2,300,436,053
I_kwDODunzps6JHd5V
6,903
Add the option of saving in parquet instead of arrow
{ "login": "arita37", "id": 18707623, "node_id": "MDQ6VXNlcjE4NzA3NjIz", "avatar_url": "https://avatars.githubusercontent.com/u/18707623?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arita37", "html_url": "https://github.com/arita37", "followers_url": "https://api.github.com/users/arita37/followers", "following_url": "https://api.github.com/users/arita37/following{/other_user}", "gists_url": "https://api.github.com/users/arita37/gists{/gist_id}", "starred_url": "https://api.github.com/users/arita37/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arita37/subscriptions", "organizations_url": "https://api.github.com/users/arita37/orgs", "repos_url": "https://api.github.com/users/arita37/repos", "events_url": "https://api.github.com/users/arita37/events{/privacy}", "received_events_url": "https://api.github.com/users/arita37/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "I think [`Dataset.to_parquet`](https://huggingface.co/docs/datasets/v1.10.2/package_reference/main_classes.html#datasets.Dataset.to_parquet) is what you're looking for.\r\n\r\nLet me know if I'm wrong ", "No, it does not save the metadata json.\r\n\r\nWe have to recode all meta json load/save\r\nwith another custome functions.\r\n\r\nsave_to_disk\r\nand load should have option with\r\n“Parquet” instead of “arrow”\r\n\r\nsince “arrow” is never user for production \r\n(only parquet).\r\n\r\nThanks !\r\n\r\n> On May 17, 2024, at 5:38, Frédéric Branchaud-Charron ***@***.***> wrote:\r\n> \r\n> \r\n> I think Dataset.to_parquet is what you're looking for.\r\n> \r\n> Let me know if I'm wrong\r\n> \r\n> —\r\n> Reply to this email directly, view it on GitHub, or unsubscribe.\r\n> You are receiving this because you authored the thread.\r\n" ]
"2024-05-16T13:35:51"
"2024-05-17T03:40:04"
null
NONE
null
null
null
### Feature request In `dataset.save_to_disk('/path/to/save/dataset')`, add the option to save in Parquet format, e.g. `dataset.save_to_disk('/path/to/save/dataset', format="parquet")`, because Arrow is not used for production big data (only Parquet). ### Motivation Arrow is not used for production big data (only Parquet). ### Your contribution I can do the testing!
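For comparison, what exists today versus what is being requested; the `format=` keyword below is the proposed argument, not an existing one:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})

# Available now: export a split to a single Parquet file (no metadata JSON).
ds.to_parquet("dataset.parquet")

# Requested (hypothetical signature, not implemented):
# ds.save_to_disk("/path/to/save/dataset", format="parquet")
```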
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6903/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6903/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6902/comments
https://api.github.com/repos/huggingface/datasets/issues/6902/events
https://github.com/huggingface/datasets/pull/6902
2,300,256,241
PR_kwDODunzps5vqLIv
6,902
Make CLI convert_to_parquet not raise error if no rights to create script branch
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6902). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005026 / 0.011353 (-0.006327) | 0.003672 / 0.011008 (-0.007336) | 0.062776 / 0.038508 (0.024268) | 0.032056 / 0.023109 (0.008947) | 0.245359 / 0.275898 (-0.030540) | 0.269371 / 0.323480 (-0.054109) | 0.004205 / 0.007986 (-0.003780) | 0.002774 / 0.004328 (-0.001555) | 0.048958 / 0.004250 (0.044708) | 0.046442 / 0.037052 (0.009390) | 0.263924 / 0.258489 (0.005434) | 0.291854 / 0.293841 (-0.001987) | 0.027299 / 0.128546 (-0.101248) | 0.010332 / 0.075646 (-0.065315) | 0.202677 / 0.419271 (-0.216595) | 0.037732 / 0.043533 (-0.005801) | 0.246028 / 0.255139 (-0.009111) | 0.272100 / 0.283200 (-0.011099) | 0.018497 / 0.141683 (-0.123186) | 1.101192 / 1.452155 (-0.350962) | 1.149683 / 1.492716 (-0.343033) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097838 / 0.018006 (0.079832) | 0.305598 / 0.000490 (0.305108) | 0.000230 / 0.000200 (0.000030) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019489 / 0.037411 (-0.017922) | 0.061902 / 0.014526 (0.047376) | 0.074825 / 0.176557 (-0.101732) | 0.121664 / 0.737135 (-0.615472) | 0.076440 / 0.296338 (-0.219898) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279194 / 0.215209 (0.063985) | 2.756777 / 2.077655 (0.679123) | 1.429298 / 1.504120 (-0.074822) | 1.313423 / 1.541195 (-0.227771) | 1.340466 / 1.468490 (-0.128024) | 0.556349 / 4.584777 (-4.028428) | 2.355910 / 3.745712 (-1.389802) | 2.806733 / 5.269862 (-2.463128) | 1.741903 / 4.565676 (-2.823773) | 0.061556 / 0.424275 (-0.362719) | 0.005477 / 0.007607 (-0.002130) | 0.327856 / 0.226044 (0.101812) | 3.283092 / 2.268929 (1.014164) | 1.797776 / 55.444624 (-53.646848) | 1.498683 / 6.876477 (-5.377794) | 1.518501 / 2.142072 (-0.623572) | 0.632267 / 4.805227 (-4.172960) | 0.116505 / 6.500664 (-6.384159) | 0.042446 / 0.075469 (-0.033023) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982841 / 1.841788 (-0.858947) | 11.709436 / 8.074308 (3.635128) | 9.570519 / 10.191392 (-0.620873) | 0.141968 / 0.680424 (-0.538456) | 0.014299 / 0.534201 (-0.519902) | 0.285101 / 0.579283 (-0.294182) | 0.267118 / 0.434364 (-0.167246) | 0.324720 / 0.540337 (-0.215617) | 0.423626 / 1.386936 (-0.963310) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005567 / 0.011353 (-0.005786) | 0.003703 / 0.011008 (-0.007306) | 0.050516 / 0.038508 (0.012008) | 0.032617 / 0.023109 (0.009508) | 0.276546 / 0.275898 (0.000648) | 0.299798 / 0.323480 (-0.023682) | 0.004282 / 0.007986 (-0.003704) | 0.002719 / 0.004328 (-0.001609) | 0.049424 / 0.004250 (0.045173) | 0.042924 / 0.037052 (0.005871) | 0.287785 / 0.258489 (0.029296) | 0.315490 / 0.293841 (0.021649) | 0.029533 / 0.128546 (-0.099013) | 0.010575 / 0.075646 (-0.065071) | 0.058210 / 0.419271 (-0.361061) | 0.033269 / 0.043533 (-0.010263) | 0.273325 / 0.255139 (0.018186) | 0.291762 / 0.283200 (0.008563) | 0.018922 / 0.141683 (-0.122761) | 1.118913 / 1.452155 (-0.333242) | 1.175554 / 1.492716 (-0.317162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.099920 / 0.018006 (0.081914) | 0.317188 / 0.000490 (0.316698) | 0.000211 / 0.000200 (0.000011) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022297 / 0.037411 (-0.015114) | 0.077775 / 0.014526 (0.063249) | 0.090239 / 0.176557 (-0.086317) | 0.130498 / 0.737135 (-0.606638) | 0.092010 / 0.296338 (-0.204328) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293534 / 0.215209 (0.078325) | 2.866070 / 2.077655 (0.788415) | 1.547147 / 1.504120 (0.043027) | 1.419684 / 1.541195 (-0.121510) | 1.432128 / 1.468490 (-0.036362) | 0.571365 / 4.584777 (-4.013412) | 0.968879 / 3.745712 (-2.776833) | 2.797415 / 5.269862 (-2.472446) | 1.767821 / 4.565676 (-2.797856) | 0.063281 / 0.424275 (-0.360994) | 0.005072 / 0.007607 (-0.002535) | 0.344547 / 0.226044 (0.118502) | 3.383888 / 2.268929 (1.114959) | 1.879537 / 55.444624 (-53.565087) | 1.598392 / 6.876477 (-5.278085) | 1.627788 / 2.142072 (-0.514284) | 0.641199 / 4.805227 (-4.164028) | 0.116349 / 6.500664 (-6.384315) | 0.041940 / 0.075469 (-0.033529) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.002494 / 1.841788 (-0.839294) | 12.310056 / 8.074308 (4.235748) | 9.819718 / 10.191392 (-0.371674) | 0.134745 / 0.680424 (-0.545679) | 0.016223 / 0.534201 (-0.517978) | 0.284791 / 0.579283 (-0.294492) | 0.124665 / 0.434364 (-0.309699) | 0.381601 / 0.540337 (-0.158737) | 0.413007 / 1.386936 (-0.973929) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6255b36be14ae22890c78749575f1f0793901f14 \"CML watermark\")\n" ]
"2024-05-16T12:21:27"
"2024-06-03T04:43:17"
"2024-05-16T12:51:05"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6902", "html_url": "https://github.com/huggingface/datasets/pull/6902", "diff_url": "https://github.com/huggingface/datasets/pull/6902.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6902.patch", "merged_at": "2024-05-16T12:51:04" }
Make CLI convert_to_parquet not raise an error if the user has no rights to create the "script" branch. Note that before this PR, the error was not critical because it was raised at the end of the script, once all the other steps had already been performed. Fix #6901. Bug introduced in datasets-2.19.0 by: - #6809
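A hedged sketch of the behavior this PR aims for (not the actual diff; `dataset_id` and `token` are placeholders): the branch-creation call is wrapped so a permission failure on third-party repos no longer aborts the script.

```python
# Sketch only: treat a 403 from create_branch as non-fatal instead of crashing.
from huggingface_hub import create_branch
from huggingface_hub.utils import HfHubHTTPError

dataset_id, token = "ORG/DATASET", None  # placeholders, not real values
try:
    create_branch(dataset_id, branch="script", repo_type="dataset", token=token, exist_ok=True)
except HfHubHTTPError as err:  # e.g. 403 Forbidden when lacking write access
    print(f"Could not create 'script' branch, continuing: {err}")
```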
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6902/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6901/comments
https://api.github.com/repos/huggingface/datasets/issues/6901/events
https://github.com/huggingface/datasets/issues/6901
2,300,167,465
I_kwDODunzps6JGcUp
6,901
HTTPError 403 raised by CLI convert_to_parquet when creating script branch on 3rd party repos
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2024-05-16T11:40:22"
"2024-05-16T12:51:06"
"2024-05-16T12:51:06"
MEMBER
null
null
null
CLI convert_to_parquet cannot create "script" branch on 3rd party repos. It can only create it on repos where the user executing the script has write access. Otherwise, a 403 Forbidden HTTPError is raised: ``` Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py", line 304, in hf_raise_for_status response.raise_for_status() File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/ORG/DATASET/branch/script The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/bin/datasets-cli", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.10/dist-packages/datasets/commands/datasets_cli.py", line 41, in main service.run() File "/usr/local/lib/python3.10/dist-packages/datasets/commands/convert_to_parquet.py", line 92, in run create_branch(dataset_id, branch="script", repo_type="dataset", token=token, exist_ok=True) File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py", line 5503, in create_branch hf_raise_for_status(response) File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py", line 367, in hf_raise_for_status raise HfHubHTTPError(message, response=response) from e huggingface_hub.utils._errors.HfHubHTTPError: (Request ID: Root=1-6645ee0d-4db1ed8a1fbe04956be15897;139a6e23-df7d-4f62-b5ba-adb6d8e6e696) 403 Forbidden: Forbidden: cannot write to script. Cannot access content at: https://huggingface.co/api/datasets/ORG/DATASET/branch/script. If you are trying to create or update content,make sure you have a token with the `write` role. ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6901/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6900/comments
https://api.github.com/repos/huggingface/datasets/issues/6900/events
https://github.com/huggingface/datasets/issues/6900
2,298,489,733
I_kwDODunzps6JACuF
6,900
[WebDataset] KeyError with user-defined `Features` when a field is missing in an example
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-05-15T17:48:34"
"2024-05-15T17:48:49"
null
MEMBER
null
null
null
reported at https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions/discussions/1 ``` File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 109, in _generate_examples example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]} ```
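A self-contained repro sketch of the failure mode (plain Python mirroring the quoted line; the sample dict is hypothetical):

```python
# The webdataset builder rewrites each declared field into {"path", "bytes"};
# when an example lacks the field, example[field_name] raises KeyError.
example = {"__key__": "sample-0001", "txt": b"a caption"}  # no "jpg" entry
field_name = "jpg"
try:
    example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]}
except KeyError:
    print(f"KeyError: example {example['__key__']} is missing field {field_name!r}")
```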
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6900/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/6900/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6899/comments
https://api.github.com/repos/huggingface/datasets/issues/6899/events
https://github.com/huggingface/datasets/issues/6899
2,298,059,597
I_kwDODunzps6I-ZtN
6,899
List of dictionary features get standardized
{ "login": "sohamparikh94", "id": 11831521, "node_id": "MDQ6VXNlcjExODMxNTIx", "avatar_url": "https://avatars.githubusercontent.com/u/11831521?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sohamparikh94", "html_url": "https://github.com/sohamparikh94", "followers_url": "https://api.github.com/users/sohamparikh94/followers", "following_url": "https://api.github.com/users/sohamparikh94/following{/other_user}", "gists_url": "https://api.github.com/users/sohamparikh94/gists{/gist_id}", "starred_url": "https://api.github.com/users/sohamparikh94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sohamparikh94/subscriptions", "organizations_url": "https://api.github.com/users/sohamparikh94/orgs", "repos_url": "https://api.github.com/users/sohamparikh94/repos", "events_url": "https://api.github.com/users/sohamparikh94/events{/privacy}", "received_events_url": "https://api.github.com/users/sohamparikh94/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-05-15T14:11:35"
"2024-05-15T14:11:35"
null
NONE
null
null
null
### Describe the bug Hi, I'm trying to create a HF dataset from a list using Dataset.from_list. Each sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets library standardizes all dictionaries under a feature and adds all possible keys (with a None value) from all the dictionaries under that feature. How can I keep the same set of keys as in the original list for each dictionary under a feature? ### Steps to reproduce the bug ``` from datasets import Dataset # Define a function to generate a sample with a "feature_1" feature def generate_sample(): # Build the sample data sample_data = { "text": "Sample text", "feature_1": [] } # Add feature_1 entries with differing keys for this sample feature_1 = [{"key1": "value1"}, {"key2": "value2"}] # example dicts with differing keys sample_data["feature_1"].extend(feature_1) return sample_data # Generate multiple samples num_samples = 10 samples = [generate_sample() for _ in range(num_samples)] # Create a Hugging Face Dataset dataset = Dataset.from_list(samples) dataset[0] ``` ```{'text': 'Sample text', 'feature_1': [{'key1': 'value1', 'key2': None}, {'key1': None, 'key2': 'value2'}]}``` ### Expected behavior ```{'text': 'Sample text', 'feature_1': [{'key1': 'value1'}, {'key2': 'value2'}]}``` ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-5.15.0-1040-nvidia-x86_64-with-glibc2.35 - Python version: 3.10.13 - `huggingface_hub` version: 0.23.0 - PyArrow version: 15.0.0 - Pandas version: 2.2.0
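One hedged workaround, assuming decoding on read is acceptable: Arrow columns need a single struct schema, which is why the keys get unioned with `None` fills; storing each dict as a JSON string sidesteps the schema merge.

```python
# Sketch: serialize the heterogeneous dicts so from_list sees plain strings.
import json

from datasets import Dataset

samples = [{
    "text": "Sample text",
    "feature_1": [json.dumps({"key1": "value1"}), json.dumps({"key2": "value2"})],
}]
dataset = Dataset.from_list(samples)
print([json.loads(s) for s in dataset[0]["feature_1"]])
# -> [{'key1': 'value1'}, {'key2': 'value2'}]
```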
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6899/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6898/comments
https://api.github.com/repos/huggingface/datasets/issues/6898/events
https://github.com/huggingface/datasets/pull/6898
2,294,432,108
PR_kwDODunzps5vWJ9v
6,898
Fix YAML error in README files appearing on GitHub
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6898). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "After this PR, the README file looks like:\r\n\r\n![Screenshot from 2024-05-14 14-19-29](https://github.com/huggingface/datasets/assets/8515462/1f665a06-98be-4dd7-ba7e-7cc025489503)\r\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004936 / 0.011353 (-0.006417) | 0.003591 / 0.011008 (-0.007418) | 0.062967 / 0.038508 (0.024459) | 0.031314 / 0.023109 (0.008205) | 0.248040 / 0.275898 (-0.027858) | 0.271630 / 0.323480 (-0.051850) | 0.003085 / 0.007986 (-0.004901) | 0.002605 / 0.004328 (-0.001724) | 0.049452 / 0.004250 (0.045202) | 0.044929 / 0.037052 (0.007876) | 0.264254 / 0.258489 (0.005765) | 0.287531 / 0.293841 (-0.006310) | 0.027197 / 0.128546 (-0.101349) | 0.009925 / 0.075646 (-0.065721) | 0.203165 / 0.419271 (-0.216107) | 0.035658 / 0.043533 (-0.007875) | 0.250207 / 0.255139 (-0.004932) | 0.269258 / 0.283200 (-0.013941) | 0.019975 / 0.141683 (-0.121708) | 1.093703 / 1.452155 (-0.358452) | 1.134031 / 1.492716 (-0.358685) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095089 / 0.018006 (0.077082) | 0.301410 / 0.000490 (0.300920) | 0.000251 / 0.000200 (0.000051) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018453 / 0.037411 (-0.018958) | 0.061674 / 0.014526 (0.047148) | 0.073442 / 0.176557 (-0.103114) | 0.119743 / 0.737135 (-0.617392) | 0.074518 / 0.296338 (-0.221820) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | 
shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276351 / 0.215209 (0.061142) | 2.757670 / 2.077655 (0.680015) | 1.471199 / 1.504120 (-0.032921) | 1.363620 / 1.541195 (-0.177575) | 1.374175 / 1.468490 (-0.094315) | 0.556444 / 4.584777 (-4.028333) | 2.340637 / 3.745712 (-1.405075) | 2.728341 / 5.269862 (-2.541521) | 1.701214 / 4.565676 (-2.864463) | 0.061832 / 0.424275 (-0.362443) | 0.005287 / 0.007607 (-0.002320) | 0.331848 / 0.226044 (0.105804) | 3.334204 / 2.268929 (1.065276) | 1.791203 / 55.444624 (-53.653421) | 1.512246 / 6.876477 (-5.364231) | 1.529570 / 2.142072 (-0.612503) | 0.632193 / 4.805227 (-4.173034) | 0.116512 / 6.500664 (-6.384153) | 0.041271 / 0.075469 (-0.034198) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981813 / 1.841788 (-0.859974) | 11.271398 / 8.074308 (3.197090) | 9.654613 / 10.191392 (-0.536780) | 0.140235 / 0.680424 (-0.540188) | 0.014336 / 0.534201 (-0.519865) | 0.284286 / 0.579283 (-0.294997) | 0.260265 / 0.434364 (-0.174099) | 0.321064 / 0.540337 (-0.219274) | 0.417554 / 1.386936 (-0.969382) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005265 / 0.011353 (-0.006088) | 0.003237 / 0.011008 (-0.007772) | 0.049723 / 0.038508 (0.011215) | 0.031705 / 0.023109 (0.008596) | 0.255548 / 0.275898 (-0.020350) | 0.281651 / 0.323480 (-0.041829) | 0.004099 / 0.007986 (-0.003886) | 0.002739 / 0.004328 (-0.001589) | 0.049713 / 0.004250 (0.045463) | 0.041563 / 0.037052 (0.004511) | 0.269500 / 0.258489 (0.011011) | 0.293948 / 0.293841 (0.000107) | 0.029259 / 0.128546 (-0.099287) | 0.010391 / 0.075646 (-0.065255) | 0.057772 / 0.419271 (-0.361500) | 0.033125 / 0.043533 (-0.010408) | 0.258838 / 0.255139 (0.003699) | 0.278616 / 0.283200 (-0.004584) | 0.017543 / 0.141683 (-0.124139) | 1.130319 / 1.452155 (-0.321835) | 1.185976 / 1.492716 (-0.306740) |\n\n### Benchmark: 
benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094827 / 0.018006 (0.076821) | 0.296820 / 0.000490 (0.296331) | 0.000212 / 0.000200 (0.000012) | 0.000046 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022583 / 0.037411 (-0.014828) | 0.076318 / 0.014526 (0.061792) | 0.087435 / 0.176557 (-0.089121) | 0.127351 / 0.737135 (-0.609784) | 0.089051 / 0.296338 (-0.207287) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289476 / 0.215209 (0.074267) | 2.842065 / 2.077655 (0.764410) | 1.536857 / 1.504120 (0.032737) | 1.393914 / 1.541195 (-0.147281) | 1.392636 / 1.468490 (-0.075854) | 0.570299 / 4.584777 (-4.014478) | 0.982246 / 3.745712 (-2.763466) | 2.758773 / 5.269862 (-2.511088) | 1.728615 / 4.565676 (-2.837062) | 0.063944 / 0.424275 (-0.360331) | 0.005014 / 0.007607 (-0.002593) | 0.347474 / 0.226044 (0.121430) | 3.398092 / 2.268929 (1.129164) | 1.855134 / 55.444624 (-53.589491) | 1.568705 / 6.876477 (-5.307772) | 1.574201 / 2.142072 (-0.567871) | 0.649466 / 4.805227 (-4.155761) | 0.116330 / 6.500664 (-6.384334) | 0.040730 / 0.075469 (-0.034739) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.000675 / 1.841788 (-0.841113) | 11.899660 / 8.074308 (3.825352) | 9.913335 / 10.191392 (-0.278058) | 0.132517 / 0.680424 (-0.547907) | 0.016467 / 0.534201 (-0.517734) | 0.282221 / 0.579283 (-0.297062) | 0.125205 / 0.434364 (-0.309159) | 0.374986 / 0.540337 (-0.165351) | 0.418666 / 1.386936 (-0.968270) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e2f989d01b49e3d6f98b2014d9ece3307e885b7a \"CML watermark\")\n" ]
"2024-05-14T05:21:57"
"2024-05-16T14:36:57"
"2024-05-16T14:28:16"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6898", "html_url": "https://github.com/huggingface/datasets/pull/6898", "diff_url": "https://github.com/huggingface/datasets/pull/6898.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6898.patch", "merged_at": "2024-05-16T14:28:16" }
Fix YAML error in README files appearing on GitHub. See error message: ![Screenshot from 2024-05-14 06-58-02](https://github.com/huggingface/datasets/assets/8515462/7984cc4e-96ee-4e83-99a4-4c0c5791fa05) Fix #6897.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6898/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6897/comments
https://api.github.com/repos/huggingface/datasets/issues/6897/events
https://github.com/huggingface/datasets/issues/6897
2,293,428,243
I_kwDODunzps6IsvAT
6,897
datasets template guide :: issue in documentation YAML
{ "login": "bghira", "id": 59658056, "node_id": "MDQ6VXNlcjU5NjU4MDU2", "avatar_url": "https://avatars.githubusercontent.com/u/59658056?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bghira", "html_url": "https://github.com/bghira", "followers_url": "https://api.github.com/users/bghira/followers", "following_url": "https://api.github.com/users/bghira/following{/other_user}", "gists_url": "https://api.github.com/users/bghira/gists{/gist_id}", "starred_url": "https://api.github.com/users/bghira/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bghira/subscriptions", "organizations_url": "https://api.github.com/users/bghira/orgs", "repos_url": "https://api.github.com/users/bghira/repos", "events_url": "https://api.github.com/users/bghira/events{/privacy}", "received_events_url": "https://api.github.com/users/bghira/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hello, @bghira.\r\n\r\nThanks for reporting. Please note that the text originating the error is not supposed to be valid YAML: it contains the instructions to generate the actual YAML content, that should replace the instructions comment.\r\n\r\nOn the other hand, I agree that it is not nice to have that YAML error message at the top of the page: \r\n![Screenshot from 2024-05-14 06-58-02](https://github.com/huggingface/datasets/assets/8515462/28409eb4-99e7-4b24-8eaa-21a65a8f23b2)\r\n\r\nI am proposing a change to make the YAML error disappear.", "thanks albert! i looked at it for a while to figure it out. i think the `raw` view option is the correct way to look at it?" ]
"2024-05-13T17:33:59"
"2024-05-16T14:28:17"
"2024-05-16T14:28:17"
NONE
null
null
null
### Describe the bug There is a YAML error at the top of the page, and I don't think it's supposed to be there ### Steps to reproduce the bug 1. Browse to [this tutorial document](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md) 2. Observe a big red error at the top 3. The rest of the document remains functional ### Expected behavior I think the YAML block should be displayed or ignored. ### Environment info N/A
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6897/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6896
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6896/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6896/comments
https://api.github.com/repos/huggingface/datasets/issues/6896/events
https://github.com/huggingface/datasets/issues/6896
2,293,176,061
I_kwDODunzps6Irxb9
6,896
Regression bug: `NonMatchingSplitsSizesError` for (possibly) overwritten dataset
{ "login": "finiteautomata", "id": 167943, "node_id": "MDQ6VXNlcjE2Nzk0Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4", "gravatar_id": "", "url": "https://api.github.com/users/finiteautomata", "html_url": "https://github.com/finiteautomata", "followers_url": "https://api.github.com/users/finiteautomata/followers", "following_url": "https://api.github.com/users/finiteautomata/following{/other_user}", "gists_url": "https://api.github.com/users/finiteautomata/gists{/gist_id}", "starred_url": "https://api.github.com/users/finiteautomata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/finiteautomata/subscriptions", "organizations_url": "https://api.github.com/users/finiteautomata/orgs", "repos_url": "https://api.github.com/users/finiteautomata/repos", "events_url": "https://api.github.com/users/finiteautomata/events{/privacy}", "received_events_url": "https://api.github.com/users/finiteautomata/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-05-13T15:41:57"
"2024-05-13T15:44:48"
null
NONE
null
null
null
### Describe the bug While trying to load the dataset `https://huggingface.co/datasets/pysentimiento/spanish-tweets-small`, I get this error: ```python --------------------------------------------------------------------------- NonMatchingSplitsSizesError Traceback (most recent call last) [<ipython-input-1-d6a3c721d3b8>](https://localhost:8080/#) in <cell line: 3>() 1 from datasets import load_dataset 2 ----> 3 ds = load_dataset("pysentimiento/spanish-tweets-small") 3 frames [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2150 2151 # Download and prepare data -> 2152 builder_instance.download_and_prepare( 2153 download_config=download_config, 2154 download_mode=download_mode, [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 946 if num_proc is not None: 947 prepare_split_kwargs["num_proc"] = num_proc --> 948 self._download_and_prepare( 949 dl_manager=dl_manager, 950 verification_mode=verification_mode, [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 1059 1060 if verification_mode == VerificationMode.BASIC_CHECKS or verification_mode == VerificationMode.ALL_CHECKS: -> 1061 verify_splits(self.info.splits, split_dict) 1062 1063 # Update the info object with the splits. [/usr/local/lib/python3.10/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_splits(expected_splits, recorded_splits) 98 ] 99 if len(bad_splits) > 0: --> 100 raise NonMatchingSplitsSizesError(str(bad_splits)) 101 logger.info("All the splits matched successfully.") 102 NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=82649695458, num_examples=597433111, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=3358310095, num_examples=24898932, shard_lengths=[3626991, 3716991, 4036990, 3506990, 3676990, 3716990, 2616990], dataset_name='spanish-tweets-small')}] ``` I think this dataset was updated at some point, so this might be related to #6271. It works fine in `2.10.0`, but not from `2.13.0` onwards. ### Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("pysentimiento/spanish-tweets-small") ``` You can run it in [this notebook](https://colab.research.google.com/drive/1FdhqLiVimHIlkn7B54DbhizeQ4U3vGVl#scrollTo=YgA50cBSibUg) ### Expected behavior The dataset loads without any error. ### Environment info - `datasets` version: 2.13.0 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.20.3 - PyArrow version: 14.0.2 - Pandas version: 2.0.3
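Two hedged workarounds while the regression stands (both use public `datasets` APIs; neither fixes the stale split metadata itself):

```python
# Either recompute the cached split sizes with a fresh download, or skip the
# size verification entirely.
from datasets import DownloadMode, VerificationMode, load_dataset

ds = load_dataset("pysentimiento/spanish-tweets-small",
                  download_mode=DownloadMode.FORCE_REDOWNLOAD)
# or:
ds = load_dataset("pysentimiento/spanish-tweets-small",
                  verification_mode=VerificationMode.NO_CHECKS)
```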
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6896/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6896/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6895
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6895/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6895/comments
https://api.github.com/repos/huggingface/datasets/issues/6895/events
https://github.com/huggingface/datasets/pull/6895
2,292,993,156
PR_kwDODunzps5vRK8P
6,895
Document that to_json defaults to JSON Lines
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6895). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004914 / 0.011353 (-0.006439) | 0.003621 / 0.011008 (-0.007387) | 0.062841 / 0.038508 (0.024333) | 0.031630 / 0.023109 (0.008520) | 0.247666 / 0.275898 (-0.028232) | 0.288192 / 0.323480 (-0.035288) | 0.003145 / 0.007986 (-0.004841) | 0.002655 / 0.004328 (-0.001674) | 0.049484 / 0.004250 (0.045233) | 0.046593 / 0.037052 (0.009540) | 0.271550 / 0.258489 (0.013061) | 0.293228 / 0.293841 (-0.000613) | 0.026941 / 0.128546 (-0.101606) | 0.009936 / 0.075646 (-0.065710) | 0.201741 / 0.419271 (-0.217530) | 0.035435 / 0.043533 (-0.008098) | 0.251868 / 0.255139 (-0.003271) | 0.272082 / 0.283200 (-0.011118) | 0.019731 / 0.141683 (-0.121952) | 1.125752 / 1.452155 (-0.326403) | 1.152058 / 1.492716 (-0.340659) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099695 / 0.018006 (0.081689) | 0.308306 / 0.000490 (0.307816) | 0.000223 / 0.000200 (0.000023) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018616 / 0.037411 (-0.018795) | 0.061886 / 0.014526 (0.047360) | 0.074059 / 0.176557 (-0.102498) | 0.124902 / 0.737135 (-0.612234) | 0.075108 / 0.296338 (-0.221230) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.336707 / 0.215209 (0.121498) | 2.805197 / 2.077655 (0.727542) | 1.565826 / 1.504120 (0.061706) | 1.443708 / 1.541195 (-0.097486) | 1.341167 / 1.468490 (-0.127323) | 0.566814 / 4.584777 (-4.017963) | 2.374536 / 3.745712 (-1.371176) | 2.804921 / 5.269862 (-2.464941) | 1.739848 / 4.565676 (-2.825829) | 0.062779 / 0.424275 (-0.361496) | 0.005341 / 0.007607 (-0.002266) | 0.326482 / 0.226044 (0.100438) | 3.273460 / 2.268929 (1.004531) | 1.803656 / 55.444624 (-53.640968) | 1.502518 / 6.876477 (-5.373958) | 1.523665 / 2.142072 (-0.618407) | 0.642443 / 4.805227 (-4.162784) | 0.117820 / 6.500664 (-6.382844) | 0.042540 / 0.075469 (-0.032929) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963399 / 1.841788 (-0.878388) | 11.503648 / 8.074308 (3.429340) | 9.483957 / 10.191392 (-0.707435) | 0.129118 / 0.680424 (-0.551306) | 0.014136 / 0.534201 (-0.520065) | 0.286766 / 0.579283 (-0.292517) | 0.273328 / 0.434364 (-0.161036) | 0.324075 / 0.540337 (-0.216262) | 0.420408 / 1.386936 (-0.966528) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005099 / 0.011353 (-0.006254) | 0.003721 / 0.011008 (-0.007288) | 0.050614 / 0.038508 (0.012106) | 0.031882 / 0.023109 (0.008773) | 0.267619 / 0.275898 (-0.008279) | 0.291874 / 0.323480 (-0.031606) | 0.004254 / 0.007986 (-0.003731) | 0.002766 / 0.004328 (-0.001563) | 0.049291 / 0.004250 (0.045041) | 0.043302 / 0.037052 (0.006249) | 0.274891 / 0.258489 (0.016402) | 0.304977 / 0.293841 (0.011136) | 0.029088 / 0.128546 (-0.099459) | 0.010425 / 0.075646 (-0.065221) | 0.057781 / 0.419271 (-0.361491) | 0.033589 / 0.043533 (-0.009943) | 0.264293 / 0.255139 (0.009154) | 0.284861 / 0.283200 (0.001661) | 0.018025 / 0.141683 (-0.123658) | 1.124954 / 1.452155 (-0.327200) | 1.161957 / 1.492716 (-0.330759) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.103622 / 0.018006 (0.085615) | 0.310915 / 0.000490 (0.310425) | 0.000241 / 0.000200 (0.000041) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022550 / 0.037411 (-0.014862) | 0.076466 / 0.014526 (0.061940) | 0.088297 / 0.176557 (-0.088260) | 0.128659 / 0.737135 (-0.608477) | 0.091823 / 0.296338 (-0.204516) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293431 / 0.215209 (0.078222) | 2.888105 / 2.077655 (0.810450) | 1.559581 / 1.504120 (0.055461) | 1.421424 / 1.541195 (-0.119771) | 1.437941 / 1.468490 (-0.030549) | 0.577544 / 4.584777 (-4.007233) | 0.968840 / 3.745712 (-2.776872) | 2.799796 / 5.269862 (-2.470066) | 1.744791 / 4.565676 (-2.820885) | 0.064159 / 0.424275 (-0.360116) | 0.005043 / 0.007607 (-0.002564) | 0.341039 / 0.226044 (0.114995) | 3.354402 / 2.268929 (1.085474) | 1.904093 / 55.444624 (-53.540532) | 1.604046 / 6.876477 (-5.272431) | 1.610384 / 2.142072 (-0.531688) | 0.658129 / 4.805227 (-4.147098) | 0.119297 / 6.500664 (-6.381367) | 0.041396 / 0.075469 (-0.034073) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001109 / 1.841788 (-0.840678) | 12.081856 / 8.074308 (4.007548) | 10.090943 / 10.191392 (-0.100449) | 0.150433 / 0.680424 (-0.529991) | 0.015850 / 0.534201 (-0.518351) | 0.286590 / 0.579283 (-0.292693) | 0.131137 / 0.434364 (-0.303227) | 0.389033 / 0.540337 (-0.151304) | 0.421382 / 1.386936 (-0.965554) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#22b7baed53f9f295a5dda2fe3eb0b7434bf57e89 \"CML watermark\")\n" ]
"2024-05-13T14:22:34"
"2024-05-16T14:37:25"
"2024-05-16T14:31:26"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6895", "html_url": "https://github.com/huggingface/datasets/pull/6895", "diff_url": "https://github.com/huggingface/datasets/pull/6895.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6895.patch", "merged_at": "2024-05-16T14:31:26" }
Document that `Dataset.to_json` defaults to JSON Lines by adding an explanation to the corresponding docstring. Fix #6894.
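A short usage sketch of the default being documented (file names are arbitrary):

```python
# to_json writes JSON Lines by default; lines=False with orient="records"
# produces a single JSON array instead.
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2]})
ds.to_json("out.jsonl")                                 # one JSON object per line
ds.to_json("out.json", lines=False, orient="records")   # a single JSON array
```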
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6895/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6894
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6894/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6894/comments
https://api.github.com/repos/huggingface/datasets/issues/6894/events
https://github.com/huggingface/datasets/issues/6894
2,292,840,226
I_kwDODunzps6Iqfci
6,894
Better document defaults of to_json
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2024-05-13T13:30:54"
"2024-05-16T14:31:27"
"2024-05-16T14:31:27"
MEMBER
null
null
null
Better document defaults of `to_json`: the default format is [JSON-Lines](https://jsonlines.org/). Related to: - #6891
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6894/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6893
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6893/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6893/comments
https://api.github.com/repos/huggingface/datasets/issues/6893/events
https://github.com/huggingface/datasets/pull/6893
2,292,677,439
PR_kwDODunzps5vQFEv
6,893
Close gzipped files properly
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6893). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005388 / 0.011353 (-0.005965) | 0.003822 / 0.011008 (-0.007187) | 0.063285 / 0.038508 (0.024777) | 0.033780 / 0.023109 (0.010671) | 0.239580 / 0.275898 (-0.036318) | 0.264203 / 0.323480 (-0.059277) | 0.004207 / 0.007986 (-0.003778) | 0.002716 / 0.004328 (-0.001612) | 0.049569 / 0.004250 (0.045319) | 0.048591 / 0.037052 (0.011538) | 0.252606 / 0.258489 (-0.005884) | 0.285998 / 0.293841 (-0.007843) | 0.028650 / 0.128546 (-0.099896) | 0.010652 / 0.075646 (-0.064994) | 0.203962 / 0.419271 (-0.215310) | 0.036207 / 0.043533 (-0.007326) | 0.240374 / 0.255139 (-0.014765) | 0.263564 / 0.283200 (-0.019636) | 0.017722 / 0.141683 (-0.123961) | 1.143741 / 1.452155 (-0.308414) | 1.192452 / 1.492716 (-0.300264) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.141329 / 0.018006 (0.123323) | 0.320169 / 0.000490 (0.319679) | 0.000240 / 0.000200 (0.000041) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019885 / 0.037411 (-0.017526) | 0.063322 / 0.014526 (0.048796) | 0.075446 / 0.176557 (-0.101110) | 0.122619 / 0.737135 (-0.614517) | 0.077175 / 0.296338 (-0.219163) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281292 / 0.215209 (0.066083) | 2.796220 / 2.077655 (0.718565) | 1.456035 / 1.504120 (-0.048085) | 1.334445 / 1.541195 (-0.206750) | 1.380223 / 1.468490 (-0.088267) | 0.575895 / 4.584777 (-4.008882) | 2.375791 / 3.745712 (-1.369921) | 2.926273 / 5.269862 (-2.343589) | 1.832586 / 4.565676 (-2.733090) | 0.064323 / 0.424275 (-0.359952) | 0.005403 / 0.007607 (-0.002204) | 0.334088 / 0.226044 (0.108043) | 3.321174 / 2.268929 (1.052246) | 1.821432 / 55.444624 (-53.623193) | 1.520181 / 6.876477 (-5.356296) | 1.582487 / 2.142072 (-0.559585) | 0.645641 / 4.805227 (-4.159586) | 0.119596 / 6.500664 (-6.381068) | 0.043144 / 0.075469 (-0.032325) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985104 / 1.841788 (-0.856684) | 12.518240 / 8.074308 (4.443932) | 10.017118 / 10.191392 (-0.174274) | 0.133900 / 0.680424 (-0.546524) | 0.014591 / 0.534201 (-0.519610) | 0.288326 / 0.579283 (-0.290957) | 0.262292 / 0.434364 (-0.172072) | 0.327601 / 0.540337 (-0.212736) | 0.421525 / 1.386936 (-0.965411) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005546 / 0.011353 (-0.005807) | 0.003961 / 0.011008 (-0.007047) | 0.051745 / 0.038508 (0.013237) | 0.032587 / 0.023109 (0.009478) | 0.266886 / 0.275898 (-0.009012) | 0.301327 / 0.323480 (-0.022153) | 0.004273 / 0.007986 (-0.003713) | 0.002851 / 0.004328 (-0.001477) | 0.049333 / 0.004250 (0.045082) | 0.044530 / 0.037052 (0.007478) | 0.286829 / 0.258489 (0.028340) | 0.310732 / 0.293841 (0.016892) | 0.029925 / 0.128546 (-0.098621) | 0.011270 / 0.075646 (-0.064377) | 0.059071 / 0.419271 (-0.360200) | 0.033899 / 0.043533 (-0.009633) | 0.270448 / 0.255139 (0.015309) | 0.286935 / 0.283200 (0.003735) | 0.019516 / 0.141683 (-0.122167) | 1.125815 / 1.452155 (-0.326339) | 1.179893 / 1.492716 (-0.312823) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.096476 / 0.018006 (0.078470) | 0.305149 / 0.000490 (0.304660) | 0.000207 / 0.000200 (0.000008) | 0.000046 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023648 / 0.037411 (-0.013763) | 0.082847 / 0.014526 (0.068322) | 0.089210 / 0.176557 (-0.087347) | 0.130194 / 0.737135 (-0.606941) | 0.091700 / 0.296338 (-0.204639) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290995 / 0.215209 (0.075786) | 2.870335 / 2.077655 (0.792680) | 1.595661 / 1.504120 (0.091541) | 1.452319 / 1.541195 (-0.088876) | 1.505647 / 1.468490 (0.037157) | 0.575856 / 4.584777 (-4.008921) | 1.005527 / 3.745712 (-2.740185) | 2.927824 / 5.269862 (-2.342038) | 1.791702 / 4.565676 (-2.773975) | 0.064804 / 0.424275 (-0.359471) | 0.005203 / 0.007607 (-0.002404) | 0.348615 / 0.226044 (0.122570) | 3.463989 / 2.268929 (1.195060) | 1.947758 / 55.444624 (-53.496866) | 1.669974 / 6.876477 (-5.206502) | 1.721663 / 2.142072 (-0.420410) | 0.650999 / 4.805227 (-4.154228) | 0.117769 / 6.500664 (-6.382895) | 0.041738 / 0.075469 (-0.033731) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004140 / 1.841788 (-0.837648) | 13.035487 / 8.074308 (4.961179) | 10.318152 / 10.191392 (0.126760) | 0.143776 / 0.680424 (-0.536648) | 0.016272 / 0.534201 (-0.517929) | 0.286564 / 0.579283 (-0.292719) | 0.126579 / 0.434364 (-0.307785) | 0.397253 / 0.540337 (-0.143085) | 0.424968 / 1.386936 (-0.961968) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ddb6a283d7dfccc81a9fb12e761b819fed86c7a0 \"CML watermark\")\n", "Supersede and close: #6889" ]
"2024-05-13T12:24:39"
"2024-05-13T13:53:17"
"2024-05-13T13:01:54"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6893", "html_url": "https://github.com/huggingface/datasets/pull/6893", "diff_url": "https://github.com/huggingface/datasets/pull/6893.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6893.patch", "merged_at": "2024-05-13T13:01:54" }
close https://github.com/huggingface/datasets/issues/6877
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6893/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6893/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6892
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6892/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6892/comments
https://api.github.com/repos/huggingface/datasets/issues/6892/events
https://github.com/huggingface/datasets/pull/6892
2,291,201,347
PR_kwDODunzps5vLIlp
6,892
Add support for categorical/dictionary types
{ "login": "EthanSteinberg", "id": 342233, "node_id": "MDQ6VXNlcjM0MjIzMw==", "avatar_url": "https://avatars.githubusercontent.com/u/342233?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EthanSteinberg", "html_url": "https://github.com/EthanSteinberg", "followers_url": "https://api.github.com/users/EthanSteinberg/followers", "following_url": "https://api.github.com/users/EthanSteinberg/following{/other_user}", "gists_url": "https://api.github.com/users/EthanSteinberg/gists{/gist_id}", "starred_url": "https://api.github.com/users/EthanSteinberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EthanSteinberg/subscriptions", "organizations_url": "https://api.github.com/users/EthanSteinberg/orgs", "repos_url": "https://api.github.com/users/EthanSteinberg/repos", "events_url": "https://api.github.com/users/EthanSteinberg/events{/privacy}", "received_events_url": "https://api.github.com/users/EthanSteinberg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6892). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005388 / 0.011353 (-0.005965) | 0.004004 / 0.011008 (-0.007005) | 0.064037 / 0.038508 (0.025529) | 0.031666 / 0.023109 (0.008557) | 0.236493 / 0.275898 (-0.039405) | 0.269047 / 0.323480 (-0.054432) | 0.005008 / 0.007986 (-0.002977) | 0.002964 / 0.004328 (-0.001364) | 0.049926 / 0.004250 (0.045675) | 0.048092 / 0.037052 (0.011039) | 0.245563 / 0.258489 (-0.012926) | 0.282614 / 0.293841 (-0.011227) | 0.027488 / 0.128546 (-0.101058) | 0.010904 / 0.075646 (-0.064742) | 0.204892 / 0.419271 (-0.214379) | 0.037161 / 0.043533 (-0.006372) | 0.238488 / 0.255139 (-0.016651) | 0.258192 / 0.283200 (-0.025008) | 0.018819 / 0.141683 (-0.122864) | 1.131573 / 1.452155 (-0.320582) | 1.204084 / 1.492716 (-0.288632) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095852 / 0.018006 (0.077846) | 0.300225 / 0.000490 (0.299735) | 0.000217 / 0.000200 (0.000017) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018592 / 0.037411 (-0.018819) | 0.062297 / 0.014526 (0.047772) | 0.074344 / 0.176557 (-0.102212) | 0.120654 / 0.737135 (-0.616481) | 0.075567 / 0.296338 (-0.220772) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287700 / 0.215209 (0.072491) | 2.829536 / 2.077655 (0.751882) | 1.446296 / 1.504120 (-0.057824) | 1.320912 / 1.541195 (-0.220283) | 1.362744 / 1.468490 (-0.105746) | 0.563732 / 4.584777 (-4.021045) | 2.399904 / 3.745712 (-1.345808) | 2.676706 / 5.269862 (-2.593156) | 1.744780 / 4.565676 (-2.820896) | 0.062884 / 0.424275 (-0.361391) | 0.004936 / 0.007607 (-0.002671) | 0.338084 / 0.226044 (0.112040) | 3.309532 / 2.268929 (1.040603) | 1.792791 / 55.444624 (-53.651833) | 1.502038 / 6.876477 (-5.374439) | 1.662417 / 2.142072 (-0.479655) | 0.642835 / 4.805227 (-4.162393) | 0.117002 / 6.500664 (-6.383662) | 0.041880 / 0.075469 (-0.033589) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974814 / 1.841788 (-0.866974) | 11.430883 / 8.074308 (3.356575) | 10.314734 / 10.191392 (0.123342) | 0.139838 / 0.680424 (-0.540586) | 0.014939 / 0.534201 (-0.519262) | 0.288048 / 0.579283 (-0.291235) | 0.269146 / 0.434364 (-0.165218) | 0.324300 / 0.540337 (-0.216037) | 0.421612 / 1.386936 (-0.965324) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005660 / 0.011353 (-0.005692) | 0.003723 / 0.011008 (-0.007285) | 0.049909 / 0.038508 (0.011401) | 0.033079 / 0.023109 (0.009970) | 0.270940 / 0.275898 (-0.004958) | 0.291173 / 0.323480 (-0.032307) | 0.004336 / 0.007986 (-0.003650) | 0.002793 / 0.004328 (-0.001535) | 0.049619 / 0.004250 (0.045368) | 0.041062 / 0.037052 (0.004010) | 0.285026 / 0.258489 (0.026537) | 0.322119 / 0.293841 (0.028278) | 0.029653 / 0.128546 (-0.098894) | 0.010785 / 0.075646 (-0.064861) | 0.058680 / 0.419271 (-0.360591) | 0.033300 / 0.043533 (-0.010233) | 0.269452 / 0.255139 (0.014313) | 0.285426 / 0.283200 (0.002226) | 0.017655 / 0.141683 (-0.124028) | 1.144713 / 1.452155 (-0.307442) | 1.196828 / 1.492716 (-0.295888) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.096719 / 0.018006 (0.078713) | 0.303532 / 0.000490 (0.303042) | 0.000223 / 0.000200 (0.000023) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022620 / 0.037411 (-0.014791) | 0.077057 / 0.014526 (0.062532) | 0.088570 / 0.176557 (-0.087987) | 0.128715 / 0.737135 (-0.608421) | 0.090844 / 0.296338 (-0.205494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298101 / 0.215209 (0.082892) | 2.919861 / 2.077655 (0.842206) | 1.608945 / 1.504120 (0.104825) | 1.487756 / 1.541195 (-0.053439) | 1.520800 / 1.468490 (0.052310) | 0.576615 / 4.584777 (-4.008162) | 0.964250 / 3.745712 (-2.781462) | 2.852968 / 5.269862 (-2.416893) | 1.868768 / 4.565676 (-2.696908) | 0.063934 / 0.424275 (-0.360341) | 0.005093 / 0.007607 (-0.002514) | 0.352984 / 0.226044 (0.126939) | 3.507441 / 2.268929 (1.238513) | 1.944467 / 55.444624 (-53.500158) | 1.663985 / 6.876477 (-5.212492) | 1.847029 / 2.142072 (-0.295043) | 0.669228 / 4.805227 (-4.136000) | 0.118990 / 6.500664 (-6.381675) | 0.041788 / 0.075469 (-0.033681) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004541 / 1.841788 (-0.837247) | 12.525181 / 8.074308 (4.450873) | 10.488167 / 10.191392 (0.296775) | 0.141182 / 0.680424 (-0.539242) | 0.016432 / 0.534201 (-0.517769) | 0.283682 / 0.579283 (-0.295601) | 0.128277 / 0.434364 (-0.306087) | 0.321933 / 0.540337 (-0.218404) | 0.416430 / 1.386936 (-0.970506) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#686f5df47442bf4b3a2a73ba255427ae8d659eea \"CML watermark\")\n", "@lhoestq Thanks a ton for helping this get merged!" ]
"2024-05-12T07:15:08"
"2024-06-07T15:01:39"
"2024-06-07T12:20:42"
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6892", "html_url": "https://github.com/huggingface/datasets/pull/6892", "diff_url": "https://github.com/huggingface/datasets/pull/6892.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6892.patch", "merged_at": "2024-06-07T12:20:42" }
Arrow has a very useful dictionary/categorical type (https://arrow.apache.org/docs/python/generated/pyarrow.dictionary.html). This data type has significant speed, memory and disk benefits over pa.string() when there are only a few unique text strings in a column. Unfortunately, huggingface datasets currently does not support this type, so huggingface datasets cannot natively read many parquet files that use this datatype. This PR adds support for Huggingface Datasets to read categorical/dictionary data. Note: This PR functions by simply converting those dictionary/categorical types to strings. This means that huggingface datasets cannot take advantage of the compute benefits of categoricals, but it significantly simplifies the logic. At this time, I do not think it makes sense to optimize categorical support within huggingface datasets, and we should only try to optimize later, if necessary. Closes #5706
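As a rough illustration of the conversion strategy described above (a sketch, not the exact code in this diff), a dictionary-encoded Arrow column can be cast back to plain strings:

```python
import pyarrow as pa

# A dictionary (categorical) column: unique values are stored once,
# rows hold small integer indices into that value table.
arr = pa.array(["cat", "dog", "cat", "cat"]).dictionary_encode()
print(arr.type)  # dictionary<values=string, indices=int32, ordered=0>

# Casting to pa.string() materializes the strings, the simple
# (if less memory-efficient) representation this PR falls back to.
print(arr.cast(pa.string()))
```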
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6892/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6892/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6891
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6891/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6891/comments
https://api.github.com/repos/huggingface/datasets/issues/6891/events
https://github.com/huggingface/datasets/issues/6891
2,291,118,869
I_kwDODunzps6Ij7MV
6,891
Unable to load JSON saved using `to_json`
{ "login": "DarshanDeshpande", "id": 39432636, "node_id": "MDQ6VXNlcjM5NDMyNjM2", "avatar_url": "https://avatars.githubusercontent.com/u/39432636?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DarshanDeshpande", "html_url": "https://github.com/DarshanDeshpande", "followers_url": "https://api.github.com/users/DarshanDeshpande/followers", "following_url": "https://api.github.com/users/DarshanDeshpande/following{/other_user}", "gists_url": "https://api.github.com/users/DarshanDeshpande/gists{/gist_id}", "starred_url": "https://api.github.com/users/DarshanDeshpande/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DarshanDeshpande/subscriptions", "organizations_url": "https://api.github.com/users/DarshanDeshpande/orgs", "repos_url": "https://api.github.com/users/DarshanDeshpande/repos", "events_url": "https://api.github.com/users/DarshanDeshpande/events{/privacy}", "received_events_url": "https://api.github.com/users/DarshanDeshpande/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @DarshanDeshpande,\r\n\r\nPlease note that the default format of the method `Dataset.to_json` is [JSON-Lines](https://jsonlines.org/): it passes `orient=\"records\", lines=True` to `pandas.DataFrame.to_json`. This format is specially useful for large datasets, since unlike regular JSON files, it does not require loading all the data into memory at once, but can be done iteratively by batches.\r\n\r\nIn order to read this file using the `json` library, you should parse line by line:\r\n```python\r\nwith open(\"full_dataset.json\", \"r\") as f:\r\n data = [json.loads(line) for line in f]\r\nlen(data)\r\n```\r\nMaybe we should explain this better in our docs.", "Now we explain this better in out docs:\r\n- #6895" ]
"2024-05-12T01:02:51"
"2024-05-16T14:32:55"
"2024-05-12T07:02:02"
NONE
null
null
null
### Describe the bug Datasets stored in the JSON format cannot be loaded using `json.load()` ### Steps to reproduce the bug ``` import json from datasets import load_dataset dataset = load_dataset("squad") train_dataset, test_dataset = dataset["train"], dataset["validation"] test_dataset.to_json("full_dataset.json") # This works loaded_test = load_dataset("json", data_files="full_dataset.json") # This fails loaded_test = json.load(open("full_dataset.json", "r")) ``` ### Expected behavior The JSON should be correctly formatted when writing so that it can be loaded using `json.load()`. ### Environment info Colab: https://colab.research.google.com/drive/1st1iStFUVgu9ZPvnzSzL4vDeYWDwYpUm?usp=sharing
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6891/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6890
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6890/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6890/comments
https://api.github.com/repos/huggingface/datasets/issues/6890/events
https://github.com/huggingface/datasets/issues/6890
2,288,699,041
I_kwDODunzps6Iasah
6,890
add `with_transform` and/or `set_transform` to IterableDataset
{ "login": "not-lain", "id": 70411813, "node_id": "MDQ6VXNlcjcwNDExODEz", "avatar_url": "https://avatars.githubusercontent.com/u/70411813?v=4", "gravatar_id": "", "url": "https://api.github.com/users/not-lain", "html_url": "https://github.com/not-lain", "followers_url": "https://api.github.com/users/not-lain/followers", "following_url": "https://api.github.com/users/not-lain/following{/other_user}", "gists_url": "https://api.github.com/users/not-lain/gists{/gist_id}", "starred_url": "https://api.github.com/users/not-lain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/not-lain/subscriptions", "organizations_url": "https://api.github.com/users/not-lain/orgs", "repos_url": "https://api.github.com/users/not-lain/repos", "events_url": "https://api.github.com/users/not-lain/events{/privacy}", "received_events_url": "https://api.github.com/users/not-lain/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
"2024-05-10T01:00:12"
"2024-05-10T01:00:46"
null
NONE
null
null
null
### Feature request When working with a really large dataset, it would save a lot of time (and compute resources) to use either `with_transform` or `set_transform` from the `Dataset` class instead of waiting for the entire dataset to map. ### Motivation I don't want to wait for a really long dataset to finish mapping; this would give `IterableDataset` an extra advantage over the `Dataset` class, reducing time and resources. ### Your contribution I am a little busy with my job search lately, but I would post about this feature on my social media. Apologies again (dad is going to kick me out soon); if I ever have some free time I will contribute to making this a reality, but that's going to be hard     / (┬┬﹏┬┬)\
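For context, `IterableDataset.map` is already lazy, which covers much of what `set_transform` provides for `Dataset`; a minimal sketch (the dataset name is only an example):

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset; map() runs on the fly during
# iteration, so nothing is precomputed or written to disk.
ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
ds = ds.map(lambda ex: {"text_len": len(ex["text"])})
print(next(iter(ds))["text_len"])
```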
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6890/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6890/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6889
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6889/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6889/comments
https://api.github.com/repos/huggingface/datasets/issues/6889/events
https://github.com/huggingface/datasets/pull/6889
2,287,720,539
PR_kwDODunzps5u_hW-
6,889
fix bug #6877
{ "login": "arthasking123", "id": 16257131, "node_id": "MDQ6VXNlcjE2MjU3MTMx", "avatar_url": "https://avatars.githubusercontent.com/u/16257131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arthasking123", "html_url": "https://github.com/arthasking123", "followers_url": "https://api.github.com/users/arthasking123/followers", "following_url": "https://api.github.com/users/arthasking123/following{/other_user}", "gists_url": "https://api.github.com/users/arthasking123/gists{/gist_id}", "starred_url": "https://api.github.com/users/arthasking123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arthasking123/subscriptions", "organizations_url": "https://api.github.com/users/arthasking123/orgs", "repos_url": "https://api.github.com/users/arthasking123/repos", "events_url": "https://api.github.com/users/arthasking123/events{/privacy}", "received_events_url": "https://api.github.com/users/arthasking123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@loicmagne, @KennethEnevoldsen", "Can you give more details on why this fix works ?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6889). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "> Can you give more details on why this fix works ?\r\n\r\nIn order to locate this file handle problem, I defined a print_open_files_count() function using psutil library:\r\n```python\r\ndef print_open_files_count(markstr):\r\n pid = os.getpid()\r\n p = psutil.Process(pid)\r\n open_files = p.open_files()\r\n print(f\"{markstr}_Open files count: {len(open_files)}\")\r\n\r\n\r\n```\r\n\r\nand added this function as below:\r\n```python\r\n\r\nwith open(file, \"rb\") as f:\r\n print_open_files_count('Before')\r\n...\r\n...\r\n batch_idx += 1\r\nprint_open_files_count('After')\r\n```\r\nand the console output as below when loading the 'mteb/biblenlp-corpus-mmteb' dataset :\r\n```shell\r\nBefore_Open files count: 1\r\nAfter_Open files count: 1\r\nBefore_Open files count: 2\r\nAfter_Open files count: 2\r\nBefore_Open files count: 3\r\nAfter_Open files count: 3\r\n...\r\n```\r\nwhich indicated there was a file handle leakage in the dataset loading process. So I tried to close the file handle manually using os library and found it works although the core issue has not been found temporarily", "I think it would be better to find the cause and have a cleaner fix, because while your suggested fix works for a simple case, it will lead to files that will stay open if there is an error during the dataset generation for example.\r\n\r\n\r\nBtw I was not able to reproduce locally (macbook pro m2) or on colab, so it might be something related to your environment. Also `open()` should close the file at the end of the `with` block so I don't really get how you can get this issue :/", "> Btw I was not able to reproduce locally (macbook pro m2) or on colab, so it might be something related to your environment. 
Also `open()` should close the file at the end of the `with` block so I don't really get how you can get this issue :/\r\n\r\nhow about setting the limitation of open files to 1024?", "I was able to reproduce on colab with\r\n\r\n```\r\n!ulimit -n 256 && python -c \"from datasets import load_dataset; load_dataset('mteb/biblenlp-corpus-mmteb')\"\r\n```\r\n\r\n(also needed to `!pip install -qq git+https://github.com/huggingface/huggingface_hub.git@less-paths-info-calls` to fix a rate limit for some reason)\r\n\r\nwhich led to me find that the issue came from the `GzipFileSystem` that wasn't closing files.\r\n\r\nto reproduce:\r\n\r\n```python\r\nimport gzip\r\nimport os\r\n\r\nimport datasets\r\nimport fsspec\r\n\r\n# os.mkdir(\"tmp\")\r\n# for i in range(300):\r\n# with gzip.open(f\"tmp/{i}.txt.gz\", \"wt\") as f:\r\n# f.write(\"yo\")\r\n\r\nfor i in range(300):\r\n with fsspec.open(f\"gzip://{i}.txt::tmp/{i}.txt.gz\", \"rb\") as f:\r\n f.read()\r\n```\r\n\r\nI opened https://github.com/huggingface/datasets/pull/6893 to fix this, can you try if it works on your side ?", "ok\n\n\n\n---- Replied Message ----\n| From | Quentin ***@***.***> |\n| Date | 05/13/2024 20:28 |\n| To | ***@***.***> |\n| Cc | ***@***.***>***@***.***> |\n| Subject | Re: [huggingface/datasets] fix bug #6877 (PR #6889) |\n\nI was able to reproduce on colab with\n\n!ulimit -n 256 && python -c \"from datasets import load_dataset; load_dataset('mteb/biblenlp-corpus-mmteb')\"\n\n\n(also needed to !pip install -qq ***@***.*** to fix a rate limit for some reason)\n\nwhich lead to me find that the issue came from the GzipFileSystem that wasn't closing files.\n\nto reproduce:\n\nimportgzipimportosimportdatasetsimportfsspec# os.mkdir(\"tmp\")# for i in range(300):# with gzip.open(f\"tmp/{i}.txt.gz\", \"wt\") as f:# f.write(\"yo\")foriinrange(300):\n withfsspec.open(f\"gzip://::tmp/{i}.txt.gz\", \"rb\") asf:\n f.read()\n\nI opened #6893 to fix this, can you try if it works on your side ?\n\n—\nReply to this email directly, view it on GitHub, or unsubscribe.\nYou are receiving this because you authored the thread.Message ID: ***@***.***>", "Superseded by:\r\n- #6893" ]
"2024-05-09T13:38:40"
"2024-05-13T13:35:32"
"2024-05-13T13:35:32"
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6889", "html_url": "https://github.com/huggingface/datasets/pull/6889", "diff_url": "https://github.com/huggingface/datasets/pull/6889.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6889.patch", "merged_at": null }
Fix bug #6877, possibly because `f` becomes invalid after the yield. The results are below: Resolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:01<00:00, 420.41it/s] Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 26148.48it/s] Resolving data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 409731.44it/s] Resolving data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 289720.84it/s] Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 26663.42it/s] Resolving data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 434056.21it/s] Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 13222.33files/s] Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:04<00:00, 180.67files/s] Downloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [01:35<00:00, 8.70files/s] Generating train split: 1571592 examples [00:08, 176736.09 examples/s] Generating test split: 85533 examples [00:01, 48224.56 examples/s] Generating validation split: 86246 examples [00:01, 50164.16 examples/s] Fix https://github.com/huggingface/datasets/issues/6877. CC: @natolambert
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6889/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6889/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6888/comments
https://api.github.com/repos/huggingface/datasets/issues/6888/events
https://github.com/huggingface/datasets/pull/6888
2,287,169,676
PR_kwDODunzps5u9omr
6,888
Support WebDataset containing file basenames with dots
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6888). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I think webdataset splits the file name and extension using the first dot no ?\r\n\r\nhttps://github.com/webdataset/webdataset/blob/945b251a872ec0d337be8f9ea17a9c5b0d017ff3/webdataset/tariterators.py#L226\r\n\r\nlinks to this function that splits on first dot\r\n\r\n```python\r\n\r\ndef base_plus_ext(path):\r\n \"\"\"Split off all file extensions.\r\n\r\n Returns base, allext.\r\n\r\n Args:\r\n path: path with extensions\r\n\r\n Returns:\r\n path with all extensions removed\r\n \"\"\"\r\n match = re.match(r\"^((?:.*/|)[^.]+)[.]([^/]*)$\", path)\r\n if not match:\r\n return None, None\r\n return match.group(1), match.group(2)\r\n```", "So maybe the original issue is actually due to one of the files containing a dot in its file name that is not for the extension\r\n\r\n```python\r\n>>> base_plus_ext(\"15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b.png\")\r\n('15_Cohen_1-s2', '0-S0929664620300449-gr3_lrg-b.png')\r\n```", "Thanks for your review, @lhoestq.\r\n\r\nI was not aware that `webdataset` requires filenames without dots in their basenames.", "I they can have dots for the extension (that becomes the column name) but not in the key used to group files into samples" ]
"2024-05-09T08:25:30"
"2024-05-10T13:54:06"
"2024-05-10T13:54:06"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6888", "html_url": "https://github.com/huggingface/datasets/pull/6888", "diff_url": "https://github.com/huggingface/datasets/pull/6888.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6888.patch", "merged_at": null }
Support WebDataset containing file basenames with dots. Fix #6880.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6888/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6887/comments
https://api.github.com/repos/huggingface/datasets/issues/6887/events
https://github.com/huggingface/datasets/issues/6887
2,286,786,396
I_kwDODunzps6ITZdc
6,887
FAISS load to None
{ "login": "brainer3220", "id": 40418544, "node_id": "MDQ6VXNlcjQwNDE4NTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/40418544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brainer3220", "html_url": "https://github.com/brainer3220", "followers_url": "https://api.github.com/users/brainer3220/followers", "following_url": "https://api.github.com/users/brainer3220/following{/other_user}", "gists_url": "https://api.github.com/users/brainer3220/gists{/gist_id}", "starred_url": "https://api.github.com/users/brainer3220/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brainer3220/subscriptions", "organizations_url": "https://api.github.com/users/brainer3220/orgs", "repos_url": "https://api.github.com/users/brainer3220/repos", "events_url": "https://api.github.com/users/brainer3220/events{/privacy}", "received_events_url": "https://api.github.com/users/brainer3220/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hello,\r\n\r\nI'm not sure I understand. \r\nThe return value of `ds.load_faiss_index` is None as expected.\r\n\r\nI see that loading an Index on a dataset that doesn't have an `embedding` column doesn't raise an Issue. Is that the issue?\r\n\r\nSo `ds` doesn't have an `embedding` column, but we load an index that looks for it. But this will raise an issue only when calling `ds.search`." ]
"2024-05-09T02:43:50"
"2024-05-16T20:44:23"
null
NONE
null
null
null
### Describe the bug I've used FAISS with Datasets and saved the index to a FAISS file. Loading the saved index afterwards raises no error, but the call returns None: ```python ds.load_faiss_index('embeddings', 'my_index.faiss') ``` ### Steps to reproduce the bug # 1. ```python ds_with_embeddings = ds.map(lambda example: {'embeddings': model(transforms(example['image']).unsqueeze(0)).squeeze()}, batch_size=64) ds_with_embeddings.add_faiss_index(column='embeddings') ds_with_embeddings.save_faiss_index('embeddings', 'index.faiss') ``` # 2. ```python ds.load_faiss_index('embeddings', 'my_index.faiss') ``` ### Expected behavior The index (and its embeddings column) should be added back to the Dataset. ### Environment info Google Colab, SageMaker Notebook
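For reference, a minimal sketch of the intended workflow (the embedding dimension 768 is a placeholder): `load_faiss_index` returns None by design and attaches the index to the dataset in place, and querying goes through `get_nearest_examples`:

```python
import numpy as np

# Attach the saved index in place; the call itself returns None.
ds.load_faiss_index("embeddings", "my_index.faiss")

# Query the attached index; mismatches between the index and the data
# only surface at search time.
query = np.random.rand(768).astype("float32")  # placeholder dimension
scores, examples = ds.get_nearest_examples("embeddings", query, k=5)
```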
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6887/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6887/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6886
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6886/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6886/comments
https://api.github.com/repos/huggingface/datasets/issues/6886/events
https://github.com/huggingface/datasets/issues/6886
2,286,328,984
I_kwDODunzps6IRpyY
6,886
load_dataset with data_dir and cache_dir set fail with not supported
{ "login": "fah", "id": 322496, "node_id": "MDQ6VXNlcjMyMjQ5Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/322496?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fah", "html_url": "https://github.com/fah", "followers_url": "https://api.github.com/users/fah/followers", "following_url": "https://api.github.com/users/fah/following{/other_user}", "gists_url": "https://api.github.com/users/fah/gists{/gist_id}", "starred_url": "https://api.github.com/users/fah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fah/subscriptions", "organizations_url": "https://api.github.com/users/fah/orgs", "repos_url": "https://api.github.com/users/fah/repos", "events_url": "https://api.github.com/users/fah/events{/privacy}", "received_events_url": "https://api.github.com/users/fah/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-05-08T19:52:35"
"2024-05-08T19:58:11"
null
NONE
null
null
null
### Describe the bug With Python 3.11 I execute: ```py from transformers import Wav2Vec2Processor, Data2VecAudioModel import torch from torch import nn from datasets import load_dataset, concatenate_datasets # load demo audio and set processor dataset_clean = load_dataset("librispeech_asr", "clean", split="validation", data_dir="data", cache_dir="cache") ``` This fails in the last line with ```log Found cached dataset librispeech_asr (file:///Users/as/Documents/Project/git/audio2vec/cache/librispeech_asr/clean-data_dir=data/2.1.0/cff5df6e7955c80a67f80e27e7e655de71c689e2d2364bece785b972acb37fe7) Traceback (most recent call last): File "/Users/as/Documents/Project/git/audio2vec/src/music2vec-v1.py", line 7, in <module> dataset_clean = load_dataset("librispeech_asr", "clean", split="validation", data_dir="data", cache_dir="cache") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/as/anaconda3/lib/python3.11/site-packages/datasets/load.py", line 1810, in load_dataset ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/as/anaconda3/lib/python3.11/site-packages/datasets/builder.py", line 1113, in as_dataset raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.") NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported. ``` ### Steps to reproduce the bug I set up a venv with requirements.txt ```txt transformers==4.40.2 torch==2.2.2 datasets==2.16.0 fsspec==2023.9.2 ``` pip freeze is: ``` aiohttp==3.9.5 aiosignal==1.3.1 attrs==23.2.0 certifi==2024.2.2 charset-normalizer==3.3.2 datasets==2.16.0 dill==0.3.7 filelock==3.14.0 frozenlist==1.4.1 fsspec==2023.9.2 huggingface-hub==0.23.0 idna==3.7 Jinja2==3.1.4 MarkupSafe==2.1.5 mpmath==1.3.0 multidict==6.0.5 multiprocess==0.70.15 networkx==3.3 numpy==1.26.4 packaging==24.0 pandas==2.2.2 pyarrow==16.0.0 pyarrow-hotfix==0.6 python-dateutil==2.9.0.post0 pytz==2024.1 PyYAML==6.0.1 regex==2024.4.28 requests==2.31.0 safetensors==0.4.3 six==1.16.0 sympy==1.12 tokenizers==0.19.1 torch==2.2.2 tqdm==4.66.4 transformers==4.40.2 typing_extensions==4.11.0 tzdata==2024.1 urllib3==2.2.1 xxhash==3.4.1 yarl==1.9.4 ``` I execute this on an M1 Mac. ### Expected behavior I don't understand the error message. Why is "local" caching not supported? Would it be possible to give an additional hint in the error message on how to solve this issue? ### Environment info source .... python -u example.py
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6886/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6885
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6885/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6885/comments
https://api.github.com/repos/huggingface/datasets/issues/6885/events
https://github.com/huggingface/datasets/pull/6885
2,285,115,400
PR_kwDODunzps5u2urB
6,885
Support jax 0.4.27 in CI tests
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6885). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005232 / 0.011353 (-0.006121) | 0.003749 / 0.011008 (-0.007260) | 0.063451 / 0.038508 (0.024943) | 0.031164 / 0.023109 (0.008055) | 0.252024 / 0.275898 (-0.023874) | 0.274479 / 0.323480 (-0.049001) | 0.003238 / 0.007986 (-0.004748) | 0.002668 / 0.004328 (-0.001660) | 0.049570 / 0.004250 (0.045320) | 0.046159 / 0.037052 (0.009107) | 0.273416 / 0.258489 (0.014927) | 0.299064 / 0.293841 (0.005223) | 0.027758 / 0.128546 (-0.100788) | 0.010702 / 0.075646 (-0.064944) | 0.207244 / 0.419271 (-0.212028) | 0.036139 / 0.043533 (-0.007394) | 0.249966 / 0.255139 (-0.005173) | 0.270685 / 0.283200 (-0.012515) | 0.019938 / 0.141683 (-0.121745) | 1.133642 / 1.452155 (-0.318512) | 1.170712 / 1.492716 (-0.322004) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098352 / 0.018006 (0.080346) | 0.310738 / 0.000490 (0.310248) | 0.000225 / 0.000200 (0.000025) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018151 / 0.037411 (-0.019261) | 0.061169 / 0.014526 (0.046644) | 0.073275 / 0.176557 (-0.103281) | 0.120320 / 0.737135 (-0.616815) | 0.083945 / 0.296338 (-0.212394) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283285 / 0.215209 (0.068075) | 2.766129 / 2.077655 (0.688475) | 1.477831 / 1.504120 (-0.026289) | 1.363365 / 1.541195 (-0.177830) | 1.402081 / 1.468490 (-0.066409) | 0.554100 / 4.584777 (-4.030677) | 2.374885 / 3.745712 (-1.370827) | 2.866260 / 5.269862 (-2.403601) | 1.775109 / 4.565676 (-2.790567) | 0.062416 / 0.424275 (-0.361859) | 0.005490 / 0.007607 (-0.002117) | 0.379293 / 0.226044 (0.153248) | 3.330534 / 2.268929 (1.061606) | 1.881648 / 55.444624 (-53.562977) | 1.549847 / 6.876477 (-5.326629) | 1.660350 / 2.142072 (-0.481722) | 0.631013 / 4.805227 (-4.174214) | 0.116646 / 6.500664 (-6.384018) | 0.042977 / 0.075469 (-0.032492) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996102 / 1.841788 (-0.845685) | 12.079143 / 8.074308 (4.004835) | 9.903568 / 10.191392 (-0.287824) | 0.141447 / 0.680424 (-0.538976) | 0.014115 / 0.534201 (-0.520086) | 0.287576 / 0.579283 (-0.291707) | 0.262951 / 0.434364 (-0.171413) | 0.325167 / 0.540337 (-0.215170) | 0.425780 / 1.386936 (-0.961156) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005213 / 0.011353 (-0.006139) | 0.003686 / 0.011008 (-0.007322) | 0.049963 / 0.038508 (0.011455) | 0.030635 / 0.023109 (0.007525) | 0.263992 / 0.275898 (-0.011906) | 0.289960 / 0.323480 (-0.033520) | 0.004281 / 0.007986 (-0.003704) | 0.002709 / 0.004328 (-0.001619) | 0.049147 / 0.004250 (0.044897) | 0.041036 / 0.037052 (0.003984) | 0.277621 / 0.258489 (0.019132) | 0.305689 / 0.293841 (0.011848) | 0.029342 / 0.128546 (-0.099205) | 0.010350 / 0.075646 (-0.065296) | 0.058221 / 0.419271 (-0.361051) | 0.033774 / 0.043533 (-0.009759) | 0.266163 / 0.255139 (0.011024) | 0.286866 / 0.283200 (0.003666) | 0.018463 / 0.141683 (-0.123219) | 1.136930 / 1.452155 (-0.315225) | 1.193974 / 1.492716 (-0.298742) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.106787 / 0.018006 (0.088781) | 0.304229 / 0.000490 (0.303740) | 0.000209 / 0.000200 (0.000009) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022066 / 0.037411 (-0.015346) | 0.075510 / 0.014526 (0.060984) | 0.087273 / 0.176557 (-0.089284) | 0.128050 / 0.737135 (-0.609085) | 0.090492 / 0.296338 (-0.205847) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299034 / 0.215209 (0.083825) | 2.899115 / 2.077655 (0.821461) | 1.625169 / 1.504120 (0.121049) | 1.456491 / 1.541195 (-0.084703) | 1.433063 / 1.468490 (-0.035427) | 0.565416 / 4.584777 (-4.019361) | 0.979298 / 3.745712 (-2.766415) | 2.748965 / 5.269862 (-2.520897) | 1.738671 / 4.565676 (-2.827005) | 0.062869 / 0.424275 (-0.361407) | 0.005001 / 0.007607 (-0.002606) | 0.348534 / 0.226044 (0.122489) | 3.437791 / 2.268929 (1.168862) | 1.896804 / 55.444624 (-53.547821) | 1.658544 / 6.876477 (-5.217933) | 1.649106 / 2.142072 (-0.492966) | 0.653791 / 4.805227 (-4.151436) | 0.125522 / 6.500664 (-6.375142) | 0.051260 / 0.075469 (-0.024209) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.025170 / 1.841788 (-0.816617) | 12.247968 / 8.074308 (4.173660) | 9.863777 / 10.191392 (-0.327615) | 0.140498 / 0.680424 (-0.539926) | 0.015158 / 0.534201 (-0.519043) | 0.288210 / 0.579283 (-0.291073) | 0.128207 / 0.434364 (-0.306157) | 0.398735 / 0.540337 (-0.141603) | 0.418217 / 1.386936 (-0.968719) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#871eabc7b23c27d677bc06ae2cc1ec3a2a04b10f \"CML watermark\")\n" ]
"2024-05-08T09:19:37"
"2024-05-08T09:43:19"
"2024-05-08T09:35:16"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6885", "html_url": "https://github.com/huggingface/datasets/pull/6885", "diff_url": "https://github.com/huggingface/datasets/pull/6885.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6885.patch", "merged_at": "2024-05-08T09:35:16" }
Support jax 0.4.27 in CI tests by using the jax Array `devices` method instead of `device` (which no longer exists). Fix #6884.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6885/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6885/timeline
null
null
true
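The jax API change behind this PR, illustrated as a hedged sketch (the exact assertion used in the datasets test suite is not reproduced here; only the `device()` to `devices()` rename that the PR body describes):

```python
import jax
import jax.numpy as jnp

x = jnp.ones((2, 2))

# Pre-0.4.27 code queried array placement with a device() method, which
# jax 0.4.27 removed, producing the AttributeError in the CI log below:
#   x.device()
# The devices() method (returning a set of Device objects) replaces it:
assert jax.devices()[0] in x.devices()
print(x.devices())
```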
https://api.github.com/repos/huggingface/datasets/issues/6884
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6884/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6884/comments
https://api.github.com/repos/huggingface/datasets/issues/6884/events
https://github.com/huggingface/datasets/issues/6884
2,284,839,687
I_kwDODunzps6IL-MH
6,884
CI is broken after jax-0.4.27 release: AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2024-05-08T07:01:47"
"2024-05-08T09:35:17"
"2024-05-08T09:35:17"
MEMBER
null
null
null
After jax-0.4.27 release (https://github.com/google/jax/releases/tag/jax-v0.4.27), our CI is broken with the error: ```Python traceback AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'. Did you mean: 'devices'? ``` See: https://github.com/huggingface/datasets/actions/runs/8997488610/job/24715736153 ```Python traceback ___________________ FormatterTest.test_jax_formatter_device ____________________ [gw1] linux -- Python 3.10.14 /opt/hostedtoolcache/Python/3.10.14/x64/bin/python self = <tests.test_formatting.FormatterTest testMethod=test_jax_formatter_device> @require_jax def test_jax_formatter_device(self): import jax from datasets.formatting import JaxFormatter pa_table = self._create_dummy_table() device = jax.devices()[0] formatter = JaxFormatter(device=str(device)) row = formatter.format_row(pa_table) > assert row["a"].device() == device E AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'. Did you mean: 'devices'? tests/test_formatting.py:630: AttributeError ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6884/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6883
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6883/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6883/comments
https://api.github.com/repos/huggingface/datasets/issues/6883/events
https://github.com/huggingface/datasets/pull/6883
2,284,808,399
PR_kwDODunzps5u1sL1
6,883
Require Pillow >= 9.4.0 to avoid AttributeError when loading image dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6883). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Do you think this is worth making a patch release for?\r\nCC: @huggingface/datasets ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005764 / 0.011353 (-0.005589) | 0.004182 / 0.011008 (-0.006826) | 0.064520 / 0.038508 (0.026012) | 0.034260 / 0.023109 (0.011151) | 0.245677 / 0.275898 (-0.030221) | 0.277889 / 0.323480 (-0.045591) | 0.004569 / 0.007986 (-0.003417) | 0.002905 / 0.004328 (-0.001423) | 0.049346 / 0.004250 (0.045095) | 0.050529 / 0.037052 (0.013476) | 0.264718 / 0.258489 (0.006229) | 0.295705 / 0.293841 (0.001864) | 0.028144 / 0.128546 (-0.100402) | 0.011048 / 0.075646 (-0.064598) | 0.206290 / 0.419271 (-0.212982) | 0.035886 / 0.043533 (-0.007647) | 0.245038 / 0.255139 (-0.010101) | 0.269835 / 0.283200 (-0.013365) | 0.018927 / 0.141683 (-0.122756) | 1.136536 / 1.452155 (-0.315619) | 1.183256 / 1.492716 (-0.309460) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.115372 / 0.018006 (0.097366) | 0.315471 / 0.000490 (0.314982) | 0.000238 / 0.000200 (0.000038) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021201 / 0.037411 (-0.016210) | 0.070374 / 0.014526 (0.055848) | 0.077557 / 0.176557 (-0.099000) | 0.124713 / 0.737135 (-0.612423) | 0.078850 / 0.296338 (-0.217489) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 
5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278674 / 0.215209 (0.063465) | 2.739597 / 2.077655 (0.661942) | 1.438214 / 1.504120 (-0.065906) | 1.326373 / 1.541195 (-0.214822) | 1.370961 / 1.468490 (-0.097529) | 0.569160 / 4.584777 (-4.015617) | 2.411890 / 3.745712 (-1.333822) | 2.954073 / 5.269862 (-2.315788) | 1.816883 / 4.565676 (-2.748794) | 0.063123 / 0.424275 (-0.361152) | 0.005531 / 0.007607 (-0.002076) | 0.328184 / 0.226044 (0.102140) | 3.263083 / 2.268929 (0.994155) | 1.809159 / 55.444624 (-53.635465) | 1.535257 / 6.876477 (-5.341220) | 1.583428 / 2.142072 (-0.558644) | 0.642950 / 4.805227 (-4.162277) | 0.122240 / 6.500664 (-6.378424) | 0.044596 / 0.075469 (-0.030873) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.999993 / 1.841788 (-0.841795) | 12.941508 / 8.074308 (4.867200) | 10.417519 / 10.191392 (0.226127) | 0.134345 / 0.680424 (-0.546079) | 0.014651 / 0.534201 (-0.519550) | 0.288660 / 0.579283 (-0.290623) | 0.274550 / 0.434364 (-0.159814) | 0.327785 / 0.540337 (-0.212553) | 0.422954 / 1.386936 (-0.963982) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006051 / 0.011353 (-0.005302) | 0.003926 / 0.011008 (-0.007082) | 0.051480 / 0.038508 (0.012972) | 0.036102 / 0.023109 (0.012992) | 0.273358 / 0.275898 (-0.002540) | 0.293261 / 0.323480 (-0.030219) | 0.004562 / 0.007986 (-0.003424) | 0.002918 / 0.004328 (-0.001410) | 0.050386 / 0.004250 (0.046135) | 0.048427 / 0.037052 (0.011375) | 0.280178 / 0.258489 (0.021689) | 0.314599 / 0.293841 (0.020758) | 0.030876 / 0.128546 (-0.097670) | 0.010571 / 0.075646 (-0.065076) | 0.058555 / 0.419271 (-0.360717) | 0.034974 / 0.043533 (-0.008559) | 0.266604 / 0.255139 (0.011465) | 0.284712 / 0.283200 (0.001512) | 0.020296 / 0.141683 (-0.121387) | 1.116760 / 1.452155 (-0.335395) | 1.157794 / 1.492716 (-0.334922) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | 
get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.103777 / 0.018006 (0.085771) | 0.314267 / 0.000490 (0.313778) | 0.000226 / 0.000200 (0.000026) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023837 / 0.037411 (-0.013574) | 0.082145 / 0.014526 (0.067619) | 0.090434 / 0.176557 (-0.086123) | 0.132096 / 0.737135 (-0.605040) | 0.092426 / 0.296338 (-0.203913) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299554 / 0.215209 (0.084345) | 2.932382 / 2.077655 (0.854727) | 1.549994 / 1.504120 (0.045874) | 1.454944 / 1.541195 (-0.086251) | 1.474987 / 1.468490 (0.006497) | 0.586149 / 4.584777 (-3.998628) | 0.972118 / 3.745712 (-2.773594) | 2.991719 / 5.269862 (-2.278142) | 1.876365 / 4.565676 (-2.689311) | 0.065178 / 0.424275 (-0.359098) | 0.005114 / 0.007607 (-0.002493) | 0.353704 / 0.226044 (0.127660) | 3.500940 / 2.268929 (1.232012) | 1.965581 / 55.444624 (-53.479043) | 1.662594 / 6.876477 (-5.213883) | 1.702761 / 2.142072 (-0.439311) | 0.663879 / 4.805227 (-4.141348) | 0.120036 / 6.500664 (-6.380628) | 0.043195 / 0.075469 (-0.032274) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.997690 / 1.841788 (-0.844098) | 13.448914 / 8.074308 (5.374606) | 10.132469 / 10.191392 (-0.058923) | 0.148493 / 0.680424 (-0.531930) | 0.016670 / 0.534201 (-0.517531) | 0.289708 / 0.579283 (-0.289575) | 0.132938 / 0.434364 (-0.301425) | 0.411425 / 0.540337 (-0.128913) | 0.430748 / 1.386936 (-0.956188) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#70e38090f070d323d452b5e746686f31b1086bd8 \"CML watermark\")\n", "maybe not super important since it was not reported by users, this can be included in the next release", "I observed the same AttributeError with Pillow == 10.3.0, while 9.4.0 works for me.", "What's the error you're getting @Eric2i ?\r\n\r\nOn my side on 10.3.0 I could run this without errors:\r\n\r\n```python\r\nimport PIL.Image\r\nPIL.Image.ExifTags.Base.Orientation is not None # True\r\n```", "Sorry, false alarm. I double-checked that 10.3.0 is also good on my side. Thanks for your sample codes." ]
"2024-05-08T06:43:29"
"2024-05-21T18:37:55"
"2024-05-16T14:34:02"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6883", "html_url": "https://github.com/huggingface/datasets/pull/6883", "diff_url": "https://github.com/huggingface/datasets/pull/6883.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6883.patch", "merged_at": "2024-05-16T14:34:02" }
Require Pillow >= 9.4.0 to avoid AttributeError when loading image dataset. The `PIL.Image.ExifTags` that we use in our code was implemented in Pillow-9.4.0: https://github.com/python-pillow/Pillow/commit/24a5405a9f7ea22f28f9c98b3e407292ea5ee1d3 The bug #6881 was introduced in datasets-2.19.0 by this PR: - #6739 Fix #6881.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6883/timeline
null
null
true
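A small sketch of the version-dependent attribute behind the Pillow pin above (grounded in the PR body: `PIL.Image.ExifTags` was only added in Pillow 9.4.0, and the lookup below is what raises on Pillow 8.4.0 in issue 6881):

```python
import PIL
import PIL.Image

# On Pillow < 9.4.0 the ExifTags attribute does not exist on PIL.Image,
# which reproduces the AttributeError reported in issue 6881.
try:
    orientation_tag = PIL.Image.ExifTags.Base.Orientation
    print(f"Pillow {PIL.__version__}: ExifTags available ({orientation_tag})")
except AttributeError:
    print(f"Pillow {PIL.__version__}: too old, upgrade to >= 9.4.0")
```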
https://api.github.com/repos/huggingface/datasets/issues/6882
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6882/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6882/comments
https://api.github.com/repos/huggingface/datasets/issues/6882/events
https://github.com/huggingface/datasets/issues/6882
2,284,803,158
I_kwDODunzps6IL1RW
6,882
Connection Error When Using By-pass Proxies
{ "login": "MRNOBODY-ZST", "id": 78351684, "node_id": "MDQ6VXNlcjc4MzUxNjg0", "avatar_url": "https://avatars.githubusercontent.com/u/78351684?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MRNOBODY-ZST", "html_url": "https://github.com/MRNOBODY-ZST", "followers_url": "https://api.github.com/users/MRNOBODY-ZST/followers", "following_url": "https://api.github.com/users/MRNOBODY-ZST/following{/other_user}", "gists_url": "https://api.github.com/users/MRNOBODY-ZST/gists{/gist_id}", "starred_url": "https://api.github.com/users/MRNOBODY-ZST/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MRNOBODY-ZST/subscriptions", "organizations_url": "https://api.github.com/users/MRNOBODY-ZST/orgs", "repos_url": "https://api.github.com/users/MRNOBODY-ZST/repos", "events_url": "https://api.github.com/users/MRNOBODY-ZST/events{/privacy}", "received_events_url": "https://api.github.com/users/MRNOBODY-ZST/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Changing the supplier of the proxy will solve this problem, or you can visit and follow the instructions in https://hf-mirror.com " ]
"2024-05-08T06:40:14"
"2024-05-17T06:38:30"
null
NONE
null
null
null
### Describe the bug I'm currently using Clash for Windows as my proxy tunnel. After exporting HTTP_PROXY and HTTPS_PROXY to the port that Clash provides🤔, it runs into a connection error saying "Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f969d391870>: Failed to establish a new connection: [Errno 111] Connection refused'))")))" I have already read the documentation provided on the Hugging Face site, but I don't think I saw detailed instructions on how to set up proxies for this library. ### Steps to reproduce the bug 1. Turn on any proxy software like Clash / ShadowsocksR etc. 2. Export the system variables pointing to the port provided by your proxy software in WSL (the proxy works for other applications, just not for the datasets library) 3. Load any dataset from Hugging Face online ### Expected behavior --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) Cell In[33], [line 3](vscode-notebook-cell:?execution_count=33&line=3) [1](vscode-notebook-cell:?execution_count=33&line=1) from datasets import load_metric ----> [3](vscode-notebook-cell:?execution_count=33&line=3) metric = load_metric("seqeval") File ~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:46, in deprecated.<locals>.decorator.<locals>.wrapper(*args, **kwargs) [44](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:44) warnings.warn(warning_msg, category=FutureWarning, stacklevel=2) [45](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:45) _emitted_deprecation_warnings.add(func_hash) ---> [46](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:46) return deprecated_function(*args, **kwargs) File ~/.local/lib/python3.10/site-packages/datasets/load.py:2104, in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, trust_remote_code, **metric_init_kwargs) [2101](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2101) warnings.filterwarnings("ignore", message=".*https://huggingface.co/docs/evaluate$", category=FutureWarning) [2103](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2103) download_mode = DownloadMode(download_mode or DownloadMode.REUSE_DATASET_IF_EXISTS) -> [2104](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2104) metric_module = metric_module_factory(
[2105](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2105) path, [2106](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2106) revision=revision, [2107](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2107) download_config=download_config, [2108](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2108) download_mode=download_mode, [2109](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2109) trust_remote_code=trust_remote_code, [2110](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2110) ).module_path [2111](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2111) metric_cls = import_main_class(metric_module, dataset=False) [2112](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2112) metric = metric_cls( [2113](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2113) config_name=config_name, [2114](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2114) process_id=process_id, ... 
--> [633](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py:633) raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})") [634](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py:634) elif response is not None: [635](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py:635) raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})") ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (SSLError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))"))) ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.2.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6882/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6882/timeline
null
null
false
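For the proxy issue above, a hedged sketch of one way to check that the proxy variables actually reach the Python process under WSL; the port number is hypothetical (substitute the one Clash exposes), and the snippet relies only on the standard proxy environment variables that `requests` (used by `datasets` for downloads) honors:

```python
import os

PROXY = "http://127.0.0.1:7890"  # hypothetical Clash port -- adjust to yours

# Setting the variables inside the process rules out shell-export problems.
os.environ["HTTP_PROXY"] = PROXY
os.environ["HTTPS_PROXY"] = PROXY

import requests

# If this fails while a plain browser request through the proxy works, the
# proxy endpoint or the exported port is the thing to investigate.
print(requests.get("https://huggingface.co", timeout=10).status_code)
```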
https://api.github.com/repos/huggingface/datasets/issues/6881
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6881/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6881/comments
https://api.github.com/repos/huggingface/datasets/issues/6881/events
https://github.com/huggingface/datasets/issues/6881
2,284,794,009
I_kwDODunzps6ILzCZ
6,881
AttributeError: module 'PIL.Image' has no attribute 'ExifTags'
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2024-05-08T06:33:57"
"2024-05-16T14:34:03"
"2024-05-16T14:34:03"
MEMBER
null
null
null
When trying to load an image dataset in an old Python environment (with Pillow-8.4.0), an error is raised: ```Python traceback AttributeError: module 'PIL.Image' has no attribute 'ExifTags' ``` The error traceback: ```Python traceback ~/huggingface/datasets/src/datasets/iterable_dataset.py in __iter__(self) 1391 # `IterableDataset` automatically fills missing columns with None. 1392 # This is done with `_apply_feature_types_on_example`. -> 1393 example = _apply_feature_types_on_example( 1394 example, self.features, token_per_repo_id=self._token_per_repo_id 1395 ) ~/huggingface/datasets/src/datasets/iterable_dataset.py in _apply_feature_types_on_example(example, features, token_per_repo_id) 1080 encoded_example = features.encode_example(example) 1081 # Decode example for Audio feature, e.g. -> 1082 decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id) 1083 return decoded_example 1084 ~/huggingface/datasets/src/datasets/features/features.py in decode_example(self, example, token_per_repo_id) 1974 -> 1975 return { 1976 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1977 if self._column_requires_decoding[column_name] ~/huggingface/datasets/src/datasets/features/features.py in <dictcomp>(.0) 1974 1975 return { -> 1976 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1977 if self._column_requires_decoding[column_name] 1978 else value ~/huggingface/datasets/src/datasets/features/features.py in decode_nested_example(schema, obj, token_per_repo_id) 1339 # we pass the token to read and decode files from private repositories in streaming mode 1340 if obj is not None and schema.decode: -> 1341 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) 1342 return obj 1343 ~/huggingface/datasets/src/datasets/features/image.py in decode_example(self, value, token_per_repo_id) 187 image = PIL.Image.open(BytesIO(bytes_)) 188 image.load() # to avoid "Too many open files" errors --> 189 if image.getexif().get(PIL.Image.ExifTags.Base.Orientation) is not None: 190 image = PIL.ImageOps.exif_transpose(image) 191 if self.mode and self.mode != image.mode: ~/huggingface/datasets/venv/lib/python3.9/site-packages/PIL/Image.py in __getattr__(name) 75 ) 76 return categories[name] ---> 77 raise AttributeError(f"module '{__name__}' has no attribute '{name}'") 78 79 AttributeError: module 'PIL.Image' has no attribute 'ExifTags' ``` ### Environment info Since datasets 2.19.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6881/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6881/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6880
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6880/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6880/comments
https://api.github.com/repos/huggingface/datasets/issues/6880/events
https://github.com/huggingface/datasets/issues/6880
2,283,278,337
I_kwDODunzps6IGBAB
6,880
Webdataset: KeyError: 'png' on some datasets when streaming
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "The error is caused by malformed basenames of the files within the TARs:\r\n- `15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b.png` becomes `15_Cohen_1-s2` as the grouping `__key__`, and `0-S0929664620300449-gr3_lrg-b.png` as the additional key to be added to the example\r\n- whereas the intended behavior was to use `15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b` as the grouping `__key__`, and `png` as the additional key to be added to the example\r\n\r\nTo get the expected behavior, the basenames of the files within the TARs should be fixed so that they only contain a single dot, the one separating the file extension.", "I reopen it because I think we should try to give a clearer error message with a specific error code.\r\n\r\nFor now, it's hard for the user to understand where the error comes from (not everybody knows the subtleties of the webdataset filename structure).\r\n\r\n(we can transfer it to https://github.com/huggingface/dataset-viewer if it fits better there)", "same with .jpg -> https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions\r\n\r\n```\r\nError code: DatasetGenerationError\r\nException: DatasetGenerationError\r\nMessage: An error occurred while generating the dataset\r\nTraceback: Traceback (most recent call last):\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1748, in _prepare_split_single\r\n for key, record in generator:\r\n File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 818, in wrapped\r\n for item in generator(*args, **kwargs):\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py\", line 109, in _generate_examples\r\n example[field_name] = {\"path\": example[\"__key__\"] + \".\" + field_name, \"bytes\": example[field_name]}\r\n KeyError: 'jpg'\r\n \r\n The above exception was the direct cause of the following exception:\r\n \r\n Traceback (most recent call last):\r\n File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 1316, in compute_config_parquet_and_info_response\r\n parquet_operations, partial = stream_convert_to_parquet(\r\n File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 909, in stream_convert_to_parquet\r\n builder._prepare_split(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1627, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1784, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\n datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset\r\n```\r\n", "More details in the spec (https://docs.google.com/document/d/18OdLjruFNX74ILmgrdiCI9J1fQZuhzzRBCHV9URWto0/edit#heading=h.hkptaq2kct2s)\r\n\r\n> The prefix of a file is all directory components of the file plus the file name component up to the first “.” in the file name.\r\n> The last extension (i.e., the portion after the last “.”) in a file name determines the file type.\r\n\r\n> Example:\r\n\timages17/image194.left.jpg\r\n\timages17/image194.right.jpg\r\n\timages17/image194.json\r\n\timages17/image12.left.jpg\r\n\timages17/image12.json\r\n\timages17/image12.right.jpg\r\n\timages3/image1459.left.jpg\r\n> \t…\r\n> When reading this with a WebDataset library, you would get the 
following two dictionaries back in sequence:\r\n\r\n { “__key__”: “images17/image194”, “left.jpg”: b”...”, “right.jpg”: b”...”, “json”: b”...”}\r\n { “__key__”: “images17/image12”, “left.jpg”: b”...”, “right.jpg”: b”...”, “json”: b”...”}\r\n", "OK, the issue is different in the latter case: some files are suffixed as `.jpeg`, and others as `.jpg` :)\r\n\r\nIs it a limitation of the webdataset format, or of the datasets library @lhoestq? And could we be able to give a clearer error?" ]
"2024-05-07T13:09:02"
"2024-05-14T20:34:05"
null
MEMBER
null
null
null
reported at https://huggingface.co/datasets/tbone5563/tar_images/discussions/1

```python
>>> from datasets import load_dataset
>>> ds = load_dataset("tbone5563/tar_images")

Downloading data: 100%  1.41G/1.41G [00:48<00:00, 17.2MB/s]
Downloading data: 100%  619M/619M [00:11<00:00, 57.4MB/s]
Generating train split:  970/0 [00:02<00:00, 534.94 examples/s]
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
   1747             _time = time.time()
-> 1748             for key, record in generator:
   1749                 if max_shard_size is not None and writer._num_bytes > max_shard_size:

7 frames
/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/webdataset/webdataset.py in _generate_examples(self, tar_paths, tar_iterators)
    108                 for field_name in image_field_names + audio_field_names:
--> 109                     example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]}
    110                 yield f"{tar_idx}_{example_idx}", example

KeyError: 'png'

The above exception was the direct cause of the following exception:

DatasetGenerationError                    Traceback (most recent call last)
<ipython-input-2-8e0fbb7badc9> in <cell line: 3>()
      1 from datasets import load_dataset
      2
----> 3 ds = load_dataset("tbone5563/tar_images")

/usr/local/lib/python3.10/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
   2607
   2608     # Download and prepare data
-> 2609     builder_instance.download_and_prepare(
   2610         download_config=download_config,
   2611         download_mode=download_mode,

/usr/local/lib/python3.10/dist-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
   1025         if num_proc is not None:
   1026             prepare_split_kwargs["num_proc"] = num_proc
-> 1027         self._download_and_prepare(
   1028             dl_manager=dl_manager,
   1029             verification_mode=verification_mode,

/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)
   1787
   1788     def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):
-> 1789         super()._download_and_prepare(
   1790             dl_manager,
   1791             verification_mode,

/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
   1120         try:
   1121             # Prepare split will record examples associated to the split
-> 1122             self._prepare_split(split_generator, **prepare_split_kwargs)
   1123         except OSError as e:
   1124             raise OSError(

/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size)
   1625         job_id = 0
   1626         with pbar:
-> 1627             for job_id, done, content in self._prepare_split_single(
   1628                 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
   1629             ):

/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
   1782             if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
   1783                 e = e.__context__
-> 1784             raise DatasetGenerationError("An error occurred while generating the dataset") from e
   1785
   1786         yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)

DatasetGenerationError: An error occurred while generating the dataset
```
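The failing line assumes every sample in the tar archive contains a file for each inferred image/audio extension, so a sample without a `.png` member raises `KeyError: 'png'`. A minimal sketch of a defensive variant of that loop (an illustration of the failure mode and one possible guard, not necessarily the fix that was merged):

```python
# Names taken from the traceback above; the sample data is made up for illustration.
image_field_names = ["png"]  # extensions inferred from the first samples in the archive
audio_field_names = []
example = {"__key__": "sample_0001", "cls": b"3"}  # this sample has no .png file

for field_name in image_field_names + audio_field_names:
    if field_name not in example:  # guard against the KeyError: 'png' seen above
        example[field_name] = None
        continue
    example[field_name] = {
        "path": example["__key__"] + "." + field_name,
        "bytes": example[field_name],
    }

print(example["png"])  # None instead of a KeyError
```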
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6880/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6880/timeline
null
reopened
false
https://api.github.com/repos/huggingface/datasets/issues/6879
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6879/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6879/comments
https://api.github.com/repos/huggingface/datasets/issues/6879/events
https://github.com/huggingface/datasets/issues/6879
2,282,968,259
I_kwDODunzps6IE1TD
6,879
Batched mapping does not raise an error if values for an existing column are empty
{ "login": "felix-schneider", "id": 208336, "node_id": "MDQ6VXNlcjIwODMzNg==", "avatar_url": "https://avatars.githubusercontent.com/u/208336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/felix-schneider", "html_url": "https://github.com/felix-schneider", "followers_url": "https://api.github.com/users/felix-schneider/followers", "following_url": "https://api.github.com/users/felix-schneider/following{/other_user}", "gists_url": "https://api.github.com/users/felix-schneider/gists{/gist_id}", "starred_url": "https://api.github.com/users/felix-schneider/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/felix-schneider/subscriptions", "organizations_url": "https://api.github.com/users/felix-schneider/orgs", "repos_url": "https://api.github.com/users/felix-schneider/repos", "events_url": "https://api.github.com/users/felix-schneider/events{/privacy}", "received_events_url": "https://api.github.com/users/felix-schneider/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-05-07T11:02:40"
"2024-05-07T11:02:40"
null
NONE
null
null
null
### Describe the bug

Using `Dataset.map(fn, batched=True)` allows resizing the dataset by returning a dict of lists, all of which must be the same size. If they are not the same size, an error like `pyarrow.lib.ArrowInvalid: Column 1 named x expected length 1 but got length 0` is raised.

This is not the case if the function returns an empty list for an existing column in the dataset. In that case, the dataset is silently resized to 0 rows.

### Steps to reproduce the bug

MWE:

```python
import datasets

data = datasets.Dataset.from_dict({"test": [1]})

def mapping_fn(examples):
    return {"test": [], "y": [1]}

data = data.map(mapping_fn, batched=True)
print(len(data))
```

Note that when returning `"x": []`, the error is raised correctly, as it also is when returning `"test": [1, 2]`.

### Expected behavior

Expected an exception: `pyarrow.lib.ArrowInvalid: Column 1 named test expected length 1 but got length 0` or `pyarrow.lib.ArrowInvalid: Column 2 named y expected length 0 but got length 1`. Any exception would be acceptable.

### Environment info

- `datasets` version: 2.19.1
- Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.31
- Python version: 3.11.8
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0
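For contrast, a short sketch of the three cases the report describes (behavior as reported on `datasets` 2.19.1):

```python
import datasets

data = datasets.Dataset.from_dict({"test": [1]})

# New column with a mismatched length: raises pyarrow.lib.ArrowInvalid, as expected.
# data.map(lambda batch: {"x": [], "y": [1]}, batched=True)

# Existing column with a mismatched (too long) value: also raises, as expected.
# data.map(lambda batch: {"test": [1, 2], "y": [1]}, batched=True)

# Existing column returned empty (the bug): silently produces 0 rows.
out = data.map(lambda batch: {"test": [], "y": [1]}, batched=True)
print(len(out))  # 0, with no exception raised
```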
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6879/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6878
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6878/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6878/comments
https://api.github.com/repos/huggingface/datasets/issues/6878/events
https://github.com/huggingface/datasets/pull/6878
2,282,879,491
PR_kwDODunzps5uviBh
6,878
Create function to convert to parquet
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6878). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005519 / 0.011353 (-0.005834) | 0.003877 / 0.011008 (-0.007131) | 0.063989 / 0.038508 (0.025480) | 0.032348 / 0.023109 (0.009239) | 0.238288 / 0.275898 (-0.037611) | 0.265337 / 0.323480 (-0.058143) | 0.004363 / 0.007986 (-0.003623) | 0.002755 / 0.004328 (-0.001574) | 0.049836 / 0.004250 (0.045585) | 0.048456 / 0.037052 (0.011403) | 0.246526 / 0.258489 (-0.011963) | 0.280753 / 0.293841 (-0.013088) | 0.027721 / 0.128546 (-0.100825) | 0.011031 / 0.075646 (-0.064615) | 0.204168 / 0.419271 (-0.215104) | 0.036203 / 0.043533 (-0.007330) | 0.238282 / 0.255139 (-0.016857) | 0.259608 / 0.283200 (-0.023591) | 0.017781 / 0.141683 (-0.123902) | 1.147821 / 1.452155 (-0.304334) | 1.194855 / 1.492716 (-0.297861) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102837 / 0.018006 (0.084831) | 0.312300 / 0.000490 (0.311811) | 0.000224 / 0.000200 (0.000024) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019410 / 0.037411 (-0.018001) | 0.065114 / 0.014526 (0.050588) | 0.076828 / 0.176557 (-0.099728) | 0.121741 / 0.737135 (-0.615394) | 0.079864 / 0.296338 (-0.216474) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287773 / 0.215209 (0.072564) | 2.848936 / 2.077655 (0.771281) | 1.543819 / 1.504120 (0.039700) | 1.412708 / 1.541195 (-0.128487) | 1.454685 / 1.468490 (-0.013805) | 0.580155 / 4.584777 (-4.004622) | 2.372783 / 3.745712 (-1.372929) | 2.910514 / 5.269862 (-2.359347) | 1.813542 / 4.565676 (-2.752134) | 0.064569 / 0.424275 (-0.359706) | 0.005434 / 0.007607 (-0.002173) | 0.339309 / 0.226044 (0.113265) | 3.329972 / 2.268929 (1.061043) | 1.827597 / 55.444624 (-53.617028) | 1.592324 / 6.876477 (-5.284152) | 1.619743 / 2.142072 (-0.522329) | 0.659358 / 4.805227 (-4.145869) | 0.119887 / 6.500664 (-6.380777) | 0.043649 / 0.075469 (-0.031821) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.984563 / 1.841788 (-0.857225) | 12.395302 / 8.074308 (4.320994) | 9.904944 / 10.191392 (-0.286448) | 0.136141 / 0.680424 (-0.544282) | 0.014779 / 0.534201 (-0.519422) | 0.286146 / 0.579283 (-0.293137) | 0.265392 / 0.434364 (-0.168972) | 0.329484 / 0.540337 (-0.210854) | 0.425530 / 1.386936 (-0.961406) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005920 / 0.011353 (-0.005433) | 0.004068 / 0.011008 (-0.006940) | 0.052281 / 0.038508 (0.013773) | 0.034907 / 0.023109 (0.011798) | 0.269551 / 0.275898 (-0.006347) | 0.292390 / 0.323480 (-0.031090) | 0.004340 / 0.007986 (-0.003646) | 0.002864 / 0.004328 (-0.001464) | 0.051466 / 0.004250 (0.047216) | 0.046410 / 0.037052 (0.009358) | 0.280103 / 0.258489 (0.021614) | 0.310616 / 0.293841 (0.016775) | 0.031044 / 0.128546 (-0.097502) | 0.011004 / 0.075646 (-0.064643) | 0.059955 / 0.419271 (-0.359316) | 0.034156 / 0.043533 (-0.009377) | 0.268113 / 0.255139 (0.012974) | 0.283569 / 0.283200 (0.000369) | 0.019758 / 0.141683 (-0.121925) | 1.155583 / 1.452155 (-0.296572) | 1.225611 / 1.492716 (-0.267106) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.104302 / 0.018006 (0.086295) | 0.307324 / 0.000490 (0.306834) | 0.000219 / 0.000200 (0.000019) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023672 / 0.037411 (-0.013739) | 0.081110 / 0.014526 (0.066584) | 0.091783 / 0.176557 (-0.084773) | 0.131738 / 0.737135 (-0.605397) | 0.092391 / 0.296338 (-0.203948) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289341 / 0.215209 (0.074132) | 2.849894 / 2.077655 (0.772239) | 1.539679 / 1.504120 (0.035559) | 1.417975 / 1.541195 (-0.123220) | 1.473631 / 1.468490 (0.005141) | 0.583013 / 4.584777 (-4.001764) | 0.960106 / 3.745712 (-2.785606) | 2.962785 / 5.269862 (-2.307077) | 1.827539 / 4.565676 (-2.738138) | 0.063875 / 0.424275 (-0.360400) | 0.005251 / 0.007607 (-0.002356) | 0.347127 / 0.226044 (0.121082) | 3.417364 / 2.268929 (1.148435) | 1.965901 / 55.444624 (-53.478723) | 1.632337 / 6.876477 (-5.244140) | 1.683100 / 2.142072 (-0.458972) | 0.664951 / 4.805227 (-4.140277) | 0.119046 / 6.500664 (-6.381618) | 0.042828 / 0.075469 (-0.032641) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.999569 / 1.841788 (-0.842218) | 13.366482 / 8.074308 (5.292174) | 10.635396 / 10.191392 (0.444004) | 0.133840 / 0.680424 (-0.546584) | 0.016232 / 0.534201 (-0.517969) | 0.292764 / 0.579283 (-0.286519) | 0.128558 / 0.434364 (-0.305806) | 0.405596 / 0.540337 (-0.134741) | 0.429633 / 1.386936 (-0.957303) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4d92856bbfda0d48d07e82bb520d9434d20fae4b \"CML watermark\")\n" ]
"2024-05-07T10:27:07"
"2024-05-16T14:46:44"
"2024-05-16T14:38:23"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6878", "html_url": "https://github.com/huggingface/datasets/pull/6878", "diff_url": "https://github.com/huggingface/datasets/pull/6878.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6878.patch", "merged_at": "2024-05-16T14:38:22" }
Analogously to `delete_from_hub`, this PR:
- creates the Python function `convert_to_parquet`
- makes the corresponding CLI command use that function

This way, the functionality can be used both from a terminal and from a Python console.

This PR also implements a test for the `convert_to_parquet` function.
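A sketch of the two entry points this PR describes; the import path and call signature below are assumptions made by analogy with `delete_from_hub`, not verified against the merged code:

```python
# Assumed import path, by analogy with delete_from_hub:
from datasets.hub import convert_to_parquet

# From a Python console (the repo id is a placeholder):
convert_to_parquet("USERNAME/dataset_name")

# The corresponding CLI command, which now delegates to this function:
#   datasets-cli convert_to_parquet USERNAME/dataset_name
```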
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6878/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6878/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6877/comments
https://api.github.com/repos/huggingface/datasets/issues/6877/events
https://github.com/huggingface/datasets/issues/6877
2,282,068,337
I_kwDODunzps6IBZlx
6,877
OSError: [Errno 24] Too many open files
{ "login": "loicmagne", "id": 53355258, "node_id": "MDQ6VXNlcjUzMzU1MjU4", "avatar_url": "https://avatars.githubusercontent.com/u/53355258?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loicmagne", "html_url": "https://github.com/loicmagne", "followers_url": "https://api.github.com/users/loicmagne/followers", "following_url": "https://api.github.com/users/loicmagne/following{/other_user}", "gists_url": "https://api.github.com/users/loicmagne/gists{/gist_id}", "starred_url": "https://api.github.com/users/loicmagne/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loicmagne/subscriptions", "organizations_url": "https://api.github.com/users/loicmagne/orgs", "repos_url": "https://api.github.com/users/loicmagne/repos", "events_url": "https://api.github.com/users/loicmagne/events{/privacy}", "received_events_url": "https://api.github.com/users/loicmagne/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "ulimit -n 8192 can solve this problem", "> ulimit -n 8192 can solve this problem\r\n\r\nWould there be a systematic way to do this ? The data loading is part of the [MTEB](https://github.com/embeddings-benchmark/mteb) library", "> > ulimit -n 8192 can solve this problem\r\n> \r\n> Would there be a systematic way to do this ? The data loading is part of the [MTEB](https://github.com/embeddings-benchmark/mteb) library\r\n\r\n I think we could modify the _prepare_split_single function", "I fixed it with https://github.com/huggingface/datasets/pull/6893, feel free to re-open if you're still having the issue :)", "> I fixed it with #6893, feel free to re-open if you're still having the issue :)\r\n\r\nThanks a lot!" ]
"2024-05-07T01:15:09"
"2024-06-02T14:22:23"
"2024-05-13T13:01:55"
NONE
null
null
null
### Describe the bug

I am trying to load the 'default' subset of the following dataset, which contains lots of files (828 per split): https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb

When trying to load it using the `load_dataset` function I get the following error:

```python
>>> from datasets import load_dataset
>>> d = load_dataset('mteb/biblenlp-corpus-mmteb')
Downloading readme: 100%|████████████████████████████████████████████████████████████████████████| 201k/201k [00:00<00:00, 1.07MB/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 1069.15it/s]
Resolving data files: 100%|███████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 436182.33it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 2228.75it/s]
Resolving data files: 100%|███████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 646478.73it/s]
Resolving data files: 100%|███████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 831032.24it/s]
Resolving data files: 100%|███████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 517645.51it/s]
Downloading data: 100%|████████████████████████████████████████████████████████████████████████| 828/828 [00:33<00:00, 24.87files/s]
Downloading data: 100%|████████████████████████████████████████████████████████████████████████| 828/828 [00:30<00:00, 27.48files/s]
Downloading data: 100%|████████████████████████████████████████████████████████████████████████| 828/828 [00:30<00:00, 26.94files/s]
Generating train split: 1571592 examples [00:03, 461438.97 examples/s]
Generating test split: 11163 examples [00:00, 118190.72 examples/s]
Traceback (most recent call last):
  File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1995, in _prepare_split_single
    for _, table in generator:
  File ".env/lib/python3.12/site-packages/datasets/packaged_modules/json/json.py", line 99, in _generate_tables
    with open(file, "rb") as f:
         ^^^^^^^^^^^^^^^^
  File ".env/lib/python3.12/site-packages/datasets/streaming.py", line 75, in wrapper
    return function(*args, download_config=download_config, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".env/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 1224, in xopen
    file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".env/lib/python3.12/site-packages/fsspec/core.py", line 135, in open
    return self.__enter__()
           ^^^^^^^^^^^^^^^^
  File ".env/lib/python3.12/site-packages/fsspec/core.py", line 103, in __enter__
    f = self.fs.open(self.path, mode=mode)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".env/lib/python3.12/site-packages/fsspec/spec.py", line 1293, in open
    f = self._open(
        ^^^^^^^^^^^
  File ".env/lib/python3.12/site-packages/datasets/filesystems/compression.py", line 81, in _open
    return self.file.open()
           ^^^^^^^^^^^^^^^^
  File ".env/lib/python3.12/site-packages/fsspec/core.py", line 135, in open
    return self.__enter__()
           ^^^^^^^^^^^^^^^^
  File ".env/lib/python3.12/site-packages/fsspec/core.py", line 103, in __enter__
    f = self.fs.open(self.path, mode=mode)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".env/lib/python3.12/site-packages/fsspec/spec.py", line 1293, in open
    f = self._open(
        ^^^^^^^^^^^
  File ".env/lib/python3.12/site-packages/fsspec/implementations/local.py", line 197, in _open
    return LocalFileOpener(path, mode, fs=self, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".env/lib/python3.12/site-packages/fsspec/implementations/local.py", line 322, in __init__
    self._open()
  File ".env/lib/python3.12/site-packages/fsspec/implementations/local.py", line 327, in _open
    self.f = open(self.path, mode=self.mode)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 24] Too many open files: '.cache/huggingface/datasets/downloads/3a347186abfc0f9c924dde0221d246db758c7232c0101523f04a87c17d696618'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File ".env/lib/python3.12/site-packages/datasets/builder.py", line 981, in incomplete_dir
    yield tmp_dir
  File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1882, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File ".env/lib/python3.12/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".env/lib/python3.12/site-packages/datasets/load.py", line 2609, in load_dataset
    builder_instance.download_and_prepare(
  File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1007, in download_and_prepare
    with incomplete_dir(self._output_dir) as tmp_output_dir:
  File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File ".env/lib/python3.12/site-packages/datasets/builder.py", line 988, in incomplete_dir
    shutil.rmtree(tmp_dir)
  File "/usr/lib/python3.12/shutil.py", line 785, in rmtree
    _rmtree_safe_fd(fd, path, onexc)
  File "/usr/lib/python3.12/shutil.py", line 661, in _rmtree_safe_fd
    onexc(os.scandir, path, err)
  File "/usr/lib/python3.12/shutil.py", line 657, in _rmtree_safe_fd
    with os.scandir(topfd) as scandir_it:
         ^^^^^^^^^^^^^^^^^
OSError: [Errno 24] Too many open files: '.cache/huggingface/datasets/mteb___biblenlp-corpus-mmteb/default/0.0.0/3912ed967b0834547f35b2da9470c4976b357c9a.incomplete'
```

I looked for the maximum number of open files on my machine (Ubuntu 24.04) and it seems to be 1024, but even when I try to load a single split (`load_dataset('mteb/biblenlp-corpus-mmteb', split='train')`) I get the same error.

### Steps to reproduce the bug

```python
from datasets import load_dataset
d = load_dataset('mteb/biblenlp-corpus-mmteb')
```

### Expected behavior

Load the dataset without error.

### Environment info

- `datasets` version: 2.19.0
- Platform: Linux-6.8.0-31-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.23.0
- PyArrow version: 16.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6877/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6877/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6876/comments
https://api.github.com/repos/huggingface/datasets/issues/6876/events
https://github.com/huggingface/datasets/pull/6876
2,281,450,743
PR_kwDODunzps5uqs46
6,876
Unpin hfh
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6876). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "transformers 4.40.2 was release yesterday but not sure if it contains the fix", "@lhoestq yes I knew transformers 4.40.2 was released yesterday, but I had checked that it does not contain the fix: only 2 bug fixes. That is why our CI continues failing in this PR. We will have to wait until the next minor version.", "> If we urgently need some dev feature for dataset-viewer, I would suggest pushing the feature (cherry-picked) to a dedicated branch with 2.19.1 as its starting point (without opening a PR), and install datasets from that branch.\r\n\r\nI have done so:\r\n- Created a branch from 2.19.1: https://github.com/huggingface/datasets/tree/datasets-2.19.1-hotfix\r\n- Cherry-picked the commit in this PR: https://github.com/huggingface/datasets/commit/3638183e2f7e0dce8924e46e7cc21bf6d5d7adfb\r\n- Opened a PR in dataset-viewer to update datasets to this revision: https://github.com/huggingface/dataset-viewer/pull/2783", "hfh 0.23.1 and transformers 4.41.0 as are out out, let's unpin no ?", "I have re-run the CI to check that is green before.", "The errors were coming from `transformers` having FutureWarning when loading models or tokenizers. I disabled the warnings for the `transformers`-related calls since they're not related to `datasets`", "I opened an issue in transformers:\r\n- https://github.com/huggingface/transformers/issues/31002", "It's because the error from the FutureWarning happened when running `cache_file()` from `transformers`, which has some code that try/except and re-raise an OSError", "Opened https://github.com/huggingface/transformers/pull/31007 to fix the FutureWarning in transformers. Sorry, thought it was fixed by https://github.com/huggingface/transformers/issues/30618 but clearly an oversight from my side.\r\n\r\nRegarding the pytest config, yes I remember adding it and in general I still think it's a good idea to have it. Will be more careful next time to update `transformers` before `huggingface_hub`'s release and not the other way around (first time it happens since I've set this value :grimacing:). For a temporary fix in `datasets` I would rather temporarily disable the filterwarnings in `datasets` rather then adding filters in the test code. 
", "alright I disabled the errors on FutureWarning, do you see anything else @albertvillanova or we can merge ?", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005165 / 0.011353 (-0.006188) | 0.003991 / 0.011008 (-0.007017) | 0.064029 / 0.038508 (0.025521) | 0.031578 / 0.023109 (0.008468) | 0.242646 / 0.275898 (-0.033252) | 0.261834 / 0.323480 (-0.061646) | 0.003032 / 0.007986 (-0.004953) | 0.002659 / 0.004328 (-0.001670) | 0.049868 / 0.004250 (0.045618) | 0.047607 / 0.037052 (0.010555) | 0.250537 / 0.258489 (-0.007952) | 0.289460 / 0.293841 (-0.004381) | 0.027225 / 0.128546 (-0.101321) | 0.010496 / 0.075646 (-0.065151) | 0.208455 / 0.419271 (-0.210816) | 0.036813 / 0.043533 (-0.006720) | 0.243361 / 0.255139 (-0.011778) | 0.267477 / 0.283200 (-0.015723) | 0.020402 / 0.141683 (-0.121281) | 1.117118 / 1.452155 (-0.335037) | 1.154868 / 1.492716 (-0.337849) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096796 / 0.018006 (0.078790) | 0.304588 / 0.000490 (0.304098) | 0.000217 / 0.000200 (0.000017) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019221 / 0.037411 (-0.018190) | 0.062897 / 0.014526 (0.048371) | 0.076446 / 0.176557 (-0.100111) | 0.124476 / 0.737135 (-0.612659) | 0.079921 / 0.296338 (-0.216418) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284442 / 0.215209 (0.069233) 
| 2.799419 / 2.077655 (0.721764) | 1.468022 / 1.504120 (-0.036098) | 1.354013 / 1.541195 (-0.187182) | 1.379985 / 1.468490 (-0.088505) | 0.561723 / 4.584777 (-4.023054) | 2.408887 / 3.745712 (-1.336825) | 2.712591 / 5.269862 (-2.557271) | 1.803132 / 4.565676 (-2.762544) | 0.063010 / 0.424275 (-0.361265) | 0.005030 / 0.007607 (-0.002577) | 0.339065 / 0.226044 (0.113021) | 3.373667 / 2.268929 (1.104738) | 1.861569 / 55.444624 (-53.583056) | 1.551357 / 6.876477 (-5.325120) | 1.701885 / 2.142072 (-0.440187) | 0.645685 / 4.805227 (-4.159543) | 0.117915 / 6.500664 (-6.382749) | 0.042656 / 0.075469 (-0.032814) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.957397 / 1.841788 (-0.884391) | 11.544300 / 8.074308 (3.469992) | 9.761814 / 10.191392 (-0.429578) | 0.134766 / 0.680424 (-0.545658) | 0.015387 / 0.534201 (-0.518814) | 0.285692 / 0.579283 (-0.293591) | 0.269201 / 0.434364 (-0.165163) | 0.328198 / 0.540337 (-0.212140) | 0.422315 / 1.386936 (-0.964621) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005333 / 0.011353 (-0.006020) | 0.003638 / 0.011008 (-0.007370) | 0.050503 / 0.038508 (0.011994) | 0.032240 / 0.023109 (0.009130) | 0.267602 / 0.275898 (-0.008296) | 0.293125 / 0.323480 (-0.030355) | 0.004275 / 0.007986 (-0.003710) | 0.002714 / 0.004328 (-0.001615) | 0.049341 / 0.004250 (0.045090) | 0.040364 / 0.037052 (0.003311) | 0.281096 / 0.258489 (0.022607) | 0.312615 / 0.293841 (0.018774) | 0.029981 / 0.128546 (-0.098565) | 0.010230 / 0.075646 (-0.065416) | 0.059218 / 0.419271 (-0.360054) | 0.033360 / 0.043533 (-0.010172) | 0.269518 / 0.255139 (0.014379) | 0.287559 / 0.283200 (0.004360) | 0.018159 / 0.141683 (-0.123524) | 1.107148 / 1.452155 (-0.345006) | 1.170731 / 1.492716 (-0.321985) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095942 / 0.018006 (0.077936) | 0.304914 / 0.000490 (0.304425) | 0.000227 / 0.000200 (0.000027) | 0.000051 / 0.000054 
(-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022609 / 0.037411 (-0.014803) | 0.076455 / 0.014526 (0.061929) | 0.088170 / 0.176557 (-0.088386) | 0.128485 / 0.737135 (-0.608651) | 0.092471 / 0.296338 (-0.203867) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291471 / 0.215209 (0.076262) | 2.822666 / 2.077655 (0.745012) | 1.531679 / 1.504120 (0.027559) | 1.405931 / 1.541195 (-0.135263) | 1.418893 / 1.468490 (-0.049597) | 0.576128 / 4.584777 (-4.008649) | 0.969466 / 3.745712 (-2.776246) | 2.831998 / 5.269862 (-2.437863) | 1.788814 / 4.565676 (-2.776863) | 0.064141 / 0.424275 (-0.360134) | 0.005126 / 0.007607 (-0.002482) | 0.341699 / 0.226044 (0.115654) | 3.320551 / 2.268929 (1.051622) | 1.903350 / 55.444624 (-53.541274) | 1.611809 / 6.876477 (-5.264668) | 1.729355 / 2.142072 (-0.412717) | 0.654622 / 4.805227 (-4.150605) | 0.118739 / 6.500664 (-6.381925) | 0.041453 / 0.075469 (-0.034016) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.017635 / 1.841788 (-0.824153) | 12.275948 / 8.074308 (4.201640) | 10.416224 / 10.191392 (0.224832) | 0.142288 / 0.680424 (-0.538135) | 0.015591 / 0.534201 (-0.518610) | 0.286515 / 0.579283 (-0.292768) | 0.128661 / 0.434364 (-0.305703) | 0.325728 / 0.540337 (-0.214609) | 0.415827 / 1.386936 (-0.971109) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b442aa2d3efc83ba0dc369adaa63cc496e3d9836 \"CML watermark\")\n" ]
"2024-05-06T18:10:49"
"2024-05-27T10:20:42"
"2024-05-27T10:14:40"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6876", "html_url": "https://github.com/huggingface/datasets/pull/6876", "diff_url": "https://github.com/huggingface/datasets/pull/6876.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6876.patch", "merged_at": "2024-05-27T10:14:40" }
Needed to use those in dataset-viewer:
- dev version of hfh (https://github.com/huggingface/dataset-viewer/pull/2781): don't spam the Hub with /paths-info requests
- dev version of datasets (https://github.com/huggingface/datasets/pull/6875): don't write overly long logs in the viewer

Closes https://github.com/huggingface/datasets/issues/6863
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6876/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6875
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6875/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6875/comments
https://api.github.com/repos/huggingface/datasets/issues/6875/events
https://github.com/huggingface/datasets/pull/6875
2,281,428,826
PR_kwDODunzps5uqoJ_
6,875
Shorten long logs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6875). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005191 / 0.011353 (-0.006162) | 0.003691 / 0.011008 (-0.007317) | 0.063511 / 0.038508 (0.025003) | 0.031849 / 0.023109 (0.008740) | 0.251691 / 0.275898 (-0.024207) | 0.276585 / 0.323480 (-0.046895) | 0.004080 / 0.007986 (-0.003906) | 0.002751 / 0.004328 (-0.001577) | 0.049572 / 0.004250 (0.045322) | 0.043010 / 0.037052 (0.005957) | 0.267161 / 0.258489 (0.008672) | 0.301054 / 0.293841 (0.007213) | 0.028068 / 0.128546 (-0.100479) | 0.010479 / 0.075646 (-0.065167) | 0.208458 / 0.419271 (-0.210814) | 0.035688 / 0.043533 (-0.007845) | 0.255985 / 0.255139 (0.000846) | 0.296016 / 0.283200 (0.012817) | 0.017041 / 0.141683 (-0.124642) | 1.168626 / 1.452155 (-0.283528) | 1.173419 / 1.492716 (-0.319297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092975 / 0.018006 (0.074969) | 0.302309 / 0.000490 (0.301820) | 0.000219 / 0.000200 (0.000020) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018809 / 0.037411 (-0.018602) | 0.062606 / 0.014526 (0.048080) | 0.073820 / 0.176557 (-0.102736) | 0.119451 / 0.737135 (-0.617684) | 0.075086 / 0.296338 (-0.221253) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280342 / 0.215209 (0.065133) | 2.742477 / 2.077655 (0.664822) | 1.409221 / 1.504120 (-0.094899) | 1.291679 / 1.541195 (-0.249516) | 1.316628 / 1.468490 (-0.151862) | 0.554942 / 4.584777 (-4.029835) | 2.363301 / 3.745712 (-1.382411) | 2.775766 / 5.269862 (-2.494096) | 1.729123 / 4.565676 (-2.836554) | 0.061254 / 0.424275 (-0.363021) | 0.005444 / 0.007607 (-0.002163) | 0.330450 / 0.226044 (0.104406) | 3.249453 / 2.268929 (0.980524) | 1.782415 / 55.444624 (-53.662210) | 1.489778 / 6.876477 (-5.386699) | 1.521809 / 2.142072 (-0.620263) | 0.626622 / 4.805227 (-4.178605) | 0.117320 / 6.500664 (-6.383344) | 0.043110 / 0.075469 (-0.032359) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981954 / 1.841788 (-0.859834) | 11.706373 / 8.074308 (3.632064) | 9.870815 / 10.191392 (-0.320577) | 0.141768 / 0.680424 (-0.538656) | 0.014455 / 0.534201 (-0.519746) | 0.287451 / 0.579283 (-0.291832) | 0.264559 / 0.434364 (-0.169805) | 0.326321 / 0.540337 (-0.214017) | 0.424084 / 1.386936 (-0.962852) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005461 / 0.011353 (-0.005892) | 0.003804 / 0.011008 (-0.007204) | 0.049872 / 0.038508 (0.011364) | 0.029543 / 0.023109 (0.006433) | 0.260772 / 0.275898 (-0.015126) | 0.291571 / 0.323480 (-0.031909) | 0.004305 / 0.007986 (-0.003681) | 0.002845 / 0.004328 (-0.001484) | 0.049129 / 0.004250 (0.044879) | 0.040743 / 0.037052 (0.003690) | 0.276497 / 0.258489 (0.018008) | 0.303126 / 0.293841 (0.009285) | 0.030423 / 0.128546 (-0.098123) | 0.010660 / 0.075646 (-0.064986) | 0.058857 / 0.419271 (-0.360415) | 0.033185 / 0.043533 (-0.010348) | 0.260452 / 0.255139 (0.005313) | 0.282648 / 0.283200 (-0.000552) | 0.018025 / 0.141683 (-0.123658) | 1.147432 / 1.452155 (-0.304723) | 1.192034 / 1.492716 (-0.300683) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093094 / 0.018006 (0.075088) | 0.301608 / 0.000490 (0.301119) | 0.000209 / 0.000200 (0.000009) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022071 / 0.037411 (-0.015340) | 0.075244 / 0.014526 (0.060718) | 0.087157 / 0.176557 (-0.089400) | 0.127339 / 0.737135 (-0.609797) | 0.088527 / 0.296338 (-0.207812) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293033 / 0.215209 (0.077824) | 2.839842 / 2.077655 (0.762188) | 1.544730 / 1.504120 (0.040610) | 1.421727 / 1.541195 (-0.119468) | 1.446054 / 1.468490 (-0.022436) | 0.573285 / 4.584777 (-4.011492) | 0.980977 / 3.745712 (-2.764735) | 2.829034 / 5.269862 (-2.440828) | 1.800747 / 4.565676 (-2.764930) | 0.064916 / 0.424275 (-0.359360) | 0.005099 / 0.007607 (-0.002508) | 0.348054 / 0.226044 (0.122009) | 3.449111 / 2.268929 (1.180182) | 1.900115 / 55.444624 (-53.544509) | 1.620564 / 6.876477 (-5.255913) | 1.675474 / 2.142072 (-0.466598) | 0.652302 / 4.805227 (-4.152925) | 0.118438 / 6.500664 (-6.382226) | 0.041779 / 0.075469 (-0.033690) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.003703 / 1.841788 (-0.838085) | 12.466921 / 8.074308 (4.392613) | 9.800419 / 10.191392 (-0.390973) | 0.131567 / 0.680424 (-0.548856) | 0.015684 / 0.534201 (-0.518517) | 0.288754 / 0.579283 (-0.290530) | 0.126435 / 0.434364 (-0.307929) | 0.398608 / 0.540337 (-0.141729) | 0.427043 / 1.386936 (-0.959894) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#865e9b1f2ecbe934be49a2d8d46451aba4af3485 \"CML watermark\")\n" ]
"2024-05-06T17:57:07"
"2024-05-07T12:31:46"
"2024-05-07T12:25:45"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6875", "html_url": "https://github.com/huggingface/datasets/pull/6875", "diff_url": "https://github.com/huggingface/datasets/pull/6875.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6875.patch", "merged_at": "2024-05-07T12:25:45" }
Some datasets may have unexpectedly long features/types (e.g. if the files are not formatted correctly). In that case, we should still be able to log something readable.
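For illustration, a minimal sketch of the kind of truncation this implies (a hypothetical helper, not the PR's actual implementation):

```python
def shorten(text: str, max_length: int = 512) -> str:
    """Keep log lines readable by eliding the middle of overly long payloads."""
    if len(text) <= max_length:
        return text
    half = (max_length - 5) // 2
    return f"{text[:half]}[...]{text[-half:]}"

# Example: logging a features/types repr that may be arbitrarily long.
# logger.info(f"Generating dataset with features: {shorten(str(features))}")
```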
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6875/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6874
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6874/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6874/comments
https://api.github.com/repos/huggingface/datasets/issues/6874/events
https://github.com/huggingface/datasets/pull/6874
2,280,717,233
PR_kwDODunzps5uoOk-
6,874
Use pandas ujson in JSON loader to improve performance
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6874). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Before pandas-2.2.0, the function `ujson_loads` was named `loads`: https://github.com/pandas-dev/pandas/blob/v2.1.0/pandas/io/json/__init__.py#L5\r\n```python\r\nimport ujson_loads as loads\r\n```", "Thanks for your review, @lhoestq.\r\n\r\nThe performance gain depends on many factors, such as underlying data structures, file size...\r\n\r\nIn my benchmark, the performance gain was around 8.1%. ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005428 / 0.011353 (-0.005925) | 0.003682 / 0.011008 (-0.007326) | 0.064360 / 0.038508 (0.025852) | 0.032044 / 0.023109 (0.008934) | 0.238281 / 0.275898 (-0.037617) | 0.267542 / 0.323480 (-0.055937) | 0.003152 / 0.007986 (-0.004834) | 0.003292 / 0.004328 (-0.001037) | 0.050157 / 0.004250 (0.045906) | 0.048311 / 0.037052 (0.011259) | 0.253743 / 0.258489 (-0.004746) | 0.282729 / 0.293841 (-0.011112) | 0.027271 / 0.128546 (-0.101275) | 0.010238 / 0.075646 (-0.065408) | 0.208179 / 0.419271 (-0.211092) | 0.035607 / 0.043533 (-0.007925) | 0.246750 / 0.255139 (-0.008389) | 0.263362 / 0.283200 (-0.019837) | 0.018475 / 0.141683 (-0.123208) | 1.152978 / 1.452155 (-0.299177) | 1.158545 / 1.492716 (-0.334171) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096645 / 0.018006 (0.078639) | 0.313186 / 0.000490 (0.312696) | 0.000209 / 0.000200 (0.000009) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018800 / 0.037411 (-0.018612) | 0.065833 / 0.014526 (0.051307) | 0.073668 / 0.176557 (-0.102888) | 0.120608 / 0.737135 (-0.616527) | 0.074936 / 0.296338 (-0.221403) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | 
read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281596 / 0.215209 (0.066387) | 2.814537 / 2.077655 (0.736882) | 1.482781 / 1.504120 (-0.021338) | 1.349770 / 1.541195 (-0.191424) | 1.371571 / 1.468490 (-0.096919) | 0.555068 / 4.584777 (-4.029709) | 2.369588 / 3.745712 (-1.376124) | 2.742771 / 5.269862 (-2.527091) | 1.711519 / 4.565676 (-2.854158) | 0.060921 / 0.424275 (-0.363354) | 0.005263 / 0.007607 (-0.002344) | 0.333721 / 0.226044 (0.107677) | 3.329598 / 2.268929 (1.060669) | 1.806983 / 55.444624 (-53.637641) | 1.515730 / 6.876477 (-5.360746) | 1.557622 / 2.142072 (-0.584451) | 0.619564 / 4.805227 (-4.185663) | 0.115503 / 6.500664 (-6.385161) | 0.041728 / 0.075469 (-0.033741) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.967300 / 1.841788 (-0.874487) | 11.295081 / 8.074308 (3.220773) | 9.535119 / 10.191392 (-0.656273) | 0.140232 / 0.680424 (-0.540192) | 0.013774 / 0.534201 (-0.520427) | 0.281847 / 0.579283 (-0.297436) | 0.260076 / 0.434364 (-0.174288) | 0.323657 / 0.540337 (-0.216681) | 0.421116 / 1.386936 (-0.965820) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005276 / 0.011353 (-0.006077) | 0.003639 / 0.011008 (-0.007370) | 0.050451 / 0.038508 (0.011943) | 0.032787 / 0.023109 (0.009678) | 0.267029 / 0.275898 (-0.008869) | 0.299899 / 0.323480 (-0.023581) | 0.004177 / 0.007986 (-0.003809) | 0.002697 / 0.004328 (-0.001631) | 0.049631 / 0.004250 (0.045380) | 0.041942 / 0.037052 (0.004889) | 0.279249 / 0.258489 (0.020760) | 0.306512 / 0.293841 (0.012671) | 0.029340 / 0.128546 (-0.099207) | 0.010118 / 0.075646 (-0.065528) | 0.058243 / 0.419271 (-0.361028) | 0.033871 / 0.043533 
(-0.009662) | 0.265949 / 0.255139 (0.010810) | 0.284263 / 0.283200 (0.001064) | 0.017351 / 0.141683 (-0.124332) | 1.107081 / 1.452155 (-0.345074) | 1.184946 / 1.492716 (-0.307770) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095621 / 0.018006 (0.077614) | 0.304758 / 0.000490 (0.304269) | 0.000204 / 0.000200 (0.000004) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022444 / 0.037411 (-0.014967) | 0.075894 / 0.014526 (0.061368) | 0.089077 / 0.176557 (-0.087480) | 0.126960 / 0.737135 (-0.610176) | 0.089120 / 0.296338 (-0.207218) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289885 / 0.215209 (0.074676) | 2.843219 / 2.077655 (0.765565) | 1.582704 / 1.504120 (0.078584) | 1.426551 / 1.541195 (-0.114644) | 1.431591 / 1.468490 (-0.036899) | 0.577265 / 4.584777 (-4.007512) | 0.956040 / 3.745712 (-2.789673) | 2.753517 / 5.269862 (-2.516345) | 1.732503 / 4.565676 (-2.833173) | 0.063511 / 0.424275 (-0.360764) | 0.005089 / 0.007607 (-0.002518) | 0.339205 / 0.226044 (0.113160) | 3.339148 / 2.268929 (1.070219) | 1.901543 / 55.444624 (-53.543081) | 1.618392 / 6.876477 (-5.258084) | 1.612885 / 2.142072 (-0.529188) | 0.656563 / 4.805227 (-4.148664) | 0.116740 / 6.500664 (-6.383924) | 0.040497 / 0.075469 (-0.034973) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005568 / 1.841788 (-0.836219) | 11.872770 / 8.074308 (3.798462) | 9.867118 / 10.191392 (-0.324274) | 0.130193 / 0.680424 (-0.550231) | 0.022857 / 0.534201 (-0.511344) | 0.281908 / 0.579283 (-0.297375) | 0.125978 / 0.434364 (-0.308386) | 0.382604 / 0.540337 (-0.157733) | 0.415078 / 1.386936 (-0.971858) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1eabcfaf87368a5cbfa0341aa2223f457508b3e9 \"CML watermark\")\n" ]
"2024-05-06T12:01:27"
"2024-05-17T16:28:29"
"2024-05-17T16:22:27"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6874", "html_url": "https://github.com/huggingface/datasets/pull/6874", "diff_url": "https://github.com/huggingface/datasets/pull/6874.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6874.patch", "merged_at": "2024-05-17T16:22:27" }
Use pandas ujson in the JSON loader to improve performance. Note that `datasets` has `pandas` as a required dependency, and `pandas` bundles `ujson` as `pd.io.json.ujson_loads`. Fix #6867. CC: @natolambert
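For context, a minimal sketch of the idea behind this change (illustrative only, not the actual `datasets` loader code; the fallback import and the sample lines are assumptions):

```python
# Illustrative sketch: parse JSON Lines with pandas' bundled ujson when
# available (pandas >= 2.2.0 exposes it as pandas.io.json.ujson_loads),
# falling back to the stdlib json module otherwise.
try:
    from pandas.io.json import ujson_loads as json_loads  # pandas >= 2.2.0
except ImportError:
    from json import loads as json_loads  # slower, but always available

lines = ['{"text": "foo", "label": 0}', '{"text": "bar", "label": 1}']
records = [json_loads(line) for line in lines]
print(records)  # [{'text': 'foo', 'label': 0}, {'text': 'bar', 'label': 1}]
```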
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6874/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6874/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6873
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6873/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6873/comments
https://api.github.com/repos/huggingface/datasets/issues/6873/events
https://github.com/huggingface/datasets/pull/6873
2,280,463,182
PR_kwDODunzps5unXnq
6,873
Set dev version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6873). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005301 / 0.011353 (-0.006052) | 0.003633 / 0.011008 (-0.007375) | 0.063414 / 0.038508 (0.024906) | 0.042406 / 0.023109 (0.019297) | 0.253414 / 0.275898 (-0.022484) | 0.276811 / 0.323480 (-0.046668) | 0.003148 / 0.007986 (-0.004837) | 0.002614 / 0.004328 (-0.001715) | 0.049208 / 0.004250 (0.044958) | 0.045819 / 0.037052 (0.008767) | 0.268027 / 0.258489 (0.009538) | 0.298821 / 0.293841 (0.004980) | 0.028460 / 0.128546 (-0.100086) | 0.010671 / 0.075646 (-0.064975) | 0.208602 / 0.419271 (-0.210669) | 0.036057 / 0.043533 (-0.007476) | 0.256079 / 0.255139 (0.000940) | 0.277040 / 0.283200 (-0.006160) | 0.019018 / 0.141683 (-0.122665) | 1.147070 / 1.452155 (-0.305085) | 1.175838 / 1.492716 (-0.316878) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092216 / 0.018006 (0.074210) | 0.304774 / 0.000490 (0.304284) | 0.000212 / 0.000200 (0.000012) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018242 / 0.037411 (-0.019170) | 0.061088 / 0.014526 (0.046562) | 0.074517 / 0.176557 (-0.102039) | 0.120444 / 0.737135 (-0.616691) | 0.074628 / 0.296338 (-0.221710) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283914 / 0.215209 (0.068705) | 2.859123 / 2.077655 (0.781469) | 1.495152 / 1.504120 (-0.008967) | 1.395514 / 1.541195 (-0.145681) | 1.454076 / 1.468490 (-0.014414) | 0.568758 / 4.584777 (-4.016019) | 2.461304 / 3.745712 (-1.284408) | 2.836192 / 5.269862 (-2.433670) | 1.815463 / 4.565676 (-2.750213) | 0.065762 / 0.424275 (-0.358513) | 0.006872 / 0.007607 (-0.000736) | 0.339304 / 0.226044 (0.113260) | 3.326544 / 2.268929 (1.057616) | 1.847970 / 55.444624 (-53.596654) | 1.572667 / 6.876477 (-5.303809) | 1.595717 / 2.142072 (-0.546355) | 0.644196 / 4.805227 (-4.161031) | 0.120320 / 6.500664 (-6.380344) | 0.043334 / 0.075469 (-0.032135) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965807 / 1.841788 (-0.875981) | 11.628715 / 8.074308 (3.554406) | 9.485618 / 10.191392 (-0.705774) | 0.152387 / 0.680424 (-0.528037) | 0.013852 / 0.534201 (-0.520349) | 0.285833 / 0.579283 (-0.293450) | 0.263692 / 0.434364 (-0.170672) | 0.323086 / 0.540337 (-0.217251) | 0.418178 / 1.386936 (-0.968758) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005505 / 0.011353 (-0.005848) | 0.003630 / 0.011008 (-0.007378) | 0.049780 / 0.038508 (0.011272) | 0.030469 / 0.023109 (0.007359) | 0.270052 / 0.275898 (-0.005846) | 0.294370 / 0.323480 (-0.029110) | 0.004207 / 0.007986 (-0.003779) | 0.002720 / 0.004328 (-0.001609) | 0.048952 / 0.004250 (0.044701) | 0.041006 / 0.037052 (0.003953) | 0.281585 / 0.258489 (0.023096) | 0.310600 / 0.293841 (0.016759) | 0.029457 / 0.128546 (-0.099089) | 0.010508 / 0.075646 (-0.065138) | 0.058090 / 0.419271 (-0.361181) | 0.032814 / 0.043533 (-0.010718) | 0.272755 / 0.255139 (0.017616) | 0.292154 / 0.283200 (0.008954) | 0.018312 / 0.141683 (-0.123371) | 1.177199 / 1.452155 (-0.274955) | 1.238803 / 1.492716 (-0.253913) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093889 / 0.018006 (0.075883) | 0.303054 / 0.000490 (0.302564) | 0.000204 / 0.000200 (0.000004) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022556 / 0.037411 (-0.014856) | 0.075951 / 0.014526 (0.061425) | 0.086824 / 0.176557 (-0.089732) | 0.128091 / 0.737135 (-0.609044) | 0.088146 / 0.296338 (-0.208192) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292563 / 0.215209 (0.077354) | 2.882656 / 2.077655 (0.805001) | 1.559814 / 1.504120 (0.055695) | 1.443760 / 1.541195 (-0.097435) | 1.460967 / 1.468490 (-0.007523) | 0.567812 / 4.584777 (-4.016965) | 0.964407 / 3.745712 (-2.781305) | 2.819782 / 5.269862 (-2.450079) | 1.733334 / 4.565676 (-2.832343) | 0.064745 / 0.424275 (-0.359530) | 0.005178 / 0.007607 (-0.002429) | 0.345322 / 0.226044 (0.119278) | 3.407204 / 2.268929 (1.138275) | 1.919337 / 55.444624 (-53.525288) | 1.643463 / 6.876477 (-5.233013) | 1.682191 / 2.142072 (-0.459881) | 0.639432 / 4.805227 (-4.165795) | 0.115659 / 6.500664 (-6.385005) | 0.041202 / 0.075469 (-0.034267) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004664 / 1.841788 (-0.837123) | 12.043460 / 8.074308 (3.969152) | 9.856431 / 10.191392 (-0.334961) | 0.131351 / 0.680424 (-0.549072) | 0.015800 / 0.534201 (-0.518401) | 0.288211 / 0.579283 (-0.291072) | 0.126065 / 0.434364 (-0.308298) | 0.386494 / 0.540337 (-0.153843) | 0.424203 / 1.386936 (-0.962733) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#039e275549627f22d9e04278d7cad2e80c644459 \"CML watermark\")\n" ]
"2024-05-06T09:43:18"
"2024-05-06T10:03:19"
"2024-05-06T09:57:12"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6873", "html_url": "https://github.com/huggingface/datasets/pull/6873", "diff_url": "https://github.com/huggingface/datasets/pull/6873.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6873.patch", "merged_at": "2024-05-06T09:57:12" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6873/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6873/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6872
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6872/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6872/comments
https://api.github.com/repos/huggingface/datasets/issues/6872/events
https://github.com/huggingface/datasets/pull/6872
2,280,438,432
PR_kwDODunzps5unSPA
6,872
Release 2.19.1
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2024-05-06T09:29:15"
"2024-05-06T09:35:33"
"2024-05-06T09:35:32"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6872", "html_url": "https://github.com/huggingface/datasets/pull/6872", "diff_url": "https://github.com/huggingface/datasets/pull/6872.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6872.patch", "merged_at": "2024-05-06T09:35:32" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6872/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6871/comments
https://api.github.com/repos/huggingface/datasets/issues/6871/events
https://github.com/huggingface/datasets/pull/6871
2,280,102,869
PR_kwDODunzps5umJS6
6,871
Fix download for dict of dicts of URLs
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6871). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Once merged, I think a patch release is needed.", "Once the CI is green, I am merging this PR and making a patch release, @huggingface/datasets. ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005352 / 0.011353 (-0.006001) | 0.004140 / 0.011008 (-0.006868) | 0.063844 / 0.038508 (0.025336) | 0.030712 / 0.023109 (0.007603) | 0.232790 / 0.275898 (-0.043108) | 0.262334 / 0.323480 (-0.061145) | 0.003264 / 0.007986 (-0.004721) | 0.002654 / 0.004328 (-0.001674) | 0.049775 / 0.004250 (0.045524) | 0.046803 / 0.037052 (0.009751) | 0.250667 / 0.258489 (-0.007822) | 0.283581 / 0.293841 (-0.010260) | 0.027660 / 0.128546 (-0.100886) | 0.010560 / 0.075646 (-0.065087) | 0.208676 / 0.419271 (-0.210596) | 0.035415 / 0.043533 (-0.008118) | 0.235380 / 0.255139 (-0.019759) | 0.261220 / 0.283200 (-0.021980) | 0.019551 / 0.141683 (-0.122132) | 1.140196 / 1.452155 (-0.311959) | 1.173021 / 1.492716 (-0.319696) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092665 / 0.018006 (0.074659) | 0.301524 / 0.000490 (0.301034) | 0.000216 / 0.000200 (0.000016) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018485 / 0.037411 (-0.018927) | 0.061722 / 0.014526 (0.047196) | 0.074701 / 0.176557 (-0.101855) | 0.121443 / 0.737135 (-0.615692) | 0.076268 / 0.296338 (-0.220070) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled 
read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284143 / 0.215209 (0.068934) | 2.789979 / 2.077655 (0.712324) | 1.501156 / 1.504120 (-0.002964) | 1.379414 / 1.541195 (-0.161781) | 1.419092 / 1.468490 (-0.049398) | 0.554107 / 4.584777 (-4.030670) | 2.365659 / 3.745712 (-1.380053) | 2.763963 / 5.269862 (-2.505898) | 1.712587 / 4.565676 (-2.853090) | 0.060961 / 0.424275 (-0.363314) | 0.005301 / 0.007607 (-0.002306) | 0.346253 / 0.226044 (0.120209) | 3.351833 / 2.268929 (1.082905) | 1.831946 / 55.444624 (-53.612679) | 1.556530 / 6.876477 (-5.319947) | 1.574185 / 2.142072 (-0.567887) | 0.630396 / 4.805227 (-4.174831) | 0.116126 / 6.500664 (-6.384538) | 0.042391 / 0.075469 (-0.033078) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981430 / 1.841788 (-0.860358) | 11.619671 / 8.074308 (3.545363) | 9.718227 / 10.191392 (-0.473165) | 0.130918 / 0.680424 (-0.549506) | 0.014116 / 0.534201 (-0.520085) | 0.288729 / 0.579283 (-0.290554) | 0.259183 / 0.434364 (-0.175181) | 0.323764 / 0.540337 (-0.216574) | 0.420336 / 1.386936 (-0.966600) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005255 / 0.011353 (-0.006098) | 0.003664 / 0.011008 (-0.007344) | 0.051376 / 0.038508 (0.012868) | 0.030429 / 0.023109 (0.007320) | 0.263090 / 0.275898 (-0.012808) | 0.289959 / 0.323480 (-0.033521) | 0.004214 / 0.007986 (-0.003772) | 0.002782 / 0.004328 (-0.001546) | 0.049043 / 0.004250 (0.044793) | 0.041016 / 0.037052 (0.003964) | 0.275616 / 0.258489 (0.017127) | 0.303350 / 0.293841 (0.009509) | 0.029484 / 0.128546 (-0.099062) | 0.010329 / 0.075646 (-0.065317) | 0.058680 / 0.419271 (-0.360591) | 0.032818 / 0.043533 (-0.010715) | 0.263368 / 0.255139 (0.008229) | 0.286839 / 0.283200 (0.003640) | 0.018029 / 0.141683 (-0.123654) | 1.169207 / 1.452155 (-0.282948) | 1.206568 / 1.492716 (-0.286148) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | 
get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.101394 / 0.018006 (0.083387) | 0.310414 / 0.000490 (0.309924) | 0.000213 / 0.000200 (0.000013) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021662 / 0.037411 (-0.015749) | 0.075320 / 0.014526 (0.060794) | 0.086607 / 0.176557 (-0.089949) | 0.127268 / 0.737135 (-0.609867) | 0.088244 / 0.296338 (-0.208095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293591 / 0.215209 (0.078382) | 2.871845 / 2.077655 (0.794190) | 1.543624 / 1.504120 (0.039504) | 1.426698 / 1.541195 (-0.114497) | 1.445348 / 1.468490 (-0.023142) | 0.565156 / 4.584777 (-4.019621) | 0.961782 / 3.745712 (-2.783930) | 2.827904 / 5.269862 (-2.441958) | 1.747728 / 4.565676 (-2.817949) | 0.063275 / 0.424275 (-0.361000) | 0.004987 / 0.007607 (-0.002620) | 0.349652 / 0.226044 (0.123607) | 3.448635 / 2.268929 (1.179707) | 1.891734 / 55.444624 (-53.552890) | 1.624274 / 6.876477 (-5.252202) | 1.641531 / 2.142072 (-0.500541) | 0.642081 / 4.805227 (-4.163146) | 0.116136 / 6.500664 (-6.384528) | 0.040807 / 0.075469 (-0.034662) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.002090 / 1.841788 (-0.839697) | 12.401097 / 8.074308 (4.326788) | 9.799316 / 10.191392 (-0.392076) | 0.131770 / 0.680424 (-0.548654) | 0.016817 / 0.534201 (-0.517384) | 0.301136 / 0.579283 (-0.278147) | 0.136810 / 0.434364 (-0.297554) | 0.384740 / 0.540337 (-0.155598) | 0.423779 / 1.386936 (-0.963157) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2ebd8233ad8142da73bc8b4d380e9a32046d7829 \"CML watermark\")\n" ]
"2024-05-06T06:06:52"
"2024-05-06T09:32:03"
"2024-05-06T09:25:52"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6871", "html_url": "https://github.com/huggingface/datasets/pull/6871", "diff_url": "https://github.com/huggingface/datasets/pull/6871.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6871.patch", "merged_at": "2024-05-06T09:25:52" }
Fix download for a dict of dicts of URLs when batched (the default), a regression introduced by #6794. This PR also implements regression tests. Fix #6869, fix #6850.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6871/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6871/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6870/comments
https://api.github.com/repos/huggingface/datasets/issues/6870/events
https://github.com/huggingface/datasets/pull/6870
2,280,084,008
PR_kwDODunzps5umFOL
6,870
Update tqdm >= 4.66.3 to fix vulnerability
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6870). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004997 / 0.011353 (-0.006356) | 0.003260 / 0.011008 (-0.007748) | 0.063342 / 0.038508 (0.024833) | 0.030399 / 0.023109 (0.007290) | 0.235665 / 0.275898 (-0.040233) | 0.256502 / 0.323480 (-0.066978) | 0.004113 / 0.007986 (-0.003873) | 0.002677 / 0.004328 (-0.001652) | 0.049614 / 0.004250 (0.045363) | 0.043075 / 0.037052 (0.006022) | 0.251788 / 0.258489 (-0.006701) | 0.280875 / 0.293841 (-0.012965) | 0.027479 / 0.128546 (-0.101067) | 0.010402 / 0.075646 (-0.065245) | 0.207296 / 0.419271 (-0.211975) | 0.035323 / 0.043533 (-0.008209) | 0.237719 / 0.255139 (-0.017420) | 0.259401 / 0.283200 (-0.023799) | 0.017574 / 0.141683 (-0.124109) | 1.109025 / 1.452155 (-0.343129) | 1.176264 / 1.492716 (-0.316452) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098780 / 0.018006 (0.080774) | 0.304427 / 0.000490 (0.303937) | 0.000215 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018189 / 0.037411 (-0.019222) | 0.061356 / 0.014526 (0.046830) | 0.073568 / 0.176557 (-0.102988) | 0.122412 / 0.737135 (-0.614723) | 0.074428 / 0.296338 (-0.221911) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284719 / 0.215209 (0.069510) | 2.805719 / 2.077655 (0.728064) | 1.474386 / 1.504120 (-0.029734) | 1.341552 / 1.541195 (-0.199642) | 1.385354 / 1.468490 (-0.083136) | 0.575694 / 4.584777 (-4.009083) | 2.435102 / 3.745712 (-1.310610) | 2.822424 / 5.269862 (-2.447437) | 1.747609 / 4.565676 (-2.818068) | 0.064461 / 0.424275 (-0.359815) | 0.005370 / 0.007607 (-0.002237) | 0.341511 / 0.226044 (0.115467) | 3.384546 / 2.268929 (1.115617) | 1.846960 / 55.444624 (-53.597665) | 1.549294 / 6.876477 (-5.327183) | 1.562997 / 2.142072 (-0.579075) | 0.651108 / 4.805227 (-4.154120) | 0.118502 / 6.500664 (-6.382162) | 0.042356 / 0.075469 (-0.033113) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.015542 / 1.841788 (-0.826245) | 11.504899 / 8.074308 (3.430591) | 9.660870 / 10.191392 (-0.530522) | 0.145255 / 0.680424 (-0.535169) | 0.014602 / 0.534201 (-0.519599) | 0.286148 / 0.579283 (-0.293135) | 0.268358 / 0.434364 (-0.166006) | 0.323648 / 0.540337 (-0.216689) | 0.427384 / 1.386936 (-0.959552) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005671 / 0.011353 (-0.005681) | 0.004056 / 0.011008 (-0.006952) | 0.050673 / 0.038508 (0.012165) | 0.032334 / 0.023109 (0.009225) | 0.268541 / 0.275898 (-0.007357) | 0.294528 / 0.323480 (-0.028952) | 0.004592 / 0.007986 (-0.003393) | 0.002918 / 0.004328 (-0.001411) | 0.048857 / 0.004250 (0.044607) | 0.043072 / 0.037052 (0.006020) | 0.277031 / 0.258489 (0.018542) | 0.307189 / 0.293841 (0.013348) | 0.030500 / 0.128546 (-0.098046) | 0.010945 / 0.075646 (-0.064701) | 0.061067 / 0.419271 (-0.358204) | 0.060311 / 0.043533 (0.016778) | 0.268011 / 0.255139 (0.012872) | 0.290423 / 0.283200 (0.007224) | 0.019578 / 0.141683 (-0.122105) | 1.136353 / 1.452155 (-0.315802) | 1.196308 / 1.492716 (-0.296408) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.099429 / 0.018006 (0.081422) | 0.308350 / 0.000490 (0.307861) | 0.000221 / 0.000200 (0.000021) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022221 / 0.037411 (-0.015190) | 0.076744 / 0.014526 (0.062218) | 0.087768 / 0.176557 (-0.088788) | 0.129939 / 0.737135 (-0.607196) | 0.089763 / 0.296338 (-0.206576) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299566 / 0.215209 (0.084357) | 2.916789 / 2.077655 (0.839134) | 1.555535 / 1.504120 (0.051415) | 1.432787 / 1.541195 (-0.108407) | 1.470983 / 1.468490 (0.002493) | 0.581468 / 4.584777 (-4.003309) | 0.993418 / 3.745712 (-2.752294) | 2.917487 / 5.269862 (-2.352374) | 1.799045 / 4.565676 (-2.766632) | 0.064520 / 0.424275 (-0.359755) | 0.005131 / 0.007607 (-0.002477) | 0.352277 / 0.226044 (0.126232) | 3.456564 / 2.268929 (1.187636) | 1.949195 / 55.444624 (-53.495430) | 1.627568 / 6.876477 (-5.248909) | 1.685246 / 2.142072 (-0.456826) | 0.653161 / 4.805227 (-4.152066) | 0.118308 / 6.500664 (-6.382356) | 0.042106 / 0.075469 (-0.033364) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.048028 / 1.841788 (-0.793759) | 12.425232 / 8.074308 (4.350924) | 10.127637 / 10.191392 (-0.063755) | 0.133095 / 0.680424 (-0.547329) | 0.015255 / 0.534201 (-0.518946) | 0.287927 / 0.579283 (-0.291357) | 0.129384 / 0.434364 (-0.304980) | 0.384828 / 0.540337 (-0.155510) | 0.427881 / 1.386936 (-0.959055) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a0bdb664436fad1d82c7988d5b413c76207f5037 \"CML watermark\")\n" ]
"2024-05-06T05:49:36"
"2024-05-06T06:08:06"
"2024-05-06T06:02:00"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6870", "html_url": "https://github.com/huggingface/datasets/pull/6870", "diff_url": "https://github.com/huggingface/datasets/pull/6870.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6870.patch", "merged_at": "2024-05-06T06:02:00" }
Update tqdm >= 4.66.3 to fix a vulnerability.
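A hedged illustration of what such a pin looks like in a `setup.py`-style requirements list (the variable name is only for illustration):

```python
# Illustrative only: pin tqdm at or above the patched release.
install_requires = [
    "tqdm>=4.66.3",  # releases before 4.66.3 are affected by the reported vulnerability
]
```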
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6870/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6869
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6869/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6869/comments
https://api.github.com/repos/huggingface/datasets/issues/6869/events
https://github.com/huggingface/datasets/issues/6869
2,280,048,297
I_kwDODunzps6H5sap
6,869
Download is broken for dict of dicts: FileNotFoundError
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2024-05-06T05:13:36"
"2024-05-06T09:25:53"
"2024-05-06T09:25:53"
MEMBER
null
null
null
It seems there is a bug when downloading a dict of dicts of URLs introduced by: - #6794 ## Steps to reproduce the bug: ```python from datasets import DownloadManager dl_manager = DownloadManager() paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}}) ``` Stack trace: ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) <ipython-input-7-0e0d76d25b09> in <module> ----> 1 paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}}) .../huggingface/datasets/src/datasets/download/download_manager.py in download(self, url_or_urls) 255 start_time = datetime.now() 256 with stack_multiprocessing_download_progress_bars(): --> 257 downloaded_path_or_paths = map_nested( 258 download_func, 259 url_or_urls, .../huggingface/datasets/src/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, batched, batch_size, types, disable_tqdm, desc) 506 batch_size = max(len(iterable) // num_proc + int(len(iterable) % num_proc > 0), 1) 507 iterable = list(iter_batched(iterable, batch_size)) --> 508 mapped = [ 509 _single_map_nested((function, obj, batched, batch_size, types, None, True, None)) 510 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc) .../huggingface/datasets/src/datasets/utils/py_utils.py in <listcomp>(.0) 507 iterable = list(iter_batched(iterable, batch_size)) 508 mapped = [ --> 509 _single_map_nested((function, obj, batched, batch_size, types, None, True, None)) 510 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc) 511 ] .../huggingface/datasets/src/datasets/utils/py_utils.py in _single_map_nested(args) 375 and all(not isinstance(v, types) for v in data_struct) 376 ): --> 377 return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)] 378 379 # Reduce logging to keep things readable in multiprocessing with tqdm .../huggingface/datasets/src/datasets/utils/py_utils.py in <listcomp>(.0) 375 and all(not isinstance(v, types) for v in data_struct) 376 ): --> 377 return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)] 378 379 # Reduce logging to keep things readable in multiprocessing with tqdm .../huggingface/datasets/src/datasets/download/download_manager.py in _download_batched(self, url_or_filenames, download_config) 311 ) 312 else: --> 313 return [ 314 self._download_single(url_or_filename, download_config=download_config) 315 for url_or_filename in url_or_filenames .../huggingface/datasets/src/datasets/download/download_manager.py in <listcomp>(.0) 312 else: 313 return [ --> 314 self._download_single(url_or_filename, download_config=download_config) 315 for url_or_filename in url_or_filenames 316 ] .../huggingface/datasets/src/datasets/download/download_manager.py in _download_single(self, url_or_filename, download_config) 321 # append the relative path to the base_path 322 url_or_filename = url_or_path_join(self._base_path, url_or_filename) --> 323 out = cached_path(url_or_filename, download_config=download_config) 324 out = tracked_str(out) 325 out.set_origin(url_or_filename) .../huggingface/datasets/src/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 220 elif is_local_path(url_or_filename): 221 # File, but it doesn't exist. 
--> 222 raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist") 223 else: 224 # Something unknown FileNotFoundError: Local file .../huggingface/datasets/{'frr': 'hf:/datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet'} doesn't exist ``` Related to: - #6850
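A simplified sketch of the failure mode described above (illustrative only; the function names and the URL are assumptions, not the actual `map_nested` code): a batched mapper that treats dict values as leaves hands the *inner dict* straight to the download function, whose string representation is then interpreted as a nonexistent local path.

```python
# Simplified illustration of the bug (not the real map_nested implementation).
def download_batched(urls):
    return [f"downloaded:{u}" for u in urls]  # expects plain URL strings

def map_batched_one_level(fn, data):
    # BUG: values are batched as-is; nested dicts are not recursed into
    return dict(zip(data.keys(), fn(list(data.values()))))

out = map_batched_one_level(download_batched, {"train": {"frr": "hf://example.parquet"}})
print(out)  # {'train': "downloaded:{'frr': 'hf://example.parquet'}"} <- dict got stringified
```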
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6869/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6869/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6868
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6868/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6868/comments
https://api.github.com/repos/huggingface/datasets/issues/6868/events
https://github.com/huggingface/datasets/issues/6868
2,279,385,159
I_kwDODunzps6H3KhH
6,868
datasets.BuilderConfig does not work.
{ "login": "jdm4pku", "id": 148830652, "node_id": "U_kgDOCN75vA", "avatar_url": "https://avatars.githubusercontent.com/u/148830652?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jdm4pku", "html_url": "https://github.com/jdm4pku", "followers_url": "https://api.github.com/users/jdm4pku/followers", "following_url": "https://api.github.com/users/jdm4pku/following{/other_user}", "gists_url": "https://api.github.com/users/jdm4pku/gists{/gist_id}", "starred_url": "https://api.github.com/users/jdm4pku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jdm4pku/subscriptions", "organizations_url": "https://api.github.com/users/jdm4pku/orgs", "repos_url": "https://api.github.com/users/jdm4pku/repos", "events_url": "https://api.github.com/users/jdm4pku/events{/privacy}", "received_events_url": "https://api.github.com/users/jdm4pku/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I guess the issue is caused by the customization of BuilderConfig that you use from the repo [https://github.com/BeyonderXX/InstructUIE](https://github.com/BeyonderXX/InstructUIE/blob/master/src/uie_dataset.py). You should report to them.\r\n\r\nI see you already opened an issue in their repo:\r\n- https://github.com/BeyonderXX/InstructUIE/issues/40" ]
"2024-05-05T08:08:55"
"2024-05-05T12:15:02"
"2024-05-05T12:15:01"
NONE
null
null
null
### Describe the bug I customized a BuilderConfig and a GeneratorBasedBuilder. Here is the code for the BuilderConfig ``` class UIEConfig(datasets.BuilderConfig): def __init__( self, *args, data_dir=None, instruction_file=None, instruction_strategy=None, task_config_dir=None, num_examples=None, max_num_instances_per_task=None, max_num_instances_per_eval_task=None, over_sampling=None, **kwargs ): super().__init__(*args, **kwargs) self.data_dir = data_dir self.num_examples = num_examples self.over_sampling = over_sampling self.instructions = self._parse_instruction(instruction_file) self.task_configs = self._parse_task_config(task_config_dir) self.instruction_strategy = instruction_strategy self.max_num_instances_per_task = max_num_instances_per_task self.max_num_instances_per_eval_task = max_num_instances_per_eval_task ``` And here is the code for the GeneratorBasedBuilder. ``` class UIEInstructions(datasets.GeneratorBasedBuilder): VERSION = datasets.Version("2.0.0") BUILDER_CONFIG_CLASS = UIEConfig BUILDER_CONFIGS = [ UIEConfig(name="default", description="Default config for NaturalInstructions") ] DEFAULT_CONFIG_NAME = "default" ``` Here is the `load_dataset` call ``` raw_datasets = load_dataset( os.path.join(CURRENT_DIR, "uie_dataset.py"), data_dir=data_args.data_dir, task_config_dir=data_args.task_config_dir, instruction_file=data_args.instruction_file, instruction_strategy=data_args.instruction_strategy, cache_dir=data_cache_dir, # for debug, change dataset size, otherwise open it max_num_instances_per_task=data_args.max_num_instances_per_task, max_num_instances_per_eval_task=data_args.max_num_instances_per_eval_task, num_examples=data_args.num_examples, over_sampling=data_args.over_sampling ) ``` Finally, I got this error. ``` BuilderConfig UIEConfig(name='default', version=0.0.0, data_dir=None, data_files=None, description='Default config for NaturalInstructions') doesn't have a 'task_config_dir' key. ``` I debugged the code, and it seems the parameters I added do not take effect. ### Steps to reproduce the bug https://github.com/BeyonderXX/InstructUIE/blob/master/src/uie_dataset.py ### Expected behavior ``` BuilderConfig UIEConfig(name='default', version=0.0.0, data_dir=None, data_files=None, description='Default config for NaturalInstructions') doesn't have a 'task_config_dir' key. ``` ### Environment info torch 2.3.0+cu118 transformers 4.40.1 python 3.8
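For context, a minimal sketch of the pattern `load_dataset` expects (illustrative only; `MyConfig` and `task_config_dir` are placeholder names): unknown keyword arguments passed to `load_dataset` are forwarded to the builder's `BUILDER_CONFIG_CLASS` constructor, so that class must accept them in its `__init__`.

```python
# Minimal sketch: extra load_dataset(...) kwargs are forwarded to the
# BUILDER_CONFIG_CLASS constructor, so a custom config must accept them.
import datasets

class MyConfig(datasets.BuilderConfig):
    def __init__(self, task_config_dir=None, **kwargs):
        super().__init__(**kwargs)          # name, version, data_dir, data_files, ...
        self.task_config_dir = task_config_dir  # custom field stored on the config
```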
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6868/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6867
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6867/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6867/comments
https://api.github.com/repos/huggingface/datasets/issues/6867/events
https://github.com/huggingface/datasets/issues/6867
2,279,059,787
I_kwDODunzps6H17FL
6,867
Improve performance of JSON loader
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks! Feel free to ping me for examples. May not respond immediately because we're all busy but would like to help.", "Hi @natolambert, could you please give some examples of JSON files to benchmark?\r\n\r\nPlease note that this JSON file (https://huggingface.co/datasets/allenai/reward-bench-results/blob/main/eval-set-scores/Ray2333/reward-model-Mistral-7B-instruct-Unified-Feedback.json) is not in \"records\" orient; instead it has the following structure:\r\n```json\r\n{\r\n \"chat_template\": \"tulu\",\r\n \"id\": [30, 34, 35,...],\r\n \"model\": \"Ray2333/reward-model-Mistral-7B-instruct-Unified-Feedback\",\r\n \"model_type\": \"Seq. Classifier\",\r\n \"results\": [1, 1, 1, ...],\r\n \"scores_chosen\": [4.421875, 1.8916015625, 3.8515625,...],\r\n \"scores_rejected\": [-2.416015625, -1.47265625, -0.9912109375,...],\r\n \"subset\": [\"alpacaeval-easy\", \"alpacaeval-easy\", \"alpacaeval-easy\",...]\r\n \"text_chosen\": [\"<s>[INST] How do I detail a...\",...],\r\n \"text_rejected\": [\"<s>[INST] How do I detail a...\",...]\r\n}\r\n```\r\n\r\nNote that \"records\" orient should be a list (not a dict) with each row as one item of the list:\r\n```json\r\n[\r\n {\"chat_template\": \"tulu\", \"id\": 30,... },\r\n {\"chat_template\": \"tulu\", \"id\": 34,... },\r\n ...\r\n]\r\n```", "We use a mix (which is a mess), here's an example with the records orient\r\nhttps://huggingface.co/datasets/allenai/reward-bench-results/blob/main/best-of-n/alpaca_eval/tulu-13b/OpenAssistant/oasst-rm-2.1-pythia-1.4b-epoch-2.5.json\r\n\r\nThere are more in that folder, ~40mb maybe?", "@albertvillanova here's a snippet so you don't need to click\r\n```\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 0\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.076171875\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 1\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.87890625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 2\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.287109375\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 3\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 1.6337890625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 4\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 5.27734375\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 5\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.0625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 6\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 2.29296875\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 7\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 6.77734375\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 8\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.853515625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 9\r\n 
],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 4.86328125\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 10\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 2.890625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 11\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 4.70703125\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 12\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 4.45703125\r\n}\r\n```", "Thanks again for your feedback, @natolambert.\r\n\r\nHowever, strictly speaking, the last file is not in JSON format but in a kind of JSON Lines-like format (though not proper JSON Lines either, because there are multiple newline characters within each object). Not even pandas can read that file format.\r\n\r\nAnyway, for JSON Lines files, I would expect `datasets` and `pandas` to have the same performance, as both use `pyarrow` under the hood...\r\n\r\nA proper JSON file in records orient should be a list (a JSON array): the first character should be `[`.\r\n\r\nAnyway, I am generating a JSON file from your JSON Lines file to test performance." ]
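For reference, the two layouts debated in this thread can be shown with toy data (the rows below are made up); both load with the packaged "json" builder, which reads JSON Lines natively and falls back to parsing whole-file JSON for records-orient arrays:

```python
# Toy illustration (made-up data) of the two layouts discussed above.
from datasets import load_dataset

# Records orient: a single top-level JSON array of row objects.
with open("records.json", "w") as f:
    f.write('[{"id": 1, "score": 0.5}, {"id": 2, "score": 0.7}]')

# JSON Lines: one complete JSON object per line, no enclosing array.
with open("lines.jsonl", "w") as f:
    f.write('{"id": 1, "score": 0.5}\n{"id": 2, "score": 0.7}\n')

ds_records = load_dataset("json", data_files="records.json", split="train")
ds_lines = load_dataset("json", data_files="lines.jsonl", split="train")
assert ds_records.to_dict() == ds_lines.to_dict()  # same rows either way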
"2024-05-04T15:04:16"
"2024-05-17T16:22:28"
"2024-05-17T16:22:28"
MEMBER
null
null
null
As reported by @natolambert, loading regular JSON files with `datasets` shows poor performance. The cause is that we use the `json` Python standard library instead of faster third-party libraries. See my old comment: https://github.com/huggingface/datasets/pull/2638#pullrequestreview-706983714 > There are benchmarks that compare different JSON packages, with the Standard Library one among the worst performing: > - https://github.com/ultrajson/ultrajson#benchmarks > - https://github.com/ijl/orjson#performance I remember having a discussion about this and it was decided that it was better not to include an additional dependency on a 3rd-party library. However: - We already depend on `pandas` and `pandas` depends on `ujson`: so we have an indirect dependency on `ujson` - Even if the above were not the case, we could always include `ujson` as an optional extra dependency, and check at runtime whether it is installed to decide which library to use, either `json` or `ujson`
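The optional-dependency idea sketched above could look roughly like this (`orjson` is used purely for illustration; neither it nor a direct `ujson` dependency is currently declared by `datasets`):

```python
# Sketch of the optional extra-dependency pattern: prefer a faster JSON
# library when it is installed, otherwise fall back to the standard library.
try:
    import orjson as _fast_json

    def json_loads(s):
        # orjson.loads accepts str or bytes and returns plain Python objects
        return _fast_json.loads(s)
except ImportError:
    import json as _std_json

    def json_loads(s):
        return _std_json.loads(s)
```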
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6867/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6867/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6866/comments
https://api.github.com/repos/huggingface/datasets/issues/6866/events
https://github.com/huggingface/datasets/issues/6866
2,278,736,221
I_kwDODunzps6H0sFd
6,866
DataFilesNotFoundError for datasets in the open-llm-leaderboard
{ "login": "jerome-white", "id": 6140840, "node_id": "MDQ6VXNlcjYxNDA4NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/6140840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jerome-white", "html_url": "https://github.com/jerome-white", "followers_url": "https://api.github.com/users/jerome-white/followers", "following_url": "https://api.github.com/users/jerome-white/following{/other_user}", "gists_url": "https://api.github.com/users/jerome-white/gists{/gist_id}", "starred_url": "https://api.github.com/users/jerome-white/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerome-white/subscriptions", "organizations_url": "https://api.github.com/users/jerome-white/orgs", "repos_url": "https://api.github.com/users/jerome-white/repos", "events_url": "https://api.github.com/users/jerome-white/events{/privacy}", "received_events_url": "https://api.github.com/users/jerome-white/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Potentially related:\r\n* #6864\r\n* #6850\r\n* #6848\r\n* #6819", "Hi @jerome-white, thnaks for reporting.\r\n\r\nHowever, I cannot reproduce your issue:\r\n```python\r\n>>> from datasets import get_dataset_config_names\r\n\r\n>>> get_dataset_config_names(\"open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5\")\r\n['harness_arc_challenge_25',\r\n 'harness_gsm8k_5',\r\n 'harness_hellaswag_10',\r\n 'harness_hendrycksTest_5',\r\n 'harness_hendrycksTest_abstract_algebra_5',\r\n 'harness_hendrycksTest_anatomy_5',\r\n 'harness_hendrycksTest_astronomy_5',\r\n 'harness_hendrycksTest_business_ethics_5',\r\n 'harness_hendrycksTest_clinical_knowledge_5',\r\n 'harness_hendrycksTest_college_biology_5',\r\n 'harness_hendrycksTest_college_chemistry_5',\r\n 'harness_hendrycksTest_college_computer_science_5',\r\n 'harness_hendrycksTest_college_mathematics_5',\r\n 'harness_hendrycksTest_college_medicine_5',\r\n 'harness_hendrycksTest_college_physics_5',\r\n 'harness_hendrycksTest_computer_security_5',\r\n 'harness_hendrycksTest_conceptual_physics_5',\r\n 'harness_hendrycksTest_econometrics_5',\r\n 'harness_hendrycksTest_electrical_engineering_5',\r\n 'harness_hendrycksTest_elementary_mathematics_5',\r\n 'harness_hendrycksTest_formal_logic_5',\r\n 'harness_hendrycksTest_global_facts_5',\r\n 'harness_hendrycksTest_high_school_biology_5',\r\n 'harness_hendrycksTest_high_school_chemistry_5',\r\n 'harness_hendrycksTest_high_school_computer_science_5',\r\n 'harness_hendrycksTest_high_school_european_history_5',\r\n 'harness_hendrycksTest_high_school_geography_5',\r\n 'harness_hendrycksTest_high_school_government_and_politics_5',\r\n 'harness_hendrycksTest_high_school_macroeconomics_5',\r\n 'harness_hendrycksTest_high_school_mathematics_5',\r\n 'harness_hendrycksTest_high_school_microeconomics_5',\r\n 'harness_hendrycksTest_high_school_physics_5',\r\n 'harness_hendrycksTest_high_school_psychology_5',\r\n 'harness_hendrycksTest_high_school_statistics_5',\r\n 'harness_hendrycksTest_high_school_us_history_5',\r\n 'harness_hendrycksTest_high_school_world_history_5',\r\n 'harness_hendrycksTest_human_aging_5',\r\n 'harness_hendrycksTest_human_sexuality_5',\r\n 'harness_hendrycksTest_international_law_5',\r\n 'harness_hendrycksTest_jurisprudence_5',\r\n 'harness_hendrycksTest_logical_fallacies_5',\r\n 'harness_hendrycksTest_machine_learning_5',\r\n 'harness_hendrycksTest_management_5',\r\n 'harness_hendrycksTest_marketing_5',\r\n 'harness_hendrycksTest_medical_genetics_5',\r\n 'harness_hendrycksTest_miscellaneous_5',\r\n 'harness_hendrycksTest_moral_disputes_5',\r\n 'harness_hendrycksTest_moral_scenarios_5',\r\n 'harness_hendrycksTest_nutrition_5',\r\n 'harness_hendrycksTest_philosophy_5',\r\n 'harness_hendrycksTest_prehistory_5',\r\n 'harness_hendrycksTest_professional_accounting_5',\r\n 'harness_hendrycksTest_professional_law_5',\r\n 'harness_hendrycksTest_professional_medicine_5',\r\n 'harness_hendrycksTest_professional_psychology_5',\r\n 'harness_hendrycksTest_public_relations_5',\r\n 'harness_hendrycksTest_security_studies_5',\r\n 'harness_hendrycksTest_sociology_5',\r\n 'harness_hendrycksTest_us_foreign_policy_5',\r\n 'harness_hendrycksTest_virology_5',\r\n 'harness_hendrycksTest_world_religions_5',\r\n 'harness_truthfulqa_mc_0',\r\n 'harness_winogrande_5',\r\n 'results']\r\n```\r\n\r\nMaybe it was just a temporary issue...", "> Maybe it was just a temporary issue...\r\n\r\nPerhaps. I've changed my workflow to use the hub's `HfFileSystem`, so for now this is no longer a blocker for me. 
I'll reopen the issue if that changes." ]
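The `HfFileSystem` workaround mentioned in the last comment looks roughly like this (a sketch; the glob pattern is a guess at the repo layout and may need adjusting):

```python
# Sketch of the HfFileSystem workaround: browse the dataset repo's files
# directly instead of relying on load_dataset's data-file inference.
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
repo = "datasets/open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5"
parquet_files = fs.glob(f"{repo}/**/*.parquet")  # pattern is an assumption
print(parquet_files[:5])
```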
"2024-05-04T04:59:00"
"2024-05-14T08:09:56"
"2024-05-14T08:09:56"
NONE
null
null
null
### Describe the bug When trying to get config names or load any dataset within the open-llm-leaderboard ecosystem (`open-llm-leaderboard/details_`) I receive the DataFilesNotFoundError. For the last month or so I've been loading datasets from the leaderboard almost every day; yesterday was the first time I started seeing this. ### Steps to reproduce the bug This snippet has three cells: 1. Loads the modules 2. Tries to get config names 3. Tries to load the dataset I've chosen "davidkim205"'s Rhea-72b-v0.5 model because it is one of the best performers on the leaderboard and should likely have no dataset issues: ```python In [1]: from datasets import load_dataset, get_dataset_config_names In [2]: get_dataset_config_names("open-llm-leaderboard/details_davidkim205__Rhea ...: -72b-v0.5") --------------------------------------------------------------------------- DataFilesNotFoundError Traceback (most recent call last) Cell In[2], line 1 ----> 1 get_dataset_config_names("open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5") File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/inspect.py:347, in get_dataset_config_names(path, revision, download_config, download_mode, dynamic_modules_path, data_files, **download_kwargs) 291 def get_dataset_config_names( 292 path: str, 293 revision: Optional[Union[str, Version]] = None, (...) 298 **download_kwargs, 299 ): 300 """Get the list of available config names for a particular dataset. 301 302 Args: (...) 345 ``` 346 """ --> 347 dataset_module = dataset_module_factory( 348 path, 349 revision=revision, 350 download_config=download_config, 351 download_mode=download_mode, 352 dynamic_modules_path=dynamic_modules_path, 353 data_files=data_files, 354 **download_kwargs, 355 ) 356 builder_cls = get_dataset_builder_class(dataset_module, dataset_name=os.path.basename(path)) 357 return list(builder_cls.builder_configs.keys()) or [ 358 dataset_module.builder_kwargs.get("config_name", builder_cls.DEFAULT_CONFIG_NAME or "default") 359 ] File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1821, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs) 1812 return LocalDatasetModuleFactoryWithScript( 1813 combined_path, 1814 download_mode=download_mode, 1815 dynamic_modules_path=dynamic_modules_path, 1816 trust_remote_code=trust_remote_code, 1817 ).get_module() 1818 elif os.path.isdir(path): 1819 return LocalDatasetModuleFactoryWithoutScript( 1820 path, data_dir=data_dir, data_files=data_files, download_mode=download_mode -> 1821 ).get_module() 1822 # Try remotely 1823 elif is_relative_path(path) and path.count("/") <= 1: File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1039, in LocalDatasetModuleFactoryWithoutScript.get_module(self) 1033 patterns = get_data_patterns(base_path) 1034 data_files = DataFilesDict.from_patterns( 1035 patterns, 1036 base_path=base_path, 1037 allowed_extensions=ALL_ALLOWED_EXTENSIONS, 1038 ) -> 1039 module_name, default_builder_kwargs = infer_module_for_data_files( 1040 data_files=data_files, 1041 path=self.path, 1042 ) 1043 data_files = data_files.filter_extensions(_MODULE_TO_EXTENSIONS[module_name]) 1044 # Collect metadata files if the module supports them File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:597, in infer_module_for_data_files(data_files, path, download_config) 595 raise ValueError(f"Couldn't 
infer the same data file format for all splits. Got {split_modules}") 596 if not module_name: --> 597 raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else "")) 598 return module_name, default_builder_kwargs DataFilesNotFoundError: No (supported) data files found in open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5 In [3]: data = load_dataset("open-llm-leaderboard/details_davidkim205__Rhea-72b- ...: v0.5", "harness_winogrande_5") --------------------------------------------------------------------------- DataFilesNotFoundError Traceback (most recent call last) Cell In[3], line 1 ----> 1 data = load_dataset("open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5", "harness_winogrande_5") File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:2587, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2582 verification_mode = VerificationMode( 2583 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS 2584 ) 2586 # Create a dataset builder -> 2587 builder_instance = load_dataset_builder( 2588 path=path, 2589 name=name, 2590 data_dir=data_dir, 2591 data_files=data_files, 2592 cache_dir=cache_dir, 2593 features=features, 2594 download_config=download_config, 2595 download_mode=download_mode, 2596 revision=revision, 2597 token=token, 2598 storage_options=storage_options, 2599 trust_remote_code=trust_remote_code, 2600 _require_default_config_name=name is None, 2601 **config_kwargs, 2602 ) 2604 # Return iterable dataset in case of streaming 2605 if streaming: File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:2259, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs) 2257 download_config = download_config.copy() if download_config else DownloadConfig() 2258 download_config.storage_options.update(storage_options) -> 2259 dataset_module = dataset_module_factory( 2260 path, 2261 revision=revision, 2262 download_config=download_config, 2263 download_mode=download_mode, 2264 data_dir=data_dir, 2265 data_files=data_files, 2266 cache_dir=cache_dir, 2267 trust_remote_code=trust_remote_code, 2268 _require_default_config_name=_require_default_config_name, 2269 _require_custom_configs=bool(config_kwargs), 2270 ) 2271 # Get dataset builder class from the processing script 2272 builder_kwargs = dataset_module.builder_kwargs File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1821, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs) 1812 return LocalDatasetModuleFactoryWithScript( 1813 combined_path, 1814 download_mode=download_mode, 1815 dynamic_modules_path=dynamic_modules_path, 1816 trust_remote_code=trust_remote_code, 1817 ).get_module() 1818 elif os.path.isdir(path): 1819 return LocalDatasetModuleFactoryWithoutScript( 1820 path, data_dir=data_dir, data_files=data_files, download_mode=download_mode -> 1821 ).get_module() 1822 # Try remotely 1823 elif is_relative_path(path) and 
path.count("/") <= 1: File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1039, in LocalDatasetModuleFactoryWithoutScript.get_module(self) 1033 patterns = get_data_patterns(base_path) 1034 data_files = DataFilesDict.from_patterns( 1035 patterns, 1036 base_path=base_path, 1037 allowed_extensions=ALL_ALLOWED_EXTENSIONS, 1038 ) -> 1039 module_name, default_builder_kwargs = infer_module_for_data_files( 1040 data_files=data_files, 1041 path=self.path, 1042 ) 1043 data_files = data_files.filter_extensions(_MODULE_TO_EXTENSIONS[module_name]) 1044 # Collect metadata files if the module supports them File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:597, in infer_module_for_data_files(data_files, path, download_config) 595 raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") 596 if not module_name: --> 597 raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else "")) 598 return module_name, default_builder_kwargs DataFilesNotFoundError: No (supported) data files found in open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5 ``` ### Expected behavior No exceptions from `get_dataset_config_names` or `load_dataset` ### Environment info - `datasets` version: 2.19.0 - Platform: Linux-6.5.0-1018-aws-aarch64-with-glibc2.35 - Python version: 3.11.8 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6866/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6865
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6865/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6865/comments
https://api.github.com/repos/huggingface/datasets/issues/6865/events
https://github.com/huggingface/datasets/issues/6865
2,277,304,832
I_kwDODunzps6HvOoA
6,865
Example on semantic segmentation contains a bug
{ "login": "ducha-aiki", "id": 4803565, "node_id": "MDQ6VXNlcjQ4MDM1NjU=", "avatar_url": "https://avatars.githubusercontent.com/u/4803565?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ducha-aiki", "html_url": "https://github.com/ducha-aiki", "followers_url": "https://api.github.com/users/ducha-aiki/followers", "following_url": "https://api.github.com/users/ducha-aiki/following{/other_user}", "gists_url": "https://api.github.com/users/ducha-aiki/gists{/gist_id}", "starred_url": "https://api.github.com/users/ducha-aiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ducha-aiki/subscriptions", "organizations_url": "https://api.github.com/users/ducha-aiki/orgs", "repos_url": "https://api.github.com/users/ducha-aiki/repos", "events_url": "https://api.github.com/users/ducha-aiki/events{/privacy}", "received_events_url": "https://api.github.com/users/ducha-aiki/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-05-03T09:40:12"
"2024-05-03T09:40:12"
null
NONE
null
null
null
### Describe the bug https://huggingface.co/docs/datasets/en/semantic_segmentation shows a wrong example with torchvision transforms. Specifically, as one can see in the screenshot below, the object boundaries have weird colors. <img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/59aa0e2c-2e3e-415b-9d42-2314044c5aee"> The original example with `albumentations` is correct <img width="705" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/27dbd725-cea5-4e48-ba59-7050c3ce17b3"> That is because `torchvision.transforms.Resize` bilinearly interpolates everything, which is wrong for segmentation labels - class indices cannot be blended. Overall, `torchvision.transforms` is designed for classification only and cannot be applied to images and masks together, unless you write two separate branches of augmentations. The correct way would be to use the `v2` version of the transforms and convert the segmentation labels to a https://pytorch.org/vision/main/generated/torchvision.tv_tensors.Mask.html#torchvision.tv_tensors.Mask object ### Steps to reproduce the bug Go to the website. <img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/ea1276d0-d69a-48cf-b9c2-cd61217815ef"> https://huggingface.co/docs/datasets/en/semantic_segmentation ### Expected behavior Results similar to `albumentations`. Or remove the torchvision part altogether. Or use `kornia` instead. ### Environment info Irrelevant
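The `v2` fix the report suggests could look like this sketch (assuming torchvision >= 0.16, where the `tv_tensors` namespace exists): wrapping the label map in `tv_tensors.Mask` makes geometric `v2` transforms resize it with nearest-neighbor interpolation while the image keeps bilinear.

```python
# Sketch of the suggested v2 fix (torchvision >= 0.16 assumed): v2 transforms
# dispatch on tv_tensor types, so a Mask is resized with nearest-neighbor
# interpolation and both inputs receive the same random flip.
import torch
from torchvision import tv_tensors
from torchvision.transforms import v2

image = torch.randint(0, 256, (3, 512, 512), dtype=torch.uint8)
mask = tv_tensors.Mask(torch.randint(0, 21, (512, 512), dtype=torch.uint8))

transform = v2.Compose([
    v2.Resize((256, 256)),
    v2.RandomHorizontalFlip(p=0.5),
])
image_t, mask_t = transform(image, mask)  # identical geometry for both inputs
```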
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6865/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6865/timeline
null
null
false