column: type (value statistics)

url: string (lengths 61-61)
repository_url: string (1 value)
labels_url: string (lengths 75-75)
comments_url: string (lengths 70-70)
events_url: string (lengths 68-68)
html_url: string (lengths 49-51)
id: int64 (range 1.08B-1.73B)
node_id: string (lengths 18-19)
number: int64 (range 3.45k-5.9k)
title: string (lengths 1-290)
user: dict
labels: list
state: string (2 values)
locked: bool (1 class)
assignee: dict
assignees: list
milestone: dict
comments: sequence
created_at: unknown
updated_at: unknown
closed_at: unknown
author_association: string (3 values)
active_lock_reason: null
draft: bool (2 classes)
pull_request: dict
body: string (lengths 2-36.2k)
reactions: dict
timeline_url: string (lengths 70-70)
performed_via_github_app: null
state_reason: string (3 values)
is_pull_request: bool (2 classes)
https://api.github.com/repos/huggingface/datasets/issues/5899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5899/comments
https://api.github.com/repos/huggingface/datasets/issues/5899/events
https://github.com/huggingface/datasets/pull/5899
1,726,279,011
PR_kwDODunzps5RXods
5,899
canonicalize data dir in config ID hash
{ "login": "kylrth", "id": 5044802, "node_id": "MDQ6VXNlcjUwNDQ4MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/5044802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kylrth", "html_url": "https://github.com/kylrth", "followers_url": "https://api.github.com/users/kylrth/followers", "following_url": "https://api.github.com/users/kylrth/following{/other_user}", "gists_url": "https://api.github.com/users/kylrth/gists{/gist_id}", "starred_url": "https://api.github.com/users/kylrth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kylrth/subscriptions", "organizations_url": "https://api.github.com/users/kylrth/orgs", "repos_url": "https://api.github.com/users/kylrth/repos", "events_url": "https://api.github.com/users/kylrth/events{/privacy}", "received_events_url": "https://api.github.com/users/kylrth/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2023-05-25T18:17:10"
"2023-05-25T18:17:10"
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5899", "html_url": "https://github.com/huggingface/datasets/pull/5899", "diff_url": "https://github.com/huggingface/datasets/pull/5899.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5899.patch", "merged_at": null }
Fixes #5871. The second commit is optional but improves readability.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5899/timeline
null
null
true
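The record above carries only the PR title and a one-line body, so the snippet below is just a hedged illustration of what canonicalizing a data dir before it enters the config ID hash could look like: resolving `~`, relative segments, and symlinks so equivalent spellings of the same directory hash identically. The helper and the hashing step are assumptions for illustration, not the PR's actual code.

```python
import hashlib
import os

def canonical_data_dir(data_dir: str) -> str:
    # Resolve "~", relative segments, and symlinks so that "./data",
    # "data/", and an absolute path to the same directory all match.
    return os.path.realpath(os.path.expanduser(data_dir))

# Equivalent paths now contribute identical bytes to the config ID suffix.
suffix = hashlib.sha256(canonical_data_dir("./data").encode()).hexdigest()[:16]
config_id = f"default-{suffix}"
print(config_id)
```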
https://api.github.com/repos/huggingface/datasets/issues/5898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5898/comments
https://api.github.com/repos/huggingface/datasets/issues/5898/events
https://github.com/huggingface/datasets/issues/5898
1,726,190,481
I_kwDODunzps5m45OR
5,898
Loading the Flores dataset for a specific language
{ "login": "106AbdulBasit", "id": 36159918, "node_id": "MDQ6VXNlcjM2MTU5OTE4", "avatar_url": "https://avatars.githubusercontent.com/u/36159918?v=4", "gravatar_id": "", "url": "https://api.github.com/users/106AbdulBasit", "html_url": "https://github.com/106AbdulBasit", "followers_url": "https://api.github.com/users/106AbdulBasit/followers", "following_url": "https://api.github.com/users/106AbdulBasit/following{/other_user}", "gists_url": "https://api.github.com/users/106AbdulBasit/gists{/gist_id}", "starred_url": "https://api.github.com/users/106AbdulBasit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/106AbdulBasit/subscriptions", "organizations_url": "https://api.github.com/users/106AbdulBasit/orgs", "repos_url": "https://api.github.com/users/106AbdulBasit/repos", "events_url": "https://api.github.com/users/106AbdulBasit/events{/privacy}", "received_events_url": "https://api.github.com/users/106AbdulBasit/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "got that the syntax is like this\r\n\r\ndataset = load_dataset(\"facebook/flores\", \"ace_Arab\")" ]
"2023-05-25T17:08:55"
"2023-05-25T17:21:38"
"2023-05-25T17:21:37"
NONE
null
null
null
### Describe the bug

I am trying to load the Flores dataset. The code given is:

```
from datasets import load_dataset
dataset = load_dataset("facebook/flores")
```

This gives a config-name error: "ValueError: Config name is missing". If I add a config, it gives another error: "HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'facebook/flores, 'ace_Arab''."

How can I load the data for a specific language? I couldn't find any tutorial; can anyone help me out?

### Steps to reproduce the bug

Step one, load the dataset:

```
from datasets import load_dataset
dataset = load_dataset("facebook/flores")
```

It gives the config error; once a config is given, it gives the "HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'facebook/flores, 'ace_Arab''." error.

### Expected behavior

The dataset should be loaded, but I am receiving an error instead.

### Environment info

Datasets, Python
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5898/timeline
null
completed
false
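The closing comment above gives the working syntax: the language code is passed as the configuration name rather than appended to the repo id. For reference:

```python
from datasets import load_dataset

# Pass the language code as the config name, not as part of the repo id.
dataset = load_dataset("facebook/flores", "ace_Arab")
```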
https://api.github.com/repos/huggingface/datasets/issues/5897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5897/comments
https://api.github.com/repos/huggingface/datasets/issues/5897/events
https://github.com/huggingface/datasets/pull/5897
1,726,135,494
PR_kwDODunzps5RXJaY
5,897
Fix `FixedSizeListArray` casting
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006213 / 0.011353 (-0.005140) | 0.004230 / 0.011008 (-0.006778) | 0.098014 / 0.038508 (0.059506) | 0.028659 / 0.023109 (0.005550) | 0.303272 / 0.275898 (0.027374) | 0.337186 / 0.323480 (0.013706) | 0.005126 / 0.007986 (-0.002860) | 0.003563 / 0.004328 (-0.000765) | 0.075295 / 0.004250 (0.071045) | 0.036836 / 0.037052 (-0.000216) | 0.309612 / 0.258489 (0.051123) | 0.346484 / 0.293841 (0.052643) | 0.025714 / 0.128546 (-0.102832) | 0.008562 / 0.075646 (-0.067085) | 0.323475 / 0.419271 (-0.095796) | 0.044072 / 0.043533 (0.000539) | 0.308261 / 0.255139 (0.053122) | 0.330903 / 0.283200 (0.047703) | 0.091805 / 0.141683 (-0.049878) | 1.517011 / 1.452155 (0.064856) | 1.570815 / 1.492716 (0.078099) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211265 / 0.018006 (0.193259) | 0.438860 / 0.000490 (0.438370) | 0.001127 / 0.000200 (0.000927) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023337 / 0.037411 (-0.014074) | 0.096243 / 0.014526 (0.081717) | 0.103529 / 0.176557 (-0.073028) | 0.161171 / 0.737135 (-0.575964) | 0.105904 / 0.296338 (-0.190435) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417042 / 0.215209 (0.201833) | 4.155067 / 2.077655 (2.077412) | 1.879657 / 1.504120 (0.375537) | 1.669341 / 1.541195 (0.128146) | 1.717623 / 1.468490 
(0.249133) | 0.556246 / 4.584777 (-4.028531) | 3.484535 / 3.745712 (-0.261177) | 1.728845 / 5.269862 (-3.541017) | 0.997477 / 4.565676 (-3.568199) | 0.068355 / 0.424275 (-0.355920) | 0.012445 / 0.007607 (0.004837) | 0.519023 / 0.226044 (0.292978) | 5.173506 / 2.268929 (2.904577) | 2.332435 / 55.444624 (-53.112190) | 1.986348 / 6.876477 (-4.890129) | 2.076885 / 2.142072 (-0.065187) | 0.656738 / 4.805227 (-4.148489) | 0.135308 / 6.500664 (-6.365356) | 0.065486 / 0.075469 (-0.009984) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.208874 / 1.841788 (-0.632914) | 13.994200 / 8.074308 (5.919892) | 14.160978 / 10.191392 (3.969586) | 0.146009 / 0.680424 (-0.534415) | 0.016573 / 0.534201 (-0.517628) | 0.356082 / 0.579283 (-0.223202) | 0.387766 / 0.434364 (-0.046598) | 0.419130 / 0.540337 (-0.121208) | 0.508634 / 1.386936 (-0.878302) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006238 / 0.011353 (-0.005115) | 0.004221 / 0.011008 (-0.006788) | 0.075155 / 0.038508 (0.036646) | 0.028491 / 0.023109 (0.005382) | 0.355606 / 0.275898 (0.079708) | 0.388986 / 0.323480 (0.065506) | 0.005941 / 0.007986 (-0.002044) | 0.003510 / 0.004328 (-0.000819) | 0.074905 / 0.004250 (0.070655) | 0.039111 / 0.037052 (0.002059) | 0.358492 / 0.258489 (0.100003) | 0.398763 / 0.293841 (0.104922) | 0.025535 / 0.128546 (-0.103012) | 0.008580 / 0.075646 (-0.067067) | 0.080461 / 0.419271 (-0.338811) | 0.041381 / 0.043533 (-0.002152) | 0.355498 / 0.255139 (0.100359) | 0.379163 / 0.283200 (0.095963) | 0.096450 / 0.141683 (-0.045233) | 1.503248 / 1.452155 (0.051093) | 1.595616 / 1.492716 (0.102900) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238065 / 0.018006 (0.220058) | 0.422800 / 0.000490 (0.422311) | 0.002274 / 0.000200 (0.002074) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025746 / 0.037411 (-0.011665) | 0.103319 / 0.014526 (0.088793) | 0.112155 / 0.176557 (-0.064401) | 0.163034 / 0.737135 (-0.574101) | 0.113377 / 0.296338 (-0.182962) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440522 / 0.215209 (0.225313) | 4.398123 / 2.077655 (2.320468) | 2.143538 / 1.504120 (0.639418) | 1.946084 / 1.541195 (0.404890) | 1.996556 / 1.468490 (0.528066) | 0.550108 / 4.584777 (-4.034669) | 3.455774 / 3.745712 (-0.289938) | 2.862474 / 5.269862 (-2.407387) | 1.213446 / 4.565676 (-3.352230) | 0.067987 / 0.424275 (-0.356288) | 0.012413 / 0.007607 (0.004806) | 0.543990 / 0.226044 (0.317945) | 5.454807 / 2.268929 (3.185879) | 2.669195 / 55.444624 (-52.775429) | 2.332948 / 6.876477 (-4.543528) | 2.383870 / 2.142072 (0.241797) | 0.652017 / 4.805227 (-4.153210) | 0.135508 / 6.500664 (-6.365156) | 0.068238 / 0.075469 (-0.007231) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.322669 / 1.841788 (-0.519118) | 14.368136 / 8.074308 (6.293828) | 14.167431 / 10.191392 (3.976039) | 0.159371 / 0.680424 (-0.521052) | 0.016638 / 0.534201 (-0.517563) | 0.357106 / 0.579283 (-0.222177) | 0.392491 / 0.434364 (-0.041873) | 0.419458 / 0.540337 (-0.120880) | 0.504662 / 1.386936 (-0.882274) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bf764819ba6754cb7edf15899db517be0548676f \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006296 / 0.011353 (-0.005057) | 0.004185 / 0.011008 (-0.006823) | 0.096170 / 0.038508 (0.057662) | 0.029212 / 0.023109 (0.006102) | 0.315356 / 0.275898 (0.039458) | 0.335214 / 0.323480 (0.011734) | 0.005108 / 0.007986 (-0.002877) | 0.003634 / 0.004328 (-0.000694) | 0.074186 / 0.004250 (0.069936) | 0.038716 / 0.037052 (0.001663) | 0.311041 / 0.258489 (0.052551) | 0.341202 / 0.293841 (0.047361) | 0.025584 / 0.128546 (-0.102962) | 0.008499 / 0.075646 (-0.067148) | 0.318660 / 0.419271 (-0.100611) | 0.043745 / 0.043533 (0.000212) | 0.314824 / 0.255139 (0.059685) | 0.328117 / 0.283200 (0.044917) | 0.093425 / 0.141683 (-0.048258) | 1.478732 / 1.452155 (0.026578) | 1.531743 / 1.492716 (0.039027) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203484 / 0.018006 (0.185478) | 0.416131 / 0.000490 (0.415641) | 0.007352 / 0.000200 (0.007152) | 0.000211 / 0.000054 (0.000156) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022908 / 0.037411 (-0.014503) | 0.098641 / 0.014526 (0.084115) | 0.103426 / 0.176557 (-0.073131) | 0.161658 / 0.737135 (-0.575477) | 0.106506 / 0.296338 (-0.189832) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430781 / 0.215209 (0.215572) | 4.315677 / 2.077655 (2.238022) | 2.022302 / 1.504120 (0.518182) | 1.832043 / 1.541195 (0.290849) | 1.789302 / 1.468490 (0.320812) | 0.560484 / 4.584777 (-4.024293) | 3.448204 / 3.745712 (-0.297508) | 1.725016 / 5.269862 (-3.544846) | 1.002649 / 4.565676 (-3.563027) | 0.068480 / 0.424275 (-0.355795) | 0.012617 / 0.007607 (0.005010) | 0.532291 / 0.226044 (0.306246) | 5.319352 / 2.268929 (3.050423) | 2.520730 / 55.444624 (-52.923894) | 2.213881 / 6.876477 (-4.662596) | 2.352477 / 2.142072 (0.210404) | 0.662516 / 4.805227 (-4.142711) | 0.136481 / 6.500664 (-6.364183) | 0.066597 / 0.075469 (-0.008872) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224537 / 1.841788 (-0.617251) | 13.849920 / 8.074308 (5.775612) | 14.026358 / 10.191392 (3.834966) | 0.131018 / 0.680424 (-0.549405) | 0.016756 / 0.534201 (-0.517445) | 0.358091 / 0.579283 (-0.221192) | 0.397709 / 0.434364 (-0.036655) | 0.450024 / 0.540337 (-0.090314) | 0.542609 / 1.386936 (-0.844327) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006179 / 0.011353 (-0.005174) | 0.004145 / 0.011008 (-0.006863) | 0.077482 / 0.038508 (0.038974) | 0.028005 / 0.023109 (0.004896) | 0.400010 / 0.275898 (0.124112) | 0.408206 / 0.323480 (0.084726) | 0.005049 / 0.007986 (-0.002937) | 0.003608 / 0.004328 (-0.000721) | 0.076841 / 0.004250 (0.072590) | 0.036714 / 0.037052 (-0.000338) | 0.406020 / 0.258489 (0.147531) | 0.412392 / 0.293841 (0.118551) | 0.025626 / 0.128546 (-0.102920) | 0.008560 / 0.075646 (-0.067087) | 0.084088 / 0.419271 (-0.335183) | 0.039707 / 0.043533 (-0.003826) | 0.396909 / 0.255139 (0.141770) | 0.403623 / 0.283200 (0.120424) | 0.095137 / 0.141683 (-0.046546) | 1.515670 / 1.452155 (0.063515) | 1.568379 / 1.492716 (0.075662) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.181802 / 0.018006 (0.163795) | 0.408778 / 0.000490 (0.408289) | 0.000393 / 0.000200 (0.000193) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025940 / 0.037411 (-0.011471) | 0.099992 / 0.014526 (0.085466) | 0.106280 / 0.176557 (-0.070276) | 0.161729 / 0.737135 (-0.575406) | 0.108625 / 0.296338 (-0.187713) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.459802 / 0.215209 (0.244593) | 4.603002 / 2.077655 (2.525347) | 2.406851 / 1.504120 (0.902732) | 2.265422 / 1.541195 (0.724227) | 2.306305 / 1.468490 (0.837815) | 0.553903 / 4.584777 
(-4.030874) | 3.482052 / 3.745712 (-0.263660) | 2.969855 / 5.269862 (-2.300007) | 1.309285 / 4.565676 (-3.256391) | 0.068130 / 0.424275 (-0.356145) | 0.012189 / 0.007607 (0.004582) | 0.571299 / 0.226044 (0.345254) | 5.711420 / 2.268929 (3.442492) | 2.716748 / 55.444624 (-52.727876) | 2.369869 / 6.876477 (-4.506608) | 2.544240 / 2.142072 (0.402167) | 0.659955 / 4.805227 (-4.145272) | 0.136684 / 6.500664 (-6.363980) | 0.068962 / 0.075469 (-0.006507) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.297659 / 1.841788 (-0.544129) | 14.012758 / 8.074308 (5.938449) | 14.324644 / 10.191392 (4.133252) | 0.144894 / 0.680424 (-0.535530) | 0.016751 / 0.534201 (-0.517450) | 0.361547 / 0.579283 (-0.217736) | 0.396595 / 0.434364 (-0.037769) | 0.422375 / 0.540337 (-0.117962) | 0.508209 / 1.386936 (-0.878727) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ba5f81357b53099b1bedfbb277211dba3952257b \"CML watermark\")\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5897). All of your documentation changes will be reflected on that endpoint." ]
"2023-05-25T16:26:33"
"2023-05-25T18:42:18"
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5897", "html_url": "https://github.com/huggingface/datasets/pull/5897", "diff_url": "https://github.com/huggingface/datasets/pull/5897.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5897.patch", "merged_at": null }
Fix cast on sliced `FixedSizeListArray`s. Fix #5866
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5897/timeline
null
null
true
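The PR body only says the cast was wrong for sliced `FixedSizeListArray`s. Below is a small pyarrow sketch of the pitfall such a fix presumably has to handle, assuming (as in the linked issue) that `.values` on a sliced array still exposes the full child buffer, so offset-unaware code reads the wrong window:

```python
import pyarrow as pa

# Four fixed-size lists of length 2, then drop the first list by slicing.
arr = pa.FixedSizeListArray.from_arrays(pa.array([0, 1, 2, 3, 4, 5, 6, 7]), 2)
sliced = arr.slice(1)  # [[2, 3], [4, 5], [6, 7]]

# The child values are not windowed by the slice, so code that flattens
# via `.values` must shift by offset * list_size to stay correct.
size = sliced.type.list_size
start = sliced.offset * size
flat = sliced.values[start : start + len(sliced) * size]
print(flat.to_pylist())  # [2, 3, 4, 5, 6, 7]
```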
https://api.github.com/repos/huggingface/datasets/issues/5896
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5896/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5896/comments
https://api.github.com/repos/huggingface/datasets/issues/5896/events
https://github.com/huggingface/datasets/issues/5896
1,726,022,500
I_kwDODunzps5m4QNk
5,896
HuggingFace does not cache downloaded files aggressively/early enough
{ "login": "geajack", "id": 2124157, "node_id": "MDQ6VXNlcjIxMjQxNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/2124157?v=4", "gravatar_id": "", "url": "https://api.github.com/users/geajack", "html_url": "https://github.com/geajack", "followers_url": "https://api.github.com/users/geajack/followers", "following_url": "https://api.github.com/users/geajack/following{/other_user}", "gists_url": "https://api.github.com/users/geajack/gists{/gist_id}", "starred_url": "https://api.github.com/users/geajack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/geajack/subscriptions", "organizations_url": "https://api.github.com/users/geajack/orgs", "repos_url": "https://api.github.com/users/geajack/repos", "events_url": "https://api.github.com/users/geajack/events{/privacy}", "received_events_url": "https://api.github.com/users/geajack/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2023-05-25T15:14:36"
"2023-05-25T15:14:36"
null
NONE
null
null
null
### Describe the bug

I wrote the following script:

```
import datasets
dataset = datasets.load.load_dataset("wikipedia", "20220301.en", split="train[:10000]")
```

I ran it and spent 90 minutes downloading a 20GB file. Then I saw:

```
Downloading: 100%|████████████████████████████████████████| 20.3G/20.3G [1:30:29<00:00, 3.73MB/s]
Traceback (most recent call last):
  File "/home/jack/Code/Projects/Transformers/Codebase/main.py", line 5, in <module>
    dataset = datasets.load.load_dataset("wikipedia", "20220301.en", split="train[:10000]")
  File "/home/jack/.local/lib/python3.10/site-packages/datasets/load.py", line 1782, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/jack/.local/lib/python3.10/site-packages/datasets/builder.py", line 883, in download_and_prepare
    self._save_info()
  File "/home/jack/.local/lib/python3.10/site-packages/datasets/builder.py", line 2037, in _save_info
    import apache_beam as beam
ModuleNotFoundError: No module named 'apache_beam'
```

And the 20GB of data was seemingly instantly gone forever, because when I ran the script again, it had to do the download again.

### Steps to reproduce the bug

See above.

### Expected behavior

See above.

### Environment info

datasets 2.10.1, Python 3.10
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5896/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5896/timeline
null
null
false
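Grounded only in the traceback above: the download finishes and the crash happens afterwards in `_save_info` because `apache_beam` is missing, so the prepared files are never written. A workaround sketch (it avoids losing the download, but does not address the caching behaviour the issue asks about) is to install the missing dependency before running the script:

```python
# First: pip install apache-beam
import datasets

# With apache_beam importable, download_and_prepare can complete and the
# prepared dataset lands in the local cache for subsequent runs.
dataset = datasets.load_dataset("wikipedia", "20220301.en", split="train[:10000]")
```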
https://api.github.com/repos/huggingface/datasets/issues/5895
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5895/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5895/comments
https://api.github.com/repos/huggingface/datasets/issues/5895/events
https://github.com/huggingface/datasets/issues/5895
1,725,467,252
I_kwDODunzps5m2Ip0
5,895
The dir name and split strings are confused when loading ArmelR/stack-exchange-instruction dataset
{ "login": "DongHande", "id": 45357817, "node_id": "MDQ6VXNlcjQ1MzU3ODE3", "avatar_url": "https://avatars.githubusercontent.com/u/45357817?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DongHande", "html_url": "https://github.com/DongHande", "followers_url": "https://api.github.com/users/DongHande/followers", "following_url": "https://api.github.com/users/DongHande/following{/other_user}", "gists_url": "https://api.github.com/users/DongHande/gists{/gist_id}", "starred_url": "https://api.github.com/users/DongHande/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DongHande/subscriptions", "organizations_url": "https://api.github.com/users/DongHande/orgs", "repos_url": "https://api.github.com/users/DongHande/repos", "events_url": "https://api.github.com/users/DongHande/events{/privacy}", "received_events_url": "https://api.github.com/users/DongHande/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2023-05-25T09:39:06"
"2023-05-25T09:39:39"
null
NONE
null
null
null
### Describe the bug

When I load the ArmelR/stack-exchange-instruction dataset, I hit a bug that seems to be caused by confusing the dir name string with the split string. The call

```
datasets.load_dataset('ArmelR/stack-exchange-instruction', data_dir="data/finetune", split="train", use_auth_token=True)
```

fails, but it succeeds when I add the `streaming=True` parameter. The website of the dataset is https://huggingface.co/datasets/ArmelR/stack-exchange-instruction/.

The traceback logs are as below:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/load.py", line 1797, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 890, in download_and_prepare
    self._download_and_prepare(
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 985, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 1706, in _prepare_split
    split_info = self.info.splits[split_generator.name]
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/splits.py", line 530, in __getitem__
    instructions = make_file_instructions(
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/arrow_reader.py", line 112, in make_file_instructions
    name2filenames = {
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/arrow_reader.py", line 113, in <dictcomp>
    info.name: filenames_for_dataset_split(
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/naming.py", line 70, in filenames_for_dataset_split
    prefix = filename_prefix_for_split(dataset_name, split)
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/naming.py", line 54, in filename_prefix_for_split
    if os.path.basename(name) != name:
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/posixpath.py", line 142, in basename
    p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType

### Steps to reproduce the bug

1. Import the library function: `from datasets import load_dataset`
2. Load the dataset: `ds=load_dataset('ArmelR/stack-exchange-instruction', data_dir="data/finetune", split="train", use_auth_token=True)`

### Expected behavior

The dataset can be loaded successfully without the streaming setting.

### Environment info

Linux, python=3.9, datasets=2.12.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5895/timeline
null
null
false
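The report above already notes that the same call succeeds with `streaming=True`; a sketch of that workaround until the dir-name/split confusion is fixed:

```python
from datasets import load_dataset

# Streaming mode reportedly avoids the split-metadata lookup that fails above.
ds = load_dataset(
    "ArmelR/stack-exchange-instruction",
    data_dir="data/finetune",
    split="train",
    streaming=True,
    use_auth_token=True,
)
print(next(iter(ds)))
```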
https://api.github.com/repos/huggingface/datasets/issues/5894
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5894/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5894/comments
https://api.github.com/repos/huggingface/datasets/issues/5894/events
https://github.com/huggingface/datasets/pull/5894
1,724,774,910
PR_kwDODunzps5RSjot
5,894
Force overwrite existing filesystem protocol
{ "login": "baskrahmer", "id": 24520725, "node_id": "MDQ6VXNlcjI0NTIwNzI1", "avatar_url": "https://avatars.githubusercontent.com/u/24520725?v=4", "gravatar_id": "", "url": "https://api.github.com/users/baskrahmer", "html_url": "https://github.com/baskrahmer", "followers_url": "https://api.github.com/users/baskrahmer/followers", "following_url": "https://api.github.com/users/baskrahmer/following{/other_user}", "gists_url": "https://api.github.com/users/baskrahmer/gists{/gist_id}", "starred_url": "https://api.github.com/users/baskrahmer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/baskrahmer/subscriptions", "organizations_url": "https://api.github.com/users/baskrahmer/orgs", "repos_url": "https://api.github.com/users/baskrahmer/repos", "events_url": "https://api.github.com/users/baskrahmer/events{/privacy}", "received_events_url": "https://api.github.com/users/baskrahmer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009139 / 0.011353 (-0.002214) | 0.005634 / 0.011008 (-0.005374) | 0.129587 / 0.038508 (0.091079) | 0.038298 / 0.023109 (0.015189) | 0.428149 / 0.275898 (0.152251) | 0.443744 / 0.323480 (0.120264) | 0.007501 / 0.007986 (-0.000485) | 0.005999 / 0.004328 (0.001671) | 0.100796 / 0.004250 (0.096546) | 0.053236 / 0.037052 (0.016184) | 0.423868 / 0.258489 (0.165379) | 0.460110 / 0.293841 (0.166269) | 0.041255 / 0.128546 (-0.087291) | 0.013790 / 0.075646 (-0.061856) | 0.438398 / 0.419271 (0.019127) | 0.063086 / 0.043533 (0.019553) | 0.414826 / 0.255139 (0.159687) | 0.460652 / 0.283200 (0.177453) | 0.121223 / 0.141683 (-0.020460) | 1.754430 / 1.452155 (0.302275) | 1.900037 / 1.492716 (0.407320) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.027222 / 0.018006 (0.009216) | 0.617666 / 0.000490 (0.617176) | 0.022443 / 0.000200 (0.022243) | 0.000820 / 0.000054 (0.000766) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030397 / 0.037411 (-0.007014) | 0.125732 / 0.014526 (0.111206) | 0.149805 / 0.176557 (-0.026752) | 0.234048 / 0.737135 (-0.503087) | 0.143108 / 0.296338 (-0.153231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.631189 / 0.215209 (0.415980) | 6.182871 / 2.077655 (4.105216) | 2.635730 
/ 1.504120 (1.131610) | 2.231429 / 1.541195 (0.690235) | 2.438360 / 1.468490 (0.969870) | 0.861170 / 4.584777 (-3.723607) | 5.785984 / 3.745712 (2.040272) | 2.758358 / 5.269862 (-2.511504) | 1.678095 / 4.565676 (-2.887582) | 0.105961 / 0.424275 (-0.318314) | 0.013659 / 0.007607 (0.006052) | 0.762943 / 0.226044 (0.536898) | 7.774399 / 2.268929 (5.505471) | 3.319027 / 55.444624 (-52.125598) | 2.700248 / 6.876477 (-4.176229) | 3.008581 / 2.142072 (0.866509) | 1.122522 / 4.805227 (-3.682705) | 0.214832 / 6.500664 (-6.285832) | 0.085281 / 0.075469 (0.009811) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.647610 / 1.841788 (-0.194177) | 18.178316 / 8.074308 (10.104008) | 21.199177 / 10.191392 (11.007785) | 0.247063 / 0.680424 (-0.433361) | 0.030443 / 0.534201 (-0.503758) | 0.512527 / 0.579283 (-0.066757) | 0.640758 / 0.434364 (0.206394) | 0.639986 / 0.540337 (0.099649) | 0.760113 / 1.386936 (-0.626823) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008293 / 0.011353 (-0.003060) | 0.005360 / 0.011008 (-0.005648) | 0.102932 / 0.038508 (0.064424) | 0.037457 / 0.023109 (0.014347) | 0.444114 / 0.275898 (0.168216) | 0.512855 / 0.323480 (0.189375) | 0.007030 / 0.007986 (-0.000956) | 0.004954 / 0.004328 (0.000625) | 0.095757 / 0.004250 (0.091507) | 0.051239 / 0.037052 (0.014187) | 0.471118 / 0.258489 (0.212629) | 0.517764 / 0.293841 (0.223923) | 0.041953 / 0.128546 (-0.086593) | 0.013748 / 0.075646 (-0.061898) | 0.118089 / 0.419271 (-0.301182) | 0.060159 / 0.043533 (0.016626) | 0.466011 / 0.255139 (0.210872) | 0.489180 / 0.283200 (0.205980) | 0.123250 / 0.141683 (-0.018433) | 1.714738 / 1.452155 (0.262584) | 1.838571 / 1.492716 (0.345855) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267792 / 0.018006 (0.249785) | 0.624313 / 0.000490 (0.623824) | 0.007315 / 0.000200 (0.007115) | 0.000136 / 0.000054 (0.000082) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033751 / 0.037411 (-0.003661) | 0.122819 / 0.014526 (0.108293) | 0.148270 / 0.176557 (-0.028286) | 0.198581 / 0.737135 (-0.538554) | 0.144845 / 0.296338 (-0.151494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.620631 / 0.215209 (0.405422) | 6.224665 / 2.077655 (4.147010) | 2.856592 / 1.504120 (1.352473) | 2.525089 / 1.541195 (0.983894) | 2.600198 / 1.468490 (1.131708) | 0.872038 / 4.584777 (-3.712739) | 5.571650 / 3.745712 (1.825937) | 5.907643 / 5.269862 (0.637782) | 2.348770 / 4.565676 (-2.216906) | 0.111665 / 0.424275 (-0.312610) | 0.013886 / 0.007607 (0.006278) | 0.762154 / 0.226044 (0.536109) | 7.792686 / 2.268929 (5.523758) | 3.601122 / 55.444624 (-51.843503) | 2.939412 / 6.876477 (-3.937064) | 2.973430 / 2.142072 (0.831358) | 1.065016 / 4.805227 (-3.740211) | 0.221701 / 6.500664 (-6.278963) | 0.088157 / 0.075469 (0.012688) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.771061 / 1.841788 (-0.070727) | 18.826926 / 8.074308 (10.752618) | 21.283830 / 10.191392 (11.092438) | 0.239233 / 0.680424 (-0.441191) | 0.026159 / 0.534201 (-0.508042) | 0.487074 / 0.579283 (-0.092209) | 0.623241 / 0.434364 (0.188877) | 0.600506 / 0.540337 (0.060169) | 0.691271 / 1.386936 (-0.695665) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1bbe2c3496498a6415765b517ac4bc600a02ad06 \"CML watermark\")\n" ]
"2023-05-24T21:41:53"
"2023-05-25T06:52:08"
"2023-05-25T06:42:33"
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5894", "html_url": "https://github.com/huggingface/datasets/pull/5894", "diff_url": "https://github.com/huggingface/datasets/pull/5894.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5894.patch", "merged_at": "2023-05-25T06:42:33" }
Fix #5876
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5894/timeline
null
null
true
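The PR body is just "Fix #5876", so the snippet below only illustrates the general fsspec mechanism the title points at: re-registering a filesystem class under a protocol name that is already taken, with `clobber=True` forcing the overwrite. A sketch of the public fsspec API, not the PR's diff:

```python
import fsspec
from fsspec.implementations.memory import MemoryFileSystem

# clobber=True forces the registration even if the protocol name is already in use.
fsspec.register_implementation("memory", MemoryFileSystem, clobber=True)

fs = fsspec.filesystem("memory")
print(type(fs).__name__)  # MemoryFileSystem
```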
https://api.github.com/repos/huggingface/datasets/issues/5893
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5893/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5893/comments
https://api.github.com/repos/huggingface/datasets/issues/5893/events
https://github.com/huggingface/datasets/pull/5893
1,722,519,056
PR_kwDODunzps5RK40K
5,893
Load cached dataset as iterable
{ "login": "mariusz-jachimowicz-83", "id": 10278877, "node_id": "MDQ6VXNlcjEwMjc4ODc3", "avatar_url": "https://avatars.githubusercontent.com/u/10278877?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariusz-jachimowicz-83", "html_url": "https://github.com/mariusz-jachimowicz-83", "followers_url": "https://api.github.com/users/mariusz-jachimowicz-83/followers", "following_url": "https://api.github.com/users/mariusz-jachimowicz-83/following{/other_user}", "gists_url": "https://api.github.com/users/mariusz-jachimowicz-83/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariusz-jachimowicz-83/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariusz-jachimowicz-83/subscriptions", "organizations_url": "https://api.github.com/users/mariusz-jachimowicz-83/orgs", "repos_url": "https://api.github.com/users/mariusz-jachimowicz-83/repos", "events_url": "https://api.github.com/users/mariusz-jachimowicz-83/events{/privacy}", "received_events_url": "https://api.github.com/users/mariusz-jachimowicz-83/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@lhoestq Could you please look into that and review?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5893). All of your documentation changes will be reflected on that endpoint." ]
"2023-05-23T17:40:35"
"2023-05-24T16:30:58"
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5893", "html_url": "https://github.com/huggingface/datasets/pull/5893", "diff_url": "https://github.com/huggingface/datasets/pull/5893.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5893.patch", "merged_at": null }
Allows loading an IterableDataset from the cached Arrow file, to be used for training models. See https://github.com/huggingface/datasets/issues/5481
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5893/timeline
null
null
true
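For readers landing on the PR above: independently of the API it proposes, a similar effect is available today through `Dataset.to_iterable_dataset()`, which wraps the cached Arrow data lazily. A sketch of that existing route (not the interface this PR adds):

```python
from datasets import load_dataset

# Materialise (or reuse) the cached Arrow files, then iterate over them lazily.
ds = load_dataset("imdb", split="train")
iterable_ds = ds.to_iterable_dataset(num_shards=4)

for example in iterable_ds.take(3):
    print(example["label"])
```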
https://api.github.com/repos/huggingface/datasets/issues/5892
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5892/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5892/comments
https://api.github.com/repos/huggingface/datasets/issues/5892/events
https://github.com/huggingface/datasets/issues/5892
1,722,503,824
I_kwDODunzps5mq1KQ
5,892
User access requests with manual review do not notify the dataset owner
{ "login": "leondz", "id": 121934, "node_id": "MDQ6VXNlcjEyMTkzNA==", "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leondz", "html_url": "https://github.com/leondz", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "organizations_url": "https://api.github.com/users/leondz/orgs", "repos_url": "https://api.github.com/users/leondz/repos", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "received_events_url": "https://api.github.com/users/leondz/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "cc @SBrandeis" ]
"2023-05-23T17:27:46"
"2023-05-23T17:54:49"
null
CONTRIBUTOR
null
null
null
### Describe the bug

When user access requests are enabled and new requests are set to Manual Review, the dataset owner should be notified of pending requests. Currently nothing happens, so a request can go unanswered for quite some time until the owner happens to check that particular dataset's Settings pane.

### Steps to reproduce the bug

1. Enable a dataset's user access requests
2. Set to Manual Review
3. Ask another HF user to request access to the dataset
4. Dataset owner is not notified

### Expected behavior

The dataset owner should receive some kind of notification, perhaps in their HF site inbox or by email, when a dataset access request is made and manual review is enabled.

### Environment info

n/a
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5892/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5892/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5891
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5891/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5891/comments
https://api.github.com/repos/huggingface/datasets/issues/5891/events
https://github.com/huggingface/datasets/pull/5891
1,722,384,135
PR_kwDODunzps5RKchn
5,891
Make split slicing consistent with list slicing
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5891). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006916 / 0.011353 (-0.004437) | 0.004749 / 0.011008 (-0.006259) | 0.096086 / 0.038508 (0.057578) | 0.035448 / 0.023109 (0.012338) | 0.299645 / 0.275898 (0.023747) | 0.331279 / 0.323480 (0.007799) | 0.006018 / 0.007986 (-0.001968) | 0.004210 / 0.004328 (-0.000118) | 0.072998 / 0.004250 (0.068747) | 0.050082 / 0.037052 (0.013030) | 0.297714 / 0.258489 (0.039225) | 0.365523 / 0.293841 (0.071682) | 0.028081 / 0.128546 (-0.100465) | 0.009072 / 0.075646 (-0.066574) | 0.327628 / 0.419271 (-0.091643) | 0.051165 / 0.043533 (0.007633) | 0.295091 / 0.255139 (0.039952) | 0.320052 / 0.283200 (0.036852) | 0.109841 / 0.141683 (-0.031842) | 1.467867 / 1.452155 (0.015712) | 1.572600 / 1.492716 (0.079884) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.281490 / 0.018006 (0.263484) | 0.499259 / 0.000490 (0.498770) | 0.000691 / 0.000200 (0.000491) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027548 / 0.037411 (-0.009863) | 0.106592 / 0.014526 (0.092066) | 0.118654 / 0.176557 (-0.057902) | 0.174313 / 0.737135 (-0.562822) | 0.124491 / 0.296338 (-0.171848) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.399674 / 0.215209 (0.184465) | 3.984092 / 2.077655 (1.906437) | 1.790935 / 1.504120 (0.286815) | 1.593612 / 1.541195 (0.052417) | 1.694595 / 1.468490 (0.226105) | 0.517588 / 4.584777 (-4.067189) | 3.724353 / 3.745712 (-0.021359) | 3.244807 / 5.269862 (-2.025054) | 1.602929 / 4.565676 (-2.962748) | 0.065334 / 0.424275 (-0.358941) | 0.012259 / 0.007607 (0.004652) | 0.501355 / 0.226044 (0.275311) | 4.996546 / 2.268929 (2.727618) | 2.279333 / 55.444624 (-53.165291) | 1.940126 / 6.876477 (-4.936351) | 2.122945 / 2.142072 (-0.019128) | 0.626104 / 4.805227 (-4.179123) | 0.141278 / 6.500664 (-6.359386) | 0.064522 / 0.075469 (-0.010947) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.195351 / 1.841788 (-0.646436) | 15.258932 / 8.074308 (7.184624) | 14.627623 / 10.191392 (4.436231) | 0.266897 / 0.680424 (-0.413527) | 0.017557 / 0.534201 (-0.516644) | 0.392932 / 0.579283 (-0.186351) | 0.416409 / 0.434364 (-0.017955) | 0.469100 / 0.540337 (-0.071237) | 0.556247 / 1.386936 (-0.830689) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006880 / 0.011353 (-0.004473) | 0.004837 / 0.011008 (-0.006171) | 0.074518 / 0.038508 (0.036010) | 0.034204 / 0.023109 (0.011095) | 0.365100 / 0.275898 (0.089202) | 0.394976 / 0.323480 (0.071496) | 0.006364 / 0.007986 (-0.001621) | 0.004269 / 0.004328 (-0.000060) | 0.073531 / 0.004250 (0.069281) | 0.051334 / 0.037052 (0.014281) | 0.373904 / 0.258489 (0.115415) | 0.413662 / 0.293841 (0.119821) | 0.028779 / 0.128546 (-0.099767) | 0.009292 / 0.075646 (-0.066354) | 0.081574 / 0.419271 (-0.337698) | 0.046531 / 0.043533 (0.002998) | 0.368995 / 0.255139 (0.113856) | 0.376938 / 0.283200 (0.093739) | 0.112576 / 0.141683 (-0.029107) | 1.458880 / 1.452155 (0.006725) | 1.550918 / 1.492716 (0.058202) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.319521 / 0.018006 (0.301515) | 0.510146 / 0.000490 (0.509656) | 0.000438 / 0.000200 
(0.000238) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033082 / 0.037411 (-0.004329) | 0.118009 / 0.014526 (0.103483) | 0.127108 / 0.176557 (-0.049448) | 0.176600 / 0.737135 (-0.560535) | 0.133790 / 0.296338 (-0.162549) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437360 / 0.215209 (0.222151) | 4.367426 / 2.077655 (2.289771) | 2.193646 / 1.504120 (0.689526) | 2.025002 / 1.541195 (0.483808) | 2.142347 / 1.468490 (0.673856) | 0.525497 / 4.584777 (-4.059280) | 3.751275 / 3.745712 (0.005563) | 1.912271 / 5.269862 (-3.357590) | 1.087286 / 4.565676 (-3.478390) | 0.066328 / 0.424275 (-0.357947) | 0.011904 / 0.007607 (0.004297) | 0.545870 / 0.226044 (0.319825) | 5.434481 / 2.268929 (3.165552) | 2.719745 / 55.444624 (-52.724880) | 2.445001 / 6.876477 (-4.431476) | 2.500205 / 2.142072 (0.358133) | 0.645735 / 4.805227 (-4.159492) | 0.144210 / 6.500664 (-6.356455) | 0.065688 / 0.075469 (-0.009781) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.273522 / 1.841788 (-0.568265) | 15.771778 / 8.074308 (7.697470) | 14.685261 / 10.191392 (4.493869) | 0.176523 / 0.680424 (-0.503900) | 0.017877 / 0.534201 (-0.516324) | 0.392687 / 0.579283 (-0.186596) | 0.449992 / 0.434364 (0.015628) | 0.462851 / 0.540337 (-0.077487) | 0.560178 / 1.386936 (-0.826758) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0fa3ef6eba906ee1214e0596d15a78fc358909f4 \"CML watermark\")\n" ]
"2023-05-23T16:04:33"
"2023-05-23T16:11:12"
null
CONTRIBUTOR
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5891", "html_url": "https://github.com/huggingface/datasets/pull/5891", "diff_url": "https://github.com/huggingface/datasets/pull/5891.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5891.patch", "merged_at": null }
Fix #1774, fix #5875 TODO: a test
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5891/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5889
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5889/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5889/comments
https://api.github.com/repos/huggingface/datasets/issues/5889/events
https://github.com/huggingface/datasets/issues/5889
1,722,373,618
I_kwDODunzps5mqVXy
5,889
Token Alignment for input and output data over train and test batch/dataset.
{ "login": "akesh1235", "id": 125154243, "node_id": "U_kgDOB3Wzww", "avatar_url": "https://avatars.githubusercontent.com/u/125154243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akesh1235", "html_url": "https://github.com/akesh1235", "followers_url": "https://api.github.com/users/akesh1235/followers", "following_url": "https://api.github.com/users/akesh1235/following{/other_user}", "gists_url": "https://api.github.com/users/akesh1235/gists{/gist_id}", "starred_url": "https://api.github.com/users/akesh1235/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akesh1235/subscriptions", "organizations_url": "https://api.github.com/users/akesh1235/orgs", "repos_url": "https://api.github.com/users/akesh1235/repos", "events_url": "https://api.github.com/users/akesh1235/events{/privacy}", "received_events_url": "https://api.github.com/users/akesh1235/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2023-05-23T15:58:55"
"2023-05-23T15:58:55"
null
NONE
null
null
null
`data` > DatasetDict({ train: Dataset({ features: ['input', 'output'], num_rows: 4500 }) test: Dataset({ features: ['input', 'output'], num_rows: 500 }) }) **# input (in-correct sentence)** `data['train'][0]['input']` **>>** 'We are meet sunday 10am12pmET in Crown Heights Brooklyn New York' **# output (correct sentence)** `data['train'][0]['output']` **>>** 'We meet Sundays 10am-12pmET in Crown Heights, Brooklyn, New York.' **I Want to align the output tokens with input** ``` `# tokenize both inputs and targets def tokenize_fn(batch): # tokenize the input sequence first # this populates input_ids, attention_mask, etc. tokenized_inputs = tokenizer( batch['input'] ) labels_batch = tokenizer.tokenize(batch['output']) # original targets aligned_labels_batch = [] for i, labels in enumerate(labels_batch): word_ids = tokenized_inputs[i].word_ids() aligned_labels_batch.append(align_targets(labels, word_ids)) # align_targets is another user defined function which is been called here # recall: the 'target' must be stored in key called 'labels' tokenized_inputs['labels'] = aligned_labels_batch return tokenized_inputs` ``` ``` data.map( tokenize_fn, batched=True, remove_columns=data['train'].column_names, ) ``` When this user defined function is mapped to every records of train and test batch am getting following error: **1.** **raise DatasetTransformationNotAllowedError( 3457 "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it."** **2.** **TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]**
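A hedged sketch of one way the batched `tokenize_fn` above could tokenize each target sentence individually, since `tokenizer.tokenize` expects a single string rather than a list, which is one plausible source of the `TextEncodeInput` error. `tokenizer` and `align_targets` are assumed to be the objects already defined in the issue and are not redefined here.

```python
# Sketch only: assumes a fast `tokenizer` and the user-defined `align_targets`
# helper from the issue are already in scope.
def tokenize_fn(batch):
    # tokenize the batch of (incorrect) input sentences; lists are fine here
    tokenized_inputs = tokenizer(batch["input"])

    aligned_labels_batch = []
    for i, target in enumerate(batch["output"]):
        # tokenizer.tokenize works on one string at a time, so loop over targets
        labels = tokenizer.tokenize(target)
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        aligned_labels_batch.append(align_targets(labels, word_ids))

    # the targets must be stored under the key "labels"
    tokenized_inputs["labels"] = aligned_labels_batch
    return tokenized_inputs
```

As for the `DatasetTransformationNotAllowedError`, the message itself points at an attached index: dropping it with `drop_index()` before the `map` call and re-adding it afterwards is the route the error suggests.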
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5889/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5889/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5887/comments
https://api.github.com/repos/huggingface/datasets/issues/5887/events
https://github.com/huggingface/datasets/issues/5887
1,722,166,382
I_kwDODunzps5mpixu
5,887
HuggingFace dataset example gives an error
{ "login": "donhuvy", "id": 1328316, "node_id": "MDQ6VXNlcjEzMjgzMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/1328316?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donhuvy", "html_url": "https://github.com/donhuvy", "followers_url": "https://api.github.com/users/donhuvy/followers", "following_url": "https://api.github.com/users/donhuvy/following{/other_user}", "gists_url": "https://api.github.com/users/donhuvy/gists{/gist_id}", "starred_url": "https://api.github.com/users/donhuvy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donhuvy/subscriptions", "organizations_url": "https://api.github.com/users/donhuvy/orgs", "repos_url": "https://api.github.com/users/donhuvy/repos", "events_url": "https://api.github.com/users/donhuvy/events{/privacy}", "received_events_url": "https://api.github.com/users/donhuvy/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2023-05-23T14:09:05"
"2023-05-23T14:44:54"
null
NONE
null
null
null
### Describe the bug ![image](https://github.com/huggingface/datasets/assets/1328316/1f4f0086-3db9-4c79-906b-05a375357cce) ![image](https://github.com/huggingface/datasets/assets/1328316/733ebd3d-89b9-4ece-b80a-00ab5b0a4122) ### Steps to reproduce the bug Use link as reference document written https://colab.research.google.com/github/huggingface/datasets/blob/main/notebooks/Overview.ipynb#scrollTo=biqDH9vpvSVz ```python # Now let's train our model device = 'cuda' if torch.cuda.is_available() else 'cpu' model.train().to(device) for i, batch in enumerate(dataloader): batch.to(device) outputs = model(**batch) loss = outputs.loss loss.backward() optimizer.step() model.zero_grad() print(f'Step {i} - loss: {loss:.3}') if i > 5: break ``` Error ```python --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-44-7040b885f382>](https://localhost:8080/#) in <cell line: 5>() 5 for i, batch in enumerate(dataloader): 6 batch.to(device) ----> 7 outputs = model(**batch) 8 loss = outputs.loss 9 loss.backward() [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs) 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] TypeError: DistilBertForQuestionAnswering.forward() got an unexpected keyword argument 'token_type_ids' ``` https://github.com/huggingface/datasets/assets/1328316/5d8b1d61-9337-4d59-8423-4f37f834c156 ### Expected behavior Run success on Google Colab (free) ### Environment info Windows 11 x64, Google Colab free (my Google Drive just empty about 200 MB, but I don't think it cause problem)
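A hedged workaround sketch for the traceback above, assuming `dataloader`, `model`, `optimizer` and `device` are the objects created earlier in the notebook: DistilBERT has no token-type embeddings, so one option is to drop the `token_type_ids` key from each batch before the forward pass.

```python
# Same training loop as the notebook, minus the token_type_ids key that the
# tokenizer produced but DistilBertForQuestionAnswering does not accept.
model.train().to(device)
for i, batch in enumerate(dataloader):
    batch = {k: v.to(device) for k, v in batch.items() if k != "token_type_ids"}
    outputs = model(**batch)
    loss = outputs.loss
    loss.backward()
    optimizer.step()
    model.zero_grad()
    print(f"Step {i} - loss: {loss:.3}")
    if i > 5:
        break
```

Alternatively, removing `token_type_ids` from the columns kept when formatting the dataset would avoid the key ever reaching the model.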
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5887/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5887/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5886
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5886/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5886/comments
https://api.github.com/repos/huggingface/datasets/issues/5886/events
https://github.com/huggingface/datasets/issues/5886
1,721,070,225
I_kwDODunzps5mlXKR
5,886
Use a work-stealing algorithm for parallel computing
{ "login": "1014661165", "id": 46060451, "node_id": "MDQ6VXNlcjQ2MDYwNDUx", "avatar_url": "https://avatars.githubusercontent.com/u/46060451?v=4", "gravatar_id": "", "url": "https://api.github.com/users/1014661165", "html_url": "https://github.com/1014661165", "followers_url": "https://api.github.com/users/1014661165/followers", "following_url": "https://api.github.com/users/1014661165/following{/other_user}", "gists_url": "https://api.github.com/users/1014661165/gists{/gist_id}", "starred_url": "https://api.github.com/users/1014661165/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/1014661165/subscriptions", "organizations_url": "https://api.github.com/users/1014661165/orgs", "repos_url": "https://api.github.com/users/1014661165/repos", "events_url": "https://api.github.com/users/1014661165/events{/privacy}", "received_events_url": "https://api.github.com/users/1014661165/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Alternatively we could set the number of shards to be a factor than the number of processes (current they're equal) - this way it will be less likely to end up with a shard that is significantly slower than all the other ones." ]
"2023-05-23T03:08:44"
"2023-05-24T15:30:09"
null
NONE
null
null
null
### Feature request When I used the `Dataset.map` API to process data concurrently, I found that it gets slower and slower as it gets closer to completion. Then I read the source code of `arrow_dataset.py` and found that it shards the dataset and uses a multiprocessing pool to execute each shard. This may cause the slowest task to drag out the entire program's execution time, especially when processing a huge dataset. ### Motivation Use a work-stealing algorithm instead of static sharding for parallel computing to optimize performance. ### Your contribution Just an idea.
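To make the idea concrete, here is a rough, hypothetical sketch of dynamic chunk scheduling (an approximation of work stealing) using a plain multiprocessing pool; it is not how `Dataset.map` is implemented, and `do_work` is a placeholder for the real per-example processing.

```python
from multiprocessing import Pool

def do_work(index):
    # placeholder for the real per-example processing
    return index

def process_chunk(indices):
    return [do_work(i) for i in indices]

def dynamic_map(num_examples, num_proc, chunks_per_proc=8):
    # many small chunks instead of one big shard per worker
    chunk_size = max(1, num_examples // (num_proc * chunks_per_proc))
    chunks = [list(range(start, min(start + chunk_size, num_examples)))
              for start in range(0, num_examples, chunk_size)]
    results = []
    with Pool(num_proc) as pool:
        # imap_unordered hands out the next chunk as soon as a worker is free,
        # so a single slow chunk no longer stalls the whole job
        for part in pool.imap_unordered(process_chunk, chunks):
            results.extend(part)
    return results

if __name__ == "__main__":
    print(len(dynamic_map(num_examples=10_000, num_proc=4)))
```

The maintainer comment above about using more shards than processes achieves a similar effect with the existing sharding machinery.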
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5886/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5885
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5885/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5885/comments
https://api.github.com/repos/huggingface/datasets/issues/5885/events
https://github.com/huggingface/datasets/pull/5885
1,720,954,440
PR_kwDODunzps5RFjTL
5,885
Modify `is_remote_filesystem` to return True for FUSE-mounted paths
{ "login": "maddiedawson", "id": 106995444, "node_id": "U_kgDOBmCe9A", "avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maddiedawson", "html_url": "https://github.com/maddiedawson", "followers_url": "https://api.github.com/users/maddiedawson/followers", "following_url": "https://api.github.com/users/maddiedawson/following{/other_user}", "gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}", "starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions", "organizations_url": "https://api.github.com/users/maddiedawson/orgs", "repos_url": "https://api.github.com/users/maddiedawson/repos", "events_url": "https://api.github.com/users/maddiedawson/events{/privacy}", "received_events_url": "https://api.github.com/users/maddiedawson/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5885). All of your documentation changes will be reflected on that endpoint.", "@lhoestq would you or another maintainer be able to review please? :)", "Why you do need to support FUSE mounted paths ?\r\n\r\n`datasets` uses data that live on disk for fast lookups - FUSE mounted disks would lead to poor performance and I wouldn't recomment using it.", "Fuse is commonly used to mount remote file systems (e.g. S3, DBFS) as a local directory. Since it's slower than using an actual local device, it's better to treat it as remote to reduce latency.", "I think people would be confused if they don't have the same dataset behavior depending on the disk type.\r\n\r\nIf they want to use a remote bucket they should use the remote URI instead, e.g. `s3://...`. Advancements on this are tracked at #5281 " ]
"2023-05-23T01:04:54"
"2023-05-25T08:50:48"
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5885", "html_url": "https://github.com/huggingface/datasets/pull/5885", "diff_url": "https://github.com/huggingface/datasets/pull/5885.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5885.patch", "merged_at": null }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5885/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5885/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5888/comments
https://api.github.com/repos/huggingface/datasets/issues/5888/events
https://github.com/huggingface/datasets/issues/5888
1,722,290,363
I_kwDODunzps5mqBC7
5,888
A way to upload and visualize .mp4 files (millions of them) as part of a dataset
{ "login": "AntreasAntoniou", "id": 10792502, "node_id": "MDQ6VXNlcjEwNzkyNTAy", "avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AntreasAntoniou", "html_url": "https://github.com/AntreasAntoniou", "followers_url": "https://api.github.com/users/AntreasAntoniou/followers", "following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}", "gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}", "starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions", "organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs", "repos_url": "https://api.github.com/users/AntreasAntoniou/repos", "events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}", "received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi! \r\n\r\nYou want to use `push_to_hub` (creates Parquet files) instead of `save_to_disk` (creates Arrow files) when creating a Hub dataset. Parquet is designed for long-term storage and takes less space than the Arrow format, and, most importantly, `load_dataset` can parse it, which should fix the viewer. \r\n\r\nRegarding the dataset generation, `Dataset.from_generator` with the video data represented as `datasets.Value(\"binary\")` followed by `push_to_hub` should work (if the `push_to_hub` step times out, restart it to resume uploading)\r\n\r\nPS: Once the dataset is uploaded, to make working with the dataset easier, it's a good idea to add a [transform](https://huggingface.co/docs/datasets/main/en/process#format-transform) to the README that shows how to decode the binary video data into something a model can understand. Also, if you get an `ArrowInvalid` error (can happen when working with large binary data) in `Dataset.from_generator`, reduce the value of `writer_batch_size` (the default is 1000) to fix it.", "One issue here is that Dataset.from_generator can work well for the non 'infinite sampling' version of the dataset. The training set for example is often sampled dynamically given the video files that I have uploaded. I worry that storing the video data as binary means that I'll end up duplicating a lot of the data. Furthermore, storing video data as anything but .mp4 would quickly make the dataset size from 1.9TB to 1PB. ", "> storing video data as anything but .mp4\r\n\r\nWhat I mean by storing as `datasets.Value(\"binary\")` is embedding raw MP4 bytes in the Arrow table, but, indeed, this would waste a lot of space if there are duplicates.\r\n\r\nSo I see two options:\r\n* if one video is not mapped to too many samples, you can embed the video bytes and do \"group by\" on the rest of the columns (this would turn them into lists) to avoid duplicating them (then, it should be easy to define a `map` in the README that samples the video data to \"unpack\" the samples)\r\n* you can create a dataset script that downloads the video files and embeds their file paths into the Arrow file\r\n\r\nAlso, I misread MP4 as MP3. We need to add a `Video` feature to the `datasets` lib to support MP4 files in the viewer (a bit trickier to implement than the `Image` feature due to the Arrow limitations).", "I'm transferring this issue to the `datasets` repo, as it's not related to `huggingface_hub`", "@mariosasko Right. If I want my dataset to be streamable, what are the necessary requirements to achieve that within the context of .mp4 binaries like we have here? I guess your second point here would not support that right?", "The streaming would work, but the video paths would require using `fsspec.open` to get the content." ]
"2023-05-22T18:05:26"
"2023-05-24T13:17:10"
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** I recently chose to use huggingface hub as the home for a large multi modal dataset I've been building. https://huggingface.co/datasets/Antreas/TALI It combines images, text, audio and video. Now, I could very easily upload a dataset made via datasets.Dataset.from_generator, as long as it did not include video files. I found that including .mp4 files in the entries would not auto-upload those files. Hence I tried to upload them myself. I quickly found out that uploading many small files is a very bad way to use git lfs, and that it would take ages, so, I resorted to using 7z to pack them all up. But then I had a new problem. My dataset had a size of 1.9TB. Trying to upload such a large file with the default huggingface_hub API always resulted in time outs etc. So I decided to split the large files into chunks of 5GB each and reupload. So, eventually it all worked out. But now the dataset can't be properly and natively used by the datasets API because of all the needed preprocessing -- and furthermore the hub is unable to visualize things. **Describe the solution you'd like** A native way to upload large datasets that include .mp4 or other video types. **Describe alternatives you've considered** Already explained earlier **Additional context** https://huggingface.co/datasets/Antreas/TALI
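Following the suggestion in the comments, here is a hedged sketch of what embedding the raw MP4 bytes as a binary column and pushing Parquet shards to the Hub could look like; the directory, repo id and batch size below are placeholders, not values from the actual TALI dataset.

```python
from pathlib import Path

from datasets import Dataset, Features, Value

video_dir = Path("videos")  # hypothetical local directory of .mp4 files

def gen():
    for path in sorted(video_dir.glob("*.mp4")):
        # raw MP4 bytes stored as a binary column
        yield {"video": path.read_bytes(), "file_name": path.name}

features = Features({"video": Value("binary"), "file_name": Value("string")})
# a smaller writer_batch_size keeps the Arrow writer happy with large binary blobs
ds = Dataset.from_generator(gen, features=features, writer_batch_size=100)
ds.push_to_hub("username/my-video-dataset")  # placeholder repo id; resumable if it times out
```

As noted in the thread, this duplicates bytes if one video maps to many samples, so grouping samples per video or storing file paths via a dataset script are the alternatives.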
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5888/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5884
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5884/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5884/comments
https://api.github.com/repos/huggingface/datasets/issues/5884/events
https://github.com/huggingface/datasets/issues/5884
1,719,548,172
I_kwDODunzps5mfjkM
5,884
`Dataset.to_tf_dataset` fails when strings cannot be encoded as `np.bytes_`
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[ { "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false } ]
null
[ "May eventually be solved in #5883 ", "#self-assign" ]
"2023-05-22T12:03:06"
"2023-05-22T12:09:56"
null
CONTRIBUTOR
null
null
null
### Describe the bug When loading any dataset that contains a column with strings that are not ASCII-compatible, looping over those records raises the following exception e.g. for `é` character `UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 0: ordinal not in range(128)`. ### Steps to reproduce the bug Running the following script will eventually fail, when reaching to the batch that contains non-ASCII compatible strings. ```python from datasets import load_dataset ds = load_dataset("imdb", split="train") tfds = ds.to_tf_dataset(batch_size=16) for batch in tfds: print(batch) >>> UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 0: ordinal not in range(128) ``` ### Expected behavior The following script to run properly, making sure that the strings are either `numpy.unicode_` or `numpy.string` instead of `numpy.bytes_` since some characters are not ASCII compatible and that would lead to an issue when applying the `map`. ```python from datasets import load_dataset ds = load_dataset("imdb", split="train") tfds = ds.to_tf_dataset(batch_size=16) for batch in tfds: print(batch) ``` ### Environment info - `datasets` version: 2.12.1.dev0 - Platform: macOS-13.3.1-arm64-arm-64bit - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5884/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5883
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5883/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5883/comments
https://api.github.com/repos/huggingface/datasets/issues/5883/events
https://github.com/huggingface/datasets/pull/5883
1,719,527,597
PR_kwDODunzps5RAkYi
5,883
Fix `Dataset.to_tf_dataset` when encoding-strings & minor improvements
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5883). All of your documentation changes will be reflected on that endpoint.", "To showcase the current issue, here's a Colab Gist, that shows that the `imdb` dataset cannot be read/iterated, since one or more samples contain a non-ascii character that is being converted to `numpy.bytes_`, and so on fails.\r\n\r\nColab Gist at https://gist.github.com/alvarobartt/1746959d1abb9a33e0c593f3bd82a2fb\r\n\r\nAlso, here's a quick sample of what's happening:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"imdb\", split=\"train\")\r\ntfds = ds.to_tf_dataset(batch_size=16)\r\nfor batch in tfds:\r\n print(batch)\r\n>>> UnicodeEncodeError: 'ascii' codec can't encode character '\\xe9' in position 0: ordinal not in range(128)\r\n```\r\n\r\nA more detailed version of it:\r\n\r\n```python\r\nfrom datasets import Dataset\r\n\r\nds = Dataset.from_dict(\r\n {\r\n \"a\": [1],\r\n \"b\": [\"é\"],\r\n }\r\n)\r\ntfds = ds.to_tf_dataset(batch_size=1)\r\nfor batch in tfds:\r\n print(batch)\r\n>>> UnicodeEncodeError: 'ascii' codec can't encode character '\\xe9' in position 0: ordinal not in range(128)\r\n```\r\n\r\nThe original issue comes from https://github.com/tensorflow/tensorflow/blob/388d952114e59a1aeda440ed4737b29f8b7c6e8a/tensorflow/python/ops/script_ops.py#LL234C4-L234C4, which could easily be solved by replacing that line with `return result.astype(np.unicode_)` but they are mentioning that it may lead to issues.\r\n\r\nEven the following fails in `numpy`:\r\n\r\n```python\r\nimport numpy as np\r\n\r\nx = np.array([\"é\"]).astype(np.bytes_)\r\n```", "cc. @lhoestq :hugs:", "cc @Rocketknight1 ", "> Nice ! Could you add some tests to make sure that batch_size=None works as expected ?\r\n\r\nSure, I'll add the tests for everything, including the string-encoding issue to make sure it's solved!", "Thanks for the review @lhoestq and @Rocketknight1! I do understand that processing it in batches is always more efficient than processing it one-by-one, it was just to make `batch_size` optional. What we can do is default it to a certain batch size e.g. 16 as before, and that's it, but I think it can still remain optional.", "@Rocketknight1 then I'll add the integration tests for the optional `batch_size` as well as for the encoding of non-ASCII compatible characters 😄 Do we set the default `batch_size` to 16 instead of `None`?", "@alvarobartt I think 16 is a reasonable default, yep!", "I think default should be None, not 16.\r\nUsers won't expect to have it batched by default.", "Then I'll leave it as is, and add the unit/integration tests, thanks @Rocketknight1 and @lhoestq ", "Hi @Rocketknight1 @lhoestq! So the string-encoding issue is already solved, but I've got one doubt about the `batch_size` being optional in the multiprocessing approach, since in that case I assume the `batch_size` should be mandatory, for the moment I'm assuming it is/should be mandatory, but let me know if you want me to add a check to disallow `batch_size=None` when `num_workers>1`. 
Thanks!", "> To showcase the current issue, here's a Colab Gist, that shows that the `imdb` dataset cannot be read/iterated, since one or more samples contain a non-ascii character that is being converted to `numpy.bytes_`, and so on fails.\r\n> \r\n> Colab Gist at https://gist.github.com/alvarobartt/1746959d1abb9a33e0c593f3bd82a2fb\r\n\r\nI've used the Colab shared above for testing purposes, and it works fine, plus the unit/integration tests are passing. I've also trained a `KerasNLP` model with incoming data from 🤗`datasets` with no issue at all!", "> in the multiprocessing approach, since in that case I assume the batch_size should be mandatory,\r\n\r\nNo I think they're quite orthogonal, no need to have it mandatory", "> No I think they're quite orthogonal, no need to have it mandatory\r\n\r\nBut it will break if `batch_size=None` as the multiprocessing approach will aim to prepare batches and distribute those to every worker, and assuming `batch_size=1` when `batch_size=None` I guess is not a good assumption, right?", "Ah I see. Multiprocessing should support batch_size=None indeed. If you have ideas you can do it in this PR, or raise a NotImplementedError and we can see later", "Sure @lhoestq, I can add a `NotImplementedError` for the moment, and prepare the next PR straight-away to tackle the multiprocessing approach with `batch_size=None`, but not sure if that may eventually collide with @Rocketknight1 PR at https://github.com/huggingface/datasets/pull/5863", "Yes, let me merge the PR at #5863 after this one, and then we can open another to improve the behaviour with multiprocessing and `batch_size=None`!", "Sure @Rocketknight1 makes complete sense to me! Do you want me to add the `raise NotImplementedError` and then we merge this PR? Or you prefer to directly merge the current?", "`raise NotImplementedError` for now with an error telling the user that multiprocessing needs them to specify a batch size, I think!", "Ready to merge @Rocketknight1! 🤗 " ]
"2023-05-22T11:51:07"
"2023-05-25T19:45:15"
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5883", "html_url": "https://github.com/huggingface/datasets/pull/5883", "diff_url": "https://github.com/huggingface/datasets/pull/5883.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5883.patch", "merged_at": null }
## What's in this PR? This PR addresses some minor fixes and general improvements in the `to_tf_dataset` method of `datasets.Dataset`, to convert a 🤗HuggingFace Dataset as a TensorFlow Dataset. The main bug solved in this PR comes with the string-encoding, since for safety purposes the internal conversion of `numpy.arrays` when `dtype` is unicode/string, is to convert it into `numpy.bytes`, more information in the docstring of https://github.com/tensorflow/tensorflow/blob/388d952114e59a1aeda440ed4737b29f8b7c6e8a/tensorflow/python/ops/script_ops.py#L210. That's triggered when using `tensorflow.numpy_function` as it's applying another type cast besides the one that `datasets` does, so the casting is applied at least twice per entry/batch. So this means that the definition of the `numpy.unicode_` dtype when the data in the batch is a string, is ignored, and replaced by `numpy.bytes_`. Besides that, some other minor things have been fixed: * Made `batch_size` an optional parameter in `to_tf_dataset` * Map the `tensorflow` output dtypes just once, and not in every `tf.function` call during `map` * Keep `numpy` formatting in the `datasets.Dataset` if already formatted like it, no need to format it again as `numpy` * Docstring indentation in `dataset_to_tf` and `multiprocess_dataset_to_tf` ## What's missing in this PR? I can include some integration tests if needed, to validate that `batch_size` is optional, and that the tensors in the TF-Dataset can be looped over with no issues as before.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5883/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5881
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5881/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5881/comments
https://api.github.com/repos/huggingface/datasets/issues/5881/events
https://github.com/huggingface/datasets/issues/5881
1,719,402,643
I_kwDODunzps5mfACT
5,881
Split dataset by node: index error when sharding iterable dataset
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "cc @lhoestq in case you have any ideas here! Might need a multi-host set-up to debug (can give you access to a JAX one if you need)" ]
"2023-05-22T10:36:13"
"2023-05-23T08:32:14"
null
CONTRIBUTOR
null
null
null
### Describe the bug Context: we're splitting an iterable dataset by node and then passing it to a torch data loader with multiple workers When we iterate over it for 5 steps, we don't get an error When we instead iterate over it for 8 steps, we get an `IndexError` when fetching the data if we have too many workers ### Steps to reproduce the bug Here, we have 2 JAX processes (`jax.process_count() = 2`) which we split the dataset over. The dataset loading script can be found here: https://huggingface.co/datasets/distil-whisper/librispeech_asr/blob/c6a1e805cbfeed5057400ac5937327d7e30281b8/librispeech_asr.py#L310 <details> <summary> Code to reproduce </summary> ```python from datasets import load_dataset import jax from datasets.distributed import split_dataset_by_node from torch.utils.data import DataLoader from tqdm import tqdm # load an example dataset (https://huggingface.co/datasets/distil-whisper/librispeech_asr) dataset = load_dataset("distil-whisper/librispeech_asr", "all", split="train.clean.100", streaming=True) # just keep the text column -> no need to define a collator dataset_text = dataset.remove_columns(set(dataset.features.keys()) - {"text"}) # define some constants batch_size = 256 num_examples = 5 # works for 5 examples, doesn't for 8 num_workers = dataset_text.n_shards # try with multiple workers dataloader = DataLoader(dataset_text, batch_size=batch_size, num_workers=num_workers, drop_last=True) for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Multiple workers"): if i == num_examples: break # try splitting by node (we can't do this with `dataset_text` since `split_dataset_by_node` expects the Audio column for an ASR dataset) dataset = split_dataset_by_node(dataset, rank=jax.process_index(), world_size=jax.process_count()) # remove the text column again dataset_text = dataset.remove_columns(set(dataset.features.keys()) - {"text"}) dataloader = DataLoader(dataset_text, batch_size=16, num_workers=num_workers // 2, drop_last=True) for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Split by node"): if i == num_examples: break # too many workers dataloader = DataLoader(dataset_text, batch_size=256, num_workers=num_workers, drop_last=True) for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Too many workers"): if i == num_examples: break ``` </details> <details> <summary> With 5 examples: </summary> ``` Multiple workers: 100%|███████████████████████████████████████████████████████████████████| 5/5 [00:16<00:00, 3.33s/it] Assigning 7 shards (or data sources) of the dataset to each node. Split by node: 100%|██████████████████████████████████████████████████████████████████████| 5/5 [00:13<00:00, 2.76s/it] Assigning 7 shards (or data sources) of the dataset to each node. Too many dataloader workers: 14 (max is dataset.n_shards=7). Stopping 7 dataloader workers. To parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary t o have a number of workers greater than dataset.n_shards=7. To enable more parallelism, please split the dataset in more files than 7. Too many workers: 100%|███████████████████████████████████████████████████████████████████| 5/5 [00:15<00:00, 3.03s/it] ``` </details> <details> <summary> With 7 examples: </summary> ``` Multiple workers: 100%|███████████████████████████████████████████████████████████████████| 8/8 [00:13<00:00, 1.71s/it] Assigning 7 shards (or data sources) of the dataset to each node. 
Split by node: 100%|██████████████████████████████████████████████████████████████████████| 8/8 [00:11<00:00, 1.38s/it] Assigning 7 shards (or data sources) of the dataset to each node. Too many dataloader workers: 14 (max is dataset.n_shards=7). Stopping 7 dataloader workers. To parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary to have a number of workers greater than dataset.n_shards=7. To enable more parallelism, please split the dataset in more files than 7. Too many workers: 88%|██████████████████████████████████████████████████████████▋ | 7/8 [00:13<00:01, 1.89s/it] Traceback (most recent call last): File "distil-whisper/test_librispeech.py", line 36, in <module> for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Too many workers"): File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/tqdm/std.py", line 1178, in __iter__ for obj in iterable: File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 633, in __next__ data = self._next_data() File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1325, in _next_data return self._process_data(data) File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data data.reraise() File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/_utils.py", line 644, in reraise raise exception IndexError: Caught IndexError in DataLoader worker process 7. Original Traceback (most recent call last): File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop data = fetcher.fetch(index) File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch data.append(next(self.dataset_iter)) File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 986, in __iter__ yield from self._iter_pytorch(ex_iterable) File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 920, in _iter_pytorch for key, example in ex_iterable.shard_data_sources(worker_info.id, worker_info.num_workers): File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 540, in shard_data_sources self.ex_iterable.shard_data_sources(worker_id, num_workers), File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 796, in shard_data_sources self.ex_iterable.shard_data_sources(worker_id, num_workers), File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 126, in shard_data_sources requested_gen_kwargs = _merge_gen_kwargs([gen_kwargs_list[i] for i in shard_indices]) File "/home/sanchitgandhi/datasets/src/datasets/utils/sharding.py", line 76, in _merge_gen_kwargs for key in gen_kwargs_list[0] IndexError: list index out of range ``` </details> ### Expected behavior Should pass for both 5 and 7 examples ### Environment info - `datasets` version: 2.12.1.dev0 - Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.14.1 - PyArrow version: 12.0.0 - Pandas version: 2.0.1
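A hedged workaround sketch for the reproduction above: after `split_dataset_by_node`, each process only holds `dataset.n_shards` shards, so capping the number of dataloader workers at that value avoids spawning workers that have no shard to read. This sidesteps the crash rather than fixing the underlying indexing bug.

```python
import jax
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node
from torch.utils.data import DataLoader

dataset = load_dataset("distil-whisper/librispeech_asr", "all", split="train.clean.100", streaming=True)
dataset = split_dataset_by_node(dataset, rank=jax.process_index(), world_size=jax.process_count())
dataset_text = dataset.remove_columns(set(dataset.features.keys()) - {"text"})

# never ask for more workers than the shards this node actually owns
safe_num_workers = min(14, dataset_text.n_shards)
dataloader = DataLoader(dataset_text, batch_size=256, num_workers=safe_num_workers, drop_last=True)
```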
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5881/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5881/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5880
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5880/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5880/comments
https://api.github.com/repos/huggingface/datasets/issues/5880/events
https://github.com/huggingface/datasets/issues/5880
1,719,090,101
I_kwDODunzps5mdzu1
5,880
load_dataset from an S3 file system through streaming cannot iterate over the data
{ "login": "janineguo", "id": 59083384, "node_id": "MDQ6VXNlcjU5MDgzMzg0", "avatar_url": "https://avatars.githubusercontent.com/u/59083384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/janineguo", "html_url": "https://github.com/janineguo", "followers_url": "https://api.github.com/users/janineguo/followers", "following_url": "https://api.github.com/users/janineguo/following{/other_user}", "gists_url": "https://api.github.com/users/janineguo/gists{/gist_id}", "starred_url": "https://api.github.com/users/janineguo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/janineguo/subscriptions", "organizations_url": "https://api.github.com/users/janineguo/orgs", "repos_url": "https://api.github.com/users/janineguo/repos", "events_url": "https://api.github.com/users/janineguo/events{/privacy}", "received_events_url": "https://api.github.com/users/janineguo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "This sounds related to #5281.\r\n\r\nCan you try passing `storage_options=s3_client.storage_options` instead passing it to `use_auth_token=` ?", "I tried `storage_options` before, but it doesn't work, I checked our source code and I found that we even didn't pass this parameter to the following process. if I use `storage_options` instead of `use_auth_token`, then I also need to change another place of the code. the last line of `streaming_download_manager.py`. our code only passes the `use_auth_token` to the following handler, but does nothing to the `storage_options`\r\n<img width=\"1050\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/59083384/5be90933-3331-4ecf-9e11-34f9852d8f92\">\r\n" ]
"2023-05-22T07:40:27"
"2023-05-26T06:12:04"
null
NONE
null
null
null
### Describe the bug I have a JSON file in my S3 file system (MinIO). I can use load_dataset to get the file link, but I can't iterate over it. <img width="816" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/cc0778d3-36f3-45b5-ac68-4e7c664c2ed0"> <img width="1144" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/76872af3-8b3c-42ff-9f55-528c920a7af1"> We can change 4 lines to fix this bug; please check whether this change works for you. <img width="941" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/5a22155a-ece7-496c-8506-047e5c235cd3"> ### Steps to reproduce the bug 1. Store a file in your S3 file system 2. Use load_dataset to read it through streaming 3. Iterate over it ### Expected behavior The file can be iterated over successfully ### Environment info - `datasets` version: 2.12.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.16 - Huggingface_hub version: 0.14.1 - PyArrow version: 12.0.0 - Pandas version: 2.0.1
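For reference, a heavily hedged sketch of the call shape discussed in the comments (passing `storage_options` through `load_dataset`); the endpoint, bucket and credentials are placeholders, and the issue above is precisely that this streaming path does not yet work end to end without the proposed fix.

```python
from datasets import load_dataset

storage_options = {
    "key": "minio-access-key",        # placeholder credentials
    "secret": "minio-secret-key",
    "client_kwargs": {"endpoint_url": "http://localhost:9000"},  # placeholder MinIO endpoint
}

ds = load_dataset(
    "json",
    data_files="s3://my-bucket/data.json",  # placeholder object path
    streaming=True,
    storage_options=storage_options,
)
for example in ds["train"]:
    print(example)
    break
```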
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5880/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/5880/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5878
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5878/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5878/comments
https://api.github.com/repos/huggingface/datasets/issues/5878/events
https://github.com/huggingface/datasets/issues/5878
1,718,203,843
I_kwDODunzps5mabXD
5,878
Prefetching for IterableDataset
{ "login": "vyeevani", "id": 30946190, "node_id": "MDQ6VXNlcjMwOTQ2MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/30946190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vyeevani", "html_url": "https://github.com/vyeevani", "followers_url": "https://api.github.com/users/vyeevani/followers", "following_url": "https://api.github.com/users/vyeevani/following{/other_user}", "gists_url": "https://api.github.com/users/vyeevani/gists{/gist_id}", "starred_url": "https://api.github.com/users/vyeevani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vyeevani/subscriptions", "organizations_url": "https://api.github.com/users/vyeevani/orgs", "repos_url": "https://api.github.com/users/vyeevani/repos", "events_url": "https://api.github.com/users/vyeevani/events{/privacy}", "received_events_url": "https://api.github.com/users/vyeevani/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Very cool! Do you have a link to the code that you're using to eagerly fetch the data? Would also be interested in hacking around something here for pre-fetching iterable datasets", "I ended up just switching back to the pytorch dataloader and using it's multiprocessing functionality to handle this :(. I'm just not that familiar with python multiprocessing to get something to work in jupyter (kept having weird behaviors happening with zombies living after the cell finished).", "Ultimately settled on using webdataset to circumvent huggingface datasets entirely. Would definitely switch back if: https://github.com/huggingface/datasets/issues/5337 was resolved.", "Hi! You can combine `datasets` with `torchdata` to prefetch `IterableDataset`'s samples:\r\n```python\r\nfrom datasets import load_dataset\r\nfrom torchdata.datapipes.iter import IterableWrapper, HuggingFaceHubReader\r\nfrom torch.utils.data import DataLoader\r\n\r\nds = load_dataset(\"sst\", split=\"train\", streaming=True)\r\n# processing...\r\ndp = IterableWrapper(ds)\r\ndp = dp.prefetch(100)\r\ndl = DataLoader(dp, batch_size=8)\r\n\r\ni = iter(dl)\r\nnext(i)\r\n```" ]
"2023-05-20T15:25:40"
"2023-05-23T16:45:55"
null
NONE
null
null
null
### Feature request Add support for prefetching the next n batches through iterabledataset to reduce batch loading bottleneck in training loop. ### Motivation The primary motivation behind this is to use hardware accelerators alongside a streaming dataset. This is required when you are in a low ram or low disk space setting as well as quick iteration where you're iterating though different accelerator environments (e.x changing ec2 instances quickly to figure out batch/sec for a particular architecture). Currently, using the IterableDataset results in accelerators becoming basically useless due to the massive bottleneck induced by the dataset lazy loading/transform/mapping. I've considered two alternatives: PyTorch dataloader that handles this. However, I'm using jax, and I believe this is a piece of functionality that should live in the stream class. Replicating the "num_workers" part of the PyTorch DataLoader to eagerly load batches and apply the transform so Arrow caching will automatically cache results and make them accessible. ### Your contribution I may or may not have time to do this. Currently, I've written the basic multiprocessor approach to handle the eager DataLoader for my own use case with code that's not integrated to datasets. I'd definitely see this as being the default over the regular Dataset for most people given that they wouldn't have to wait on the datasets while also not worrying about performance.
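Since the request is essentially to keep the next examples ready while the accelerator is busy, here is a minimal, framework-agnostic sketch of what such prefetching could look like outside the library: a background thread filling a bounded queue. This is not part of the `datasets` API.

```python
import queue
import threading

def prefetch(iterable, buffer_size=64):
    """Eagerly pull items from `iterable` on a background thread."""
    q = queue.Queue(maxsize=buffer_size)
    sentinel = object()

    def producer():
        for item in iterable:
            q.put(item)
        q.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is sentinel:
            return
        yield item

# usage sketch: wrap a streaming dataset (or any iterator over batches)
# for example in prefetch(my_iterable_dataset):
#     ...
```

The `torchdata` snippet in the comments above is the ready-made equivalent for PyTorch users.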
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5878/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5878/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5877/comments
https://api.github.com/repos/huggingface/datasets/issues/5877/events
https://github.com/huggingface/datasets/issues/5877
1,717,983,961
I_kwDODunzps5mZlrZ
5,877
Request for text deduplication feature
{ "login": "SupreethRao99", "id": 55043035, "node_id": "MDQ6VXNlcjU1MDQzMDM1", "avatar_url": "https://avatars.githubusercontent.com/u/55043035?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SupreethRao99", "html_url": "https://github.com/SupreethRao99", "followers_url": "https://api.github.com/users/SupreethRao99/followers", "following_url": "https://api.github.com/users/SupreethRao99/following{/other_user}", "gists_url": "https://api.github.com/users/SupreethRao99/gists{/gist_id}", "starred_url": "https://api.github.com/users/SupreethRao99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SupreethRao99/subscriptions", "organizations_url": "https://api.github.com/users/SupreethRao99/orgs", "repos_url": "https://api.github.com/users/SupreethRao99/repos", "events_url": "https://api.github.com/users/SupreethRao99/events{/privacy}", "received_events_url": "https://api.github.com/users/SupreethRao99/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
"2023-05-20T01:56:00"
"2023-05-20T01:56:00"
null
NONE
null
null
null
### Feature request It would be great if there were support for high-performance, highly scalable text deduplication algorithms as part of the `datasets` library. ### Motivation Motivated by this blog post https://huggingface.co/blog/dedup and this library https://github.com/google-research/deduplicate-text-datasets, but slightly frustrated by how hard these tools are to work with, I am proposing this feature. ### Your contribution I would be happy to contribute to the development effort of this feature, and I would love to collaborate with others on it.
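To anchor the request, a hedged sketch of the simplest exact-duplicate pass that is already possible with `Dataset.filter`; the dataset name is just an example, and a scalable near-duplicate pipeline (MinHash or suffix arrays, as in the linked blog post) is considerably more involved than this.

```python
import hashlib

from datasets import load_dataset

ds = load_dataset("imdb", split="train")  # example dataset

seen = set()

def is_first_occurrence(example):
    # normalise lightly, then keep only the first occurrence of each text
    digest = hashlib.md5(example["text"].strip().lower().encode("utf-8")).hexdigest()
    if digest in seen:
        return False
    seen.add(digest)
    return True

# note: the shared `seen` set only works with the default single-process filter
deduped = ds.filter(is_first_occurrence)
print(len(ds), "->", len(deduped))
```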
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5877/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5877/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5876/comments
https://api.github.com/repos/huggingface/datasets/issues/5876/events
https://github.com/huggingface/datasets/issues/5876
1,717,978,985
I_kwDODunzps5mZkdp
5,876
Incompatibility with DataLab
{ "login": "helpmefindaname", "id": 26192135, "node_id": "MDQ6VXNlcjI2MTkyMTM1", "avatar_url": "https://avatars.githubusercontent.com/u/26192135?v=4", "gravatar_id": "", "url": "https://api.github.com/users/helpmefindaname", "html_url": "https://github.com/helpmefindaname", "followers_url": "https://api.github.com/users/helpmefindaname/followers", "following_url": "https://api.github.com/users/helpmefindaname/following{/other_user}", "gists_url": "https://api.github.com/users/helpmefindaname/gists{/gist_id}", "starred_url": "https://api.github.com/users/helpmefindaname/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/helpmefindaname/subscriptions", "organizations_url": "https://api.github.com/users/helpmefindaname/orgs", "repos_url": "https://api.github.com/users/helpmefindaname/repos", "events_url": "https://api.github.com/users/helpmefindaname/events{/privacy}", "received_events_url": "https://api.github.com/users/helpmefindaname/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
[ "Indeed, `clobber=True` (with a warning if the existing protocol will be overwritten) should fix the issue, but maybe a better solution is to register our compression filesystem before the script is executed and unregister them afterward. WDYT @lhoestq @albertvillanova?", "I think we should use clobber and show a warning if it overwrote a registered filesystem indeed ! This way the user can re-register the filesystems if needed. Though they should probably be compatible (and maybe do the exact same thing) so I wouldn't de-register the `datasets` filesystems" ]
"2023-05-20T01:39:11"
"2023-05-25T06:42:34"
"2023-05-25T06:42:34"
NONE
null
null
null
### Describe the bug Hello, I am currently working on a project where both [DataLab](https://github.com/ExpressAI/DataLab) and [datasets](https://github.com/huggingface/datasets) are subdependencies. I noticed that I cannot import both libraries, as they both register FileSystems in `fsspec`, expecting the FileSystems not being registered before. When running the code below, I get the following error: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\__init__.py", line 28, in <module> from datalabs.arrow_dataset import concatenate_datasets, Dataset File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\arrow_dataset.py", line 60, in <module> from datalabs.arrow_writer import ArrowWriter, OptimizedTypedSequence File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\arrow_writer.py", line 28, in <module> from datalabs.features import ( File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\features\__init__.py", line 2, in <module> from datalabs.features.audio import Audio File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\features\audio.py", line 21, in <module> from datalabs.utils.streaming_download_manager import xopen File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\utils\streaming_download_manager.py", line 16, in <module> from datalabs.filesystems import COMPRESSION_FILESYSTEMS File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\filesystems\__init__.py", line 37, in <module> fsspec.register_implementation(fs_class.protocol, fs_class) File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\fsspec\registry.py", line 51, in register_implementation raise ValueError( ValueError: Name (bz2) already in the registry and clobber is False ``` I think as simple solution would be to just set `clobber=True` in https://github.com/huggingface/datasets/blob/main/src/datasets/filesystems/__init__.py#L28. This allows the register to discard previous registrations. This should work, as the datalabs FileSystems are copies of the datasets FileSystems. However, I don't know if it is guaranteed to be compatible with other libraries that might use the same protocols. I am linking the symmetric issue on [DataLab](https://github.com/ExpressAI/DataLab/issues/425) as ideally the issue is solved in both libraries the same way. Otherwise, it could lead to different behaviors depending on which library gets imported first. ### Steps to reproduce the bug 1. Run `pip install datalabs==0.4.15 datasets==2.12.0` 2. Run the following python code: ``` import datalabs import datasets ``` ### Expected behavior It should be possible to import both libraries without getting a Value Error ### Environment info datalabs==0.4.15 datasets==2.12.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5876/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5875
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5875/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5875/comments
https://api.github.com/repos/huggingface/datasets/issues/5875/events
https://github.com/huggingface/datasets/issues/5875
1,716,770,394
I_kwDODunzps5mU9Za
5,875
Why split slicing doesn't behave like list slicing ?
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
open
false
null
[]
null
[ "A duplicate of https://github.com/huggingface/datasets/issues/1774" ]
"2023-05-19T07:21:10"
"2023-05-23T16:02:14"
null
NONE
null
null
null
### Describe the bug If I want to get the first 10 samples of my dataset, I can do : ``` ds = datasets.load_dataset('mnist', split='train[:10]') ``` But if I exceed the number of samples in the dataset, an exception is raised : ``` ds = datasets.load_dataset('mnist', split='train[:999999999]') ``` > ValueError: Requested slice [:999999999] incompatible with 60000 examples. ### Steps to reproduce the bug ``` ds = datasets.load_dataset('mnist', split='train[:999999999]') ``` ### Expected behavior I would expect it to behave like python lists (no exception raised, the whole list is kept) : ``` d = list(range(1000))[:999999] print(len(d)) # > 1000 ``` ### Environment info - `datasets` version: 2.9.0 - Platform: macOS-12.6-arm64-arm-64bit - Python version: 3.9.12 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5875/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5874
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5874/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5874/comments
https://api.github.com/repos/huggingface/datasets/issues/5874/events
https://github.com/huggingface/datasets/issues/5874
1,715,708,930
I_kwDODunzps5mQ6QC
5,874
Using as_dataset on a "parquet" builder
{ "login": "rems75", "id": 9039058, "node_id": "MDQ6VXNlcjkwMzkwNTg=", "avatar_url": "https://avatars.githubusercontent.com/u/9039058?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rems75", "html_url": "https://github.com/rems75", "followers_url": "https://api.github.com/users/rems75/followers", "following_url": "https://api.github.com/users/rems75/following{/other_user}", "gists_url": "https://api.github.com/users/rems75/gists{/gist_id}", "starred_url": "https://api.github.com/users/rems75/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rems75/subscriptions", "organizations_url": "https://api.github.com/users/rems75/orgs", "repos_url": "https://api.github.com/users/rems75/repos", "events_url": "https://api.github.com/users/rems75/events{/privacy}", "received_events_url": "https://api.github.com/users/rems75/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi! You can refer to [this doc](https://huggingface.co/docs/datasets/filesystems#load-and-save-your-datasets-using-your-cloud-storage-filesystem) to see the intended usage (basically, it skips the Arrow -> Parquet conversion step in `ds = load_dataset(...); ds.to_parquet(\"path/to/parquet\")`) and allows writing Parquet to remote storage unlike `to_parquet`).\r\n\r\n> I guess I'd expect as_dataset to generate the dataset in arrow format if it has to, or to suggest an alternative way to load the dataset (I've also tried other methods with load_dataset to no avail, probably due to misunderstandings on my part).\r\n\r\n`as_dataset` does not work with `file_format=\"parquet\"` files as Parquet files cannot be memory-mapped, so I think we should just raise an error in that case.\r\n" ]
"2023-05-18T14:09:03"
"2023-05-25T17:56:09"
null
NONE
null
null
null
### Describe the bug I used a custom builder to ``download_and_prepare`` a dataset. The first (very minor) issue is that the doc seems to suggest ``download_and_prepare`` will return the dataset, while it does not ([builder.py](https://github.com/huggingface/datasets/blob/main/src/datasets/builder.py#L718-L738)). ``` >>> from datasets import load_dataset_builder >>> builder = load_dataset_builder("rotten_tomatoes") >>> ds = builder.download_and_prepare("./output_dir", file_format="parquet") ``` The main issue I am facing is loading the dataset from those parquet files. I used the `as_dataset` method suggested by the doc, however it returns: ` FileNotFoundError: [Errno 2] Failed to open local file 'output_dir/__main__-train-00000-of-00245.arrow'. Detail: [errno 2] No such file or directory. ` ### Steps to reproduce the bug 1. Create a custom builder of some sort: `builder = CustomBuilder()`. 2. Run `download_and_prepare` with the parquet format: `builder.download_and_prepare("./output_dir", file_format="parquet")`. 3. Run `dataset = builder.as_dataset()`. ### Expected behavior I guess I'd expect `as_dataset` to generate the dataset in arrow format if it has to, or to suggest an alternative way to load the dataset (I've also tried other methods with `load_dataset` to no avail, probably due to misunderstandings on my part). ### Environment info ``` - `datasets` version: 2.12.0 - Platform: Linux-5.15.0-1027-gcp-x86_64-with-glibc2.31 - Python version: 3.10.0 - Huggingface_hub version: 0.14.1 - PyArrow version: 8.0.0 - Pandas version: 1.5.3 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5874/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5873
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5873/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5873/comments
https://api.github.com/repos/huggingface/datasets/issues/5873/events
https://github.com/huggingface/datasets/issues/5873
1,713,269,724
I_kwDODunzps5mHmvc
5,873
Allow setting the environment variable for the lock file path
{ "login": "xin3he", "id": 83260933, "node_id": "MDQ6VXNlcjgzMjYwOTMz", "avatar_url": "https://avatars.githubusercontent.com/u/83260933?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xin3he", "html_url": "https://github.com/xin3he", "followers_url": "https://api.github.com/users/xin3he/followers", "following_url": "https://api.github.com/users/xin3he/following{/other_user}", "gists_url": "https://api.github.com/users/xin3he/gists{/gist_id}", "starred_url": "https://api.github.com/users/xin3he/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xin3he/subscriptions", "organizations_url": "https://api.github.com/users/xin3he/orgs", "repos_url": "https://api.github.com/users/xin3he/repos", "events_url": "https://api.github.com/users/xin3he/events{/privacy}", "received_events_url": "https://api.github.com/users/xin3he/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
"2023-05-17T07:10:02"
"2023-05-17T07:11:05"
null
NONE
null
null
null
### Feature request Add an environment variable to replace the default lock file path. ### Motivation Usually, dataset path is a read-only path while the lock file needs to be modified each time. It would be convenient if the path can be reset individually. ### Your contribution ```/src/datasets/utils/filelock.py class UnixFileLock(BaseFileLock): def __init__(self, lock_file, timeout=-1, max_filename_length=None): #------------------- if os.getenv('DS_TMP_PATH'): file_name = str(lock_file).split('/')[-1] dataset_tmp_path = os.getenv('DS_TMP_PATH') lock_file = os.path.join(dataset_tmp_path, file_name) #------------------- max_filename_length = os.statvfs(os.path.dirname(lock_file)).f_namemax super().__init__(lock_file, timeout=timeout, max_filename_length=max_filename_length) ``` A simple demo is as upper. Thanks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5873/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5873/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5872
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5872/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5872/comments
https://api.github.com/repos/huggingface/datasets/issues/5872/events
https://github.com/huggingface/datasets/pull/5872
1,713,174,662
PR_kwDODunzps5QrQ5o
5,872
Fix infer module for uppercase extensions
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007049 / 0.011353 (-0.004304) | 0.005034 / 0.011008 (-0.005974) | 0.097737 / 0.038508 (0.059229) | 0.033280 / 0.023109 (0.010170) | 0.301017 / 0.275898 (0.025119) | 0.336593 / 0.323480 (0.013113) | 0.005567 / 0.007986 (-0.002419) | 0.005384 / 0.004328 (0.001056) | 0.072980 / 0.004250 (0.068730) | 0.045030 / 0.037052 (0.007978) | 0.303280 / 0.258489 (0.044791) | 0.367528 / 0.293841 (0.073687) | 0.034131 / 0.128546 (-0.094415) | 0.012118 / 0.075646 (-0.063528) | 0.331677 / 0.419271 (-0.087594) | 0.049211 / 0.043533 (0.005678) | 0.297535 / 0.255139 (0.042396) | 0.318136 / 0.283200 (0.034936) | 0.101574 / 0.141683 (-0.040109) | 1.472769 / 1.452155 (0.020615) | 1.541724 / 1.492716 (0.049007) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.014646 / 0.018006 (-0.003360) | 0.439050 / 0.000490 (0.438560) | 0.008575 / 0.000200 (0.008375) | 0.000297 / 0.000054 (0.000242) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027591 / 0.037411 (-0.009820) | 0.111639 / 0.014526 (0.097113) | 0.117098 / 0.176557 (-0.059458) | 0.173281 / 0.737135 (-0.563855) | 0.123197 / 0.296338 (-0.173141) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397507 / 0.215209 (0.182298) | 3.971457 / 2.077655 (1.893803) | 
1.781158 / 1.504120 (0.277038) | 1.590419 / 1.541195 (0.049224) | 1.716374 / 1.468490 (0.247884) | 0.687150 / 4.584777 (-3.897627) | 3.691009 / 3.745712 (-0.054703) | 2.050900 / 5.269862 (-3.218961) | 1.304893 / 4.565676 (-3.260784) | 0.084507 / 0.424275 (-0.339768) | 0.012231 / 0.007607 (0.004624) | 0.493033 / 0.226044 (0.266988) | 4.929957 / 2.268929 (2.661028) | 2.209069 / 55.444624 (-53.235555) | 1.885992 / 6.876477 (-4.990485) | 2.007004 / 2.142072 (-0.135069) | 0.827265 / 4.805227 (-3.977963) | 0.168225 / 6.500664 (-6.332439) | 0.064988 / 0.075469 (-0.010481) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.182341 / 1.841788 (-0.659447) | 14.691983 / 8.074308 (6.617674) | 14.350720 / 10.191392 (4.159328) | 0.164307 / 0.680424 (-0.516117) | 0.017480 / 0.534201 (-0.516720) | 0.421843 / 0.579283 (-0.157441) | 0.417481 / 0.434364 (-0.016883) | 0.496587 / 0.540337 (-0.043751) | 0.581208 / 1.386936 (-0.805728) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007070 / 0.011353 (-0.004283) | 0.005083 / 0.011008 (-0.005926) | 0.075009 / 0.038508 (0.036500) | 0.032343 / 0.023109 (0.009234) | 0.366788 / 0.275898 (0.090890) | 0.392273 / 0.323480 (0.068794) | 0.005512 / 0.007986 (-0.002474) | 0.003999 / 0.004328 (-0.000329) | 0.073743 / 0.004250 (0.069492) | 0.046203 / 0.037052 (0.009151) | 0.367874 / 0.258489 (0.109385) | 0.409154 / 0.293841 (0.115313) | 0.035227 / 0.128546 (-0.093319) | 0.012223 / 0.075646 (-0.063424) | 0.087149 / 0.419271 (-0.332122) | 0.045648 / 0.043533 (0.002115) | 0.362414 / 0.255139 (0.107275) | 0.379970 / 0.283200 (0.096770) | 0.100631 / 0.141683 (-0.041052) | 1.439733 / 1.452155 (-0.012422) | 1.506266 / 1.492716 (0.013550) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227071 / 0.018006 (0.209065) | 0.451243 / 0.000490 (0.450753) | 0.000406 / 0.000200 (0.000206) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028952 / 0.037411 (-0.008459) | 0.111934 / 0.014526 (0.097408) | 0.124080 / 0.176557 (-0.052477) | 0.174022 / 0.737135 (-0.563113) | 0.126811 / 0.296338 (-0.169527) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436423 / 0.215209 (0.221214) | 4.331959 / 2.077655 (2.254304) | 2.111914 / 1.504120 (0.607794) | 1.921338 / 1.541195 (0.380143) | 1.994425 / 1.468490 (0.525935) | 0.699164 / 4.584777 (-3.885613) | 3.722143 / 3.745712 (-0.023569) | 3.516538 / 5.269862 (-1.753323) | 1.867245 / 4.565676 (-2.698431) | 0.085923 / 0.424275 (-0.338352) | 0.012059 / 0.007607 (0.004452) | 0.586147 / 0.226044 (0.360102) | 5.395823 / 2.268929 (3.126894) | 2.594430 / 55.444624 (-52.850194) | 2.275021 / 6.876477 (-4.601456) | 2.347810 / 2.142072 (0.205737) | 0.835118 / 4.805227 (-3.970109) | 0.167089 / 6.500664 (-6.333575) | 0.064893 / 0.075469 (-0.010576) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291423 / 1.841788 (-0.550365) | 14.992696 / 8.074308 (6.918388) | 13.307842 / 10.191392 (3.116450) | 0.163799 / 0.680424 (-0.516625) | 0.017315 / 0.534201 (-0.516886) | 0.461319 / 0.579283 (-0.117965) | 0.430474 / 0.434364 (-0.003889) | 0.568115 / 0.540337 (0.027777) | 0.647909 / 1.386936 (-0.739027) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a5161c9ecdcdde9cc99c7f212da13523d5ba6bdb \"CML watermark\")\n" ]
"2023-05-17T05:56:45"
"2023-05-17T14:26:59"
"2023-05-17T14:19:18"
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5872", "html_url": "https://github.com/huggingface/datasets/pull/5872", "diff_url": "https://github.com/huggingface/datasets/pull/5872.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5872.patch", "merged_at": "2023-05-17T14:19:18" }
Fix the `infer_module_for_data_files` and `infer_module_for_data_files_in_archives` functions when passed a data file name with uppercase extension, e.g. `filename.TXT`. Before, `None` module was returned.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5872/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5871/comments
https://api.github.com/repos/huggingface/datasets/issues/5871/events
https://github.com/huggingface/datasets/issues/5871
1,712,573,073
I_kwDODunzps5mE8qR
5,871
data configuration hash suffix depends on uncanonicalized data_dir
{ "login": "kylrth", "id": 5044802, "node_id": "MDQ6VXNlcjUwNDQ4MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/5044802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kylrth", "html_url": "https://github.com/kylrth", "followers_url": "https://api.github.com/users/kylrth/followers", "following_url": "https://api.github.com/users/kylrth/following{/other_user}", "gists_url": "https://api.github.com/users/kylrth/gists{/gist_id}", "starred_url": "https://api.github.com/users/kylrth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kylrth/subscriptions", "organizations_url": "https://api.github.com/users/kylrth/orgs", "repos_url": "https://api.github.com/users/kylrth/repos", "events_url": "https://api.github.com/users/kylrth/events{/privacy}", "received_events_url": "https://api.github.com/users/kylrth/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
open
false
{ "login": "kylrth", "id": 5044802, "node_id": "MDQ6VXNlcjUwNDQ4MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/5044802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kylrth", "html_url": "https://github.com/kylrth", "followers_url": "https://api.github.com/users/kylrth/followers", "following_url": "https://api.github.com/users/kylrth/following{/other_user}", "gists_url": "https://api.github.com/users/kylrth/gists{/gist_id}", "starred_url": "https://api.github.com/users/kylrth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kylrth/subscriptions", "organizations_url": "https://api.github.com/users/kylrth/orgs", "repos_url": "https://api.github.com/users/kylrth/repos", "events_url": "https://api.github.com/users/kylrth/events{/privacy}", "received_events_url": "https://api.github.com/users/kylrth/received_events", "type": "User", "site_admin": false }
[ { "login": "kylrth", "id": 5044802, "node_id": "MDQ6VXNlcjUwNDQ4MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/5044802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kylrth", "html_url": "https://github.com/kylrth", "followers_url": "https://api.github.com/users/kylrth/followers", "following_url": "https://api.github.com/users/kylrth/following{/other_user}", "gists_url": "https://api.github.com/users/kylrth/gists{/gist_id}", "starred_url": "https://api.github.com/users/kylrth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kylrth/subscriptions", "organizations_url": "https://api.github.com/users/kylrth/orgs", "repos_url": "https://api.github.com/users/kylrth/repos", "events_url": "https://api.github.com/users/kylrth/events{/privacy}", "received_events_url": "https://api.github.com/users/kylrth/received_events", "type": "User", "site_admin": false } ]
null
[ "It could even use `os.path.realpath` to resolve symlinks.", "Indeed, it makes sense to normalize `data_dir`. Feel free to submit a PR (this can be \"fixed\" [here](https://github.com/huggingface/datasets/blob/89f775226321ba94e5bf4670a323c0fb44f5f65c/src/datasets/builder.py#L173))", "#self-assign" ]
"2023-05-16T18:56:04"
"2023-05-25T17:42:39"
null
NONE
null
null
null
### Describe the bug I am working with the `recipe_nlg` dataset, which requires manual download. Once it's downloaded, I've noticed that the hash in the custom data configuration is different if I add a trailing `/` to my `data_dir`. It took me a while to notice that the hashes were different, and to understand that that was the cause of my dataset being processed anew instead of the cached version being used. ### Steps to reproduce the bug 1. Follow the steps to manually download the `recipe_nlg` dataset to `/data/recipenlg`. 2. Load it using `load_dataset`, once without a trailing slash and once with one: ```python >>> ds = load_dataset("recipe_nlg", data_dir="/data/recipenlg") Using custom data configuration default-082278caeea85765 Downloading and preparing dataset recipe_nlg/default to /home/kyle/.cache/huggingface/datasets/recipe_nlg/default-082278caeea85765/1.0.0/aa4f120223637bedf7360cecb70a9bd108acfd64e38207ca90c9f385d21e5e74... Dataset recipe_nlg downloaded and prepared to /home/kyle/.cache/huggingface/datasets/recipe_nlg/default-082278caeea85765/1.0.0/aa4f120223637bedf7360cecb70a9bd108acfd64e38207ca90c9f385d21e5e74. Subsequent calls will reuse this data. 100%|███████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.10s/it] DatasetDict({ train: Dataset({ features: ['id', 'title', 'ingredients', 'directions', 'link', 'source', 'ner'], num_rows: 2231142 }) }) >>> ds = load_dataset("recipe_nlg", data_dir="/data/recipenlg/") Using custom data configuration default-83e87680785d0493 Downloading and preparing dataset recipe_nlg/default to /home/user/.cache/huggingface/datasets/recipe_nlg/default-83e87680785d0493/1.0.0/aa4f120223637bedf7360cecb70a9bd108acfd64e38207ca90c9f385d21e5e74... Generating train split: 1%| | 12701/2231142 [00:04<13:15, 2790.25 examples/s ^C ``` 3. Observe that the hash suffix in the custom data configuration changes due to the altered string. ### Expected behavior I think I would expect the hash to remain constant if it actually points to the same location on disk. I would expect the use of `os.path.normpath` to canonicalize the paths. ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-5.4.0-147-generic-x86_64-with-glibc2.31 - Python version: 3.10.8 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5871/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5870/comments
https://api.github.com/repos/huggingface/datasets/issues/5870/events
https://github.com/huggingface/datasets/issues/5870
1,712,156,282
I_kwDODunzps5mDW56
5,870
Behaviour difference between datasets.map and IterableDatasets.map
{ "login": "llStringll", "id": 30209072, "node_id": "MDQ6VXNlcjMwMjA5MDcy", "avatar_url": "https://avatars.githubusercontent.com/u/30209072?v=4", "gravatar_id": "", "url": "https://api.github.com/users/llStringll", "html_url": "https://github.com/llStringll", "followers_url": "https://api.github.com/users/llStringll/followers", "following_url": "https://api.github.com/users/llStringll/following{/other_user}", "gists_url": "https://api.github.com/users/llStringll/gists{/gist_id}", "starred_url": "https://api.github.com/users/llStringll/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/llStringll/subscriptions", "organizations_url": "https://api.github.com/users/llStringll/orgs", "repos_url": "https://api.github.com/users/llStringll/repos", "events_url": "https://api.github.com/users/llStringll/events{/privacy}", "received_events_url": "https://api.github.com/users/llStringll/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "PS - some work is definitely needed for 'special cases' docs, not explanations, just usages of 'functions' under mixture of special cases, like a combination of custom databuilder + iterable dataset for large size + dynamic .map() application." ]
"2023-05-16T14:32:57"
"2023-05-16T14:36:05"
null
NONE
null
null
null
### Describe the bug All the examples in all the docs mentioned throughout huggingface datasets correspond to datasets object, and not IterableDatasets object. At one point of time, they might have been in sync, but the code for datasets version >=2.9.0 is very different as compared to the docs. I basically need to .map() a transform on images in an iterable dataset, which was made using a custom databuilder config. This works very good in map-styles datasets, but the .map() fails in IterableDatasets, show behvaiour as such: "pixel_values" key not found, KeyError in examples object/dict passed into transform function for map, which works fine with map style, even as batch. In iterable style, the object/dict passed into map() paramter callable function is completely different as what is mentioned in all examples. Please look into this. Thank you My databuilder class is inherited as such: def _info(self): print ("Config: ",self.config.__dict__.keys()) return datasets.DatasetInfo( description=_DESCRIPTION, features=datasets.Features( { "labels": datasets.Sequence(datasets.Value("uint16")), # "labels_name": datasets.Value("string"), # "pixel_values": datasets.Array3D(shape=(3, 1280, 960), dtype="float32"), "pixel_values": datasets.Array3D(shape=(1280, 960, 3), dtype="uint8"), "image_s3_path": datasets.Value("string"), } ), supervised_keys=None, homepage="none", citation="", ) def _split_generators(self, dl_manager): records_train = list(db.mini_set.find({'split':'train'},{'image_s3_path':1, 'ocwen_template_name':1}))[:10000] records_val = list(db.mini_set.find({'split':'val'},{'image_s3_path':1, 'ocwen_template_name':1}))[:1000] # print (len(records),self.config.num_shards) # shard_size_train = len(records_train)//self.config.num_shards # sharded_records_train = [records_train[i:i+shard_size_train] for i in range(0,len(records_train),shard_size_train)] # shard_size_val = len(records_val)//self.config.num_shards # sharded_records_val = [records_val[i:i+shard_size_val] for i in range(0,len(records_val),shard_size_val)] return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={"records":records_train} # passing list of records, for sharding to take over ), datasets.SplitGenerator( name=datasets.Split.VALIDATION, gen_kwargs={"records":records_val} # passing list of records, for sharding to take over ), ] def _generate_examples(self, records): # print ("Generating examples for [{}] shards".format(len(shards))) # initiate_db_connection() # records = list(db.mini_set.find({'split':split},{'image_s3_path':1, 'ocwen_template_name':1}))[:10] id_ = 0 # for records in shards: for i,rec in enumerate(records): img_local_path = fetch_file(rec['image_s3_path'],self.config.buffer_dir) # t = self.config.processor(Image.open(img_local_path), random_padding=True, return_tensors="np").pixel_values.squeeze() # print (t.shape, type(t),type(t[0][0][0])) # sys.exit() pvs = np.array(Image.open(img_local_path).resize((1280,960))) # image object is wxh, so resize as per that, numpy array of it is hxwxc, transposing to cxwxh # pvs = self.config.processor(Image.open(img_local_path), random_padding=True, return_tensors="np").pixel_values.astype(np.float16).squeeze() # print (type(pvs[0][0][0])) lblids = self.config.processor.tokenizer('<s_class>'+rec['ocwen_template_name']+'</s_class>'+'</s>', add_special_tokens=False, padding=False, truncation=False, return_tensors="np")["input_ids"].squeeze(0) # take padding later, as per batch collating # print (len(lblids),type(lblids[0])) # print 
(type(pvs),pvs.shape,type(pvs[0][0][0]), type(lblids)) yield id_, {"labels":lblids,"pixel_values":pvs,"image_s3_path":rec['image_s3_path']} id_+=1 os.remove(img_local_path) and I load it inside my trainer script as such `ds = load_dataset("/tmp/DonutDS/dataset/", split="train", streaming=True) # iterable dataset, where .map() falls` or also as `ds = load_from_disk('/tmp/DonutDS/dataset/') #map style dataset` Thank you to the team for having such a great library, and for this bug fix in advance! ### Steps to reproduce the bug Above config can allow one to reproduce the said bug ### Expected behavior .map() should show some consistency b/w map-style and iterable-style datasets, or atleast the docs should address iterable-style datasets behaviour and examples. I honestly do not figure the use of such docs. ### Environment info datasets==2.9.0 transformers==4.26.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5870/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5869
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5869/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5869/comments
https://api.github.com/repos/huggingface/datasets/issues/5869/events
https://github.com/huggingface/datasets/issues/5869
1,711,990,003
I_kwDODunzps5mCuTz
5,869
Image Encoding Issue when submitting a Parquet Dataset
{ "login": "PhilippeMoussalli", "id": 47530815, "node_id": "MDQ6VXNlcjQ3NTMwODE1", "avatar_url": "https://avatars.githubusercontent.com/u/47530815?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilippeMoussalli", "html_url": "https://github.com/PhilippeMoussalli", "followers_url": "https://api.github.com/users/PhilippeMoussalli/followers", "following_url": "https://api.github.com/users/PhilippeMoussalli/following{/other_user}", "gists_url": "https://api.github.com/users/PhilippeMoussalli/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilippeMoussalli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilippeMoussalli/subscriptions", "organizations_url": "https://api.github.com/users/PhilippeMoussalli/orgs", "repos_url": "https://api.github.com/users/PhilippeMoussalli/repos", "events_url": "https://api.github.com/users/PhilippeMoussalli/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilippeMoussalli/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi @PhilippeMoussalli thanks for opening a detailed issue. It seems the issue is more related to the `datasets` library so I'll ping @lhoestq @mariosasko on this one :) \n\n(edit: also can one of you move the issue to the datasets repo? Thanks in advance 🙏)", "Hi ! The `Image()` info is stored in the **schema metadata**. More precisely there should be a \"huggingface\" field in the schema metadata that contains the `datasets` feature type of each column.\r\n\r\nTo fix your issue, you can use the same schema as the original Parquet files to write the new ones. You can also get the schema with metadata from a `Features` object, e.g.\r\n\r\n```python\r\nfrom datasets import Features, Image, Value\r\n\r\nfeatures = Features({\"image\": Image(), \"text\": Value(\"string\")})\r\nschema = features.arrow_schema\r\nprint(schema.metadata)\r\n# {b'huggingface': b'{\"info\": {\"features\": {\"image\": {\"_type\": \"Image\"}, \"text\": {\"dtype\": \"string\", \"_type\": \"Value\"}}}}'}\r\n```", "It appears that the parquet files at `hf://datasets/lambdalabs/pokemon-blip-captions` don't have this metadata, and it is defined in the dataset_infos.json instead (legacy).\r\n\r\nYou can get the right schema with the HF metadata this way:\r\n\r\n```python\r\nfrom datasets import load_dataset_builder\r\n\r\nfeatures = load_dataset_builder(\"lambdalabs/pokemon-blip-captions\").info.features\r\nschema = features.arrow_schema\r\n```", "Btw in the future we might add support for an dedicated Image extension type in Arrow so that you won't need to add the schema metadata anymore ;)", "Thanks @Wauplin @lhoestq for the quick reply :)! \r\n\r\nI tried your approach by passing the huggingface schema to the dask writer \r\n\r\n```\r\nfrom datasets import Features, Image, Value\r\ndf = dd.read_parquet(f\"hf://datasets/lambdalabs/pokemon-blip-captions\",index=False)\r\nfeatures = Features({\"image\": Image(), \"text\": Value(\"string\")})\r\nschema = features.arrow_schema\r\ndd.to_parquet(df, path = \"hf://datasets/philippemo/dummy_dataset/data\", schema=schema)\r\n```\r\nAt first it didn't work as I was not able to visualize the images, so then I manually added the `dataset_infos.json` from the example dataset and it worked :)\r\n\r\nHowever, It's not very ideal since there are some metadata in that file that need to be computed in order to load the data properly such as `num_of_bytes` and `num_examples` which might be unknown in my use case. \r\n\r\n![Screenshot from 2023-05-16 16-54-55](https://github.com/huggingface/datasets/assets/47530815/b2b448d2-d3d8-43a7-9682-9c0187a5192b)\r\n\r\nDo you have any pointers there? you mentioned that `datasets_info.json` will be deprecated/legacy. Could you point me to some example image datasets on the hub that are stored as parquet and don't have the `datasets_info.json`?\r\n\r\n", "You don't need the dataset_infos.json file as long as you have the schema with HF metadata ;)\r\nI could also check that it works fine myself on the git revision without the dataset_infos.json file.\r\n\r\nWhat made you think it didn't work ?", "> You don't need the dataset_infos.json file as long as you have the schema with HF metadata ;) I could also check that it works fine myself on the git revision without the dataset_infos.json file.\r\n> \r\n> What made you think it didn't work ?\r\n\r\nThose are two identical dataset repos where both were pushed with dask with the specified schema you mentioned above. 
I then uploaded the `dataset_infos.json` manually taken from the original example dataset into one of them. \r\n\r\n* **With schema**: https://huggingface.co/datasets/philippemo/dummy_dataset_with_schema\r\n* **Without schema**: https://huggingface.co/datasets/philippemo/dummy_dataset_without_schema\r\n\r\nYou can see that in the examples without schema the images fail to render properly. When loaded with `datasets` they return an dict and not a Pillow Image ", "I see ! I think it's a bug on our side - it should work without the metadata - let me investigate", "Alright, it's fixed: https://huggingface.co/datasets/philippemo/dummy_dataset_without_schema\r\n\r\nIt shows the image correctly now - even without the extra metadata :)" ]
"2023-05-16T09:42:58"
"2023-05-25T14:18:14"
null
NONE
null
null
null
### Describe the bug Hello, I'd like to report an issue related to pushing a dataset represented as a Parquet file to a dataset repository using Dask. Here are the details: We attempted to load an example dataset in Parquet format from the Hugging Face (HF) filesystem using Dask with the following code snippet: ``` import dask.dataframe as dd df = dd.read_parquet("hf://datasets/lambdalabs/pokemon-blip-captions",index=False) ``` In this dataset, the "image" column is represented as a dictionary/struct with the format: ``` df = df.compute() df["image"].iloc[0].keys() -> dict_keys(['bytes', 'path']) ``` I think this is the format encoded by the [`Image`](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.Image) feature extractor from datasets to format suitable for Arrow. The next step was to push the dataset to a repository that I created: ``` dd.to_parquet(dask_df, path = "hf://datasets/philippemo/dummy_dataset/data") ``` However, after pushing the dataset using Dask, the "image" column is now represented as the encoded dictionary `(['bytes', 'path'])`, and the images are not properly visualized. You can find the dataset here: [Link to the problematic dataset](https://huggingface.co/datasets/philippemo/dummy_dataset). It's worth noting that both the original dataset and the one submitted with Dask have the same schema with minor alterations related to metadata: **[ Schema of original dummy example.](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions/blob/main/data/train-00000-of-00001-566cc9b19d7203f8.parquet)** ``` image: struct<bytes: binary, path: null> child 0, bytes: binary child 1, path: null text: string ``` **[ Schema of pushed dataset with dask](https://huggingface.co/datasets/philippemo/dummy_dataset/blob/main/data/part.0.parquet)** ``` image: struct<bytes: binary, path: null> child 0, bytes: binary child 1, path: null text: string ``` This issue seems to be related to an encoding type that occurs when pushing a model to the hub. Normally, models should be represented as an HF dataset before pushing, but we are working with an example where we need to push large datasets using Dask. Could you please provide clarification on how to resolve this issue? Thank you! ### Reproduction To get the schema I downloaded the parquet files and used pyarrow.parquet to read the schema ``` import pyarrow.parquet pyarrow.parquet.read_schema(<path_to_parquet>, memory_map=True) ``` ### Logs _No response_ ### System info ```shell - huggingface_hub version: 0.14.1 - Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Token path ?: /home/philippe/.cache/huggingface/token - Has saved token ?: True - Who am I ?: philippemo - Configured git credential helpers: cache - FastAI: N/A - Tensorflow: N/A - Torch: N/A - Jinja2: 3.1.2 - Graphviz: N/A - Pydot: N/A - Pillow: 9.4.0 - hf_transfer: N/A - gradio: N/A - ENDPOINT: https://huggingface.co - HUGGINGFACE_HUB_CACHE: /home/philippe/.cache/huggingface/hub - HUGGINGFACE_ASSETS_CACHE: /home/philippe/.cache/huggingface/assets - HF_TOKEN_PATH: /home/philippe/.cache/huggingface/token - HF_HUB_OFFLINE: False - HF_HUB_DISABLE_TELEMETRY: False - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False - HF_HUB_ENABLE_HF_TRANSFER: False ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5869/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5868
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5868/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5868/comments
https://api.github.com/repos/huggingface/datasets/issues/5868/events
https://github.com/huggingface/datasets/issues/5868
1,711,173,098
I_kwDODunzps5l_m3q
5,868
Is it possible to change a cached file and 're-cache' it instead of re-generating?
{ "login": "zyh3826", "id": 31238754, "node_id": "MDQ6VXNlcjMxMjM4NzU0", "avatar_url": "https://avatars.githubusercontent.com/u/31238754?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zyh3826", "html_url": "https://github.com/zyh3826", "followers_url": "https://api.github.com/users/zyh3826/followers", "following_url": "https://api.github.com/users/zyh3826/following{/other_user}", "gists_url": "https://api.github.com/users/zyh3826/gists{/gist_id}", "starred_url": "https://api.github.com/users/zyh3826/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zyh3826/subscriptions", "organizations_url": "https://api.github.com/users/zyh3826/orgs", "repos_url": "https://api.github.com/users/zyh3826/repos", "events_url": "https://api.github.com/users/zyh3826/events{/privacy}", "received_events_url": "https://api.github.com/users/zyh3826/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Arrow files/primitives (tables and arrays) are immutable, so re-generating them is the only option, I'm afraid.", "> \r\n\r\nGot it, thanks for your reply" ]
"2023-05-16T03:45:42"
"2023-05-17T11:21:36"
"2023-05-17T11:21:36"
NONE
null
null
null
### Feature request Hi, I have a huge cached file using `map`(over 500GB), and I want to change an attribution of each element, is there possible to do it using some method instead of re-generating, because `map` takes over 24 hours ### Motivation For large datasets, I think it is very important because we always face the problem which is changing something in the original cache without re-generating it. ### Your contribution For now, I can't help, sorry.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5868/timeline
null
completed
false

Dataset Card for "github-issues"

This dataset contains issues and pull requests collected from the huggingface/datasets GitHub repository. Each row mirrors the GitHub REST API issue payload (url, title, user, labels, state, timestamps, comments, reactions, and so on) plus an `is_pull_request` flag that distinguishes pull requests from plain issues.
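A minimal usage sketch with the `datasets` library is shown below. The repository id `"your-username/github-issues"` is a placeholder assumption — substitute the actual Hub path of this dataset before running.

```python
# Minimal sketch: load the dataset and filter out pull requests.
# Assumes the dataset is hosted on the Hub under a placeholder repo id.
from datasets import load_dataset

issues = load_dataset("your-username/github-issues", split="train")

# Inspect one row; column names such as "title" and "is_pull_request"
# come from the dataset schema shown above.
print(issues[0]["title"])

# Keep only genuine issues (drop pull requests).
only_issues = issues.filter(lambda row: not row["is_pull_request"])
print(len(only_issues))
```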
