Column schema of the dump, one row per column (dtype plus the observed value lengths, ranges, or number of distinct classes):

| Column | Dtype | Observed values |
|---|---|---|
| id | int64 | 599M to 2.47B |
| url | string | lengths 58 to 61 |
| repository_url | string | 1 class |
| events_url | string | lengths 65 to 68 |
| labels | list | lengths 0 to 4 |
| active_lock_reason | null | always null |
| updated_at | string | length 20 (fixed) |
| assignees | list | lengths 0 to 4 |
| html_url | string | lengths 46 to 51 |
| author_association | string | 4 classes |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| milestone | dict | |
| comments | sequence | lengths 0 to 30 |
| title | string | lengths 1 to 290 |
| reactions | dict | |
| node_id | string | lengths 18 to 32 |
| pull_request | dict | |
| created_at | string | length 20 (fixed) |
| comments_url | string | lengths 67 to 70 |
| body | string | lengths 0 to 228k |
| user | dict | |
| labels_url | string | lengths 72 to 75 |
| timeline_url | string | lengths 67 to 70 |
| state | string | 2 classes |
| locked | bool | 1 class |
| number | int64 | 1 to 7.11k |
| performed_via_github_app | null | always null |
| closed_at | string | length 20 (fixed) |
| assignee | dict | |
| is_pull_request | bool | 2 classes |
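A short sketch of how a schema like this can be inspected once the dump is loaded with `datasets`; the repo id `user/github-issues` is a placeholder, not the dataset's real name:

```python
from datasets import load_dataset

# Hypothetical repo id; substitute the actual dataset name on the Hub.
ds = load_dataset("user/github-issues", split="train")

print(ds.features["id"])     # e.g. Value(dtype='int64')
print(ds.features["state"])  # a string column with 2 observed classes (open/closed)
print(ds.column_names)       # the 31 columns listed in the table above
print(len(ds))               # number of issue/PR records
```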
## Issue 6290: Incremental dataset (e.g. `.push_to_hub(..., append=True)`)

- id: 1,935,629,679
- url: https://api.github.com/repos/huggingface/datasets/issues/6290
- repository_url: https://api.github.com/repos/huggingface/datasets
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6290/events
- labels: enhancement ("New feature or request", color a2eeef, label id 1935892871)
- active_lock_reason: null
- updated_at: 2024-08-17T14:19:58Z
- assignees: []
- html_url: https://github.com/huggingface/datasets/issues/6290
- author_association: CONTRIBUTOR
- state_reason: null
- draft: null
- milestone: null
- comments (2):
  1. Yea, I think waiting for #6269 would be best, or branching from it. For reference, this [PR](https://github.com/LAION-AI/Discord-Scrapers/pull/2) is progressing pretty well; it will do something similar using the HF Hub for our LAION dataset bot.
  2. Is there any update on this?
- reactions: +1: 3 (total_count: 3, all other reactions 0)
- node_id: I_kwDODunzps5zX11v
- pull_request: null
- created_at: 2023-10-10T15:18:03Z
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6290/comments
- body:

### Feature request

Have the possibility to do `ds.push_to_hub(..., append=True)`.

### Motivation

Requested in this [comment](https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/3#65252597c4edc168202a5eaa) and this [comment](https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/4#6524f675c9607bdffb208d8f), and discussed internally on [Slack](https://huggingface.slack.com/archives/C02EMARJ65P/p1696950642610639?thread_ts=1690554266.830949&cid=C02EMARJ65P).

### Your contribution

What I suggest for Parquet datasets is to use `CommitOperationCopy` + `CommitOperationDelete` from `huggingface_hub`:

1. list the files
2. copy files from parquet-0001-of-0004 to parquet-0001-of-0005
3. delete files like parquet-0001-of-0004
4. generate and add the last parquet file, parquet-0005-of-0005

=> make a single commit with all commit operations at once.

I think it should be quite straightforward to implement. Happy to review a PR (maybe conflicting with the ongoing "1 commit push_to_hub" PR https://github.com/huggingface/datasets/pull/6269).

- user: Wauplin (id 11801849, https://github.com/Wauplin)
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6290/labels{/name}
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6290/timeline
- state: open
- locked: false
- number: 6,290
- performed_via_github_app: null
- closed_at: null
- assignee: null
- is_pull_request: false
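The commit-operations recipe in the body above maps directly onto the current `huggingface_hub` API. Below is a minimal sketch of that idea, not the eventual `datasets` implementation; the repo id, shard names, and local file are illustrative assumptions:

```python
from huggingface_hub import (
    CommitOperationAdd,
    CommitOperationCopy,
    CommitOperationDelete,
    HfApi,
)

api = HfApi()
repo_id = "username/my-dataset"  # hypothetical dataset repo

operations = []
# Steps 1-3: rename the 4 existing shards so the shard count reads -of-0005.
for i in range(4):
    old = f"data/train-{i:04d}-of-0004.parquet"
    new = f"data/train-{i:04d}-of-0005.parquet"
    operations.append(CommitOperationCopy(src_path_in_repo=old, path_in_repo=new))
    operations.append(CommitOperationDelete(path_in_repo=old))
# Step 4: add the new shard containing the appended rows.
operations.append(
    CommitOperationAdd(
        path_in_repo="data/train-0004-of-0005.parquet",
        path_or_fileobj="appended_rows.parquet",  # local parquet file with the new rows
    )
)
# => everything lands in a single atomic commit.
api.create_commit(
    repo_id=repo_id,
    repo_type="dataset",
    operations=operations,
    commit_message="Append a new parquet shard",
)
```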
## Pull request 6289: testing doc-builder

- id: 1,935,628,506
- url: https://api.github.com/repos/huggingface/datasets/issues/6289
- repository_url: https://api.github.com/repos/huggingface/datasets
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6289/events
- labels: []
- active_lock_reason: null
- updated_at: 2023-10-13T08:57:14Z
- assignees: []
- html_url: https://github.com/huggingface/datasets/pull/6289
- author_association: NONE
- state_reason: null
- draft: false
- milestone: null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006424 / 0.011353 (-0.004929) | 0.003960 / 0.011008 (-0.007048) | 0.084022 / 0.038508 (0.045514) | 0.070770 / 0.023109 (0.047661) | 0.320525 / 0.275898 (0.044627) | 0.354507 / 0.323480 (0.031027) | 0.003939 / 0.007986 (-0.004047) | 0.004161 / 0.004328 (-0.000168) | 0.064754 / 0.004250 (0.060503) | 0.053630 / 0.037052 (0.016578) | 0.323948 / 0.258489 (0.065459) | 0.376908 / 0.293841 (0.083067) | 0.031063 / 0.128546 (-0.097483) | 0.008470 / 0.075646 (-0.067177) | 0.288110 / 0.419271 (-0.131161) | 0.053062 / 0.043533 (0.009529) | 0.328176 / 0.255139 (0.073037) | 0.345203 / 0.283200 (0.062003) | 0.024579 / 0.141683 (-0.117104) | 1.471649 / 1.452155 (0.019495) | 1.561458 / 1.492716 (0.068742) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223591 / 0.018006 (0.205585) | 0.450758 / 0.000490 (0.450269) | 0.003751 / 0.000200 (0.003552) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027859 / 0.037411 (-0.009552) | 0.080607 / 0.014526 (0.066081) | 0.093835 / 0.176557 (-0.082722) | 0.150466 / 0.737135 (-0.586669) | 0.094381 / 0.296338 (-0.201957) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.394011 / 0.215209 (0.178802) | 3.918318 / 2.077655 (1.840664) | 1.928684 / 1.504120 (0.424564) | 1.765944 / 1.541195 (0.224749) | 1.784716 / 1.468490 
(0.316226) | 0.487189 / 4.584777 (-4.097588) | 3.537705 / 3.745712 (-0.208008) | 3.312162 / 5.269862 (-1.957699) | 2.024520 / 4.565676 (-2.541156) | 0.057571 / 0.424275 (-0.366704) | 0.007203 / 0.007607 (-0.000404) | 0.467253 / 0.226044 (0.241208) | 4.659934 / 2.268929 (2.391005) | 2.377764 / 55.444624 (-53.066860) | 2.021984 / 6.876477 (-4.854492) | 2.197468 / 2.142072 (0.055395) | 0.586415 / 4.805227 (-4.218812) | 0.136636 / 6.500664 (-6.364028) | 0.060885 / 0.075469 (-0.014584) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.241879 / 1.841788 (-0.599908) | 18.719327 / 8.074308 (10.645019) | 14.408689 / 10.191392 (4.217297) | 0.155778 / 0.680424 (-0.524646) | 0.018475 / 0.534201 (-0.515726) | 0.392316 / 0.579283 (-0.186967) | 0.409803 / 0.434364 (-0.024561) | 0.458701 / 0.540337 (-0.081637) | 0.630561 / 1.386936 (-0.756375) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006541 / 0.011353 (-0.004812) | 0.003915 / 0.011008 (-0.007094) | 0.064292 / 0.038508 (0.025784) | 0.069174 / 0.023109 (0.046065) | 0.402048 / 0.275898 (0.126150) | 0.423960 / 0.323480 (0.100480) | 0.005355 / 0.007986 (-0.002631) | 0.003295 / 0.004328 (-0.001033) | 0.065212 / 0.004250 (0.060962) | 0.054292 / 0.037052 (0.017240) | 0.402930 / 0.258489 (0.144441) | 0.441840 / 0.293841 (0.147999) | 0.032732 / 0.128546 (-0.095814) | 0.008565 / 0.075646 (-0.067081) | 0.070705 / 0.419271 (-0.348567) | 0.047908 / 0.043533 (0.004375) | 0.401400 / 0.255139 (0.146261) | 0.422682 / 0.283200 (0.139483) | 0.022244 / 0.141683 (-0.119439) | 1.532018 / 1.452155 (0.079864) | 1.597955 / 1.492716 (0.105239) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226277 / 0.018006 (0.208271) | 0.475578 / 0.000490 (0.475088) | 0.005456 / 0.000200 (0.005256) | 0.000140 / 0.000054 (0.000085) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033111 / 0.037411 (-0.004300) | 0.093138 / 0.014526 (0.078613) | 0.104619 / 0.176557 (-0.071937) | 0.157972 / 0.737135 (-0.579164) | 0.105017 / 0.296338 (-0.191321) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441771 / 0.215209 (0.226562) | 4.396981 / 2.077655 (2.319326) | 2.410745 / 1.504120 (0.906625) | 2.258359 / 1.541195 (0.717164) | 2.372628 / 1.468490 (0.904138) | 0.491411 / 4.584777 (-4.093366) | 3.650084 / 3.745712 (-0.095628) | 3.279557 / 5.269862 (-1.990304) | 2.011377 / 4.565676 (-2.554300) | 0.058283 / 0.424275 (-0.365992) | 0.007435 / 0.007607 (-0.000172) | 0.507212 / 0.226044 (0.281167) | 5.080104 / 2.268929 (2.811176) | 2.822680 / 55.444624 (-52.621945) | 2.507608 / 6.876477 (-4.368869) | 2.719349 / 2.142072 (0.577277) | 0.586157 / 4.805227 (-4.219071) | 0.132851 / 6.500664 (-6.367813) | 0.059944 / 0.075469 (-0.015525) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.374801 / 1.841788 (-0.466987) | 19.089359 / 8.074308 (11.015051) | 14.525861 / 10.191392 (4.334469) | 0.184758 / 0.680424 (-0.495666) | 0.020206 / 0.534201 (-0.513995) | 0.397309 / 0.579283 (-0.181975) | 0.418120 / 0.434364 (-0.016244) | 0.471817 / 0.540337 (-0.068520) | 0.681691 / 1.386936 (-0.705245) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2076cb857e90cf7a6050bba230f586993c5e034a \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._" ]
- reactions: total_count: 0 (all reaction counts 0)
- node_id: PR_kwDODunzps5cZiay
- pull_request: merged_at: null; diff: https://github.com/huggingface/datasets/pull/6289.diff; patch: https://github.com/huggingface/datasets/pull/6289.patch
- created_at: 2023-10-10T15:17:29Z
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6289/comments
- body: testing https://github.com/huggingface/doc-builder/pull/426
- user: mishig25 (id 11827707, https://github.com/mishig25)
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6289/labels{/name}
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6289/timeline
- state: closed
- locked: false
- number: 6,289
- performed_via_github_app: null
- closed_at: 2023-10-13T08:56:48Z
- assignee: null
- is_pull_request: true
## Issue 6288: Dataset.from_pandas with a DataFrame of PIL.Images

- id: 1,935,005,457
- url: https://api.github.com/repos/huggingface/datasets/issues/6288
- repository_url: https://api.github.com/repos/huggingface/datasets
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6288/events
- labels: enhancement ("New feature or request", color a2eeef, label id 1935892871)
- active_lock_reason: null
- updated_at: 2023-10-20T18:23:05Z
- assignees: []
- html_url: https://github.com/huggingface/datasets/issues/6288
- author_association: MEMBER
- state_reason: null
- draft: null
- milestone: null
- comments (2):
  1. A duplicate of https://github.com/huggingface/datasets/issues/4796. We could get this for free by implementing the `Image` feature as an extension type, as shown in [this](https://colab.research.google.com/drive/1Uzm_tXVpGTwbzleDConWcNjacwO1yxE4?usp=sharing) Colab (example with UUIDs).
  2. +1 to this. Calling `ds = Dataset.from_pandas(df)` with a df that contains a PIL image (as they are returned from `load_dataset`) results in this error: `ArrowInvalid: ('Could not convert <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1024x1024 at 0x2B41F2D70> with type PngImageFile: did not recognize Python value type when inferring an Arrow data type', 'Conversion failed for column image with type object')`
- reactions: total_count: 0
- node_id: I_kwDODunzps5zVdcR
- pull_request: null
- created_at: 2023-10-10T10:29:16Z
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6288/comments
- body: Currently type inference doesn't know what to do with a pandas Series of PIL.Image objects, though it would be nice to get a Dataset with the Image type this way.
- user: lhoestq (id 42851186, https://github.com/lhoestq)
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6288/labels{/name}
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6288/timeline
- state: open
- locked: false
- number: 6,288
- performed_via_github_app: null
- closed_at: null
- assignee: null
- is_pull_request: false
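Until `from_pandas` can infer PIL columns, one workaround consistent with the discussion above is to declare the column as an `Image()` feature explicitly so Arrow never has to guess. A minimal sketch; the toy DataFrame stands in for one produced elsewhere:

```python
import pandas as pd
from datasets import Dataset, Features, Image
from PIL import Image as PILImage

# Toy DataFrame with a column of PIL images (an assumption for illustration).
df = pd.DataFrame({"image": [PILImage.new("RGB", (8, 8)) for _ in range(4)]})

# Declaring the feature sidesteps Arrow's type inference over Python objects:
# the Image feature encodes each PIL image itself.
features = Features({"image": Image()})
ds = Dataset.from_dict({"image": list(df["image"])}, features=features)
print(ds.features)  # {'image': Image(...)}
```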
## Issue 6287: map() not recognizing "text"

- id: 1,932,758,192
- url: https://api.github.com/repos/huggingface/datasets/issues/6287
- repository_url: https://api.github.com/repos/huggingface/datasets
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6287/events
- labels: []
- active_lock_reason: null
- updated_at: 2023-10-11T20:28:45Z
- assignees: []
- html_url: https://github.com/huggingface/datasets/issues/6287
- author_association: NONE
- state_reason: completed
- draft: null
- milestone: null
- comments (1):
  1. There is no "text" column in `amazon_reviews_multi`, hence the `KeyError`. You can get the column names by running `dataset.column_names`.
- reactions: total_count: 0
- node_id: I_kwDODunzps5zM4yw
- pull_request: null
- created_at: 2023-10-09T10:27:30Z
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6287/comments
- body:

### Describe the bug

The [map() documentation](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) reads:

`ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)`

I have been trying to reproduce it in my code as:

`tokenizedDataset = dataset.map(lambda x: tokenizer(x['text']), batched=True)`

but it doesn't work; it throws the error:

> KeyError: 'text'

Can you please guide me on how to fix it?

### Steps to reproduce the bug

1. `from datasets import load_dataset` followed by `dataset = load_dataset("amazon_reviews_multi")`
2. Then: `from transformers import AutoTokenizer` followed by `tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")`
3. The line quoted above (which I have been trying).

### Expected behavior

As mentioned in the documentation, it should run without any error and map the tokenization over the whole dataset.

### Environment info

Python 3.10.2

- user: EngineerKhan (id 5688359, https://github.com/EngineerKhan)
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6287/labels{/name}
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6287/timeline
- state: closed
- locked: false
- number: 6,287
- performed_via_github_app: null
- closed_at: 2023-10-11T20:28:45Z
- assignee: null
- is_pull_request: false
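Following the comment above, the fix is to tokenize a column that actually exists. A sketch, assuming the dataset is still loadable, that its English config is named "en", and that the text lives in a `review_body` column; verify the names with `dataset.column_names` first:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("amazon_reviews_multi", "en")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Check what columns the dataset really has -- there is no "text" column here.
print(dataset["train"].column_names)

# Tokenize the column that actually holds the review text.
tokenized = dataset.map(
    lambda x: tokenizer(x["review_body"], truncation=True, padding=True),
    batched=True,
)
```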
## Pull request 6286: Create DefunctDatasetError

- id: 1,932,640,128
- url: https://api.github.com/repos/huggingface/datasets/issues/6286
- repository_url: https://api.github.com/repos/huggingface/datasets
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6286/events
- labels: []
- active_lock_reason: null
- updated_at: 2023-10-10T07:13:22Z
- assignees: []
- html_url: https://github.com/huggingface/datasets/pull/6286
- author_association: MEMBER
- state_reason: null
- draft: false
- milestone: null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009157 / 0.011353 (-0.002195) | 0.004275 / 0.011008 (-0.006734) | 0.099341 / 0.038508 (0.060833) | 0.080634 / 0.023109 (0.057525) | 0.373598 / 0.275898 (0.097700) | 0.445048 / 0.323480 (0.121568) | 0.006541 / 0.007986 (-0.001444) | 0.003550 / 0.004328 (-0.000779) | 0.071034 / 0.004250 (0.066784) | 0.062637 / 0.037052 (0.025585) | 0.379110 / 0.258489 (0.120621) | 0.447896 / 0.293841 (0.154055) | 0.047739 / 0.128546 (-0.080807) | 0.012575 / 0.075646 (-0.063071) | 0.332314 / 0.419271 (-0.086957) | 0.065500 / 0.043533 (0.021967) | 0.365919 / 0.255139 (0.110780) | 0.438611 / 0.283200 (0.155412) | 0.034243 / 0.141683 (-0.107440) | 1.628034 / 1.452155 (0.175880) | 1.802970 / 1.492716 (0.310253) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224528 / 0.018006 (0.206522) | 0.482094 / 0.000490 (0.481604) | 0.012752 / 0.000200 (0.012552) | 0.000570 / 0.000054 (0.000515) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025456 / 0.037411 (-0.011956) | 0.082281 / 0.014526 (0.067756) | 0.100050 / 0.176557 (-0.076506) | 0.156931 / 0.737135 (-0.580204) | 0.108229 / 0.296338 (-0.188110) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.560688 / 0.215209 (0.345479) | 5.171711 / 2.077655 (3.094056) | 2.273178 / 1.504120 (0.769058) | 1.948158 / 1.541195 (0.406963) | 1.879744 / 1.468490 
(0.411254) | 0.789216 / 4.584777 (-3.795561) | 4.529370 / 3.745712 (0.783658) | 4.008743 / 5.269862 (-1.261118) | 2.633555 / 4.565676 (-1.932121) | 0.085411 / 0.424275 (-0.338864) | 0.007256 / 0.007607 (-0.000351) | 0.623254 / 0.226044 (0.397209) | 6.327256 / 2.268929 (4.058327) | 2.911787 / 55.444624 (-52.532837) | 2.240610 / 6.876477 (-4.635867) | 2.352811 / 2.142072 (0.210738) | 0.930114 / 4.805227 (-3.875114) | 0.185028 / 6.500664 (-6.315636) | 0.062115 / 0.075469 (-0.013354) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.394261 / 1.841788 (-0.447527) | 19.689376 / 8.074308 (11.615067) | 17.242289 / 10.191392 (7.050897) | 0.209122 / 0.680424 (-0.471302) | 0.027205 / 0.534201 (-0.506996) | 0.408613 / 0.579283 (-0.170670) | 0.503836 / 0.434364 (0.069472) | 0.485179 / 0.540337 (-0.055158) | 0.674333 / 1.386936 (-0.712603) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007506 / 0.011353 (-0.003847) | 0.004683 / 0.011008 (-0.006325) | 0.067584 / 0.038508 (0.029076) | 0.065635 / 0.023109 (0.042525) | 0.458814 / 0.275898 (0.182916) | 0.477549 / 0.323480 (0.154069) | 0.005212 / 0.007986 (-0.002774) | 0.003393 / 0.004328 (-0.000936) | 0.075307 / 0.004250 (0.071057) | 0.051989 / 0.037052 (0.014937) | 0.484229 / 0.258489 (0.225740) | 0.470889 / 0.293841 (0.177048) | 0.043528 / 0.128546 (-0.085018) | 0.014685 / 0.075646 (-0.060962) | 0.084199 / 0.419271 (-0.335073) | 0.053970 / 0.043533 (0.010437) | 0.432362 / 0.255139 (0.177223) | 0.467472 / 0.283200 (0.184272) | 0.031109 / 0.141683 (-0.110574) | 1.525938 / 1.452155 (0.073784) | 1.631993 / 1.492716 (0.139276) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200196 / 0.018006 (0.182190) | 0.479316 / 0.000490 (0.478827) | 0.010146 / 0.000200 (0.009947) | 0.000118 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027911 / 0.037411 (-0.009500) | 0.089720 / 0.014526 (0.075194) | 0.097000 / 0.176557 (-0.079557) | 0.157549 / 0.737135 (-0.579587) | 0.098247 / 0.296338 (-0.198092) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.581401 / 0.215209 (0.366192) | 5.703829 / 2.077655 (3.626174) | 2.688272 / 1.504120 (1.184152) | 2.321691 / 1.541195 (0.780496) | 2.355987 / 1.468490 (0.887497) | 0.759109 / 4.584777 (-3.825668) | 4.711288 / 3.745712 (0.965576) | 4.093019 / 5.269862 (-1.176843) | 2.648240 / 4.565676 (-1.917437) | 0.087839 / 0.424275 (-0.336436) | 0.007060 / 0.007607 (-0.000547) | 0.702783 / 0.226044 (0.476739) | 6.986924 / 2.268929 (4.717996) | 3.365970 / 55.444624 (-52.078654) | 2.670876 / 6.876477 (-4.205600) | 2.776431 / 2.142072 (0.634358) | 0.920005 / 4.805227 (-3.885222) | 0.197521 / 6.500664 (-6.303143) | 0.069974 / 0.075469 (-0.005495) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.596947 / 1.841788 (-0.244841) | 20.606007 / 8.074308 (12.531699) | 18.437425 / 10.191392 (8.246033) | 0.222445 / 0.680424 (-0.457978) | 0.028610 / 0.534201 (-0.505591) | 0.419748 / 0.579283 (-0.159535) | 0.513409 / 0.434364 (0.079045) | 0.487517 / 0.540337 (-0.052820) | 0.706637 / 1.386936 (-0.680299) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d664439eb82d62889c21c5236a5869dae75ae779 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007744 / 0.011353 (-0.003609) | 0.004678 / 0.011008 (-0.006330) | 0.101243 / 0.038508 (0.062735) | 0.085653 / 0.023109 (0.062543) | 0.383772 / 0.275898 (0.107874) | 0.422151 / 0.323480 (0.098671) | 0.004566 / 0.007986 (-0.003419) | 0.003900 / 0.004328 (-0.000429) | 0.077778 / 0.004250 (0.073528) | 0.063761 / 0.037052 (0.026709) | 0.385505 / 0.258489 (0.127016) | 0.436186 / 0.293841 (0.142345) | 0.036172 / 0.128546 (-0.092374) | 0.009935 / 0.075646 (-0.065711) | 0.341434 / 0.419271 (-0.077837) | 0.061866 / 0.043533 (0.018333) | 0.385020 / 0.255139 (0.129881) | 0.399455 / 0.283200 (0.116256) | 0.029324 / 0.141683 (-0.112358) | 1.784749 / 1.452155 (0.332594) | 1.845926 / 1.492716 (0.353209) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266322 / 0.018006 (0.248316) | 0.508708 / 0.000490 (0.508218) | 0.013680 / 0.000200 (0.013480) | 0.000868 / 0.000054 (0.000814) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033887 / 0.037411 (-0.003525) | 0.096709 / 0.014526 (0.082183) | 0.109472 / 0.176557 (-0.067084) | 0.174422 / 0.737135 (-0.562713) | 0.110830 / 0.296338 (-0.185509) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457533 / 0.215209 (0.242324) | 4.615229 / 2.077655 (2.537575) | 2.418820 / 1.504120 (0.914700) | 2.181079 / 1.541195 (0.639884) | 2.229164 / 1.468490 (0.760674) | 0.554861 / 4.584777 (-4.029916) | 4.323787 / 3.745712 (0.578075) | 3.769396 / 5.269862 (-1.500466) | 2.376850 / 4.565676 (-2.188826) | 0.065030 / 0.424275 (-0.359245) | 0.008397 / 0.007607 (0.000790) | 0.541109 / 0.226044 (0.315065) | 5.477540 / 2.268929 (3.208612) | 2.957049 / 55.444624 (-52.487576) | 2.511732 / 6.876477 (-4.364744) | 2.703953 / 2.142072 (0.561881) | 0.660822 / 4.805227 (-4.144405) | 0.147035 / 6.500664 (-6.353630) | 0.066045 / 0.075469 (-0.009424) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.526481 / 1.841788 (-0.315307) | 22.020256 / 8.074308 (13.945948) | 16.854566 / 10.191392 (6.663174) | 0.192958 / 0.680424 (-0.487466) | 0.021505 / 0.534201 (-0.512696) | 0.462867 / 0.579283 (-0.116416) | 0.514813 / 0.434364 (0.080449) | 0.546147 / 0.540337 (0.005809) | 0.767853 / 1.386936 (-0.619083) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007770 / 0.011353 (-0.003583) | 0.004671 / 0.011008 (-0.006337) | 0.080862 / 0.038508 (0.042354) | 0.087049 / 0.023109 (0.063940) | 0.479497 / 0.275898 (0.203599) | 0.559787 / 0.323480 (0.236307) | 0.007168 / 0.007986 (-0.000818) | 0.003829 / 0.004328 (-0.000500) | 0.079018 / 0.004250 (0.074768) | 0.067359 / 0.037052 (0.030307) | 0.516140 / 0.258489 (0.257651) | 0.547000 / 0.293841 (0.253159) | 0.037955 / 0.128546 (-0.090591) | 0.010007 / 0.075646 (-0.065639) | 0.087673 / 0.419271 (-0.331598) | 0.059309 / 0.043533 (0.015777) | 0.473920 / 0.255139 (0.218781) | 0.529216 / 0.283200 (0.246017) | 0.028236 / 0.141683 (-0.113447) | 1.771127 / 1.452155 (0.318972) | 1.918878 / 1.492716 (0.426162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242010 / 0.018006 (0.224004) | 0.494944 / 0.000490 (0.494454) | 0.006319 / 0.000200 (0.006119) | 0.000111 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039220 / 0.037411 (0.001809) | 0.113805 / 0.014526 (0.099279) | 0.125704 / 0.176557 (-0.050853) | 0.189198 / 0.737135 (-0.547937) | 0.126334 / 0.296338 (-0.170004) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.502226 / 0.215209 (0.287017) | 5.039133 / 2.077655 (2.961478) | 2.782352 / 1.504120 (1.278232) | 2.587654 / 1.541195 (1.046460) | 2.692588 / 1.468490 (1.224098) | 0.585672 / 4.584777 
(-3.999105) | 4.553078 / 3.745712 (0.807366) | 3.864739 / 5.269862 (-1.405123) | 2.536109 / 4.565676 (-2.029567) | 0.069567 / 0.424275 (-0.354708) | 0.008749 / 0.007607 (0.001142) | 0.620645 / 0.226044 (0.394601) | 6.247286 / 2.268929 (3.978357) | 3.345293 / 55.444624 (-52.099332) | 2.873970 / 6.876477 (-4.002507) | 3.123190 / 2.142072 (0.981118) | 0.687391 / 4.805227 (-4.117837) | 0.159046 / 6.500664 (-6.341618) | 0.071019 / 0.075469 (-0.004450) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.728724 / 1.841788 (-0.113064) | 22.828390 / 8.074308 (14.754082) | 17.305225 / 10.191392 (7.113833) | 0.176571 / 0.680424 (-0.503853) | 0.023837 / 0.534201 (-0.510364) | 0.467935 / 0.579283 (-0.111348) | 0.503701 / 0.434364 (0.069337) | 0.558140 / 0.540337 (0.017803) | 0.789326 / 1.386936 (-0.597610) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7d357eb4b499cd530c3f4e626f2825a50ee6c8aa \"CML watermark\")\n" ]
- reactions: total_count: 0
- node_id: PR_kwDODunzps5cPKNK
- pull_request: merged_at: 2023-10-10T07:03:04Z; diff: https://github.com/huggingface/datasets/pull/6286.diff; patch: https://github.com/huggingface/datasets/pull/6286.patch
- created_at: 2023-10-09T09:23:23Z
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6286/comments
- body: Create `DefunctDatasetError` as a specific error to be raised when a dataset is defunct and no longer accessible. See the Hub discussion: https://huggingface.co/datasets/the_pile_books3/discussions/7#6523c13a94f3a1a2092d251b
- user: albertvillanova (id 8515462, https://github.com/albertvillanova)
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6286/labels{/name}
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6286/timeline
- state: closed
- locked: false
- number: 6,286
- performed_via_github_app: null
- closed_at: 2023-10-10T07:03:04Z
- assignee: null
- is_pull_request: true
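A minimal sketch of what a dedicated error type like this can look like; the class body and the loader hook are illustrative assumptions, not the exact code merged in the PR:

```python
class DefunctDatasetError(Exception):
    """Raised when a dataset has been defunct and is no longer accessible."""


# Illustrative use inside a dataset loading script (hypothetical helper name):
def _check_availability() -> None:
    raise DefunctDatasetError(
        "This dataset is defunct and no longer accessible "
        "(see https://huggingface.co/datasets/the_pile_books3/discussions/7)."
    )
```

A dedicated subclass lets callers catch this case specifically instead of matching on a generic exception's message.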
## Issue 6285: TypeError: expected str, bytes or os.PathLike object, not dict

- id: 1,932,306,325
- url: https://api.github.com/repos/huggingface/datasets/issues/6285
- repository_url: https://api.github.com/repos/huggingface/datasets
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6285/events
- labels: []
- active_lock_reason: null
- updated_at: 2023-10-10T13:17:33Z
- assignees: []
- html_url: https://github.com/huggingface/datasets/issues/6285
- author_association: NONE
- state_reason: null
- draft: null
- milestone: null
- comments (4):
  1. You should be able to load the images by modifying the `load_dataset` call like this: `dataset = load_dataset("imagefolder", data_dir="/content/datasets/PotholeDetectionYOLOv8-1")`. The `imagefolder` builder expects the image files to be in `path/label/image_file` (e.g. `.../train/dog/image_1.jpg`), so the solution for the labels in your case is to create metadata files (one per split, as explained [here](https://huggingface.co/docs/datasets/image_dataset#imagefolder)) that map the images to their labels.
  2. (quoting the previous comment) I tried like this, but it only uploads images and not labels: Andyrasika/potholes-dataset
  3. As explained in my previous comment, you need to define metadata files to load the labels, or update the paths to be in the format `train/label/image` (`train- image /n -labels` is not supported by the loader).
  4. I downloaded my file after annotating using Roboflow. It gives train- images, labels; test- images, labels; valid- images, labels. I hope that gives you an idea of the dataset. Please advise on this dataset.
- reactions: total_count: 0
- node_id: I_kwDODunzps5zLKeV
- pull_request: null
- created_at: 2023-10-09T04:56:26Z
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6285/comments
- body:

### Describe the bug

My dataset is in the form `train- image /n -labels`, and I tried this code:

```
from datasets import load_dataset
data_files = {
    "train": "/content/datasets/PotholeDetectionYOLOv8-1/train/",
    "validation": "/content/datasets/PotholeDetectionYOLOv8-1/valid/",
    "test": "/content/datasets/PotholeDetectionYOLOv8-1/test/"
}
dataset = load_dataset("imagefolder", data_dir=data_files)
dataset
```

which raises:

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
[<ipython-input-29-2ef1926f73d9>](https://localhost:8080/#) in <cell line: 8>()
      6     "test": "/content/datasets/PotholeDetectionYOLOv8-1/test/"
      7 }
----> 8 dataset = load_dataset("imagefolder", data_dir=data_files)
      9 dataset

6 frames
[/usr/lib/python3.10/pathlib.py](https://localhost:8080/#) in _parse_args(cls, args)
    576                 parts += a._parts
    577             else:
--> 578                 a = os.fspath(a)
    579             if isinstance(a, str):
    580                 # Force-cast str subclasses to str (issue #21127)
TypeError: expected str, bytes or os.PathLike object, not dict
```

### Steps to reproduce the bug

As shared above.

### Expected behavior

Load images and labels; my dataset currently only uploads images: https://huggingface.co/datasets/Andyrasika/potholes-dataset

### Environment info

Colab Pro

- user: andysingal (id 20493493, https://github.com/andysingal)
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6285/labels{/name}
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6285/timeline
- state: open
- locked: false
- number: 6,285
- performed_via_github_app: null
- closed_at: null
- assignee: null
- is_pull_request: false
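The immediate fix from the first comment: `data_dir` takes a single path string, not a dict (passing the dict is what reaches `os.fspath` and raises the `TypeError`). A sketch, assuming the split folders follow the usual train/valid/test naming the loader recognizes:

```python
from datasets import load_dataset

# data_dir must be one path string; the split subfolders underneath it
# (train/, valid/, test/) are discovered automatically.
dataset = load_dataset(
    "imagefolder",
    data_dir="/content/datasets/PotholeDetectionYOLOv8-1",
)
print(dataset)
```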
## Issue 6284: Add Belebele multiple-choice machine reading comprehension (MRC) dataset

- id: 1,929,551,712
- url: https://api.github.com/repos/huggingface/datasets/issues/6284
- repository_url: https://api.github.com/repos/huggingface/datasets
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6284/events
- labels: enhancement ("New feature or request", color a2eeef, label id 1935892871)
- active_lock_reason: null
- updated_at: 2023-10-06T13:26:51Z
- assignees: []
- html_url: https://github.com/huggingface/datasets/issues/6284
- author_association: NONE
- state_reason: completed
- draft: null
- milestone: null
- comments (1):
  1. This dataset is already available on the Hub: https://huggingface.co/datasets/facebook/belebele.
- reactions: heart: 1 (total_count: 1, all other reactions 0)
- node_id: I_kwDODunzps5zAp9g
- pull_request: null
- created_at: 2023-10-06T06:58:03Z
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6284/comments
- body:

### Feature request

Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. This dataset enables the evaluation of mono- and multi-lingual models in high-, medium-, and low-resource languages. Each question has four multiple-choice answers and is linked to a short passage from the [FLORES-200](https://github.com/facebookresearch/flores/tree/main/flores200) dataset. The human annotation procedure was carefully curated to create questions that discriminate between different levels of generalizable language comprehension and is reinforced by extensive quality checks. While all questions directly relate to the passage, the English dataset on its own proves difficult enough to challenge state-of-the-art language models. Being fully parallel, this dataset enables direct comparison of model performance across all languages. Belebele opens up new avenues for evaluating and analyzing the multilingual abilities of language models and NLP systems. Please refer to the paper for more details: [The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants](https://arxiv.org/abs/2308.16884).

## Composition

- 900 questions per language variant
- 488 distinct passages, with 1-2 associated questions each
- For each question, there are 4 multiple-choice answers, exactly 1 of which is correct
- 122 languages/language variants (including English)
- 900 x 122 = 109,800 total questions

### Motivation

Official repo: https://github.com/facebookresearch/belebele

### Your contribution

-

- user: rajveer43 (id 64583161, https://github.com/rajveer43)
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6284/labels{/name}
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6284/timeline
- state: closed
- locked: false
- number: 6,284
- performed_via_github_app: null
- closed_at: 2023-10-06T13:26:51Z
- assignee: null
- is_pull_request: false
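Since the dataset already lives on the Hub (per the comment above), it can be loaded directly. A sketch; the config name is an assumption (Belebele variants use FLORES-200-style codes, e.g. `eng_Latn`), so check the dataset card for the exact list:

```python
from datasets import load_dataset

# Select a language variant by config name; "eng_Latn" is assumed here.
belebele = load_dataset("facebook/belebele", "eng_Latn")
print(belebele)
```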
## Pull request 6283

- id: 1,928,552,257
- url: https://api.github.com/repos/huggingface/datasets/issues/6283
- repository_url: https://api.github.com/repos/huggingface/datasets
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6283/events
- labels: []
- active_lock_reason: null
- updated_at: 2024-07-04T07:24:20Z
- assignees: []
- html_url: https://github.com/huggingface/datasets/pull/6283
- author_association: COLLABORATOR
- state_reason: null
- draft: false
- milestone: null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006278 / 0.011353 (-0.005075) | 0.003692 / 0.011008 (-0.007316) | 0.080464 / 0.038508 (0.041956) | 0.064751 / 0.023109 (0.041642) | 0.318586 / 0.275898 (0.042688) | 0.351435 / 0.323480 (0.027955) | 0.005044 / 0.007986 (-0.002942) | 0.003034 / 0.004328 (-0.001295) | 0.063710 / 0.004250 (0.059460) | 0.050607 / 0.037052 (0.013555) | 0.318491 / 0.258489 (0.060001) | 0.365688 / 0.293841 (0.071847) | 0.027818 / 0.128546 (-0.100729) | 0.008119 / 0.075646 (-0.067527) | 0.262141 / 0.419271 (-0.157131) | 0.044710 / 0.043533 (0.001177) | 0.318875 / 0.255139 (0.063736) | 0.344559 / 0.283200 (0.061360) | 0.022861 / 0.141683 (-0.118822) | 1.452402 / 1.452155 (0.000247) | 1.502340 / 1.492716 (0.009624) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219355 / 0.018006 (0.201349) | 0.433311 / 0.000490 (0.432822) | 0.006545 / 0.000200 (0.006345) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024538 / 0.037411 (-0.012874) | 0.073346 / 0.014526 (0.058821) | 0.083824 / 0.176557 (-0.092733) | 0.145176 / 0.737135 (-0.591959) | 0.085941 / 0.296338 (-0.210397) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395153 / 0.215209 (0.179944) | 3.944734 / 2.077655 (1.867080) | 1.883910 / 1.504120 (0.379790) | 1.690560 / 1.541195 (0.149365) | 1.775180 / 1.468490 
(0.306690) | 0.506873 / 4.584777 (-4.077904) | 3.111095 / 3.745712 (-0.634617) | 2.915358 / 5.269862 (-2.354504) | 1.892886 / 4.565676 (-2.672791) | 0.058690 / 0.424275 (-0.365585) | 0.006550 / 0.007607 (-0.001057) | 0.463372 / 0.226044 (0.237328) | 4.640511 / 2.268929 (2.371583) | 2.321051 / 55.444624 (-53.123573) | 1.986330 / 6.876477 (-4.890147) | 2.160046 / 2.142072 (0.017973) | 0.597833 / 4.805227 (-4.207394) | 0.127946 / 6.500664 (-6.372718) | 0.059709 / 0.075469 (-0.015760) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.278966 / 1.841788 (-0.562822) | 17.863102 / 8.074308 (9.788794) | 13.896057 / 10.191392 (3.704665) | 0.147512 / 0.680424 (-0.532912) | 0.016771 / 0.534201 (-0.517430) | 0.335260 / 0.579283 (-0.244024) | 0.383019 / 0.434364 (-0.051345) | 0.384821 / 0.540337 (-0.155516) | 0.550143 / 1.386936 (-0.836793) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006234 / 0.011353 (-0.005118) | 0.003695 / 0.011008 (-0.007313) | 0.062654 / 0.038508 (0.024146) | 0.059397 / 0.023109 (0.036287) | 0.458375 / 0.275898 (0.182477) | 0.488951 / 0.323480 (0.165471) | 0.004971 / 0.007986 (-0.003014) | 0.002914 / 0.004328 (-0.001415) | 0.061184 / 0.004250 (0.056934) | 0.051246 / 0.037052 (0.014194) | 0.458035 / 0.258489 (0.199546) | 0.490838 / 0.293841 (0.196997) | 0.028746 / 0.128546 (-0.099800) | 0.008167 / 0.075646 (-0.067480) | 0.068006 / 0.419271 (-0.351265) | 0.041809 / 0.043533 (-0.001724) | 0.453896 / 0.255139 (0.198757) | 0.477583 / 0.283200 (0.194383) | 0.020906 / 0.141683 (-0.120777) | 1.443275 / 1.452155 (-0.008879) | 1.493431 / 1.492716 (0.000714) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219903 / 0.018006 (0.201896) | 0.410275 / 0.000490 (0.409785) | 0.003919 / 0.000200 (0.003719) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027850 / 0.037411 (-0.009561) | 0.080444 / 0.014526 (0.065918) | 0.089943 / 0.176557 (-0.086614) | 0.145810 / 0.737135 (-0.591326) | 0.090908 / 0.296338 (-0.205430) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.464386 / 0.215209 (0.249177) | 4.633787 / 2.077655 (2.556133) | 2.581658 / 1.504120 (1.077538) | 2.408486 / 1.541195 (0.867291) | 2.460491 / 1.468490 (0.992001) | 0.507512 / 4.584777 (-4.077265) | 3.190363 / 3.745712 (-0.555349) | 2.895581 / 5.269862 (-2.374280) | 1.871506 / 4.565676 (-2.694171) | 0.058469 / 0.424275 (-0.365806) | 0.006526 / 0.007607 (-0.001082) | 0.537641 / 0.226044 (0.311596) | 5.396660 / 2.268929 (3.127731) | 3.027028 / 55.444624 (-52.417596) | 2.703771 / 6.876477 (-4.172705) | 2.865576 / 2.142072 (0.723503) | 0.600103 / 4.805227 (-4.205124) | 0.127109 / 6.500664 (-6.373555) | 0.060985 / 0.075469 (-0.014484) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.365030 / 1.841788 (-0.476758) | 17.988218 / 8.074308 (9.913909) | 14.900796 / 10.191392 (4.709404) | 0.158211 / 0.680424 (-0.522213) | 0.018291 / 0.534201 (-0.515910) | 0.337437 / 0.579283 (-0.241846) | 0.383710 / 0.434364 (-0.050654) | 0.392341 / 0.540337 (-0.147997) | 0.561584 / 1.386936 (-0.825352) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b7571ab4b0d9b67b767c55db400b4ffac0f752f1 \"CML watermark\")\n", "CI failures are unrelated", "I also plan to address https://github.com/huggingface/datasets/issues/6280#issuecomment-1749310065 in this PR :).", "Oh ok, ping me again whenever you want another review :)", "Have you had a chance to continue this ? I can also take a look if you want", "Yes, I'll finish it next week :).", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6283). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@lhoestq Feel free to review this again. I've bumped PyArrow to 12.0.0 to simplify the implementation (no need for custom `array_concat` and less `pa.Array.from_buffers`). However, this makes `apache-beam` complain as it only supports `<12.0.0`. 
The next `apache-beam` release will set this boundary to `<15.0.0.`, so I think the only solution is to wait for it to be published.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005188 / 0.011353 (-0.006165) | 0.003997 / 0.011008 (-0.007011) | 0.062642 / 0.038508 (0.024134) | 0.028913 / 0.023109 (0.005804) | 0.248289 / 0.275898 (-0.027609) | 0.268084 / 0.323480 (-0.055396) | 0.004093 / 0.007986 (-0.003893) | 0.002822 / 0.004328 (-0.001506) | 0.048263 / 0.004250 (0.044012) | 0.041520 / 0.037052 (0.004468) | 0.263277 / 0.258489 (0.004788) | 0.289835 / 0.293841 (-0.004006) | 0.027621 / 0.128546 (-0.100925) | 0.010793 / 0.075646 (-0.064853) | 0.207624 / 0.419271 (-0.211648) | 0.035597 / 0.043533 (-0.007936) | 0.245706 / 0.255139 (-0.009433) | 0.268157 / 0.283200 (-0.015043) | 0.017310 / 0.141683 (-0.124373) | 1.130656 / 1.452155 (-0.321499) | 1.162134 / 1.492716 (-0.330583) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094081 / 0.018006 (0.076075) | 0.302298 / 0.000490 (0.301809) | 0.000220 / 0.000200 (0.000020) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019072 / 0.037411 (-0.018339) | 0.061162 / 0.014526 (0.046636) | 0.072820 / 0.176557 (-0.103737) | 0.122628 / 0.737135 (-0.614507) | 0.074962 / 0.296338 (-0.221377) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277858 / 
0.215209 (0.062649) | 2.688478 / 2.077655 (0.610823) | 1.397366 / 1.504120 (-0.106754) | 1.285078 / 1.541195 (-0.256117) | 1.291559 / 1.468490 (-0.176931) | 0.553646 / 4.584777 (-4.031131) | 2.355737 / 3.745712 (-1.389975) | 2.773025 / 5.269862 (-2.496836) | 1.731195 / 4.565676 (-2.834481) | 0.061372 / 0.424275 (-0.362903) | 0.004928 / 0.007607 (-0.002679) | 0.321703 / 0.226044 (0.095659) | 3.212927 / 2.268929 (0.943999) | 1.727104 / 55.444624 (-53.717521) | 1.479430 / 6.876477 (-5.397047) | 1.513436 / 2.142072 (-0.628637) | 0.629913 / 4.805227 (-4.175315) | 0.114607 / 6.500664 (-6.386057) | 0.041707 / 0.075469 (-0.033762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976060 / 1.841788 (-0.865727) | 11.575163 / 8.074308 (3.500855) | 9.521390 / 10.191392 (-0.670003) | 0.138725 / 0.680424 (-0.541699) | 0.013752 / 0.534201 (-0.520449) | 0.286252 / 0.579283 (-0.293031) | 0.263420 / 0.434364 (-0.170944) | 0.325531 / 0.540337 (-0.214806) | 0.419466 / 1.386936 (-0.967470) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005615 / 0.011353 (-0.005738) | 0.003884 / 0.011008 (-0.007124) | 0.049563 / 0.038508 (0.011055) | 0.032573 / 0.023109 (0.009464) | 0.276917 / 0.275898 (0.001019) | 0.298403 / 0.323480 (-0.025077) | 0.004367 / 0.007986 (-0.003618) | 0.002794 / 0.004328 (-0.001534) | 0.049105 / 0.004250 (0.044855) | 0.045597 / 0.037052 (0.008545) | 0.289762 / 0.258489 (0.031273) | 0.318440 / 0.293841 (0.024599) | 0.051883 / 0.128546 (-0.076664) | 0.010644 / 0.075646 (-0.065003) | 0.057455 / 0.419271 (-0.361816) | 0.033667 / 0.043533 (-0.009866) | 0.274424 / 0.255139 (0.019285) | 0.295890 / 0.283200 (0.012690) | 0.017029 / 0.141683 (-0.124654) | 1.130123 / 1.452155 (-0.322031) | 1.214827 / 1.492716 (-0.277889) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094882 / 0.018006 (0.076876) | 0.302505 / 0.000490 (0.302015) | 0.000228 / 0.000200 (0.000028) | 
0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021695 / 0.037411 (-0.015716) | 0.075196 / 0.014526 (0.060670) | 0.086641 / 0.176557 (-0.089915) | 0.124893 / 0.737135 (-0.612243) | 0.088765 / 0.296338 (-0.207574) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.303388 / 0.215209 (0.088179) | 2.934506 / 2.077655 (0.856852) | 1.608607 / 1.504120 (0.104487) | 1.494632 / 1.541195 (-0.046563) | 1.512801 / 1.468490 (0.044310) | 0.558563 / 4.584777 (-4.026214) | 2.383212 / 3.745712 (-1.362500) | 2.634629 / 5.269862 (-2.635233) | 1.729319 / 4.565676 (-2.836357) | 0.062345 / 0.424275 (-0.361930) | 0.004981 / 0.007607 (-0.002626) | 0.358333 / 0.226044 (0.132289) | 3.484229 / 2.268929 (1.215301) | 2.010043 / 55.444624 (-53.434581) | 1.693733 / 6.876477 (-5.182744) | 1.824150 / 2.142072 (-0.317922) | 0.650835 / 4.805227 (-4.154392) | 0.115933 / 6.500664 (-6.384732) | 0.041270 / 0.075469 (-0.034199) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007949 / 1.841788 (-0.833838) | 12.000085 / 8.074308 (3.925776) | 10.453119 / 10.191392 (0.261727) | 0.143583 / 0.680424 (-0.536840) | 0.015937 / 0.534201 (-0.518264) | 0.286653 / 0.579283 (-0.292631) | 0.272359 / 0.434364 (-0.162005) | 0.330520 / 0.540337 (-0.209818) | 0.417015 / 1.386936 (-0.969921) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ac05bac20fcc8e0e22a852707162e15a7e2ae357 \"CML watermark\")\n", "Still the problem is occured.\r\nHuggingface is sucks 🤮🤮🤮🤮" ]
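For context on the review thread above, here is a minimal sketch of the simplification that bumping to PyArrow 12.0.0 enables — built-in concatenation covering layouts that previously needed a custom `array_concat` helper. This assumes PyArrow >= 12; the arrays below are illustrative, not taken from the PR:

```python
import pyarrow as pa

# Two fixed-size-list arrays, one containing a null entry
a = pa.array([[1.0, 2.0], None], type=pa.list_(pa.float64(), 2))
b = pa.array([[3.0, 4.0]], type=pa.list_(pa.float64(), 2))

# With PyArrow >= 12 the built-in concatenation handles this layout,
# which is what removes the need for a custom array_concat helper
combined = pa.concat_arrays([a, b])
assert len(combined) == 3
assert combined.null_count == 1
```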
Fix array cast/embed with null values
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6283/reactions" }
PR_kwDODunzps5cBlKq
{ "diff_url": "https://github.com/huggingface/datasets/pull/6283.diff", "html_url": "https://github.com/huggingface/datasets/pull/6283", "merged_at": "2024-02-06T19:24:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/6283.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6283" }
2023-10-05T15:24:05Z
https://api.github.com/repos/huggingface/datasets/issues/6283/comments
Fixes issues with casting/embedding PyArrow list arrays that contain null values. It also bumps the required PyArrow version to 12.0.0 (released over 9 months ago) to simplify the implementation. Fix #6280, fix #6311, fix #6360. (Also fixes https://github.com/huggingface/datasets/issues/5430 to make Beam compatible with PyArrow>=12.0.0.)
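A minimal sketch of the failure mode the description refers to, assuming PyArrow >= 12: rebuilding a fixed-size-list array from its child values drops the null mask, so the validity bitmap has to be carried over explicitly at the buffer level. The data is illustrative and this is not the PR's exact implementation:

```python
import pyarrow as pa

# A fixed-size-list array with a null entry, the case the fix targets
arr = pa.array([[1.0, 2.0], None, [3.0, 4.0]], type=pa.list_(pa.float64(), 2))
values = arr.values  # flat child array (3 * 2 slots)

# Rebuilding from the child alone loses the null mask...
rebuilt = pa.FixedSizeListArray.from_arrays(values, 2)
assert rebuilt.null_count == 0

# ...so the outer validity bitmap must be reattached at the buffer level,
# roughly the kind of handling the fix adds
with_nulls = pa.Array.from_buffers(
    arr.type,            # fixed_size_list<item: double>[2]
    len(arr),
    [arr.buffers()[0]],  # the outer validity bitmap
    children=[values],
)
assert with_nulls.null_count == 1
```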
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/6283/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6283/timeline
closed
false
6,283
null
2024-02-06T19:24:19Z
null
true
1,928,473,630
https://api.github.com/repos/huggingface/datasets/issues/6282
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6282/events
[]
null
2024-03-01T16:33:20Z
[]
https://github.com/huggingface/datasets/pull/6282
MEMBER
null
false
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006934 / 0.011353 (-0.004419) | 0.004097 / 0.011008 (-0.006911) | 0.084662 / 0.038508 (0.046154) | 0.077106 / 0.023109 (0.053996) | 0.355035 / 0.275898 (0.079137) | 0.381466 / 0.323480 (0.057986) | 0.004182 / 0.007986 (-0.003803) | 0.003411 / 0.004328 (-0.000917) | 0.065279 / 0.004250 (0.061029) | 0.058192 / 0.037052 (0.021140) | 0.372363 / 0.258489 (0.113874) | 0.401621 / 0.293841 (0.107780) | 0.031719 / 0.128546 (-0.096827) | 0.008753 / 0.075646 (-0.066893) | 0.287125 / 0.419271 (-0.132146) | 0.052943 / 0.043533 (0.009410) | 0.349680 / 0.255139 (0.094541) | 0.364004 / 0.283200 (0.080805) | 0.026705 / 0.141683 (-0.114977) | 1.472708 / 1.452155 (0.020553) | 1.556559 / 1.492716 (0.063842) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224868 / 0.018006 (0.206862) | 0.458793 / 0.000490 (0.458304) | 0.009434 / 0.000200 (0.009234) | 0.000356 / 0.000054 (0.000301) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029670 / 0.037411 (-0.007741) | 0.086517 / 0.014526 (0.071991) | 0.097342 / 0.176557 (-0.079215) | 0.153722 / 0.737135 (-0.583413) | 0.098465 / 0.296338 (-0.197874) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400739 / 0.215209 (0.185530) | 3.998087 / 2.077655 (1.920432) | 2.025772 / 1.504120 (0.521652) | 1.858679 / 1.541195 (0.317485) | 1.951573 / 1.468490 
(0.483083) | 0.483028 / 4.584777 (-4.101749) | 3.554085 / 3.745712 (-0.191627) | 3.306983 / 5.269862 (-1.962879) | 2.087043 / 4.565676 (-2.478633) | 0.057127 / 0.424275 (-0.367148) | 0.007252 / 0.007607 (-0.000355) | 0.480180 / 0.226044 (0.254136) | 4.787183 / 2.268929 (2.518255) | 2.489667 / 55.444624 (-52.954957) | 2.150774 / 6.876477 (-4.725703) | 2.403197 / 2.142072 (0.261124) | 0.581843 / 4.805227 (-4.223384) | 0.134915 / 6.500664 (-6.365749) | 0.061283 / 0.075469 (-0.014186) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285700 / 1.841788 (-0.556088) | 19.474093 / 8.074308 (11.399785) | 14.336349 / 10.191392 (4.144957) | 0.170932 / 0.680424 (-0.509492) | 0.018348 / 0.534201 (-0.515853) | 0.391909 / 0.579283 (-0.187374) | 0.414706 / 0.434364 (-0.019658) | 0.458156 / 0.540337 (-0.082182) | 0.656303 / 1.386936 (-0.730633) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006738 / 0.011353 (-0.004615) | 0.004029 / 0.011008 (-0.006979) | 0.064411 / 0.038508 (0.025903) | 0.078225 / 0.023109 (0.055116) | 0.408468 / 0.275898 (0.132569) | 0.445585 / 0.323480 (0.122105) | 0.005490 / 0.007986 (-0.002495) | 0.003419 / 0.004328 (-0.000910) | 0.063966 / 0.004250 (0.059715) | 0.056779 / 0.037052 (0.019727) | 0.415258 / 0.258489 (0.156769) | 0.461258 / 0.293841 (0.167418) | 0.032051 / 0.128546 (-0.096495) | 0.008471 / 0.075646 (-0.067176) | 0.071004 / 0.419271 (-0.348267) | 0.049068 / 0.043533 (0.005536) | 0.409575 / 0.255139 (0.154436) | 0.430748 / 0.283200 (0.147548) | 0.023784 / 0.141683 (-0.117899) | 1.507894 / 1.452155 (0.055739) | 1.586575 / 1.492716 (0.093859) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228574 / 0.018006 (0.210568) | 0.451389 / 0.000490 (0.450900) | 0.006312 / 0.000200 (0.006112) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033391 / 0.037411 (-0.004020) | 0.096816 / 0.014526 (0.082290) | 0.107269 / 0.176557 (-0.069288) | 0.159749 / 0.737135 (-0.577387) | 0.108240 / 0.296338 (-0.188098) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437643 / 0.215209 (0.222434) | 4.378173 / 2.077655 (2.300518) | 2.367218 / 1.504120 (0.863098) | 2.229493 / 1.541195 (0.688298) | 2.329849 / 1.468490 (0.861359) | 0.494985 / 4.584777 (-4.089792) | 3.578540 / 3.745712 (-0.167172) | 3.338220 / 5.269862 (-1.931642) | 2.092482 / 4.565676 (-2.473194) | 0.058495 / 0.424275 (-0.365780) | 0.007396 / 0.007607 (-0.000211) | 0.511001 / 0.226044 (0.284957) | 5.113497 / 2.268929 (2.844568) | 2.806215 / 55.444624 (-52.638409) | 2.485428 / 6.876477 (-4.391048) | 2.764907 / 2.142072 (0.622835) | 0.598824 / 4.805227 (-4.206404) | 0.134988 / 6.500664 (-6.365676) | 0.061752 / 0.075469 (-0.013717) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.365583 / 1.841788 (-0.476205) | 20.270297 / 8.074308 (12.195989) | 15.331673 / 10.191392 (5.140281) | 0.166152 / 0.680424 (-0.514272) | 0.020678 / 0.534201 (-0.513523) | 0.394821 / 0.579283 (-0.184462) | 0.420493 / 0.434364 (-0.013871) | 0.468551 / 0.540337 (-0.071787) | 0.654903 / 1.386936 (-0.732033) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5f268dd4ad4fb6dada15937d57fb367cb2810162 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007803 / 0.011353 (-0.003550) | 0.004664 / 0.011008 (-0.006344) | 0.099908 / 0.038508 (0.061400) | 0.090674 / 0.023109 (0.067565) | 0.406009 / 0.275898 (0.130111) | 0.465098 / 0.323480 (0.141618) | 0.004667 / 0.007986 (-0.003319) | 0.003880 / 0.004328 (-0.000449) | 0.076552 / 0.004250 (0.072301) | 0.066345 / 0.037052 (0.029292) | 0.419195 / 0.258489 (0.160706) | 0.478581 / 0.293841 (0.184741) | 0.036967 / 0.128546 (-0.091579) | 0.010000 / 0.075646 (-0.065647) | 0.347126 / 0.419271 (-0.072145) | 0.062265 / 0.043533 (0.018733) | 0.406653 / 0.255139 (0.151514) | 0.439044 / 0.283200 (0.155845) | 0.031289 / 0.141683 (-0.110394) | 1.797674 / 1.452155 (0.345520) | 1.835183 / 1.492716 (0.342467) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268194 / 0.018006 (0.250187) | 0.493614 / 0.000490 (0.493124) | 0.015636 / 0.000200 (0.015436) | 0.000417 / 0.000054 (0.000362) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034188 / 0.037411 (-0.003223) | 0.099127 / 0.014526 (0.084601) | 0.113949 / 0.176557 (-0.062607) | 0.181209 / 0.737135 (-0.555926) | 0.114943 / 0.296338 (-0.181395) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455767 / 0.215209 (0.240558) | 4.542947 / 2.077655 (2.465293) | 2.214605 / 1.504120 (0.710485) | 2.015163 / 1.541195 (0.473969) | 2.084945 / 1.468490 (0.616455) | 0.583827 / 4.584777 (-4.000950) | 4.187009 / 3.745712 (0.441297) | 3.920841 / 5.269862 (-1.349020) | 2.447260 / 4.565676 (-2.118417) | 0.069139 / 0.424275 (-0.355137) | 0.008734 / 0.007607 (0.001127) | 0.544673 / 0.226044 (0.318629) | 5.445094 / 2.268929 (3.176165) | 2.788284 / 55.444624 (-52.656340) | 2.395863 / 6.876477 (-4.480614) | 2.622632 / 2.142072 (0.480560) | 0.703931 / 4.805227 (-4.101297) | 0.160502 / 6.500664 (-6.340162) | 0.073734 / 0.075469 (-0.001735) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.498992 / 1.841788 (-0.342795) | 22.761476 / 8.074308 (14.687168) | 17.123919 / 10.191392 (6.932527) | 0.170272 / 0.680424 (-0.510151) | 0.021307 / 0.534201 (-0.512894) | 0.467548 / 0.579283 (-0.111735) | 0.480777 / 0.434364 (0.046413) | 0.542168 / 0.540337 (0.001830) | 0.771092 / 1.386936 (-0.615844) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007923 / 0.011353 (-0.003430) | 0.004664 / 0.011008 (-0.006344) | 0.077795 / 0.038508 (0.039286) | 0.090293 / 0.023109 (0.067184) | 0.494682 / 0.275898 (0.218784) | 0.539973 / 0.323480 (0.216494) | 0.006302 / 0.007986 (-0.001684) | 0.003794 / 0.004328 (-0.000535) | 0.076567 / 0.004250 (0.072317) | 0.067141 / 0.037052 (0.030089) | 0.501279 / 0.258489 (0.242790) | 0.555670 / 0.293841 (0.261829) | 0.037773 / 0.128546 (-0.090773) | 0.009930 / 0.075646 (-0.065716) | 0.084839 / 0.419271 (-0.334433) | 0.056876 / 0.043533 (0.013344) | 0.499329 / 0.255139 (0.244190) | 0.518449 / 0.283200 (0.235249) | 0.026041 / 0.141683 (-0.115642) | 1.787259 / 1.452155 (0.335105) | 1.853505 / 1.492716 (0.360788) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238413 / 0.018006 (0.220407) | 0.488889 / 0.000490 (0.488399) | 0.007476 / 0.000200 (0.007277) | 0.000141 / 0.000054 (0.000087) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038701 / 0.037411 (0.001290) | 0.115391 / 0.014526 (0.100865) | 0.125553 / 0.176557 (-0.051004) | 0.190267 / 0.737135 (-0.546868) | 0.126401 / 0.296338 (-0.169937) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.509270 / 0.215209 (0.294061) | 5.087631 / 2.077655 (3.009976) | 2.745863 / 1.504120 (1.241743) | 2.560259 / 1.541195 (1.019064) | 2.653124 / 1.468490 (1.184634) | 0.582118 / 4.584777 
(-4.002659) | 4.181144 / 3.745712 (0.435431) | 3.871179 / 5.269862 (-1.398683) | 2.459849 / 4.565676 (-2.105827) | 0.068844 / 0.424275 (-0.355431) | 0.008672 / 0.007607 (0.001065) | 0.604898 / 0.226044 (0.378854) | 6.073263 / 2.268929 (3.804334) | 3.366638 / 55.444624 (-52.077986) | 2.937261 / 6.876477 (-3.939215) | 3.181173 / 2.142072 (1.039100) | 0.700478 / 4.805227 (-4.104750) | 0.158361 / 6.500664 (-6.342303) | 0.072860 / 0.075469 (-0.002609) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.621363 / 1.841788 (-0.220425) | 23.614315 / 8.074308 (15.540007) | 17.607213 / 10.191392 (7.415821) | 0.198031 / 0.680424 (-0.482393) | 0.023859 / 0.534201 (-0.510342) | 0.474674 / 0.579283 (-0.104609) | 0.491173 / 0.434364 (0.056809) | 0.581995 / 0.540337 (0.041658) | 0.792168 / 1.386936 (-0.594768) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#56fa9645fd24e083adee3cfd0f7d972fce391f0e \"CML watermark\")\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6282). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004779 / 0.011353 (-0.006574) | 0.002916 / 0.011008 (-0.008092) | 0.061962 / 0.038508 (0.023454) | 0.029537 / 0.023109 (0.006428) | 0.242574 / 0.275898 (-0.033324) | 0.268585 / 0.323480 (-0.054894) | 0.004006 / 0.007986 (-0.003979) | 0.002434 / 0.004328 (-0.001895) | 0.048289 / 0.004250 (0.044039) | 0.045534 / 0.037052 (0.008481) | 0.248251 / 0.258489 (-0.010239) | 0.277037 / 0.293841 (-0.016804) | 0.023728 / 0.128546 (-0.104818) | 0.007295 / 0.075646 (-0.068351) | 0.205813 / 0.419271 (-0.213459) | 0.059093 / 0.043533 (0.015560) | 0.244336 / 0.255139 (-0.010803) | 0.262865 / 0.283200 (-0.020335) | 0.017232 / 0.141683 (-0.124451) | 1.126729 / 1.452155 (-0.325426) | 1.198987 / 1.492716 (-0.293729) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row 
|\n|--------|---|---|---|---|\n| new / old (diff) | 0.091246 / 0.018006 (0.073240) | 0.300747 / 0.000490 (0.300258) | 0.000202 / 0.000200 (0.000003) | 0.000041 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018681 / 0.037411 (-0.018731) | 0.063567 / 0.014526 (0.049041) | 0.074019 / 0.176557 (-0.102538) | 0.120856 / 0.737135 (-0.616279) | 0.076525 / 0.296338 (-0.219814) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282230 / 0.215209 (0.067021) | 2.731502 / 2.077655 (0.653847) | 1.473901 / 1.504120 (-0.030219) | 1.351165 / 1.541195 (-0.190030) | 1.390582 / 1.468490 (-0.077908) | 0.398443 / 4.584777 (-4.186334) | 2.360497 / 3.745712 (-1.385215) | 2.548158 / 5.269862 (-2.721703) | 1.552416 / 4.565676 (-3.013260) | 0.045659 / 0.424275 (-0.378616) | 0.004778 / 0.007607 (-0.002829) | 0.330191 / 0.226044 (0.104146) | 3.262510 / 2.268929 (0.993582) | 1.823076 / 55.444624 (-53.621549) | 1.541206 / 6.876477 (-5.335271) | 1.589069 / 2.142072 (-0.553004) | 0.472265 / 4.805227 (-4.332963) | 0.099712 / 6.500664 (-6.400952) | 0.042803 / 0.075469 (-0.032666) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963022 / 1.841788 (-0.878766) | 11.998807 / 8.074308 (3.924499) | 10.526006 / 10.191392 (0.334614) | 0.140965 / 0.680424 (-0.539459) | 0.014197 / 0.534201 (-0.520004) | 0.271668 / 0.579283 (-0.307615) | 0.263993 / 0.434364 (-0.170371) | 0.307213 / 0.540337 (-0.233124) | 0.427411 / 1.386936 (-0.959525) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004761 / 0.011353 (-0.006592) | 0.002652 / 0.011008 (-0.008357) | 0.047949 / 0.038508 (0.009441) | 0.049714 / 0.023109 (0.026604) | 0.274021 / 0.275898 (-0.001877) | 0.292413 / 0.323480 (-0.031067) | 0.003912 / 0.007986 (-0.004074) | 0.002290 / 0.004328 (-0.002038) | 0.047320 / 0.004250 (0.043069) | 0.038061 / 0.037052 (0.001009) | 0.279318 / 0.258489 (0.020829) | 0.305167 / 0.293841 (0.011326) | 0.024595 / 0.128546 (-0.103952) | 0.006976 / 0.075646 (-0.068671) | 0.052987 / 0.419271 (-0.366285) | 0.032454 / 0.043533 (-0.011079) | 0.273986 / 0.255139 (0.018847) | 0.297641 / 0.283200 (0.014442) | 0.017680 / 0.141683 (-0.124003) | 1.141218 / 1.452155 (-0.310937) | 1.222543 / 1.492716 (-0.270173) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092880 / 0.018006 (0.074873) | 0.305080 / 0.000490 (0.304590) | 0.000215 / 0.000200 (0.000016) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021050 / 0.037411 (-0.016362) | 0.069676 / 0.014526 (0.055150) | 0.081082 / 0.176557 (-0.095475) | 0.119234 / 0.737135 (-0.617902) | 0.081242 / 0.296338 (-0.215096) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295916 / 0.215209 (0.080707) | 2.909769 / 2.077655 (0.832115) | 1.623118 / 1.504120 (0.118998) | 1.502297 / 1.541195 (-0.038898) | 1.540290 / 1.468490 (0.071800) | 0.401176 / 4.584777 (-4.183601) | 2.427764 / 3.745712 (-1.317948) | 2.568610 / 5.269862 (-2.701252) | 1.550486 / 4.565676 (-3.015190) | 0.046895 / 0.424275 (-0.377380) | 0.004800 / 0.007607 (-0.002807) | 0.344524 / 0.226044 (0.118479) | 3.429189 / 2.268929 (1.160261) | 1.949738 / 55.444624 (-53.494887) | 1.681440 / 6.876477 (-5.195037) | 1.675304 / 2.142072 (-0.466769) | 0.469663 / 4.805227 (-4.335564) | 0.097470 / 6.500664 (-6.403194) | 0.040121 / 0.075469 (-0.035348) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.957947 / 1.841788 (-0.883841) | 11.968455 / 8.074308 (3.894147) | 10.809763 / 10.191392 (0.618371) | 0.140603 / 0.680424 (-0.539820) | 0.015562 / 0.534201 (-0.518638) | 0.276406 / 0.579283 (-0.302877) | 
0.295267 / 0.434364 (-0.139097) | 0.315744 / 0.540337 (-0.224593) | 0.417985 / 1.386936 (-0.968951) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#12e01642a6978cbb9d5778c8b7f1c6b20a9887d5 \"CML watermark\")\n", "I've opened #6704 with a cleaner fix for the issue :)" ]
Drop data_files duplicates
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6282/reactions" }
PR_kwDODunzps5cBT5p
{ "diff_url": "https://github.com/huggingface/datasets/pull/6282.diff", "html_url": "https://github.com/huggingface/datasets/pull/6282", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6282.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6282" }
2023-10-05T14:43:08Z
https://api.github.com/repos/huggingface/datasets/issues/6282/comments
I just added drop_duplicates=True to `.from_patterns` and used a dict to deduplicate while preserving order. Close https://github.com/huggingface/datasets/issues/6259 and close https://github.com/huggingface/datasets/issues/6272.
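A minimal sketch of the dedup approach the description mentions — a plain dict preserves insertion order (guaranteed since Python 3.7), so it removes duplicates while keeping each file's first-seen position. The file names are illustrative:

```python
data_files = [
    "data/train-00000.parquet",
    "data/train-00001.parquet",
    "data/train-00000.parquet",  # duplicate from overlapping patterns
]

# dict keys deduplicate and keep first-seen order
deduped = list(dict.fromkeys(data_files))
assert deduped == ["data/train-00000.parquet", "data/train-00001.parquet"]
```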
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6282/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6282/timeline
open
false
6,282
null
null
null
true
1,928,456,959
https://api.github.com/repos/huggingface/datasets/issues/6281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6281/events
[]
null
2023-10-05T19:09:07Z
[]
https://github.com/huggingface/datasets/pull/6281
CONTRIBUTOR
null
false
null
[ "I have looked at the doc failures, and I do not think that my change caused the doc build failure, but I'm not 100% sure about that.\r\nI have high confidence that the integration test failures are not something I introduced:-)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008557 / 0.011353 (-0.002796) | 0.005224 / 0.011008 (-0.005784) | 0.109402 / 0.038508 (0.070893) | 0.075008 / 0.023109 (0.051899) | 0.388910 / 0.275898 (0.113012) | 0.425481 / 0.323480 (0.102002) | 0.005046 / 0.007986 (-0.002939) | 0.004166 / 0.004328 (-0.000162) | 0.079890 / 0.004250 (0.075639) | 0.061992 / 0.037052 (0.024940) | 0.409933 / 0.258489 (0.151444) | 0.444096 / 0.293841 (0.150255) | 0.043958 / 0.128546 (-0.084588) | 0.013655 / 0.075646 (-0.061991) | 0.402620 / 0.419271 (-0.016651) | 0.062784 / 0.043533 (0.019251) | 0.399653 / 0.255139 (0.144514) | 0.432926 / 0.283200 (0.149727) | 0.034631 / 0.141683 (-0.107052) | 1.801450 / 1.452155 (0.349296) | 1.965007 / 1.492716 (0.472290) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.305744 / 0.018006 (0.287738) | 0.590825 / 0.000490 (0.590335) | 0.014561 / 0.000200 (0.014361) | 0.000430 / 0.000054 (0.000375) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030449 / 0.037411 (-0.006962) | 0.091753 / 0.014526 (0.077227) | 0.106259 / 0.176557 (-0.070298) | 0.174599 / 0.737135 (-0.562537) | 0.107069 / 0.296338 (-0.189269) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.607544 / 0.215209 (0.392335) | 6.182592 / 2.077655 (4.104937) | 2.699782 / 1.504120 (1.195663) | 2.386915 / 1.541195 (0.845720) | 2.441763 / 1.468490 (0.973273) | 0.811360 / 4.584777 (-3.773417) | 5.253799 / 3.745712 (1.508087) | 4.762054 / 5.269862 (-0.507807) | 3.045161 / 4.565676 (-1.520515) | 0.095983 / 0.424275 (-0.328292) | 0.008653 / 0.007607 (0.001046) | 0.714218 / 0.226044 (0.488174) | 7.279325 / 2.268929 (5.010397) | 3.356107 / 55.444624 (-52.088517) | 2.765867 / 6.876477 (-4.110610) | 2.997756 / 2.142072 (0.855684) | 1.008740 / 4.805227 (-3.796487) | 0.201462 / 6.500664 (-6.299202) | 0.075780 / 0.075469 (0.000311) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.677034 / 1.841788 (-0.164754) | 23.546919 / 8.074308 (15.472610) | 21.576985 / 10.191392 (11.385593) | 0.239253 / 0.680424 (-0.441171) | 0.028740 / 0.534201 (-0.505460) | 0.468519 / 0.579283 (-0.110765) | 0.593935 / 0.434364 (0.159571) | 0.536830 / 0.540337 (-0.003507) | 0.779925 / 1.386936 (-0.607011) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009582 / 0.011353 (-0.001771) | 0.004971 / 0.011008 (-0.006037) | 0.081304 / 0.038508 (0.042796) | 0.077588 / 0.023109 (0.054478) | 0.486610 / 0.275898 (0.210712) | 0.580228 / 0.323480 (0.256748) | 0.006707 / 0.007986 (-0.001279) | 0.004325 / 0.004328 (-0.000004) | 0.086170 / 0.004250 (0.081920) | 0.060591 / 0.037052 (0.023539) | 0.501723 / 0.258489 (0.243234) | 0.548633 / 0.293841 (0.254793) | 0.050306 / 0.128546 (-0.078240) | 0.017458 / 0.075646 (-0.058188) | 0.093295 / 0.419271 (-0.325977) | 0.064588 / 0.043533 (0.021056) | 0.519395 / 0.255139 (0.264256) | 0.526021 / 0.283200 (0.242821) | 0.035795 / 0.141683 (-0.105888) | 1.792927 / 1.452155 (0.340772) | 1.956499 / 1.492716 (0.463783) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.296249 / 0.018006 (0.278243) | 0.594482 / 0.000490 (0.593992) | 0.007318 / 0.000200 (0.007118) | 0.000182 / 0.000054 (0.000128) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036110 / 0.037411 (-0.001301) | 0.107924 / 0.014526 (0.093399) | 0.119975 / 0.176557 (-0.056582) | 0.177499 / 0.737135 (-0.559636) | 0.123299 / 0.296338 (-0.173039) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.632994 / 0.215209 (0.417785) | 6.481663 / 2.077655 (4.404008) | 3.231259 / 1.504120 (1.727139) | 2.768298 / 1.541195 (1.227103) | 2.694543 / 1.468490 (1.226053) | 0.837384 / 4.584777 (-3.747393) | 5.405278 / 3.745712 (1.659566) | 4.639424 / 5.269862 (-0.630437) | 2.944251 / 4.565676 (-1.621426) | 0.094978 / 0.424275 (-0.329297) | 0.008716 / 0.007607 (0.001108) | 0.795820 / 0.226044 (0.569776) | 8.514233 / 2.268929 (6.245304) | 3.800463 / 55.444624 (-51.644161) | 3.000005 / 6.876477 (-3.876472) | 3.298853 / 2.142072 (1.156781) | 0.994112 / 4.805227 (-3.811115) | 0.209435 / 6.500664 (-6.291229) | 0.075610 / 0.075469 (0.000141) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.681127 / 1.841788 (-0.160661) | 23.874465 / 8.074308 (15.800156) | 21.638567 / 10.191392 (11.447175) | 0.233303 / 0.680424 (-0.447121) | 0.032504 / 0.534201 (-0.501697) | 0.460462 / 0.579283 (-0.118821) | 0.560043 / 0.434364 (0.125679) | 0.555059 / 0.540337 (0.014721) | 0.831444 / 1.386936 (-0.555492) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#faada1742e1f25fce9cc5691ec11d3f91d4aa120 \"CML watermark\")\n" ]
Improve documentation of `Dataset.from_generator`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6281/reactions" }
PR_kwDODunzps5cBQPd
{ "diff_url": "https://github.com/huggingface/datasets/pull/6281.diff", "html_url": "https://github.com/huggingface/datasets/pull/6281", "merged_at": "2023-10-05T18:57:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/6281.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6281" }
2023-10-05T14:34:49Z
https://api.github.com/repos/huggingface/datasets/issues/6281/comments
Improve the documentation of `Dataset.from_generator` to clarify its sharding behavior (#6270)
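A short sketch of the sharding behavior the clarified docs describe: list-typed `gen_kwargs` are split across workers, which is what makes `num_proc > 1` actually parallelize generation. The shard names are illustrative:

```python
from datasets import Dataset

def gen(shards):
    # each worker receives a slice of `shards`
    for shard in shards:
        for i in range(3):
            yield {"shard": shard, "i": i}

shards = [f"shard_{k}.jsonl" for k in range(4)]
ds = Dataset.from_generator(gen, gen_kwargs={"shards": shards}, num_proc=2)
```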
{ "avatar_url": "https://avatars.githubusercontent.com/u/53510?v=4", "events_url": "https://api.github.com/users/hartmans/events{/privacy}", "followers_url": "https://api.github.com/users/hartmans/followers", "following_url": "https://api.github.com/users/hartmans/following{/other_user}", "gists_url": "https://api.github.com/users/hartmans/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hartmans", "id": 53510, "login": "hartmans", "node_id": "MDQ6VXNlcjUzNTEw", "organizations_url": "https://api.github.com/users/hartmans/orgs", "received_events_url": "https://api.github.com/users/hartmans/received_events", "repos_url": "https://api.github.com/users/hartmans/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hartmans/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hartmans/subscriptions", "type": "User", "url": "https://api.github.com/users/hartmans" }
https://api.github.com/repos/huggingface/datasets/issues/6281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6281/timeline
closed
false
6,281
null
2023-10-05T18:57:41Z
null
true
1,928,215,278
https://api.github.com/repos/huggingface/datasets/issues/6280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6280/events
[]
null
2024-02-06T19:24:20Z
[]
https://github.com/huggingface/datasets/issues/6280
NONE
completed
null
null
[ "Thanks for reporting! I've opened a PR with a fix.", "Thanks for the quick response @mariosasko! I just installed your branch via `poetry add 'git+https://github.com/huggingface/datasets#fix-array_values'` and I can confirm it works on the example provided.\r\n\r\nFollow up question for you, should `None`s be supported in these types of features as they are in others?\r\n\r\nFor example, the following script:\r\n\r\n```\r\nfrom datasets import Features, Value, Sequence, ClassLabel, Dataset\r\n\r\ndataset_features = Features({\r\n 'text': Value('string'),\r\n 'embedding': Sequence(Value('double'), length=2),\r\n 'categories': Sequence(ClassLabel(names=sorted([\r\n 'one',\r\n 'two',\r\n 'three'\r\n ]))),\r\n})\r\n\r\ndataset = Dataset.from_dict(\r\n {\r\n 'text': ['A'] * 10000,\r\n \"embedding\": [None] * 10000, # THIS LINE CHANGED\r\n 'categories': [[0]] * 10000,\r\n },\r\n features=dataset_features\r\n)\r\n\r\ndef test_mapper(r):\r\n r['text'] = list(map(lambda t: t + ' b', r['text']))\r\n return r\r\n\r\n\r\ndataset = dataset.map(test_mapper, batched=True, batch_size=10, features=dataset_features, num_proc=2)\r\n```\r\n\r\nfails with\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/multiprocess/pool.py\", line 125, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"/home/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/utils/py_utils.py\", line 1354, in _write_generator_to_queue\r\n for i, result in enumerate(func(**kwargs)):\r\n File \"/home/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3493, in _map_single\r\n writer.write_batch(batch)\r\n File \"/home/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 549, in write_batch\r\n array = cast_array_to_feature(col_values, col_type) if col_type is not None else col_values\r\n File \"/home/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/table.py\", line 1831, in wrapper\r\n return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n File \"/home/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/table.py\", line 1831, in <listcomp>\r\n return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n File \"/home/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/table.py\", line 2160, in cast_array_to_feature\r\n raise TypeError(f\"Couldn't cast array of type\\n{array.type}\\nto\\n{feature}\")\r\nTypeError: Couldn't cast array of type\r\nfixed_size_list<item: double>[2]\r\nto\r\nSequence(feature=Value(dtype='float64', id=None), length=2, id=None)\r\n```\r\n\r\nIdeally we can have empty embedding columns as well!", "This part of PyArrow is buggy and inconsistent regarding features implemented across the types, so the only option is to operate on the Arrow buffer level to fix issues such as the above one.", "Ok - can you take the POC I did [here](https://github.com/huggingface/datasets/commit/15443098e9ce053943172f7ec6fce3769d7dff6e)? Happy to turn this into an actual PR but would appreciate feedback on the implementation before I take another pass!" ]
Couldn't cast array of type fixed_size_list to Sequence(Value(float64))
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6280/reactions" }
I_kwDODunzps5y7jru
null
2023-10-05T12:48:31Z
https://api.github.com/repos/huggingface/datasets/issues/6280/comments
### Describe the bug I have a dataset with an embedding column, when I try to map that dataset I get the following exception: ``` Traceback (most recent call last): File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3189, in map for rank, done, content in iflatmap_unordered( File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1387, in iflatmap_unordered [async_result.get(timeout=0.05) for async_result in async_results] File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1387, in <listcomp> [async_result.get(timeout=0.05) for async_result in async_results] File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/multiprocess/pool.py", line 774, in get raise self._value TypeError: Couldn't cast array of type fixed_size_list<item: float>[2] to Sequence(feature=Value(dtype='float32', id=None), length=2, id=None) ``` ### Steps to reproduce the bug Here's a simple repro script: ``` from datasets import Features, Value, Sequence, ClassLabel, Dataset dataset_features = Features({ 'text': Value('string'), 'embedding': Sequence(Value('double'), length=2), 'categories': Sequence(ClassLabel(names=sorted([ 'one', 'two', 'three' ]))), }) dataset = Dataset.from_dict( { 'text': ['A'] * 10000, 'embedding': [[0.0, 0.1]] * 10000, 'categories': [[0]] * 10000, }, features=dataset_features ) def test_mapper(r): r['text'] = list(map(lambda t: t + ' b', r['text'])) return r dataset = dataset.map(test_mapper, batched=True, batch_size=10, features=dataset_features, num_proc=2) ``` Removing the embedding column fixes the issue! ### Expected behavior The mapping completes successfully. ### Environment info - `datasets` version: 2.14.4 - Platform: macOS-14.0-arm64-arm-64bit - Python version: 3.10.12 - Huggingface_hub version: 0.17.1 - PyArrow version: 13.0.0 - Pandas version: 2.0.3
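As a rough illustration of the buffer-level approach mentioned in the comments above, a sketch that converts a `FixedSizeListArray` into the variable-size list layout that `Sequence(..., length=2)` casting expects. This is our illustrative workaround, not the library's actual patch, and it ignores null entries for brevity.

```python
import pyarrow as pa

def fixed_size_list_to_list(arr: pa.FixedSizeListArray) -> pa.ListArray:
    # Rebuild the explicit offsets (0, n, 2n, ...) implied by the fixed
    # list size, then re-wrap the flattened child values.
    size = arr.type.list_size
    offsets = pa.array(range(0, (len(arr) + 1) * size, size), type=pa.int32())
    return pa.ListArray.from_arrays(offsets, arr.values)

arr = pa.array([[0.0, 0.1]] * 3, type=pa.list_(pa.float64(), 2))
print(fixed_size_list_to_list(arr).type)  # list<item: double>
```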
{ "avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4", "events_url": "https://api.github.com/users/jmif/events{/privacy}", "followers_url": "https://api.github.com/users/jmif/followers", "following_url": "https://api.github.com/users/jmif/following{/other_user}", "gists_url": "https://api.github.com/users/jmif/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jmif", "id": 1000442, "login": "jmif", "node_id": "MDQ6VXNlcjEwMDA0NDI=", "organizations_url": "https://api.github.com/users/jmif/orgs", "received_events_url": "https://api.github.com/users/jmif/received_events", "repos_url": "https://api.github.com/users/jmif/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmif/subscriptions", "type": "User", "url": "https://api.github.com/users/jmif" }
https://api.github.com/repos/huggingface/datasets/issues/6280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6280/timeline
closed
false
6,280
null
2024-02-06T19:24:20Z
null
false
1,928,028,226
https://api.github.com/repos/huggingface/datasets/issues/6279
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6279/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-07-08T11:10:41Z
[]
https://github.com/huggingface/datasets/issues/6279
NONE
null
null
null
[ "This is exactly what I was looking for. It would also be very useful for me :-)", "This issue is really smashing the selling point of HF datasets... The only workaround I've found so far is to create a customized IterableDataloader which improves the loading speed to some extent.\r\n\r\nFor example I've a HF dataset `dt_train` with `len(dt_train) == 1M`. Using plain DataLoader is extremely slow:\r\n```\r\n%%time\r\ndl_train = DataLoader(dt_train, batch_size=128, shuffle = True)\r\nfor batch in dl_train:\r\n pass\r\n``` \r\n\r\n```\r\nCPU times: user 24min 35s, sys: 704 ms, total: 24min 36s\r\nWall time: 24min 37s\r\n```\r\nAnd DataLoader works even worse with HF's iterable_dataset:\r\n```\r\n%%time\r\ndt_train_ = dt_train.with_format(None).to_iterable_dataset(num_shards=64).shuffle(buffer_size=10_000)\r\ndl_train = DataLoader(dt_train_, batch_size=128)\r\nfor batch in dl_train:\r\n pass\r\n```\r\n```\r\nCPU times: user 1h 6min 2s, sys: 4.28 s, total: 1h 6min 6s\r\nWall time: 1h 7min 53s\r\n```\r\nWorkaround by running a customized wrapper:\r\n```\r\n%%time\r\nfrom torch.utils.data import DataLoader, IterableDataset\r\n\r\nclass Dataset2Iterable(IterableDataset):\r\n \"\"\"\r\n Wrapper to use a HF dataset as pytorch IterableDataset to speed up data loading.\r\n \"\"\"\r\n def __init__(self, dataset, batch_size=1, shuffle=True):\r\n super(Dataset2Iterable).__init__()\r\n self.dataset = dataset\r\n self.batch_size = batch_size\r\n self.shuffle = shuffle\r\n\r\n def __iter__(self):\r\n if self.shuffle: self.dataset.shuffle()\r\n return self.dataset.iter(batch_size=self.batch_size)\r\n\r\ndl_train = DataLoader(Dataset2Iterable(dt_train, batch_size = 128), batch_size=1, num_workers=0)\r\nfor n in range(2):\r\n for batch in dl_train:\r\n pass\r\n```\r\nThe speed still is slower than using tensorflow's loader but improved a lot than previous code:\r\n```\r\nCPU times: user 4min 18s, sys: 0 ns, total: 4min 18s\r\nWall time: 4min 20s\r\n```\r\nNote that the way I implemented `Dataset2Iterable` will only work with `num_workers=0`.", "I can confirm that @zhh210's solution works with `num_workers=0`. However, for my use case, this was still slower than tokenizing on the fly through a collator and leveraging multiple workers in the dataloder.\r\n\r\n@lhoestq I think this is an important use case (e.g., streaming from a large dataset, online or stored on disk). What do you think might be the best solution to move forward?", "I guess it can be implemented using a batched`.map()` under the hood that returns a single item containing the input batch.\r\n\r\nIn the meantime you can use this:\r\n\r\n```python\r\ndef batch(unbatched: dict[str, list]) -> dict[str, list]:\r\n return {k: [v] for k, v in unbatched}\r\n\r\nbatched_dataset = dataset.map(batch, batched=True, batch_size=batch_size)\r\n```\r\n\r\nThough it would be great to have a `.batch()` method indeed, I'd be happy to help with anyone wants to open a PR", "If no one else is planning to work on this, I can take it on. I'll wait until next week, and if no one has started a PR by then, I'll go ahead and open one." ]
Batched IterableDataset
{ "+1": 5, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 5, "url": "https://api.github.com/repos/huggingface/datasets/issues/6279/reactions" }
I_kwDODunzps5y62BC
null
2023-10-05T11:12:49Z
https://api.github.com/repos/huggingface/datasets/issues/6279/comments
### Feature request Hi, could you add an implementation of a batched `IterableDataset`? It already supports an option to do batch iteration via `.iter(batch_size=...)`, but this cannot be used in combination with a torch `DataLoader` since it just returns an iterator. ### Motivation The current implementation loads each element of a batch individually, which can be very slow for a big batch_size. I did some experiments [here](https://discuss.huggingface.co/t/slow-dataloader-with-big-batch-size/57224), and using batched iteration would speed up data loading significantly. ### Your contribution N/A
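For reference, a minimal sketch of the existing `.iter()` API the request refers to. It yields dict-of-lists batches, but as a plain Python iterator it cannot be wrapped by a torch `DataLoader` the way a map-style dataset can.

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})

# Each step yields one batch as a dict of column lists...
for batch in ds.iter(batch_size=4):
    print(batch["x"])  # [0, 1, 2, 3], then [4, 5, 6, 7], then [8, 9]
# ...but ds.iter(...) is just an iterator, not an IterableDataset,
# so a torch DataLoader cannot parallelize or collate over it.
```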
{ "avatar_url": "https://avatars.githubusercontent.com/u/7010688?v=4", "events_url": "https://api.github.com/users/lneukom/events{/privacy}", "followers_url": "https://api.github.com/users/lneukom/followers", "following_url": "https://api.github.com/users/lneukom/following{/other_user}", "gists_url": "https://api.github.com/users/lneukom/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lneukom", "id": 7010688, "login": "lneukom", "node_id": "MDQ6VXNlcjcwMTA2ODg=", "organizations_url": "https://api.github.com/users/lneukom/orgs", "received_events_url": "https://api.github.com/users/lneukom/received_events", "repos_url": "https://api.github.com/users/lneukom/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lneukom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lneukom/subscriptions", "type": "User", "url": "https://api.github.com/users/lneukom" }
https://api.github.com/repos/huggingface/datasets/issues/6279/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6279/timeline
open
false
6,279
null
null
null
false
1,927,957,877
https://api.github.com/repos/huggingface/datasets/issues/6278
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6278/events
[]
null
2024-01-11T06:32:49Z
[]
https://github.com/huggingface/datasets/pull/6278
MEMBER
null
true
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009624 / 0.011353 (-0.001729) | 0.005121 / 0.011008 (-0.005887) | 0.105560 / 0.038508 (0.067052) | 0.090749 / 0.023109 (0.067640) | 0.430274 / 0.275898 (0.154376) | 0.443399 / 0.323480 (0.119919) | 0.006575 / 0.007986 (-0.001411) | 0.004396 / 0.004328 (0.000068) | 0.080900 / 0.004250 (0.076649) | 0.064921 / 0.037052 (0.027868) | 0.410092 / 0.258489 (0.151603) | 0.470058 / 0.293841 (0.176217) | 0.054160 / 0.128546 (-0.074386) | 0.014367 / 0.075646 (-0.061279) | 0.384844 / 0.419271 (-0.034428) | 0.072818 / 0.043533 (0.029285) | 0.429341 / 0.255139 (0.174202) | 0.430968 / 0.283200 (0.147769) | 0.038437 / 0.141683 (-0.103246) | 1.814456 / 1.452155 (0.362301) | 1.832122 / 1.492716 (0.339406) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.329266 / 0.018006 (0.311260) | 0.596848 / 0.000490 (0.596358) | 0.018291 / 0.000200 (0.018091) | 0.000113 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030505 / 0.037411 (-0.006907) | 0.097394 / 0.014526 (0.082869) | 0.127144 / 0.176557 (-0.049412) | 0.190251 / 0.737135 (-0.546884) | 0.116543 / 0.296338 (-0.179795) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.592124 / 0.215209 (0.376915) | 5.979801 / 2.077655 (3.902146) | 2.837753 / 1.504120 (1.333633) | 2.492942 / 1.541195 (0.951747) | 2.548083 / 1.468490 
(1.079593) | 0.870446 / 4.584777 (-3.714330) | 5.493718 / 3.745712 (1.748006) | 4.945135 / 5.269862 (-0.324727) | 3.133994 / 4.565676 (-1.431683) | 0.097742 / 0.424275 (-0.326533) | 0.008750 / 0.007607 (0.001143) | 0.723304 / 0.226044 (0.497260) | 7.353766 / 2.268929 (5.084838) | 3.504808 / 55.444624 (-51.939816) | 2.872490 / 6.876477 (-4.003987) | 3.186628 / 2.142072 (1.044556) | 1.035470 / 4.805227 (-3.769758) | 0.211980 / 6.500664 (-6.288684) | 0.080356 / 0.075469 (0.004887) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.623389 / 1.841788 (-0.218399) | 23.492350 / 8.074308 (15.418042) | 21.053525 / 10.191392 (10.862133) | 0.225668 / 0.680424 (-0.454756) | 0.028311 / 0.534201 (-0.505890) | 0.472672 / 0.579283 (-0.106611) | 0.581536 / 0.434364 (0.147172) | 0.525180 / 0.540337 (-0.015158) | 0.790420 / 1.386936 (-0.596516) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009091 / 0.011353 (-0.002262) | 0.004978 / 0.011008 (-0.006030) | 0.077633 / 0.038508 (0.039125) | 0.103189 / 0.023109 (0.080080) | 0.500194 / 0.275898 (0.224296) | 0.524310 / 0.323480 (0.200831) | 0.006656 / 0.007986 (-0.001329) | 0.004586 / 0.004328 (0.000257) | 0.075535 / 0.004250 (0.071284) | 0.065100 / 0.037052 (0.028048) | 0.513776 / 0.258489 (0.255287) | 0.528483 / 0.293841 (0.234642) | 0.049877 / 0.128546 (-0.078669) | 0.012494 / 0.075646 (-0.063152) | 0.090225 / 0.419271 (-0.329046) | 0.054648 / 0.043533 (0.011116) | 0.510369 / 0.255139 (0.255230) | 0.540042 / 0.283200 (0.256842) | 0.035966 / 0.141683 (-0.105717) | 1.825965 / 1.452155 (0.373810) | 1.965647 / 1.492716 (0.472931) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.295921 / 0.018006 (0.277914) | 0.605751 / 0.000490 (0.605262) | 0.007243 / 0.000200 (0.007043) | 0.000134 / 0.000054 (0.000079) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032954 / 0.037411 (-0.004457) | 0.093613 / 0.014526 (0.079087) | 0.120010 / 0.176557 (-0.056546) | 0.176168 / 0.737135 (-0.560967) | 0.113978 / 0.296338 (-0.182360) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.682904 / 0.215209 (0.467695) | 6.674640 / 2.077655 (4.596986) | 3.360660 / 1.504120 (1.856540) | 3.227246 / 1.541195 (1.686051) | 3.188852 / 1.468490 (1.720362) | 0.862293 / 4.584777 (-3.722484) | 5.518455 / 3.745712 (1.772743) | 4.881904 / 5.269862 (-0.387957) | 3.066964 / 4.565676 (-1.498712) | 0.099284 / 0.424275 (-0.324991) | 0.008644 / 0.007607 (0.001037) | 0.789231 / 0.226044 (0.563186) | 7.872017 / 2.268929 (5.603089) | 4.037105 / 55.444624 (-51.407519) | 3.318921 / 6.876477 (-3.557555) | 3.621953 / 2.142072 (1.479881) | 1.012049 / 4.805227 (-3.793178) | 0.204541 / 6.500664 (-6.296123) | 0.074509 / 0.075469 (-0.000960) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.748215 / 1.841788 (-0.093573) | 24.274974 / 8.074308 (16.200665) | 20.582389 / 10.191392 (10.390997) | 0.251001 / 0.680424 (-0.429423) | 0.032390 / 0.534201 (-0.501811) | 0.479211 / 0.579283 (-0.100072) | 0.607482 / 0.434364 (0.173118) | 0.587867 / 0.540337 (0.047530) | 0.822399 / 1.386936 (-0.564537) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f2b6b2fd90ba47f19e9ab125f6f7656903dd065f \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009715 / 0.011353 (-0.001638) | 0.005449 / 0.011008 (-0.005559) | 0.108556 / 0.038508 (0.070048) | 0.080512 / 0.023109 (0.057403) | 0.450736 / 0.275898 (0.174838) | 0.487771 / 0.323480 (0.164291) | 0.005155 / 0.007986 (-0.002830) | 0.004213 / 0.004328 (-0.000115) | 0.087247 / 0.004250 (0.082997) | 0.063962 / 0.037052 (0.026909) | 0.454153 / 0.258489 (0.195664) | 0.499917 / 0.293841 (0.206076) | 0.052605 / 0.128546 (-0.075942) | 0.013019 / 0.075646 (-0.062627) | 0.379716 / 0.419271 (-0.039555) | 0.073241 / 0.043533 (0.029708) | 0.473488 / 0.255139 (0.218349) | 0.482944 / 0.283200 (0.199745) | 0.041541 / 0.141683 (-0.100142) | 1.829415 / 1.452155 (0.377261) | 1.953280 / 1.492716 (0.460564) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313725 / 0.018006 (0.295719) | 0.591336 / 0.000490 (0.590847) | 0.021224 / 0.000200 (0.021025) | 0.000969 / 0.000054 (0.000914) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031874 / 0.037411 (-0.005537) | 0.099786 / 0.014526 (0.085260) | 0.116987 / 0.176557 (-0.059569) | 0.205538 / 0.737135 (-0.531597) | 0.118716 / 0.296338 (-0.177622) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.617145 / 0.215209 (0.401936) | 6.079144 / 2.077655 (4.001489) | 2.567233 / 1.504120 (1.063113) | 2.265301 / 1.541195 (0.724107) | 2.314001 / 1.468490 (0.845511) | 0.871561 / 4.584777 (-3.713216) | 5.477049 / 3.745712 (1.731337) | 4.720552 / 5.269862 (-0.549309) | 3.107515 / 4.565676 (-1.458162) | 0.100438 / 0.424275 (-0.323838) | 0.008586 / 0.007607 (0.000979) | 0.716913 / 0.226044 (0.490869) | 7.108417 / 2.268929 (4.839489) | 3.391336 / 55.444624 (-52.053288) | 2.734052 / 6.876477 (-4.142425) | 2.857226 / 2.142072 (0.715153) | 1.024121 / 4.805227 (-3.781106) | 0.216735 / 6.500664 (-6.283929) | 0.081605 / 0.075469 (0.006136) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.678176 / 1.841788 (-0.163611) | 23.606037 / 8.074308 (15.531729) | 21.485331 / 10.191392 (11.293939) | 0.218312 / 0.680424 (-0.462112) | 0.027061 / 0.534201 (-0.507140) | 0.481188 / 0.579283 (-0.098096) | 0.620592 / 0.434364 (0.186228) | 0.574778 / 0.540337 (0.034441) | 0.831529 / 1.386936 (-0.555407) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011666 / 0.011353 (0.000313) | 0.005187 / 0.011008 (-0.005821) | 0.080692 / 0.038508 (0.042184) | 0.079159 / 0.023109 (0.056049) | 0.530823 / 0.275898 (0.254925) | 0.577807 / 0.323480 (0.254327) | 0.006246 / 0.007986 (-0.001740) | 0.004355 / 0.004328 (0.000026) | 0.080702 / 0.004250 (0.076452) | 0.062279 / 0.037052 (0.025226) | 0.553712 / 0.258489 (0.295223) | 0.579112 / 0.293841 (0.285271) | 0.056374 / 0.128546 (-0.072172) | 0.014681 / 0.075646 (-0.060966) | 0.097110 / 0.419271 (-0.322161) | 0.061040 / 0.043533 (0.017507) | 0.524718 / 0.255139 (0.269579) | 0.568586 / 0.283200 (0.285386) | 0.035774 / 0.141683 (-0.105909) | 1.864590 / 1.452155 (0.412435) | 1.953715 / 1.492716 (0.460998) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271315 / 0.018006 (0.253309) | 0.571343 / 0.000490 (0.570854) | 0.015812 / 0.000200 (0.015612) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038582 / 0.037411 (0.001170) | 0.117523 / 0.014526 (0.102997) | 0.128864 / 0.176557 (-0.047693) | 0.191164 / 0.737135 (-0.545971) | 0.133161 / 0.296338 (-0.163178) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.679305 / 0.215209 (0.464096) | 6.814451 / 2.077655 (4.736796) | 3.377431 / 1.504120 (1.873311) | 3.011008 / 1.541195 (1.469813) | 3.093200 / 1.468490 (1.624710) | 0.905827 / 4.584777 
(-3.678950) | 5.456094 / 3.745712 (1.710382) | 4.848511 / 5.269862 (-0.421351) | 3.064230 / 4.565676 (-1.501447) | 0.107478 / 0.424275 (-0.316798) | 0.009234 / 0.007607 (0.001627) | 0.833944 / 0.226044 (0.607899) | 8.286100 / 2.268929 (6.017171) | 4.241455 / 55.444624 (-51.203169) | 3.405460 / 6.876477 (-3.471017) | 3.660618 / 2.142072 (1.518546) | 1.046310 / 4.805227 (-3.758917) | 0.210891 / 6.500664 (-6.289773) | 0.079413 / 0.075469 (0.003944) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.825448 / 1.841788 (-0.016340) | 24.639059 / 8.074308 (16.564750) | 21.970417 / 10.191392 (11.779025) | 0.247708 / 0.680424 (-0.432715) | 0.033810 / 0.534201 (-0.500391) | 0.495517 / 0.579283 (-0.083766) | 0.601820 / 0.434364 (0.167456) | 0.585618 / 0.540337 (0.045280) | 0.858722 / 1.386936 (-0.528214) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0477e20dccb77b68f0add77fd5c9b4cb05473235 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006137 / 0.011353 (-0.005216) | 0.003685 / 0.011008 (-0.007324) | 0.079985 / 0.038508 (0.041476) | 0.060937 / 0.023109 (0.037828) | 0.390583 / 0.275898 (0.114685) | 0.425307 / 0.323480 (0.101827) | 0.003433 / 0.007986 (-0.004552) | 0.002868 / 0.004328 (-0.001461) | 0.062572 / 0.004250 (0.058322) | 0.048642 / 0.037052 (0.011590) | 0.401096 / 0.258489 (0.142607) | 0.436988 / 0.293841 (0.143147) | 0.027645 / 0.128546 (-0.100901) | 0.007973 / 0.075646 (-0.067673) | 0.261997 / 0.419271 (-0.157275) | 0.045393 / 0.043533 (0.001860) | 0.394266 / 0.255139 (0.139127) | 0.414448 / 0.283200 (0.131248) | 0.022551 / 0.141683 (-0.119131) | 1.438458 / 1.452155 (-0.013697) | 1.501568 / 1.492716 (0.008852) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224335 / 0.018006 (0.206329) | 0.421918 / 0.000490 (0.421428) | 0.006883 / 0.000200 (0.006683) | 0.000210 / 0.000054 
(0.000155) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023505 / 0.037411 (-0.013906) | 0.072438 / 0.014526 (0.057912) | 0.083576 / 0.176557 (-0.092981) | 0.142906 / 0.737135 (-0.594229) | 0.083910 / 0.296338 (-0.212428) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396004 / 0.215209 (0.180795) | 3.969852 / 2.077655 (1.892197) | 1.966000 / 1.504120 (0.461880) | 1.786453 / 1.541195 (0.245258) | 1.866082 / 1.468490 (0.397592) | 0.502633 / 4.584777 (-4.082144) | 3.114331 / 3.745712 (-0.631382) | 2.940003 / 5.269862 (-2.329859) | 1.901844 / 4.565676 (-2.663832) | 0.058109 / 0.424275 (-0.366166) | 0.006502 / 0.007607 (-0.001105) | 0.463465 / 0.226044 (0.237420) | 4.641531 / 2.268929 (2.372603) | 2.315759 / 55.444624 (-53.128865) | 2.253088 / 6.876477 (-4.623389) | 2.151399 / 2.142072 (0.009326) | 0.592225 / 4.805227 (-4.213002) | 0.125072 / 6.500664 (-6.375592) | 0.059966 / 0.075469 (-0.015503) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.231392 / 1.841788 (-0.610396) | 17.533893 / 8.074308 (9.459585) | 13.710478 / 10.191392 (3.519086) | 0.147389 / 0.680424 (-0.533035) | 0.017932 / 0.534201 (-0.516269) | 0.334144 / 0.579283 (-0.245139) | 0.368817 / 0.434364 (-0.065547) | 0.383790 / 0.540337 (-0.156547) | 0.540262 / 1.386936 (-0.846674) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006066 / 0.011353 (-0.005287) | 0.003804 / 0.011008 (-0.007205) | 0.062474 / 0.038508 (0.023966) | 0.060547 / 0.023109 (0.037437) | 0.448643 / 0.275898 (0.172745) | 0.487005 / 0.323480 (0.163525) | 0.004884 / 0.007986 (-0.003102) | 0.002911 / 0.004328 (-0.001418) | 0.062950 / 0.004250 (0.058700) | 0.049672 / 0.037052 (0.012620) | 0.477491 / 0.258489 (0.219002) | 0.488234 / 0.293841 (0.194393) | 0.028711 / 0.128546 (-0.099835) | 0.008101 / 0.075646 (-0.067545) | 0.068333 / 0.419271 (-0.350939) | 0.040959 / 0.043533 (-0.002574) | 0.450716 / 0.255139 (0.195577) | 0.471089 / 0.283200 (0.187890) | 0.020710 / 0.141683 (-0.120973) | 1.474850 / 1.452155 (0.022695) | 1.540115 / 1.492716 (0.047399) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229811 / 0.018006 (0.211805) | 0.419526 / 0.000490 (0.419036) | 0.003818 / 0.000200 (0.003618) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026045 / 0.037411 (-0.011366) | 0.080325 / 0.014526 (0.065799) | 0.091549 / 0.176557 (-0.085007) | 0.145253 / 0.737135 (-0.591882) | 0.091849 / 0.296338 (-0.204489) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463047 / 0.215209 (0.247838) | 4.598727 / 2.077655 (2.521072) | 2.558996 / 1.504120 (1.054877) | 2.405896 / 1.541195 (0.864701) | 2.447291 / 1.468490 (0.978801) | 0.510393 / 4.584777 (-4.074384) | 3.173344 / 3.745712 (-0.572368) | 2.901201 / 5.269862 (-2.368661) | 1.896440 / 4.565676 (-2.669236) | 0.058374 / 0.424275 (-0.365901) | 0.006449 / 0.007607 (-0.001158) | 0.539653 / 0.226044 (0.313608) | 5.408217 / 2.268929 (3.139289) | 3.042453 / 55.444624 (-52.402172) | 2.656724 / 6.876477 (-4.219753) | 2.838165 / 2.142072 (0.696092) | 0.598663 / 4.805227 (-4.206565) | 0.126211 / 6.500664 (-6.374453) | 0.062830 / 0.075469 (-0.012639) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.392412 / 1.841788 (-0.449376) | 18.195170 / 8.074308 (10.120862) | 14.788251 / 10.191392 (4.596859) | 0.132579 / 0.680424 (-0.547845) | 0.017867 / 0.534201 (-0.516334) | 0.340020 / 0.579283 (-0.239263) | 0.386719 / 0.434364 (-0.047645) | 0.398863 / 0.540337 (-0.141475) | 0.579320 / 1.386936 (-0.807617) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a2569fdfcf387f8885974a35fafa409fbc6dd059 \"CML watermark\")\n", "closing in favor of https://github.com/huggingface/datasets/pull/6282" ]
No duplicate data files
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6278/reactions" }
PR_kwDODunzps5b_iKb
{ "diff_url": "https://github.com/huggingface/datasets/pull/6278.diff", "html_url": "https://github.com/huggingface/datasets/pull/6278", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6278.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6278" }
2023-10-05T10:31:58Z
https://api.github.com/repos/huggingface/datasets/issues/6278/comments
I added a new DataFilesSet class to disallow duplicate data files. I also deprecated DataFilesList. EDIT: actually I might just add drop_duplicates=True to `.from_patterns` close https://github.com/huggingface/datasets/issues/6259 close https://github.com/huggingface/datasets/issues/6272 TODO: - [ ] tests - [ ] preserve data files order
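A minimal sketch of the order-preserving deduplication the TODO list calls for; the helper below is illustrative, not the PR's actual implementation.

```python
def drop_duplicates(data_files: list) -> list:
    # dict.fromkeys drops repeats while keeping first occurrences,
    # so the original resolution order of the data files is preserved.
    return list(dict.fromkeys(data_files))

print(drop_duplicates(["train-0.csv", "train-1.csv", "train-0.csv"]))
# ['train-0.csv', 'train-1.csv']
```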
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6278/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6278/timeline
closed
false
6,278
null
2023-10-05T14:43:17Z
null
true
1,927,044,546
https://api.github.com/repos/huggingface/datasets/issues/6277
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6277/events
[]
null
2023-10-08T17:05:46Z
[]
https://github.com/huggingface/datasets/issues/6277
NONE
completed
null
null
[ "`evaluate.load(\"paws-x\", \"es\")` throws the error because there is no such metric in the `evaluate` lib.\r\n\r\nSo, this is unrelated to our lib." ]
FileNotFoundError: Couldn't find a module script at /content/paws-x/paws-x.py. Module 'paws-x' doesn't exist on the Hugging Face Hub either.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6277/reactions" }
I_kwDODunzps5y3F3C
null
2023-10-04T22:01:25Z
https://api.github.com/repos/huggingface/datasets/issues/6277/comments
### Describe the bug I'm encountering a "FileNotFoundError" while attempting to use the "paws-x" dataset to retrain the DistilRoBERTa-base model. The error message is as follows: FileNotFoundError: Couldn't find a module script at /content/paws-x/paws-x.py. Module 'paws-x' doesn't exist on the Hugging Face Hub either. ### Steps to reproduce the bug https://colab.research.google.com/drive/11xUUFxloClpmqLvDy_Xxfmo3oUzjY5nx#scrollTo=kUn74FigzhHm ### Expected behavior The trained model ### Environment info colab, "paws-x" dataset, DistilRoBERTa-base model
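As the maintainer's reply notes, `paws-x` is a dataset rather than an `evaluate` metric, so the presumably intended calls split as follows (the accuracy metric is an assumption about the notebook's goal):

```python
import evaluate
from datasets import load_dataset

# "paws-x" is a dataset, so it is loaded with `datasets`...
dataset = load_dataset("paws-x", "es")
# ...while `evaluate.load` expects a metric name (accuracy is assumed here).
metric = evaluate.load("accuracy")
```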
{ "avatar_url": "https://avatars.githubusercontent.com/u/66733346?v=4", "events_url": "https://api.github.com/users/diegogonzalezc/events{/privacy}", "followers_url": "https://api.github.com/users/diegogonzalezc/followers", "following_url": "https://api.github.com/users/diegogonzalezc/following{/other_user}", "gists_url": "https://api.github.com/users/diegogonzalezc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/diegogonzalezc", "id": 66733346, "login": "diegogonzalezc", "node_id": "MDQ6VXNlcjY2NzMzMzQ2", "organizations_url": "https://api.github.com/users/diegogonzalezc/orgs", "received_events_url": "https://api.github.com/users/diegogonzalezc/received_events", "repos_url": "https://api.github.com/users/diegogonzalezc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/diegogonzalezc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/diegogonzalezc/subscriptions", "type": "User", "url": "https://api.github.com/users/diegogonzalezc" }
https://api.github.com/repos/huggingface/datasets/issues/6277/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6277/timeline
closed
false
6,277
null
2023-10-08T17:05:46Z
null
false
1,925,961,878
https://api.github.com/repos/huggingface/datasets/issues/6276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6276/events
[]
null
2023-11-27T10:39:16Z
[]
https://github.com/huggingface/datasets/issues/6276
NONE
null
null
null
[ "Since you are using Windows, maybe moving the `map` call inside `if __name__ == \"__main__\"` can fix the issue:\r\n```python\r\nif __name__ == \"__main__\":\r\n common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names[\"train\"], num_proc=4)\r\n```\r\n\r\nOtherwise, the only solution is to set `num_proc=1`.", "> Since you are using Windows, maybe moving the `map` call inside `if __name__ == \"__main__\"` can fix the issue:\r\n> \r\n> ```python\r\n> if __name__ == \"__main__\":\r\n> common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names[\"train\"], num_proc=4)\r\n> ```\r\n> \r\n> Otherwise, the only solution is to set `num_proc=1`.\r\n\r\nThank you very much for the response, i eventually tried setting `num_proc=1` and now the jupyter notebook kernel keers dying after running the command, what do you think the issue could be, could it be that my system is not capable of running the command \"i'm using a Lenovo Thinkpad T440 with no GPU\"", "Firstly, you didn't define feature_extractor variable. Secondly, it is large nlp model. Hence you should use proper gpu, otherwise your machine's cpu will be overclock and you can do nothing." ]
I'm trying to fine-tune the openai/whisper model from Hugging Face using Jupyter Notebook and I keep getting this error
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6276/reactions" }
I_kwDODunzps5yy9iW
null
2023-10-04T11:03:41Z
https://api.github.com/repos/huggingface/datasets/issues/6276/comments
### Describe the bug I'm trying to fine tune the openai/whisper model from huggingface using jupyter notebook and i keep getting this error, i'm following the steps in this blog post https://huggingface.co/blog/fine-tune-whisper I tried google collab and it works but because I'm on the free version the training doesn't complete the error comes in jupyter notebook when i run this line `common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=4)` here is the error message ``` Map (num_proc=4): 0% 0/2506 [00:52<?, ? examples/s] The above exception was the direct cause of the following exception: NameError Traceback (most recent call last) Cell In[19], line 1 ----> 1 common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=4) File ~\anaconda\Lib\site-packages\datasets\dataset_dict.py:853, in DatasetDict.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc) 850 if cache_file_names is None: 851 cache_file_names = {k: None for k in self} 852 return DatasetDict( --> 853 { 854 k: dataset.map( 855 function=function, 856 with_indices=with_indices, 857 with_rank=with_rank, 858 input_columns=input_columns, 859 batched=batched, 860 batch_size=batch_size, 861 drop_last_batch=drop_last_batch, 862 remove_columns=remove_columns, 863 keep_in_memory=keep_in_memory, 864 load_from_cache_file=load_from_cache_file, 865 cache_file_name=cache_file_names[k], 866 writer_batch_size=writer_batch_size, 867 features=features, 868 disable_nullable=disable_nullable, 869 fn_kwargs=fn_kwargs, 870 num_proc=num_proc, 871 desc=desc, 872 ) 873 for k, dataset in self.items() 874 } 875 ) File ~\anaconda\Lib\site-packages\datasets\dataset_dict.py:854, in <dictcomp>(.0) 850 if cache_file_names is None: 851 cache_file_names = {k: None for k in self} 852 return DatasetDict( 853 { --> 854 k: dataset.map( 855 function=function, 856 with_indices=with_indices, 857 with_rank=with_rank, 858 input_columns=input_columns, 859 batched=batched, 860 batch_size=batch_size, 861 drop_last_batch=drop_last_batch, 862 remove_columns=remove_columns, 863 keep_in_memory=keep_in_memory, 864 load_from_cache_file=load_from_cache_file, 865 cache_file_name=cache_file_names[k], 866 writer_batch_size=writer_batch_size, 867 features=features, 868 disable_nullable=disable_nullable, 869 fn_kwargs=fn_kwargs, 870 num_proc=num_proc, 871 desc=desc, 872 ) 873 for k, dataset in self.items() 874 } 875 ) File ~\anaconda\Lib\site-packages\datasets\arrow_dataset.py:592, in transmit_tasks.<locals>.wrapper(*args, **kwargs) 590 self: "Dataset" = kwargs.pop("self") 591 # apply actual function --> 592 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 593 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 594 for dataset in datasets: 595 # Remove task templates if a column mapping of the template is no longer valid File ~\anaconda\Lib\site-packages\datasets\arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs) 550 self_format = { 551 "type": self._format_type, 552 "format_kwargs": self._format_kwargs, 553 "columns": self._format_columns, 554 "output_all_columns": self._output_all_columns, 555 } 556 # apply actual function --> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 558 datasets: 
List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 559 # re-apply format to the output File ~\anaconda\Lib\site-packages\datasets\arrow_dataset.py:3189, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 3182 logger.info(f"Spawning {num_proc} processes") 3183 with logging.tqdm( 3184 disable=not logging.is_progress_bar_enabled(), 3185 unit=" examples", 3186 total=pbar_total, 3187 desc=(desc or "Map") + f" (num_proc={num_proc})", 3188 ) as pbar: -> 3189 for rank, done, content in iflatmap_unordered( 3190 pool, Dataset._map_single, kwargs_iterable=kwargs_per_job 3191 ): 3192 if done: 3193 shards_done += 1 File ~\anaconda\Lib\site-packages\datasets\utils\py_utils.py:1394, in iflatmap_unordered(pool, func, kwargs_iterable) 1391 finally: 1392 if not pool_changed: 1393 # we get the result in case there's an error to raise -> 1394 [async_result.get(timeout=0.05) for async_result in async_results] File ~\anaconda\Lib\site-packages\datasets\utils\py_utils.py:1394, in <listcomp>(.0) 1391 finally: 1392 if not pool_changed: 1393 # we get the result in case there's an error to raise -> 1394 [async_result.get(timeout=0.05) for async_result in async_results] File ~\anaconda\Lib\site-packages\multiprocess\pool.py:774, in ApplyResult.get(self, timeout) 772 return self._value 773 else: --> 774 raise self._value NameError: name 'feature_extractor' is not defined ``` ### Steps to reproduce the bug 1. follow the steps in this blog post https://huggingface.co/blog/fine-tune-whisper 2. run this line of code `common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=4)` 3. I'm using jupyter notebook from anaconda ### Expected behavior No error message ### Environment info datasets version: 2.8.0 Python version: 3.11 Windows 10
{ "avatar_url": "https://avatars.githubusercontent.com/u/50768065?v=4", "events_url": "https://api.github.com/users/valaofficial/events{/privacy}", "followers_url": "https://api.github.com/users/valaofficial/followers", "following_url": "https://api.github.com/users/valaofficial/following{/other_user}", "gists_url": "https://api.github.com/users/valaofficial/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/valaofficial", "id": 50768065, "login": "valaofficial", "node_id": "MDQ6VXNlcjUwNzY4MDY1", "organizations_url": "https://api.github.com/users/valaofficial/orgs", "received_events_url": "https://api.github.com/users/valaofficial/received_events", "repos_url": "https://api.github.com/users/valaofficial/repos", "site_admin": false, "starred_url": "https://api.github.com/users/valaofficial/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/valaofficial/subscriptions", "type": "User", "url": "https://api.github.com/users/valaofficial" }
https://api.github.com/repos/huggingface/datasets/issues/6276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6276/timeline
open
false
6,276
null
null
null
false
1,921,354,680
https://api.github.com/repos/huggingface/datasets/issues/6275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6275/events
[]
null
2023-10-10T16:27:54Z
[]
https://github.com/huggingface/datasets/issues/6275
NONE
completed
null
null
[ "Hi! The process of contributing a dataset is explained here: https://huggingface.co/docs/datasets/upload_dataset. Also, check https://huggingface.co/docs/datasets/image_dataset for a more detailed explanation of how to share an image dataset." ]
Would like to Contribute a dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6275/reactions" }
I_kwDODunzps5yhYu4
null
2023-10-02T07:00:21Z
https://api.github.com/repos/huggingface/datasets/issues/6275/comments
I have a dataset of 2500 images that can be used for color-blind machine-learning algorithms. Since there was no such dataset available online, I made this dataset myself and would now like to contribute it to the community.
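Following the contribution guides linked in the reply, a minimal sketch of one way to share such an image dataset (the folder path and repo name below are placeholders):

```python
from datasets import load_dataset

# Load the 2500 images from a local folder (one subfolder per class label)...
ds = load_dataset("imagefolder", data_dir="path/to/color_blindness_images")
# ...then publish the dataset on the Hub under your own namespace.
ds.push_to_hub("your-username/color-blindness-images")
```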
{ "avatar_url": "https://avatars.githubusercontent.com/u/97907750?v=4", "events_url": "https://api.github.com/users/vikas70607/events{/privacy}", "followers_url": "https://api.github.com/users/vikas70607/followers", "following_url": "https://api.github.com/users/vikas70607/following{/other_user}", "gists_url": "https://api.github.com/users/vikas70607/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vikas70607", "id": 97907750, "login": "vikas70607", "node_id": "U_kgDOBdX0Jg", "organizations_url": "https://api.github.com/users/vikas70607/orgs", "received_events_url": "https://api.github.com/users/vikas70607/received_events", "repos_url": "https://api.github.com/users/vikas70607/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vikas70607/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vikas70607/subscriptions", "type": "User", "url": "https://api.github.com/users/vikas70607" }
https://api.github.com/repos/huggingface/datasets/issues/6275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6275/timeline
closed
false
6,275
null
2023-10-10T16:27:54Z
null
false
1,921,036,328
https://api.github.com/repos/huggingface/datasets/issues/6274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6274/events
[]
null
2024-08-14T04:42:02Z
[]
https://github.com/huggingface/datasets/issues/6274
NONE
completed
null
null
[ "Please tell me if the above info is not enough for solving the problem. I will then make my dataset public temporarily so that you can really reproduce the bug. ", "Hi! \r\nCould you share how to solve this problem? \r\nI faced this same error. " ]
FileNotFoundError for dataset with multiple builder config
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6274/reactions" }
I_kwDODunzps5ygLAo
null
2023-10-01T23:45:56Z
https://api.github.com/repos/huggingface/datasets/issues/6274/comments
### Describe the bug When there is only one config and only the dataset name is passed to datasets.load_dataset(), it works fine. But if I create a second builder_config for my dataset and pass the config name to datasets.load_dataset(), the following error happens: FileNotFoundError: [Errno 2] No such file or directory: 'C:/Users/chenx/.cache/huggingface/datasets/my_dataset/0_shot_multiple_choice/1.0.0/97c3854a012cfd6b045e3be4c864739902af2d818bb9235b047baa94c302e9a2.incomplete/my_dataset-test-00000-00000-of-NNNNN.arrow' The "XXX.incomplete" folder in the cache folder of my dataset disappears before "generating test split", which does not happen when no config name is entered and the config name is "default". The folder that is supposed to remain under C:\Users\chenx\.cache\huggingface\datasets\my_dataset\0_shot_multiple_choice\1.0.0 disappears, so the data generator has no place to write its data. ### Steps to reproduce the bug test = load_dataset('my_dataset', '0_shot_multiple_choice') ### Expected behavior The split should be generated without error; instead, the FileNotFoundError above is raised. ### Environment info datasets 2.14.5 python 3.8.18
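For reference, a minimal sketch of a loading script declaring two builder configs, which is presumably close to the reporter's setup (all names here are hypothetical; the actual script is not shown):

```python
import datasets

class MyDatasetConfig(datasets.BuilderConfig):
    """Hypothetical config class; real scripts often add per-config settings here."""

class MyDataset(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIGS = [
        MyDatasetConfig(name="default", version=datasets.Version("1.0.0")),
        MyDatasetConfig(name="0_shot_multiple_choice", version=datasets.Version("1.0.0")),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"question": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        return [datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={})]

    def _generate_examples(self):
        # the selected config is available as self.config; both configs
        # share this generator in this sketch
        yield 0, {"question": "..."}
```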
{ "avatar_url": "https://avatars.githubusercontent.com/u/97120485?v=4", "events_url": "https://api.github.com/users/LouisChen15/events{/privacy}", "followers_url": "https://api.github.com/users/LouisChen15/followers", "following_url": "https://api.github.com/users/LouisChen15/following{/other_user}", "gists_url": "https://api.github.com/users/LouisChen15/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LouisChen15", "id": 97120485, "login": "LouisChen15", "node_id": "U_kgDOBcnw5Q", "organizations_url": "https://api.github.com/users/LouisChen15/orgs", "received_events_url": "https://api.github.com/users/LouisChen15/received_events", "repos_url": "https://api.github.com/users/LouisChen15/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LouisChen15/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LouisChen15/subscriptions", "type": "User", "url": "https://api.github.com/users/LouisChen15" }
https://api.github.com/repos/huggingface/datasets/issues/6274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6274/timeline
closed
false
6,274
null
2023-10-02T20:09:38Z
null
false
1,920,922,260
https://api.github.com/repos/huggingface/datasets/issues/6273
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6273/events
[]
null
2024-04-28T02:30:42Z
[]
https://github.com/huggingface/datasets/issues/6273
NONE
null
null
null
[ "This has already been reported in the HF Course repo (https://github.com/huggingface/course/issues/623).", "@lhoestq @albertvillanova @lewtun I don't think we are allowed to host these data files on the Hub (due to DMCA), which means the only option is to use a different dataset in the course (and to re-record the video 🙂), no?", "Keeping the video is maybe fine, we can add a note on youtube to suggest to load a dataset with a different name. Maybe C4 ? And update the code snippets on the website ?", "Maybe you want to try it with the PUBMED dataset that I reproduced based on the The [PubMed Abstract GitHub Site](http://github.com/thoppe/The-Pile-PubMed) and uploaded on the HuggingFace:\r\n\r\n```\r\nfrom datasets import load_dataset\r\npubmed_dataset = load_dataset(\"hwang2006/PUBMED_title_abstracts_2020_baseline\")\r\npubmed_dataset\r\n\r\n#Downloading data: 100%\r\n#7.98G/7.98G [11:47<00:00, 9.68MB/s]\r\n#Generating train split: 17722096/0 [00:36<00:00, 505376.37 examples/s]\r\n\r\n#DatasetDict({\r\n# train: Dataset({\r\n# features: ['meta', 'text'],\r\n# num_rows: 17722096\r\n# })\r\n#})\r\n```", "孔令涛说感谢感谢" ]
Broken Link to PubMed Abstracts dataset .
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6273/reactions" }
I_kwDODunzps5yfvKU
null
2023-10-01T19:08:48Z
https://api.github.com/repos/huggingface/datasets/issues/6273/comments
### Describe the bug The link provided for the dataset is broken: data_files = [https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst](url) ### Steps to reproduce the bug Steps to reproduce: 1) Head over to [https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt#big-data-datasets-to-the-rescue](url) 2) In the section "What is the Pile?", you can see a code snippet that contains the broken link. ### Expected behavior The link should redirect to the "PubMed Abstracts dataset" as expected. ### Environment info .
{ "avatar_url": "https://avatars.githubusercontent.com/u/100606327?v=4", "events_url": "https://api.github.com/users/sameemqureshi/events{/privacy}", "followers_url": "https://api.github.com/users/sameemqureshi/followers", "following_url": "https://api.github.com/users/sameemqureshi/following{/other_user}", "gists_url": "https://api.github.com/users/sameemqureshi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sameemqureshi", "id": 100606327, "login": "sameemqureshi", "node_id": "U_kgDOBf8hdw", "organizations_url": "https://api.github.com/users/sameemqureshi/orgs", "received_events_url": "https://api.github.com/users/sameemqureshi/received_events", "repos_url": "https://api.github.com/users/sameemqureshi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sameemqureshi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sameemqureshi/subscriptions", "type": "User", "url": "https://api.github.com/users/sameemqureshi" }
https://api.github.com/repos/huggingface/datasets/issues/6273/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6273/timeline
open
false
6,273
null
null
null
false
1,920,831,487
https://api.github.com/repos/huggingface/datasets/issues/6272
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6272/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2024-03-15T15:22:05Z
[]
https://github.com/huggingface/datasets/issues/6272
MEMBER
completed
null
null
[ "Also reported in https://github.com/huggingface/datasets/issues/6259", "I think it's best to drop duplicates with a `set` (as a temporary fix) and improve the patterns when/if https://github.com/fsspec/filesystem_spec/pull/1382 gets merged. @lhoestq Do you have some other ideas?", "Alternatively we could just use this no ?\r\n\r\n```python\r\nif config.FSSPEC_VERSION < version.parse(\"2023.9.0\"):\r\n KEYWORDS_IN_PATH_NAME_BASE_PATTERNS = [\r\n \"{keyword}[{sep}/]**\",\r\n \"**[{sep}]{keyword}[{sep}/]**\",\r\n \"**/{keyword}[{sep}/]**\",\r\n ]\r\nelse:\r\n KEYWORDS_IN_PATH_NAME_BASE_PATTERNS = [\r\n \"{keyword}[{sep}/]**\",\r\n \"**/*[{sep}]{keyword}[{sep}/]**\",\r\n \"**/*/{keyword}[{sep}/]**\",\r\n ]\r\n```\r\n\r\nThis way no need to implement sets, which would require a bit of work since we've always considered a list of pattern to be resolved as the concatenated list of resolved files for each pattern (including duplicates)\r\n", "Arf `\"**/*/{keyword}[{sep}/]**\"` does return `data/keyword.txt` in latest `fsspec` but not in `glob.glob`\r\n\r\nEDIT: actually forgot to set `recursive=True`", "Actually `glob.glob` does return it with `recursive=True` ! my bad", "Pff just tested and my idea sucks, pattern 1 and 3 obviously give duplicates ", "> I think it's best to drop duplicates with a set (as a temporary fix)\r\n\r\nI started https://github.com/huggingface/datasets/pull/6278 to use DataFilesSet objects instead of DataFilesList" ]
Duplicate `data_files` when named `<split>/<split>.parquet`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6272/reactions" }
I_kwDODunzps5yfY__
null
2023-10-01T15:43:56Z
https://api.github.com/repos/huggingface/datasets/issues/6272/comments
e.g. with `u23429/stock_1_minute_ticker` ```ipython In [1]: from datasets import * In [2]: b = load_dataset_builder("u23429/stock_1_minute_ticker") Downloading readme: 100%|██████████████████████████| 627/627 [00:00<00:00, 246kB/s] In [3]: b.config.data_files Out[3]: {NamedSplit('train'): ['hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/train/train.parquet', 'hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/train/train.parquet'], NamedSplit('validation'): ['hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/validation/validation.parquet', 'hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/validation/validation.parquet'], NamedSplit('test'): ['hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/test/test.parquet', 'hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/test/test.parquet']} ``` This bug issue is present in the current `datasets` 2.14.5 and also on `main` even after https://github.com/huggingface/datasets/pull/6244 cc @mariosasko
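Until the pattern logic is fixed upstream, one user-side workaround is to de-duplicate the resolved lists before building. A hedged sketch: `load_dataset_builder` is the real API, but mutating `config.data_files` in place like this is not an officially supported path.

```python
from datasets import load_dataset_builder

b = load_dataset_builder("u23429/stock_1_minute_ticker")
# dict.fromkeys drops the duplicated URLs while preserving first-seen order
b.config.data_files = {
    split: list(dict.fromkeys(files))
    for split, files in b.config.data_files.items()
}
b.download_and_prepare()
```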
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6272/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6272/timeline
closed
false
6,272
null
2024-03-15T15:22:05Z
null
false
1,920,420,295
https://api.github.com/repos/huggingface/datasets/issues/6271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6271/events
[]
null
2023-10-16T13:30:50Z
[]
https://github.com/huggingface/datasets/issues/6271
NONE
completed
null
null
[]
Overwriting Split overwrites data but not metadata, corrupting dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6271/reactions" }
I_kwDODunzps5yd0nH
null
2023-09-30T22:37:31Z
https://api.github.com/repos/huggingface/datasets/issues/6271/comments
### Describe the bug I want to be able to overwrite/update/delete splits in my dataset. Currently the only way to do so is to manually go into the dataset and delete the split. If I try to overwrite programmatically I end up in an error state and (somewhat) corrupt the dataset. Read below. **Current Behavior** When I push to an existing split I get this error: `ValueError: Split complexRoofLocation_01Apr2023_to_31May2023test already present` This seems to suggest that the library doesn't support overwriting splits. **Potential Bug** What's strange is that datasets, despite the operation erroring out with the ValueError above, does, in fact, overwrite the split: `Pushing dataset shards to the dataset hub: 100% [.....................] 1/1 [00:00<00:00, 55.04it/s]` Even though you got an error message and your code fails, your dataset is now changed. That seems like a bug. Either don't change the dataset, or don't throw the error and allow the script to proceed. **Additional Bug** While it overwrites the split, it doesn't overwrite the split's information. Because of this, when you pull down the dataset you may end up getting a `NonMatchingSplitsSizesError` if the size of the dataset during the overwrite is different. For example, my original split had 5 rows, but on my overwrite, I only had 4. Then when I try to download the dataset, I get a `NonMatchingSplitsSizesError` because the dataset's data.json states there's 5 but only 4 exist in the split. **Impact** This corrupts the dataset, rendering it unusable (until you intervene manually). Either the library should let the overwrite happen (which it does, but it should also update the metadata) or it shouldn't do anything. ### Steps to reproduce the bug [Colab Notebook](https://colab.research.google.com/drive/1bqVkD06Ngs9MQNdSk_ygCG6y1UqXA4pC?usp=sharing) ### Expected behavior The split should be overwritten and I should be able to use the new version of the dataset without issue. ### Environment info - `datasets` version: 2.14.5 - Platform: Linux-5.15.120+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.17.3 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
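Until this is fixed, one workaround sketch (repo id and split objects below are placeholders) is to re-push the entire `DatasetDict` rather than a single split, so the split metadata is rewritten together with the data; `verification_mode="no_checks"` bypasses the stale split-size check when pulling a repo already in the corrupted state.

```python
from datasets import load_dataset

# skip split-size verification, since the repo's metadata is already stale
ds = load_dataset("your-username/your-dataset", verification_mode="no_checks")
ds["complexRoofLocation_test"] = updated_split  # placeholder: the new version of the split
ds.push_to_hub("your-username/your-dataset")    # pushing the whole DatasetDict updates metadata
```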
{ "avatar_url": "https://avatars.githubusercontent.com/u/13859249?v=4", "events_url": "https://api.github.com/users/govindrai/events{/privacy}", "followers_url": "https://api.github.com/users/govindrai/followers", "following_url": "https://api.github.com/users/govindrai/following{/other_user}", "gists_url": "https://api.github.com/users/govindrai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/govindrai", "id": 13859249, "login": "govindrai", "node_id": "MDQ6VXNlcjEzODU5MjQ5", "organizations_url": "https://api.github.com/users/govindrai/orgs", "received_events_url": "https://api.github.com/users/govindrai/received_events", "repos_url": "https://api.github.com/users/govindrai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/govindrai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/govindrai/subscriptions", "type": "User", "url": "https://api.github.com/users/govindrai" }
https://api.github.com/repos/huggingface/datasets/issues/6271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6271/timeline
closed
false
6,271
null
2023-10-16T13:30:50Z
null
false
1,920,329,373
https://api.github.com/repos/huggingface/datasets/issues/6270
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6270/events
[]
null
2023-10-11T20:29:12Z
[]
https://github.com/huggingface/datasets/issues/6270
CONTRIBUTOR
completed
null
null
[ "`gen_kwargs` should be a `dict`, as stated in the docstring, but you are passing a `list`.\r\n\r\nSo, to fix the error, replace the list of dicts with a dict of lists (and slightly modify the generator function):\r\n```python\r\nfrom pathlib import Path\r\nimport datasets\r\n\r\ndef process_yaml(files):\r\n for f in files:\r\n # process\r\n yield dict(...)\r\n\r\n\r\nif __name__ == '__main__':\r\n import sys\r\n dir = Path(sys.argv[0]).parent\r\n ds = datasets.Dataset.from_generator(process_yaml, gen_kwargs={'files': [f for f in dir.glob('*.yml')]})\r\n ds.to_json('training.jsonl')\r\n```", "That runs, and because my dataset is small, it's what I did to get past the problem.\r\nHowever, it does not produce a sharded dataset. From the doc string I expect there ought to be a way to call from_generator such that num_shards in the resulting data set is equal to the number of items in the list.\r\nThe part of the doc string that your suggestion is not responsive to is:\r\n` You can define a sharded dataset by passing the list of shards in *g\r\nen_kwargs*.\r\n`\r\n\r\nWhat your suggestion does is calls the generator once, with the list argument, and produces a single shard dataset.\r\n", "The sharding mentioned here refers to using this function with `num_proc` (multiprocessing splits the `kwargs` into shards and passes them to the generator function)\r\n\r\n> That runs, and because my dataset is small, it's what I did to get past the problem.\r\n\r\n`from_generator` generates a memory-mapped dataset (can be larger than RAM), so the dataset size should not be an issue unless the generator function's implementation does not properly free the memory.\r\n", "It sounds like you are saying that num_proc affects the form of gen_kwargs.\r\nAre you saying that for non-zero num_proc gen_kwargs should be a list whose length is the same as num_proc?\r\nOr are you saying that for non-zero num_proc, gen_kwargs should be a dict whose elements are lists the length of num_proc?\r\n", "I ran some tests. So, it looks like with num_proc greater than 1, gen_kwargs is expected to be a dict of lists. It calls the generator also with a dict of lists, but the lists are split.\r\nI.E. if my original has `gen_kwargs=dict(a=[0,1,2])`, then my generator might get called with `gen_kwalrgs=dict([0])`.\r\nThat all makes sense, but I definitely think there is room for improvement in the doc string here.\r\nIn order to suggest improvements to the doc string, I need to look at how the gen_kwargs are split, and figure out if:\r\n* num_proc needs to exactly equal the length of the lists\r\n* num_proc needs to evenly divide the length of the lists\r\n* Or there's no required relationship.\r\nI'll look into that and then propose an improved doc string if no one else gets to it first.", "Okay, that was fun; I took a dive through the dataset code and feel like I have a much better understanding.\r\nHere is my understanding of the behavior:\r\n* max_proc is an upper limit on the number of shards that `from_generator` produces\r\n* If `max_proc` is greater than 1, then all lists in *gen_kwargs* must be the same length\r\n* If the lists in *gen_kwargs* are shorter than *num_proc* elements, *num_proc* will be reduced and a warning produced. 
Put another way, `min(list_length, num_shards)` shards will be produced\r\n* The members of the lists in *gen_kwargs* will be partitioned among the created jobs.\r\nTo validate the above, take a look at\r\n`_number_of_shards_in_gen_kwargs` and `_distribute_shards` and `_split_gen_kwargs` in utils/sharding.py.\r\nI've also chased down starting at *from_generator* all the way through to GeneratorBuilder and the calls to the functions in sharding.py.\r\nTomorrow I'll take a look at the contributing guidelines and see what's involved in putting together a PR to improve the doc string." ]
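Putting the thread's findings together, a sketch of the sharded form (a dict of lists plus `num_proc`, which splits each list across the worker processes):

```python
from pathlib import Path
import datasets

def process_yaml(files):
    # with num_proc > 1, each worker process receives its own slice of 'files'
    for f in files:
        yield {"example": 42, "file": str(f)}

if __name__ == "__main__":
    yml_files = [str(f) for f in Path(".").glob("*.yml")]
    ds = datasets.Dataset.from_generator(
        process_yaml,
        gen_kwargs={"files": yml_files},  # a dict of lists, not a list of dicts
        num_proc=4,  # reduced with a warning if fewer than 4 files exist
    )
    ds.to_json("training.jsonl")
```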
Dataset.from_generator raises with sharded gen_args
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6270/reactions" }
I_kwDODunzps5ydead
null
2023-09-30T16:50:06Z
https://api.github.com/repos/huggingface/datasets/issues/6270/comments
### Describe the bug According to the docs of Datasets.from_generator: ``` gen_kwargs(`dict`, *optional*): Keyword arguments to be passed to the `generator` callable. You can define a sharded dataset by passing the list of shards in `gen_kwargs`. ``` So I'd expect that if gen_kwargs was a list, then my generator would be called once for each element in the list with the dict in the list for that element. It doesn't work that way though. ### Steps to reproduce the bug ```python #!/usr/bin/python from pathlib import Path import datasets def process_yaml(file): yield dict(example=42) if __name__ == '__main__': import sys dir = Path(sys.argv[0]).parent ds = datasets.Dataset.from_generator(process_yaml, gen_kwargs=[{'file':f} for f in dir.glob('*.yml')], ) ds.to_json('training.jsonl') ``` ``` Generating train split: 0 examples [00:00, ? examples/s] Traceback (most recent call last): File "/tmp/dataset_bug.py", line 13, in <module> ds = datasets.Dataset.from_generator(process_yaml, gen_kwargs=[{'file':f} for f in dir.glob('*.yml')], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 1072, in from_generator ).read() ^^^^^^ File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/io/generator.py", line 47, in read self.builder.download_and_prepare( File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 954, in download_and_prepare self._download_and_prepare( File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 1717, in _download_and_prepare super()._download_and_prepare( File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 1049, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 1555, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 1656, in _prepare_split_single generator = self._generate_examples(**gen_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: datasets.packaged_modules.generator.generator.Generator._generate_examples() argument after ** must be a mapping, not list ``` ### Expected behavior I would expect that process_yaml would be called once for each yaml file in the directory where the script is run. I also tried with the list being in gen_kwargs, but in that case process_yaml gets called with a list. ### Environment info - `datasets` version: 2.14.6.dev0 (git commit 0cc77d7f45c7369; also tested with 2.14.0) - Platform: Linux-6.1.0-10-amd64-x86_64-with-glibc2.36 - Python version: 3.11.2 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
{ "avatar_url": "https://avatars.githubusercontent.com/u/53510?v=4", "events_url": "https://api.github.com/users/hartmans/events{/privacy}", "followers_url": "https://api.github.com/users/hartmans/followers", "following_url": "https://api.github.com/users/hartmans/following{/other_user}", "gists_url": "https://api.github.com/users/hartmans/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hartmans", "id": 53510, "login": "hartmans", "node_id": "MDQ6VXNlcjUzNTEw", "organizations_url": "https://api.github.com/users/hartmans/orgs", "received_events_url": "https://api.github.com/users/hartmans/received_events", "repos_url": "https://api.github.com/users/hartmans/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hartmans/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hartmans/subscriptions", "type": "User", "url": "https://api.github.com/users/hartmans" }
https://api.github.com/repos/huggingface/datasets/issues/6270/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6270/timeline
closed
false
6,270
null
2023-10-11T20:29:11Z
null
false
1,919,572,790
https://api.github.com/repos/huggingface/datasets/issues/6269
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6269/events
[]
null
2023-10-16T16:03:18Z
[]
https://github.com/huggingface/datasets/pull/6269
COLLABORATOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005864 / 0.011353 (-0.005489) | 0.003535 / 0.011008 (-0.007474) | 0.080732 / 0.038508 (0.042224) | 0.057072 / 0.023109 (0.033963) | 0.334342 / 0.275898 (0.058444) | 0.361345 / 0.323480 (0.037865) | 0.003290 / 0.007986 (-0.004696) | 0.003794 / 0.004328 (-0.000534) | 0.063414 / 0.004250 (0.059163) | 0.046901 / 0.037052 (0.009848) | 0.335973 / 0.258489 (0.077484) | 0.377929 / 0.293841 (0.084088) | 0.027199 / 0.128546 (-0.101348) | 0.008049 / 0.075646 (-0.067597) | 0.261810 / 0.419271 (-0.157462) | 0.044669 / 0.043533 (0.001136) | 0.333600 / 0.255139 (0.078461) | 0.356362 / 0.283200 (0.073162) | 0.020325 / 0.141683 (-0.121358) | 1.458138 / 1.452155 (0.005984) | 1.505923 / 1.492716 (0.013207) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216456 / 0.018006 (0.198450) | 0.421750 / 0.000490 (0.421261) | 0.007359 / 0.000200 (0.007159) | 0.000246 / 0.000054 (0.000191) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023400 / 0.037411 (-0.014012) | 0.073363 / 0.014526 (0.058838) | 0.083533 / 0.176557 (-0.093023) | 0.144045 / 0.737135 (-0.593090) | 0.084050 / 0.296338 (-0.212288) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398354 / 0.215209 (0.183145) | 3.982875 / 2.077655 (1.905220) | 
2.047299 / 1.504120 (0.543180) | 1.873780 / 1.541195 (0.332585) | 1.977044 / 1.468490 (0.508554) | 0.497038 / 4.584777 (-4.087739) | 3.039743 / 3.745712 (-0.705969) | 2.832885 / 5.269862 (-2.436977) | 1.827300 / 4.565676 (-2.738377) | 0.057503 / 0.424275 (-0.366772) | 0.006272 / 0.007607 (-0.001335) | 0.468681 / 0.226044 (0.242637) | 4.696551 / 2.268929 (2.427622) | 2.413805 / 55.444624 (-53.030819) | 2.157199 / 6.876477 (-4.719278) | 2.345986 / 2.142072 (0.203914) | 0.584632 / 4.805227 (-4.220595) | 0.124684 / 6.500664 (-6.375980) | 0.060090 / 0.075469 (-0.015379) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.293551 / 1.841788 (-0.548236) | 17.198292 / 8.074308 (9.123984) | 13.677910 / 10.191392 (3.486518) | 0.146633 / 0.680424 (-0.533791) | 0.016711 / 0.534201 (-0.517490) | 0.331644 / 0.579283 (-0.247639) | 0.360148 / 0.434364 (-0.074215) | 0.381194 / 0.540337 (-0.159143) | 0.537952 / 1.386936 (-0.848984) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006020 / 0.011353 (-0.005333) | 0.003557 / 0.011008 (-0.007451) | 0.061926 / 0.038508 (0.023418) | 0.056246 / 0.023109 (0.033137) | 0.446679 / 0.275898 (0.170781) | 0.479843 / 0.323480 (0.156363) | 0.004656 / 0.007986 (-0.003330) | 0.002823 / 0.004328 (-0.001505) | 0.061366 / 0.004250 (0.057115) | 0.045793 / 0.037052 (0.008740) | 0.460807 / 0.258489 (0.202318) | 0.485467 / 0.293841 (0.191626) | 0.028555 / 0.128546 (-0.099991) | 0.007973 / 0.075646 (-0.067674) | 0.068305 / 0.419271 (-0.350966) | 0.040844 / 0.043533 (-0.002689) | 0.463715 / 0.255139 (0.208576) | 0.474553 / 0.283200 (0.191354) | 0.019959 / 0.141683 (-0.121723) | 1.432527 / 1.452155 (-0.019628) | 1.485410 / 1.492716 (-0.007307) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205555 / 0.018006 (0.187549) | 0.408271 / 0.000490 (0.407781) | 0.004325 / 0.000200 (0.004125) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026338 / 0.037411 (-0.011074) | 0.080534 / 0.014526 (0.066008) | 0.093935 / 0.176557 (-0.082622) | 0.146446 / 0.737135 (-0.590689) | 0.092890 / 0.296338 (-0.203448) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463879 / 0.215209 (0.248670) | 4.646411 / 2.077655 (2.568756) | 2.567320 / 1.504120 (1.063200) | 2.384376 / 1.541195 (0.843181) | 2.412738 / 1.468490 (0.944248) | 0.510240 / 4.584777 (-4.074537) | 3.094988 / 3.745712 (-0.650724) | 2.837700 / 5.269862 (-2.432161) | 1.850163 / 4.565676 (-2.715513) | 0.059320 / 0.424275 (-0.364955) | 0.006330 / 0.007607 (-0.001277) | 0.537770 / 0.226044 (0.311726) | 5.385556 / 2.268929 (3.116627) | 3.036088 / 55.444624 (-52.408536) | 2.650464 / 6.876477 (-4.226013) | 2.755676 / 2.142072 (0.613603) | 0.607353 / 4.805227 (-4.197875) | 0.124589 / 6.500664 (-6.376075) | 0.060778 / 0.075469 (-0.014691) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.343243 / 1.841788 (-0.498545) | 17.630281 / 8.074308 (9.555973) | 14.401219 / 10.191392 (4.209827) | 0.143252 / 0.680424 (-0.537172) | 0.017880 / 0.534201 (-0.516321) | 0.337391 / 0.579283 (-0.241892) | 0.373531 / 0.434364 (-0.060833) | 0.398408 / 0.540337 (-0.141929) | 0.558925 / 1.386936 (-0.828011) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a8f511638b486b9f83b17fd69a505fe606ad257b \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006552 / 0.011353 (-0.004801) | 0.003853 / 0.011008 (-0.007155) | 0.077673 / 0.038508 (0.039165) | 0.066043 / 0.023109 (0.042934) | 0.289858 / 0.275898 (0.013960) | 0.299009 / 0.323480 (-0.024471) | 0.004806 / 0.007986 (-0.003179) | 0.003517 / 0.004328 (-0.000811) | 0.058227 / 0.004250 (0.053977) | 0.052134 / 0.037052 (0.015082) | 0.328800 / 0.258489 (0.070311) | 0.317616 / 0.293841 (0.023776) | 0.028344 / 0.128546 (-0.100202) | 0.007853 / 0.075646 (-0.067794) | 0.291207 / 0.419271 (-0.128065) | 0.052977 / 0.043533 (0.009444) | 0.287548 / 0.255139 (0.032409) | 0.307647 / 0.283200 (0.024448) | 0.023899 / 0.141683 (-0.117784) | 1.382267 / 1.452155 (-0.069888) | 1.589915 / 1.492716 (0.097199) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246244 / 0.018006 (0.228238) | 0.478255 / 0.000490 (0.477766) | 0.014115 / 0.000200 (0.013915) | 0.000305 / 0.000054 (0.000250) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027033 / 0.037411 (-0.010378) | 0.073988 / 0.014526 (0.059462) | 0.088337 / 0.176557 (-0.088219) | 0.144067 / 0.737135 (-0.593069) | 0.091295 / 0.296338 (-0.205043) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.365904 / 0.215209 (0.150695) | 3.537330 / 2.077655 (1.459675) | 1.678341 / 1.504120 (0.174221) | 1.530297 / 1.541195 (-0.010898) | 1.605634 / 1.468490 (0.137144) | 0.437461 / 4.584777 (-4.147316) | 3.419040 / 3.745712 (-0.326672) | 3.203549 / 5.269862 (-2.066312) | 1.913214 / 4.565676 (-2.652463) | 0.052675 / 0.424275 (-0.371600) | 0.006681 / 0.007607 (-0.000926) | 0.429269 / 0.226044 (0.203225) | 4.214051 / 2.268929 (1.945122) | 2.217928 / 55.444624 (-53.226696) | 1.842679 / 6.876477 (-5.033798) | 1.867961 / 2.142072 (-0.274111) | 0.550566 / 4.805227 (-4.254661) | 0.118015 / 6.500664 (-6.382649) | 0.054749 / 0.075469 (-0.020720) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.170547 / 1.841788 (-0.671241) | 18.410567 / 8.074308 (10.336259) | 12.729992 / 10.191392 (2.538600) | 0.160426 / 0.680424 (-0.519998) | 0.021259 / 0.534201 (-0.512942) | 0.369573 / 0.579283 (-0.209710) | 0.440350 / 0.434364 (0.005986) | 0.443755 / 0.540337 
(-0.096582) | 0.645614 / 1.386936 (-0.741322) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005913 / 0.011353 (-0.005440) | 0.003542 / 0.011008 (-0.007466) | 0.057621 / 0.038508 (0.019113) | 0.065822 / 0.023109 (0.042713) | 0.390847 / 0.275898 (0.114949) | 0.393127 / 0.323480 (0.069647) | 0.005040 / 0.007986 (-0.002945) | 0.002944 / 0.004328 (-0.001384) | 0.069058 / 0.004250 (0.064808) | 0.051594 / 0.037052 (0.014542) | 0.383745 / 0.258489 (0.125256) | 0.414372 / 0.293841 (0.120531) | 0.030038 / 0.128546 (-0.098508) | 0.008109 / 0.075646 (-0.067538) | 0.065444 / 0.419271 (-0.353828) | 0.045974 / 0.043533 (0.002441) | 0.401695 / 0.255139 (0.146556) | 0.417834 / 0.283200 (0.134635) | 0.020137 / 0.141683 (-0.121546) | 1.452130 / 1.452155 (-0.000025) | 1.455259 / 1.492716 (-0.037458) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228262 / 0.018006 (0.210255) | 0.455155 / 0.000490 (0.454665) | 0.006667 / 0.000200 (0.006467) | 0.000207 / 0.000054 (0.000153) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030159 / 0.037411 (-0.007252) | 0.098478 / 0.014526 (0.083952) | 0.101409 / 0.176557 (-0.075147) | 0.148689 / 0.737135 (-0.588446) | 0.103067 / 0.296338 (-0.193272) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444095 / 0.215209 (0.228886) | 3.991588 / 2.077655 (1.913934) | 2.147845 / 1.504120 (0.643725) | 2.007871 / 1.541195 (0.466676) | 
2.042074 / 1.468490 (0.573584) | 0.451592 / 4.584777 (-4.133185) | 3.439400 / 3.745712 (-0.306312) | 3.107756 / 5.269862 (-2.162106) | 1.909785 / 4.565676 (-2.655891) | 0.051718 / 0.424275 (-0.372558) | 0.006597 / 0.007607 (-0.001010) | 0.480822 / 0.226044 (0.254777) | 4.913235 / 2.268929 (2.644307) | 2.631882 / 55.444624 (-52.812742) | 2.397209 / 6.876477 (-4.479267) | 2.487191 / 2.142072 (0.345119) | 0.566321 / 4.805227 (-4.238906) | 0.121741 / 6.500664 (-6.378924) | 0.053399 / 0.075469 (-0.022070) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.256599 / 1.841788 (-0.585189) | 18.891127 / 8.074308 (10.816819) | 13.219662 / 10.191392 (3.028270) | 0.154570 / 0.680424 (-0.525854) | 0.022599 / 0.534201 (-0.511602) | 0.361998 / 0.579283 (-0.217286) | 0.413287 / 0.434364 (-0.021077) | 0.464867 / 0.540337 (-0.075470) | 0.638880 / 1.386936 (-0.748056) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#869e6bc775cf4dff1b92834426e1a286b104432b \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010625 / 0.011353 (-0.000728) | 0.005129 / 0.011008 (-0.005879) | 0.119975 / 0.038508 (0.081467) | 0.100128 / 0.023109 (0.077019) | 0.448678 / 0.275898 (0.172780) | 0.533150 / 0.323480 (0.209670) | 0.005881 / 0.007986 (-0.002105) | 0.007451 / 0.004328 (0.003123) | 0.090792 / 0.004250 (0.086542) | 0.073416 / 0.037052 (0.036363) | 0.455395 / 0.258489 (0.196906) | 0.497572 / 0.293841 (0.203731) | 0.053112 / 0.128546 (-0.075434) | 0.014619 / 0.075646 (-0.061027) | 0.388023 / 0.419271 (-0.031248) | 0.074004 / 0.043533 (0.030471) | 0.435319 / 0.255139 (0.180180) | 0.465985 / 0.283200 (0.182785) | 0.046991 / 0.141683 (-0.094692) | 1.895717 / 1.452155 (0.443563) | 2.086600 / 1.492716 (0.593884) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.334412 / 0.018006 (0.316406) | 0.645510 / 0.000490 (0.645020) | 
0.019175 / 0.000200 (0.018975) | 0.000429 / 0.000054 (0.000374) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034385 / 0.037411 (-0.003026) | 0.108939 / 0.014526 (0.094413) | 0.125937 / 0.176557 (-0.050619) | 0.205643 / 0.737135 (-0.531493) | 0.127662 / 0.296338 (-0.168676) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.674093 / 0.215209 (0.458884) | 6.646554 / 2.077655 (4.568900) | 2.837698 / 1.504120 (1.333578) | 2.397199 / 1.541195 (0.856004) | 2.485856 / 1.468490 (1.017366) | 0.955142 / 4.584777 (-3.629635) | 5.667462 / 3.745712 (1.921750) | 5.354129 / 5.269862 (0.084268) | 3.301609 / 4.565676 (-1.264068) | 0.106051 / 0.424275 (-0.318224) | 0.009287 / 0.007607 (0.001680) | 0.766678 / 0.226044 (0.540634) | 7.786701 / 2.268929 (5.517772) | 3.665463 / 55.444624 (-51.779161) | 2.982912 / 6.876477 (-3.893564) | 3.053363 / 2.142072 (0.911290) | 1.141090 / 4.805227 (-3.664137) | 0.223975 / 6.500664 (-6.276689) | 0.093024 / 0.075469 (0.017555) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.728175 / 1.841788 (-0.113613) | 25.640134 / 8.074308 (17.565826) | 22.124769 / 10.191392 (11.933377) | 0.237489 / 0.680424 (-0.442935) | 0.030353 / 0.534201 (-0.503848) | 0.509371 / 0.579283 (-0.069913) | 0.642320 / 0.434364 (0.207956) | 0.576889 / 0.540337 (0.036552) | 0.899377 / 1.386936 (-0.487559) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010846 / 0.011353 (-0.000507) | 0.005876 / 0.011008 (-0.005132) | 0.090810 / 0.038508 (0.052302) | 0.106651 / 0.023109 (0.083542) | 0.551064 / 0.275898 (0.275166) | 0.608328 / 0.323480 (0.284848) | 0.007563 / 0.007986 (-0.000423) | 0.004595 / 0.004328 (0.000267) | 0.089125 / 0.004250 (0.084874) | 0.076577 / 0.037052 (0.039525) | 0.579970 / 0.258489 (0.321481) | 0.620214 / 0.293841 (0.326373) | 0.052577 / 0.128546 (-0.075970) | 0.013734 / 0.075646 (-0.061912) | 0.099825 / 0.419271 (-0.319447) | 0.068391 / 0.043533 (0.024858) | 0.564733 / 0.255139 (0.309594) | 0.593925 / 0.283200 (0.310726) | 0.037201 / 0.141683 (-0.104482) | 1.880969 / 1.452155 (0.428815) | 2.065094 / 1.492716 (0.572377) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.426148 / 0.018006 (0.408141) | 0.673935 / 0.000490 (0.673445) | 0.124190 / 0.000200 (0.123990) | 0.001219 / 0.000054 (0.001164) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.040280 / 0.037411 (0.002868) | 0.122042 / 0.014526 (0.107516) | 0.131333 / 0.176557 (-0.045223) | 0.203039 / 0.737135 (-0.534096) | 0.134851 / 0.296338 (-0.161487) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.684599 / 0.215209 (0.469390) | 6.727529 / 2.077655 (4.649874) | 3.255228 / 1.504120 (1.751108) | 2.925865 / 1.541195 (1.384670) | 2.978762 / 1.468490 (1.510272) | 0.931769 / 4.584777 (-3.653008) | 5.988956 / 3.745712 (2.243244) | 5.228049 / 5.269862 (-0.041812) | 3.341470 / 4.565676 (-1.224206) | 0.106737 / 0.424275 (-0.317539) | 0.009847 / 0.007607 (0.002240) | 0.813954 / 0.226044 (0.587909) | 8.137071 / 2.268929 (5.868143) | 4.140725 / 55.444624 (-51.303899) | 3.500579 / 6.876477 (-3.375898) | 3.623120 / 2.142072 (1.481047) | 1.096634 / 4.805227 (-3.708593) | 0.236938 / 6.500664 (-6.263726) | 0.083099 / 0.075469 (0.007630) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.856112 / 1.841788 (0.014324) | 26.531325 / 8.074308 (18.457017) | 24.435866 / 10.191392 (14.244474) | 0.264093 / 0.680424 (-0.416331) | 0.034872 / 0.534201 (-0.499329) | 0.520682 / 0.579283 (-0.058601) | 0.635010 / 0.434364 (0.200646) | 0.645451 / 0.540337 (0.105113) | 0.914616 / 1.386936 (-0.472320) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d8c29b9416371283e8aaabee235a91b2f45a05ee \"CML watermark\")\n", "[automated CML benchmark report for commit 1e186f0b7fe851f1f474020f0d6b1dc35114f994; PyArrow==8.0.0 and PyArrow==latest benchmark tables omitted]", "[automated CML benchmark report for commit 579c31fda7182ca6fc33ab1e95db9e3a21fb5518; PyArrow==8.0.0 and PyArrow==latest benchmark tables omitted]", "I used [this](https://colab.research.google.com/drive/1q2FYnkJFDMM3OZbhnYeZkfzmBa6ksofQ?usp=sharing) Colab to test the new `push_to_hub` on a large dataset (55 GB). It works great. \r\n\r\nOne thing that could be improved is the performance of `dataset.data.nbytes` - it takes ≈ 3 minutes to compute for the dataset in question (50k array chunks per column). It probably makes sense to store larger chunks locally. But this can be addressed in a subsequent PR.",
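As an aside to the comment above, here is a minimal sketch of how one might measure the `dataset.data.nbytes` cost being discussed. The repo id is a placeholder; any large Arrow-backed dataset with many small chunks would show the same effect.

```python
import time

from datasets import load_dataset

# Placeholder repo id - substitute any large Arrow-backed dataset.
ds = load_dataset("user/some-large-dataset", split="train")

start = time.perf_counter()
nbytes = ds.data.nbytes  # sums buffer sizes over every chunk of every column
elapsed = time.perf_counter() - start

# With ~50k chunks per column this sum alone can take minutes,
# which is the overhead the comment above is pointing at.
print(f"{nbytes / 1e9:.1f} GB, computed in {elapsed:.1f}s")
```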
"[automated CML benchmark report for commit 9764c49d8bfdad5439e16faa6c52e510feabf6fa; PyArrow==8.0.0 and PyArrow==latest benchmark tables omitted]", "@lhoestq \r\n\r\n> single commit can fail (time out) if there are too many operations so we might have to do multi commits anyway in that case\r\n\r\nMultiple commits complicate the logic significantly. Maybe, let's keep things simple and emit a warning if there are more than 100 additions (we can suggest increasing `max_shard_size` in that case). Additionally, we can set the default `max_shard_size` to a higher value, e.g., 5GB. I think handling up to 500GB of data in the default case seems reasonable. In rare cases where this is a problem, one could increase the default `max_shard_size` even further (if RAM is not a limiting factor) or use `to_parquet` + `huggingface_hub` (we could have a docstring or a doc note that explains this).\r\n\r\nNote that we split the dataset based on the Arrow data size, which means Parquet shards will be considerably smaller unless there are binary fields such as image JPEGs in the dataset, which are hard to compress efficiently.\r\n\r\n> how to let users resume a push_to_hub that failed mid-way because of a connection error for example\r\n\r\nThey can resume by rerunning the failed `push_to_hub`.\r\n\r\n`preupload_lfs_files` will be instant in that scenario, as explained in https://github.com/huggingface/huggingface_hub/pull/1699#discussion_r1342446406",
"> Multiple commits complicate the logic significantly. \r\n> Maybe, let's keep things simple and emit a warning if there are more than 100 additions (we can suggest increasing max_shard_size in that case)\r\n\r\nI don't think we can do that: many people are already uploading datasets with 100+ files, and it would break their workflow", "Indeed, we should not break this, considering the number of datasets with more than 100 shards on the Hub (over 1k)",
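For a rough sense of scale in this exchange: the number of file additions per commit is approximately the Arrow size divided by `max_shard_size`. A back-of-the-envelope sketch with purely illustrative numbers:

```python
import math

# Illustrative numbers only: a 500 GB Arrow table pushed with 5 GB shards.
arrow_nbytes = 500 * 1024**3
max_shard_size = 5 * 1024**3

num_additions = math.ceil(arrow_nbytes / max_shard_size)
print(num_additions)  # 100 - right at the warning threshold floated above
```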
"[automated CML benchmark report for commit 58406f61c52e7ff064ac6c19ebdb3e5247c70862; PyArrow==8.0.0 and PyArrow==latest benchmark tables omitted]", "[automated CML benchmark report for commit 26d8bfca337e01bd78d5590d5e49c6d8d022a3ff; PyArrow==8.0.0 and PyArrow==latest benchmark tables omitted]", "[automated CML benchmark report for commit 997082a2a3c599ea1b23a70759d3af98a78f7f33; PyArrow==8.0.0 and PyArrow==latest benchmark tables omitted]", "[automated CML benchmark report for commit 072f0ceafde60c16516fe1297e4aba981fc56052; PyArrow==8.0.0 and PyArrow==latest benchmark tables omitted]", "[automated CML benchmark report (truncated in source)
tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.301313 / 1.841788 (-0.540475) | 21.218468 / 8.074308 (13.144159) | 15.466347 / 10.191392 (5.274955) | 0.166115 / 0.680424 (-0.514309) | 0.018866 / 0.534201 (-0.515335) | 0.399307 / 0.579283 (-0.179976) | 0.430537 / 0.434364 (-0.003827) | 0.467110 / 0.540337 (-0.073228) | 0.645686 / 1.386936 (-0.741250) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007288 / 0.011353 (-0.004065) | 0.004298 / 0.011008 (-0.006710) | 0.065515 / 0.038508 (0.027007) | 0.089948 / 0.023109 (0.066839) | 0.410121 / 0.275898 (0.134223) | 0.449312 / 0.323480 (0.125832) | 0.006749 / 0.007986 (-0.001237) | 0.003927 / 0.004328 (-0.000401) | 0.065321 / 0.004250 (0.061071) | 0.062480 / 0.037052 (0.025428) | 0.410796 / 0.258489 (0.152307) | 0.457356 / 0.293841 (0.163515) | 0.032632 / 0.128546 (-0.095914) | 0.008798 / 0.075646 (-0.066849) | 0.075936 / 0.419271 (-0.343335) | 0.048402 / 0.043533 (0.004869) | 0.403385 / 0.255139 (0.148246) | 0.426094 / 0.283200 (0.142895) | 0.025326 / 0.141683 (-0.116357) | 1.551550 / 1.452155 (0.099395) | 1.628622 / 1.492716 (0.135905) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.279689 / 0.018006 (0.261682) | 0.583754 / 0.000490 (0.583265) | 0.006579 / 0.000200 (0.006379) | 0.000096 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034906 / 0.037411 (-0.002505) | 0.099232 / 0.014526 (0.084706) | 0.113093 / 0.176557 (-0.063464) | 0.165499 / 0.737135 (-0.571636) | 0.113398 / 0.296338 (-0.182941) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | 
shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439154 / 0.215209 (0.223945) | 4.377041 / 2.077655 (2.299387) | 2.395058 / 1.504120 (0.890938) | 2.233359 / 1.541195 (0.692164) | 2.357281 / 1.468490 (0.888791) | 0.486036 / 4.584777 (-4.098741) | 3.568794 / 3.745712 (-0.176918) | 3.485421 / 5.269862 (-1.784440) | 2.174325 / 4.565676 (-2.391351) | 0.057855 / 0.424275 (-0.366420) | 0.007545 / 0.007607 (-0.000062) | 0.516853 / 0.226044 (0.290808) | 5.173340 / 2.268929 (2.904412) | 2.931475 / 55.444624 (-52.513149) | 2.566814 / 6.876477 (-4.309663) | 2.873304 / 2.142072 (0.731232) | 0.597072 / 4.805227 (-4.208155) | 0.133589 / 6.500664 (-6.367075) | 0.061882 / 0.075469 (-0.013587) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.382845 / 1.841788 (-0.458943) | 21.608316 / 8.074308 (13.534008) | 15.702152 / 10.191392 (5.510759) | 0.190629 / 0.680424 (-0.489795) | 0.020572 / 0.534201 (-0.513629) | 0.396207 / 0.579283 (-0.183076) | 0.421184 / 0.434364 (-0.013180) | 0.477700 / 0.540337 (-0.062638) | 0.690828 / 1.386936 (-0.696108) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5e7374b453911cda5e0f866ad45b51c3fbe267c9 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008450 / 0.011353 (-0.002903) | 0.004958 / 0.011008 (-0.006051) | 0.105397 / 0.038508 (0.066889) | 0.079508 / 0.023109 (0.056399) | 0.403050 / 0.275898 (0.127152) | 0.443679 / 0.323480 (0.120199) | 0.004654 / 0.007986 (-0.003332) | 0.005629 / 0.004328 (0.001301) | 0.078755 / 0.004250 (0.074505) | 0.055694 / 0.037052 (0.018642) | 0.409952 / 0.258489 (0.151463) | 0.454931 / 0.293841 (0.161090) | 0.045124 / 0.128546 (-0.083422) | 0.014031 / 0.075646 (-0.061616) | 0.347340 / 0.419271 (-0.071931) | 0.064359 / 0.043533 (0.020826) | 0.414158 / 0.255139 (0.159019) | 0.428442 / 0.283200 (0.145243) | 0.033726 / 0.141683 (-0.107957) | 1.770483 / 
1.452155 (0.318328) | 1.795267 / 1.492716 (0.302551) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.251020 / 0.018006 (0.233014) | 0.507066 / 0.000490 (0.506576) | 0.015751 / 0.000200 (0.015551) | 0.000531 / 0.000054 (0.000477) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028897 / 0.037411 (-0.008515) | 0.087393 / 0.014526 (0.072867) | 0.097365 / 0.176557 (-0.079192) | 0.164833 / 0.737135 (-0.572303) | 0.101281 / 0.296338 (-0.195058) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.610806 / 0.215209 (0.395597) | 6.011697 / 2.077655 (3.934042) | 2.544268 / 1.504120 (1.040148) | 2.127103 / 1.541195 (0.585908) | 2.133330 / 1.468490 (0.664839) | 0.860964 / 4.584777 (-3.723813) | 4.982374 / 3.745712 (1.236662) | 5.073026 / 5.269862 (-0.196836) | 3.033056 / 4.565676 (-1.532621) | 0.118835 / 0.424275 (-0.305440) | 0.010122 / 0.007607 (0.002515) | 0.805807 / 0.226044 (0.579763) | 7.839166 / 2.268929 (5.570238) | 3.512405 / 55.444624 (-51.932219) | 2.767578 / 6.876477 (-4.108898) | 2.936885 / 2.142072 (0.794813) | 1.058533 / 4.805227 (-3.746695) | 0.222260 / 6.500664 (-6.278404) | 0.073890 / 0.075469 (-0.001580) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.628307 / 1.841788 (-0.213480) | 22.827116 / 8.074308 (14.752808) | 21.809759 / 10.191392 (11.618367) | 0.220637 / 0.680424 (-0.459786) | 0.028030 / 0.534201 (-0.506171) | 0.448620 / 0.579283 (-0.130663) | 0.540442 / 0.434364 (0.106078) | 0.548601 / 0.540337 (0.008264) | 0.770387 / 1.386936 (-0.616549) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after 
write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009198 / 0.011353 (-0.002155) | 0.004935 / 0.011008 (-0.006073) | 0.079095 / 0.038508 (0.040587) | 0.090490 / 0.023109 (0.067381) | 0.453374 / 0.275898 (0.177476) | 0.519483 / 0.323480 (0.196003) | 0.006539 / 0.007986 (-0.001447) | 0.004160 / 0.004328 (-0.000169) | 0.078433 / 0.004250 (0.074182) | 0.068022 / 0.037052 (0.030969) | 0.467686 / 0.258489 (0.209197) | 0.523863 / 0.293841 (0.230022) | 0.050926 / 0.128546 (-0.077620) | 0.013664 / 0.075646 (-0.061982) | 0.088787 / 0.419271 (-0.330485) | 0.060503 / 0.043533 (0.016971) | 0.474692 / 0.255139 (0.219553) | 0.516461 / 0.283200 (0.233261) | 0.034482 / 0.141683 (-0.107200) | 1.747939 / 1.452155 (0.295784) | 1.915212 / 1.492716 (0.422496) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.247400 / 0.018006 (0.229394) | 0.516829 / 0.000490 (0.516339) | 0.005770 / 0.000200 (0.005570) | 0.000121 / 0.000054 (0.000067) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034334 / 0.037411 (-0.003077) | 0.102397 / 0.014526 (0.087871) | 0.114187 / 0.176557 (-0.062370) | 0.171093 / 0.737135 (-0.566043) | 0.117281 / 0.296338 (-0.179058) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.635710 / 0.215209 (0.420501) | 6.400656 / 2.077655 (4.323002) | 2.896896 / 1.504120 (1.392776) | 2.682890 / 1.541195 (1.141696) | 2.656445 / 1.468490 (1.187955) | 1.044244 / 4.584777 (-3.540533) | 5.393212 / 3.745712 (1.647500) | 4.592928 / 5.269862 (-0.676934) | 2.798525 / 4.565676 (-1.767151) | 0.103720 / 0.424275 (-0.320555) | 0.010196 / 0.007607 (0.002589) | 0.762756 / 0.226044 (0.536711) | 7.232939 / 2.268929 (4.964011) | 3.714015 / 55.444624 (-51.730609) | 3.050766 / 6.876477 (-3.825711) | 3.149715 / 2.142072 (1.007643) | 1.058827 / 4.805227 (-3.746400) | 0.214079 / 6.500664 (-6.286585) | 0.076712 / 0.075469 (0.001243) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.701032 / 1.841788 (-0.140755) 
| 23.742023 / 8.074308 (15.667715) | 22.486043 / 10.191392 (12.294651) | 0.249757 / 0.680424 (-0.430667) | 0.031714 / 0.534201 (-0.502486) | 0.479914 / 0.579283 (-0.099369) | 0.593315 / 0.434364 (0.158951) | 0.562897 / 0.540337 (0.022560) | 0.826636 / 1.386936 (-0.560300) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#429f9c69d1813ec643c316313b69ff23aaf208f6 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007816 / 0.011353 (-0.003537) | 0.004541 / 0.011008 (-0.006467) | 0.097256 / 0.038508 (0.058748) | 0.081376 / 0.023109 (0.058267) | 0.356635 / 0.275898 (0.080737) | 0.394969 / 0.323480 (0.071489) | 0.004670 / 0.007986 (-0.003316) | 0.003537 / 0.004328 (-0.000791) | 0.075564 / 0.004250 (0.071314) | 0.063459 / 0.037052 (0.026407) | 0.363846 / 0.258489 (0.105357) | 0.416337 / 0.293841 (0.122496) | 0.036690 / 0.128546 (-0.091857) | 0.009653 / 0.075646 (-0.065993) | 0.337265 / 0.419271 (-0.082007) | 0.061446 / 0.043533 (0.017913) | 0.359190 / 0.255139 (0.104051) | 0.385866 / 0.283200 (0.102666) | 0.030474 / 0.141683 (-0.111209) | 1.796903 / 1.452155 (0.344748) | 1.852332 / 1.492716 (0.359616) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.264008 / 0.018006 (0.246002) | 0.507387 / 0.000490 (0.506897) | 0.012309 / 0.000200 (0.012109) | 0.000377 / 0.000054 (0.000323) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033224 / 0.037411 (-0.004188) | 0.097136 / 0.014526 (0.082610) | 0.113035 / 0.176557 (-0.063522) | 0.181778 / 0.737135 (-0.555357) | 0.130511 / 0.296338 (-0.165827) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 
1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444512 / 0.215209 (0.229303) | 4.453285 / 2.077655 (2.375631) | 2.154123 / 1.504120 (0.650003) | 1.955451 / 1.541195 (0.414256) | 2.015089 / 1.468490 (0.546599) | 0.567824 / 4.584777 (-4.016953) | 4.083084 / 3.745712 (0.337371) | 3.912417 / 5.269862 (-1.357445) | 2.366197 / 4.565676 (-2.199480) | 0.066468 / 0.424275 (-0.357807) | 0.008478 / 0.007607 (0.000870) | 0.531196 / 0.226044 (0.305152) | 5.311285 / 2.268929 (3.042356) | 2.743252 / 55.444624 (-52.701372) | 2.322353 / 6.876477 (-4.554124) | 2.368168 / 2.142072 (0.226095) | 0.679223 / 4.805227 (-4.126004) | 0.152401 / 6.500664 (-6.348263) | 0.071954 / 0.075469 (-0.003515) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.489114 / 1.841788 (-0.352674) | 22.114956 / 8.074308 (14.040648) | 16.072564 / 10.191392 (5.881172) | 0.164303 / 0.680424 (-0.516121) | 0.021317 / 0.534201 (-0.512884) | 0.460250 / 0.579283 (-0.119033) | 0.467554 / 0.434364 (0.033190) | 0.539773 / 0.540337 (-0.000564) | 0.751904 / 1.386936 (-0.635032) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007520 / 0.011353 (-0.003833) | 0.004487 / 0.011008 (-0.006521) | 0.075074 / 0.038508 (0.036566) | 0.083135 / 0.023109 (0.060026) | 0.474052 / 0.275898 (0.198154) | 0.524051 / 0.323480 (0.200571) | 0.006192 / 0.007986 (-0.001793) | 0.003835 / 0.004328 (-0.000494) | 0.074643 / 0.004250 (0.070392) | 0.065334 / 0.037052 (0.028282) | 0.507033 / 0.258489 (0.248544) | 0.519846 / 0.293841 (0.226005) | 0.036985 / 0.128546 (-0.091561) | 0.009828 / 0.075646 (-0.065818) | 0.082992 / 0.419271 (-0.336279) | 0.055942 / 0.043533 (0.012409) | 0.480652 / 0.255139 (0.225513) | 0.503683 / 0.283200 (0.220483) | 0.025560 / 0.141683 (-0.116123) | 1.801390 / 1.452155 (0.349235) | 1.892929 / 1.492716 (0.400213) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | 
get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246771 / 0.018006 (0.228765) | 0.498901 / 0.000490 (0.498411) | 0.008186 / 0.000200 (0.007986) | 0.000166 / 0.000054 (0.000112) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038666 / 0.037411 (0.001254) | 0.110317 / 0.014526 (0.095791) | 0.122995 / 0.176557 (-0.053562) | 0.185355 / 0.737135 (-0.551781) | 0.123720 / 0.296338 (-0.172619) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.508421 / 0.215209 (0.293212) | 5.046464 / 2.077655 (2.968809) | 2.660004 / 1.504120 (1.155884) | 2.482841 / 1.541195 (0.941646) | 2.573941 / 1.468490 (1.105451) | 0.565702 / 4.584777 (-4.019075) | 4.197895 / 3.745712 (0.452183) | 3.755480 / 5.269862 (-1.514381) | 2.308066 / 4.565676 (-2.257610) | 0.066559 / 0.424275 (-0.357716) | 0.008436 / 0.007607 (0.000829) | 0.589858 / 0.226044 (0.363814) | 5.873488 / 2.268929 (3.604559) | 3.241810 / 55.444624 (-52.202814) | 2.789831 / 6.876477 (-4.086645) | 3.008989 / 2.142072 (0.866917) | 0.679624 / 4.805227 (-4.125603) | 0.150868 / 6.500664 (-6.349796) | 0.068581 / 0.075469 (-0.006889) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.582955 / 1.841788 (-0.258833) | 22.684969 / 8.074308 (14.610661) | 16.829855 / 10.191392 (6.638463) | 0.201599 / 0.680424 (-0.478825) | 0.023261 / 0.534201 (-0.510940) | 0.465009 / 0.579283 (-0.114274) | 0.497701 / 0.434364 (0.063337) | 0.557822 / 0.540337 (0.017485) | 0.803234 / 1.386936 (-0.583702) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9241c1070b5c9021705c17b12548b6fea75782d8 \"CML watermark\")\n", "Well done! 
:clap: :fire: ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008866 / 0.011353 (-0.002487) | 0.005910 / 0.011008 (-0.005098) | 0.099916 / 0.038508 (0.061408) | 0.085787 / 0.023109 (0.062678) | 0.391028 / 0.275898 (0.115130) | 0.412689 / 0.323480 (0.089209) | 0.006527 / 0.007986 (-0.001459) | 0.004629 / 0.004328 (0.000301) | 0.084627 / 0.004250 (0.080377) | 0.063404 / 0.037052 (0.026352) | 0.408923 / 0.258489 (0.150434) | 0.437130 / 0.293841 (0.143289) | 0.050256 / 0.128546 (-0.078290) | 0.013914 / 0.075646 (-0.061732) | 0.350893 / 0.419271 (-0.068379) | 0.067931 / 0.043533 (0.024398) | 0.383807 / 0.255139 (0.128668) | 0.424150 / 0.283200 (0.140950) | 0.039978 / 0.141683 (-0.101705) | 1.697631 / 1.452155 (0.245476) | 1.925568 / 1.492716 (0.432851) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.315417 / 0.018006 (0.297410) | 0.607050 / 0.000490 (0.606560) | 0.017314 / 0.000200 (0.017114) | 0.000514 / 0.000054 (0.000459) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032994 / 0.037411 (-0.004417) | 0.103993 / 0.014526 (0.089467) | 0.125369 / 0.176557 (-0.051187) | 0.185984 / 0.737135 (-0.551151) | 0.139192 / 0.296338 (-0.157146) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.639769 / 0.215209 (0.424560) | 6.236187 / 2.077655 (4.158532) | 2.775777 / 1.504120 (1.271657) | 2.599683 / 1.541195 (1.058488) | 
2.780064 / 1.468490 (1.311574) | 1.107247 / 4.584777 (-3.477530) | 5.724223 / 3.745712 (1.978511) | 5.284786 / 5.269862 (0.014925) | 3.342465 / 4.565676 (-1.223211) | 0.107685 / 0.424275 (-0.316590) | 0.009237 / 0.007607 (0.001630) | 0.760282 / 0.226044 (0.534238) | 7.570859 / 2.268929 (5.301930) | 3.572498 / 55.444624 (-51.872126) | 2.997482 / 6.876477 (-3.878995) | 2.910001 / 2.142072 (0.767929) | 1.249272 / 4.805227 (-3.555955) | 0.229425 / 6.500664 (-6.271239) | 0.091974 / 0.075469 (0.016505) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.663859 / 1.841788 (-0.177929) | 25.283961 / 8.074308 (17.209653) | 20.793389 / 10.191392 (10.601997) | 0.239263 / 0.680424 (-0.441161) | 0.028808 / 0.534201 (-0.505393) | 0.521045 / 0.579283 (-0.058238) | 0.602451 / 0.434364 (0.168087) | 0.544536 / 0.540337 (0.004198) | 0.819732 / 1.386936 (-0.567204) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008970 / 0.011353 (-0.002383) | 0.009663 / 0.011008 (-0.001345) | 0.083471 / 0.038508 (0.044963) | 0.090695 / 0.023109 (0.067585) | 0.562539 / 0.275898 (0.286641) | 0.572092 / 0.323480 (0.248612) | 0.007269 / 0.007986 (-0.000717) | 0.004664 / 0.004328 (0.000335) | 0.084212 / 0.004250 (0.079961) | 0.072716 / 0.037052 (0.035664) | 0.559810 / 0.258489 (0.301320) | 0.574296 / 0.293841 (0.280455) | 0.048555 / 0.128546 (-0.079991) | 0.015901 / 0.075646 (-0.059746) | 0.107815 / 0.419271 (-0.311456) | 0.065404 / 0.043533 (0.021871) | 0.544787 / 0.255139 (0.289648) | 0.586993 / 0.283200 (0.303794) | 0.042613 / 0.141683 (-0.099069) | 1.919266 / 1.452155 (0.467111) | 2.095189 / 1.492716 (0.602473) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.298512 / 0.018006 (0.280506) | 0.597745 / 0.000490 (0.597256) | 0.008806 / 0.000200 (0.008606) | 0.000119 / 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039420 / 0.037411 (0.002009) | 0.111378 / 0.014526 (0.096852) | 0.136421 / 0.176557 (-0.040135) | 0.192006 / 0.737135 (-0.545129) | 0.130037 / 0.296338 (-0.166301) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.679169 / 0.215209 (0.463960) | 6.750881 / 2.077655 (4.673226) | 3.220411 / 1.504120 (1.716291) | 2.851988 / 1.541195 (1.310794) | 2.974247 / 1.468490 (1.505757) | 0.892593 / 4.584777 (-3.692184) | 5.659975 / 3.745712 (1.914263) | 5.172641 / 5.269862 (-0.097220) | 3.308429 / 4.565676 (-1.257248) | 0.100580 / 0.424275 (-0.323695) | 0.009320 / 0.007607 (0.001713) | 0.833290 / 0.226044 (0.607245) | 8.091847 / 2.268929 (5.822918) | 4.023734 / 55.444624 (-51.420890) | 3.441583 / 6.876477 (-3.434894) | 3.763562 / 2.142072 (1.621489) | 1.055105 / 4.805227 (-3.750122) | 0.239218 / 6.500664 (-6.261446) | 0.081922 / 0.075469 (0.006453) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.796495 / 1.841788 (-0.045293) | 25.942492 / 8.074308 (17.868184) | 23.211617 / 10.191392 (13.020225) | 0.256054 / 0.680424 (-0.424370) | 0.030491 / 0.534201 (-0.503710) | 0.520474 / 0.579283 (-0.058809) | 0.626331 / 0.434364 (0.191967) | 0.619897 / 0.540337 (0.079560) | 0.900833 / 1.386936 (-0.486103) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e74f80255700c4b8cde383a426c4b2def6db1253 \"CML watermark\")\n", "Congrats on merging this! 👏 " ]
Reduce the number of commits in `push_to_hub`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 3, "laugh": 0, "rocket": 1, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/6269/reactions" }
PR_kwDODunzps5bjbDc
{ "diff_url": "https://github.com/huggingface/datasets/pull/6269.diff", "html_url": "https://github.com/huggingface/datasets/pull/6269", "merged_at": "2023-10-16T13:30:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/6269.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6269" }
2023-09-29T16:22:31Z
https://api.github.com/repos/huggingface/datasets/issues/6269/comments
Reduces the number of commits in `push_to_hub` by using the `preupload` API from https://github.com/huggingface/huggingface_hub/pull/1699. Each commit now contains at most 50 uploaded files. Because the preupload API supports resuming an upload, a shard's fingerprint no longer needs to be appended to its filename as a suffix, so the shards' naming scheme is the same as the initial one. It also adds support for the following params: `create_pr`, `commit_message` and `revision` (`branch` is deprecated; unlike the previous implementation, this one creates the branch if it does not exist, for consistency with `transformers`). (Nit) This implementation keeps the markdown section of the generated README.md empty so that the card template can be imported when the card is accessed on the Hub. Fixes https://github.com/huggingface/datasets/issues/5492, fixes https://github.com/huggingface/datasets/issues/6257, fixes https://github.com/huggingface/datasets/issues/5045, fixes https://github.com/huggingface/datasets/issues/6271 TODO: - [x] set the minimal version to the next `hfh` release (once it's published)
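A minimal sketch of how the parameters added by this PR might be used; the repository id, branch name and commit message below are placeholders for illustration, not values taken from the PR:

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")

# Shards are pre-uploaded and then committed in batches of at most 50 files.
# `revision` replaces the deprecated `branch` param; the branch is created
# if it does not exist. `create_pr=True` opens a pull request instead of
# committing directly to the target revision.
ds.push_to_hub(
    "username/rotten-tomatoes-copy",      # placeholder repo id
    revision="dev",                       # placeholder branch name
    commit_message="Upload train split",  # placeholder message
    create_pr=True,
)
```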
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/6269/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6269/timeline
closed
false
6,269
null
2023-10-16T13:30:46Z
null
true
1,919,010,645
https://api.github.com/repos/huggingface/datasets/issues/6268
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6268/events
[]
null
2023-10-01T15:29:45Z
[]
https://github.com/huggingface/datasets/pull/6268
MEMBER
null
true
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6268). All of your documentation changes will be reflected on that endpoint.", "In https://github.com/huggingface/datasets/issues/4129 we want to track the origin of a dataset, e.g. if it comes from multiple datasets.\r\n\r\nI think it's out of scope of DatasetInfo alone, which has info for one dataset only.\r\nTherefore it makes sense to add repo_id, which is for one dataset only.\r\n\r\nIMO if we want to track multiple origins we will need a new DatasetInfo that would have fields relevant to a mix of datasets (out of scope of this PR)\r\n\r\ncc @mariosasko I'd like your opinion on this", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009009 / 0.011353 (-0.002344) | 0.004169 / 0.011008 (-0.006840) | 0.098634 / 0.038508 (0.060126) | 0.069526 / 0.023109 (0.046417) | 0.337963 / 0.275898 (0.062065) | 0.379737 / 0.323480 (0.056257) | 0.004318 / 0.007986 (-0.003668) | 0.005347 / 0.004328 (0.001019) | 0.069875 / 0.004250 (0.065624) | 0.055964 / 0.037052 (0.018912) | 0.340305 / 0.258489 (0.081816) | 0.429718 / 0.293841 (0.135877) | 0.045101 / 0.128546 (-0.083445) | 0.012610 / 0.075646 (-0.063036) | 0.312366 / 0.419271 (-0.106905) | 0.064711 / 0.043533 (0.021178) | 0.345216 / 0.255139 (0.090077) | 0.367245 / 0.283200 (0.084046) | 0.034638 / 0.141683 (-0.107045) | 1.541947 / 1.452155 (0.089793) | 1.645268 / 1.492716 (0.152551) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233501 / 0.018006 (0.215495) | 0.514207 / 0.000490 (0.513717) | 0.014271 / 0.000200 (0.014072) | 0.000366 / 0.000054 (0.000311) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026288 / 0.037411 (-0.011124) | 0.083206 / 0.014526 (0.068680) | 0.098172 / 0.176557 (-0.078385) | 0.158529 / 0.737135 (-0.578606) | 0.095183 / 0.296338 (-0.201155) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted 
pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.538300 / 0.215209 (0.323091) | 5.486939 / 2.077655 (3.409285) | 2.321812 / 1.504120 (0.817692) | 2.002124 / 1.541195 (0.460929) | 2.045043 / 1.468490 (0.576553) | 0.852772 / 4.584777 (-3.732005) | 5.014897 / 3.745712 (1.269185) | 4.428115 / 5.269862 (-0.841746) | 2.750126 / 4.565676 (-1.815550) | 0.099028 / 0.424275 (-0.325247) | 0.007678 / 0.007607 (0.000070) | 0.664463 / 0.226044 (0.438418) | 6.617811 / 2.268929 (4.348883) | 2.888382 / 55.444624 (-52.556242) | 2.190753 / 6.876477 (-4.685724) | 2.414586 / 2.142072 (0.272513) | 1.010302 / 4.805227 (-3.794925) | 0.194925 / 6.500664 (-6.305739) | 0.063490 / 0.075469 (-0.011979) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.543464 / 1.841788 (-0.298323) | 20.566666 / 8.074308 (12.492358) | 19.410745 / 10.191392 (9.219353) | 0.207077 / 0.680424 (-0.473347) | 0.028895 / 0.534201 (-0.505306) | 0.427525 / 0.579283 (-0.151758) | 0.535450 / 0.434364 (0.101086) | 0.494632 / 0.540337 (-0.045705) | 0.723705 / 1.386936 (-0.663231) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008209 / 0.011353 (-0.003144) | 0.004184 / 0.011008 (-0.006824) | 0.072420 / 0.038508 (0.033912) | 0.066851 / 0.023109 (0.043742) | 0.424137 / 0.275898 (0.148239) | 0.473156 / 0.323480 (0.149676) | 0.005394 / 0.007986 (-0.002591) | 0.003898 / 0.004328 (-0.000430) | 0.069996 / 0.004250 (0.065746) | 0.053113 / 0.037052 (0.016061) | 0.453214 / 0.258489 (0.194725) | 0.495921 / 0.293841 (0.202080) | 0.043028 / 0.128546 (-0.085519) | 0.012320 / 0.075646 (-0.063326) | 0.080270 / 
0.419271 (-0.339002) | 0.053337 / 0.043533 (0.009804) | 0.436604 / 0.255139 (0.181465) | 0.463422 / 0.283200 (0.180223) | 0.030277 / 0.141683 (-0.111406) | 1.560261 / 1.452155 (0.108106) | 1.647209 / 1.492716 (0.154493) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232556 / 0.018006 (0.214550) | 0.502387 / 0.000490 (0.501897) | 0.006688 / 0.000200 (0.006488) | 0.000118 / 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030204 / 0.037411 (-0.007207) | 0.089438 / 0.014526 (0.074912) | 0.118939 / 0.176557 (-0.057617) | 0.160537 / 0.737135 (-0.576598) | 0.113432 / 0.296338 (-0.182906) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.586469 / 0.215209 (0.371260) | 5.916156 / 2.077655 (3.838502) | 2.904960 / 1.504120 (1.400840) | 2.346838 / 1.541195 (0.805644) | 2.373688 / 1.468490 (0.905198) | 0.829917 / 4.584777 (-3.754860) | 4.851283 / 3.745712 (1.105571) | 4.220103 / 5.269862 (-1.049758) | 2.706139 / 4.565676 (-1.859538) | 0.094095 / 0.424275 (-0.330180) | 0.008201 / 0.007607 (0.000594) | 0.699099 / 0.226044 (0.473054) | 7.046940 / 2.268929 (4.778011) | 3.374837 / 55.444624 (-52.069788) | 2.690839 / 6.876477 (-4.185638) | 2.845717 / 2.142072 (0.703645) | 0.989698 / 4.805227 (-3.815529) | 0.190413 / 6.500664 (-6.310251) | 0.066233 / 0.075469 (-0.009236) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.513607 / 1.841788 (-0.328180) | 21.544200 / 8.074308 (13.469892) | 20.297337 / 10.191392 (10.105945) | 0.216390 / 0.680424 (-0.464034) | 0.029962 / 0.534201 (-0.504239) | 0.451531 / 0.579283 (-0.127752) | 0.530147 / 0.434364 (0.095783) | 0.520739 / 0.540337 (-0.019598) | 0.716431 / 1.386936 (-0.670505) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fcaa9f218ad1505bb5474060889b4b9578e24423 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | 
read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006509 / 0.011353 (-0.004844) | 0.003987 / 0.011008 (-0.007022) | 0.085233 / 0.038508 (0.046725) | 0.077765 / 0.023109 (0.054656) | 0.310467 / 0.275898 (0.034569) | 0.343363 / 0.323480 (0.019883) | 0.005557 / 0.007986 (-0.002429) | 0.003430 / 0.004328 (-0.000898) | 0.064948 / 0.004250 (0.060697) | 0.056864 / 0.037052 (0.019812) | 0.314005 / 0.258489 (0.055516) | 0.360638 / 0.293841 (0.066798) | 0.031134 / 0.128546 (-0.097412) | 0.008869 / 0.075646 (-0.066777) | 0.286409 / 0.419271 (-0.132862) | 0.051338 / 0.043533 (0.007805) | 0.311329 / 0.255139 (0.056190) | 0.334373 / 0.283200 (0.051174) | 0.024816 / 0.141683 (-0.116867) | 1.502872 / 1.452155 (0.050718) | 1.569941 / 1.492716 (0.077224) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269639 / 0.018006 (0.251633) | 0.558510 / 0.000490 (0.558020) | 0.011748 / 0.000200 (0.011548) | 0.000234 / 0.000054 (0.000180) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029139 / 0.037411 (-0.008272) | 0.083586 / 0.014526 (0.069060) | 0.102426 / 0.176557 (-0.074131) | 0.162398 / 0.737135 (-0.574737) | 0.101364 / 0.296338 (-0.194975) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382281 / 0.215209 (0.167072) | 3.826412 / 2.077655 (1.748758) | 1.815911 / 1.504120 (0.311791) | 1.644539 / 1.541195 (0.103344) | 1.688487 / 1.468490 (0.219996) | 0.482115 / 4.584777 (-4.102662) | 3.574773 / 3.745712 (-0.170939) | 3.262733 / 5.269862 (-2.007129) | 2.058115 / 4.565676 (-2.507562) | 0.056367 / 0.424275 (-0.367908) | 0.007233 / 0.007607 (-0.000374) | 0.456859 / 0.226044 (0.230815) | 4.565935 / 2.268929 (2.297006) | 2.311802 / 55.444624 (-53.132823) | 1.943936 / 6.876477 (-4.932541) | 2.129811 / 2.142072 (-0.012261) | 0.575098 / 4.805227 (-4.230129) | 0.130495 / 6.500664 (-6.370169) | 0.059757 / 0.075469 (-0.015712) |\n\n### Benchmark: 
benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.238495 / 1.841788 (-0.603293) | 18.940000 / 8.074308 (10.865692) | 14.034240 / 10.191392 (3.842848) | 0.166418 / 0.680424 (-0.514006) | 0.018420 / 0.534201 (-0.515781) | 0.395330 / 0.579283 (-0.183953) | 0.413518 / 0.434364 (-0.020846) | 0.461499 / 0.540337 (-0.078838) | 0.661371 / 1.386936 (-0.725565) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006673 / 0.011353 (-0.004680) | 0.004335 / 0.011008 (-0.006673) | 0.064568 / 0.038508 (0.026060) | 0.072763 / 0.023109 (0.049653) | 0.429488 / 0.275898 (0.153590) | 0.456900 / 0.323480 (0.133420) | 0.005481 / 0.007986 (-0.002505) | 0.003649 / 0.004328 (-0.000680) | 0.064975 / 0.004250 (0.060724) | 0.056839 / 0.037052 (0.019786) | 0.439451 / 0.258489 (0.180962) | 0.461691 / 0.293841 (0.167850) | 0.031455 / 0.128546 (-0.097092) | 0.008848 / 0.075646 (-0.066798) | 0.071719 / 0.419271 (-0.347553) | 0.047116 / 0.043533 (0.003583) | 0.429055 / 0.255139 (0.173916) | 0.434204 / 0.283200 (0.151004) | 0.022594 / 0.141683 (-0.119089) | 1.539231 / 1.452155 (0.087077) | 1.568111 / 1.492716 (0.075394) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267374 / 0.018006 (0.249368) | 0.553202 / 0.000490 (0.552712) | 0.005410 / 0.000200 (0.005210) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031478 / 0.037411 (-0.005933) | 0.092438 / 0.014526 (0.077912) | 0.103874 / 0.176557 (-0.072682) | 0.158428 / 0.737135 (-0.578708) | 0.111617 / 0.296338 (-0.184721) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | 
read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434783 / 0.215209 (0.219574) | 4.332536 / 2.077655 (2.254881) | 2.354522 / 1.504120 (0.850402) | 2.220271 / 1.541195 (0.679076) | 2.338524 / 1.468490 (0.870034) | 0.494508 / 4.584777 (-4.090269) | 3.619592 / 3.745712 (-0.126120) | 3.320897 / 5.269862 (-1.948964) | 2.107475 / 4.565676 (-2.458202) | 0.058479 / 0.424275 (-0.365796) | 0.007427 / 0.007607 (-0.000180) | 0.509298 / 0.226044 (0.283254) | 5.067940 / 2.268929 (2.799012) | 2.815336 / 55.444624 (-52.629288) | 2.470958 / 6.876477 (-4.405519) | 2.672801 / 2.142072 (0.530728) | 0.588199 / 4.805227 (-4.217028) | 0.134062 / 6.500664 (-6.366602) | 0.060951 / 0.075469 (-0.014518) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.353955 / 1.841788 (-0.487832) | 20.386012 / 8.074308 (12.311704) | 15.032463 / 10.191392 (4.841071) | 0.167243 / 0.680424 (-0.513181) | 0.020426 / 0.534201 (-0.513775) | 0.396815 / 0.579283 (-0.182469) | 0.421806 / 0.434364 (-0.012558) | 0.471866 / 0.540337 (-0.068471) | 0.667206 / 1.386936 (-0.719730) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#aade5a0c79398c84632a3ff253111e694c7b598b \"CML watermark\")\n", "Really happy to see this! It could also be helpful to track some other metadata about how the dataset was built in the future. i.e. for the Stack loaded like this:\r\n\r\n```\r\nds = load_dataset(\"bigcode/the-stack\", data_dir=\"data/dockerfile\", split=\"train\")\r\n```\r\nIt could be helpful to have easy access to the `data_dir` argument used during loading since that changes the training data quite a bit vs. loading the full dataset. You can also recover this from `download_checksums`, which seems a bit hacky. That is not necessary for this PR, though.\r\n", "Perhaps it is also interesting to track the revision? I suppose the version also kind of covers that.\r\n\r\nThat said, this is looking great already! I'm quite excited about this. Losing the `repo_id` after merging (different) datasets also makes sense to me, well done.", "One other thought. Is it worth tracking if a `token` was passed during loading? \r\n\r\nThe Hub ID for private datasets could in some cases contain information someone wouldn't want to make public i.e. `davanstrien/super_secret_dataset_using_GPT_created_data`. \r\n\r\nAdding a bool like `is_private` could then be used by another library to determine if the dataset ID should be shared or not (or default to not sharing the ID for private datasets). i.e. in SpanMarker @tomaarsen might do a check like \r\n\r\n```python\r\nif ds.is_private and not push_hub_id_for_private_ds:\r\n\tds_name = None\r\n```\r\nPotentially this is overkill but could be useful for downstream libraries who might use this information for creating automatic model cards. 
\r\n\r\n\r\n", "We should probably find a way to remove `DatasetInfo`, as (most of) its attributes are outdated (homepage, description, etc.), not introduce new ones :). But I guess storing `repo_id` there is the simplest solution for now, so I'm OK with it.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007757 / 0.011353 (-0.003595) | 0.004543 / 0.011008 (-0.006465) | 0.100193 / 0.038508 (0.061685) | 0.082333 / 0.023109 (0.059224) | 0.374586 / 0.275898 (0.098688) | 0.412617 / 0.323480 (0.089137) | 0.006148 / 0.007986 (-0.001838) | 0.003826 / 0.004328 (-0.000503) | 0.077077 / 0.004250 (0.072827) | 0.064057 / 0.037052 (0.027005) | 0.391435 / 0.258489 (0.132946) | 0.436439 / 0.293841 (0.142599) | 0.036534 / 0.128546 (-0.092012) | 0.009986 / 0.075646 (-0.065660) | 0.344243 / 0.419271 (-0.075028) | 0.062013 / 0.043533 (0.018480) | 0.378113 / 0.255139 (0.122974) | 0.398476 / 0.283200 (0.115276) | 0.026552 / 0.141683 (-0.115131) | 1.740505 / 1.452155 (0.288350) | 1.835684 / 1.492716 (0.342968) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267917 / 0.018006 (0.249911) | 0.510676 / 0.000490 (0.510186) | 0.010810 / 0.000200 (0.010610) | 0.000383 / 0.000054 (0.000328) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032113 / 0.037411 (-0.005299) | 0.097679 / 0.014526 (0.083153) | 0.113213 / 0.176557 (-0.063344) | 0.177897 / 0.737135 (-0.559238) | 0.111761 / 0.296338 (-0.184577) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.450544 / 0.215209 (0.235335) | 4.476746 / 2.077655 (2.399091) | 2.205391 / 1.504120 (0.701271) | 2.006457 / 1.541195 (0.465262) | 2.058859 / 1.468490 (0.590369) | 0.571549 / 4.584777 (-4.013228) | 4.175039 / 3.745712 (0.429327) | 3.815445 / 5.269862 (-1.454416) | 2.376673 / 4.565676 (-2.189004) | 0.067048 / 0.424275 (-0.357227) | 0.008544 / 0.007607 (0.000937) | 0.536384 / 0.226044 (0.310340) | 5.386232 / 2.268929 (3.117304) | 2.825620 / 55.444624 (-52.619004) | 2.339821 / 6.876477 (-4.536656) | 2.535736 / 2.142072 (0.393663) | 0.679572 / 4.805227 (-4.125655) | 0.156799 / 6.500664 (-6.343865) | 0.071667 / 0.075469 (-0.003802) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.512198 / 1.841788 (-0.329590) | 21.786760 / 8.074308 (13.712452) | 16.386274 / 10.191392 (6.194882) | 0.169108 / 0.680424 (-0.511316) | 0.021312 / 0.534201 (-0.512889) | 0.466153 / 0.579283 (-0.113130) | 0.496192 / 0.434364 (0.061829) | 0.549420 / 0.540337 (0.009082) | 0.780974 / 1.386936 (-0.605962) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007644 / 0.011353 (-0.003709) | 0.004654 / 0.011008 (-0.006354) | 0.075280 / 0.038508 (0.036772) | 0.083044 / 0.023109 (0.059935) | 0.481704 / 0.275898 (0.205805) | 0.514828 / 0.323480 (0.191348) | 0.006245 / 0.007986 (-0.001740) | 0.003715 / 0.004328 (-0.000614) | 0.074498 / 0.004250 (0.070248) | 0.064406 / 0.037052 (0.027353) | 0.481874 / 0.258489 (0.223385) | 0.518527 / 0.293841 (0.224686) | 0.037549 / 0.128546 (-0.090997) | 0.010106 / 0.075646 (-0.065541) | 0.084266 / 0.419271 (-0.335006) | 0.056659 / 0.043533 (0.013126) | 0.497707 / 0.255139 (0.242568) | 0.503201 / 0.283200 (0.220001) | 0.027086 / 0.141683 (-0.114597) | 1.834715 / 1.452155 (0.382560) | 1.919927 / 1.492716 (0.427210) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.249288 / 0.018006 (0.231282) | 0.500950 / 0.000490 (0.500460) | 0.005856 / 0.000200 (0.005656) | 0.000120 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037674 / 0.037411 (0.000263) | 0.111141 / 0.014526 (0.096615) | 0.123408 / 0.176557 (-0.053149) | 0.186604 / 0.737135 (-0.550531) | 0.125360 / 0.296338 (-0.170979) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.520480 / 0.215209 (0.305271) | 5.171108 / 2.077655 (3.093453) | 2.812746 / 1.504120 (1.308626) | 2.602941 / 1.541195 (1.061746) | 2.666196 / 1.468490 (1.197706) | 0.578684 / 4.584777 (-4.006092) | 4.238722 / 3.745712 (0.493010) | 3.844361 / 5.269862 (-1.425501) | 2.369214 / 4.565676 (-2.196462) | 0.068543 / 0.424275 (-0.355732) | 0.008695 / 0.007607 (0.001088) | 0.621869 / 0.226044 (0.395825) | 6.200566 / 2.268929 (3.931637) | 3.340846 / 55.444624 (-52.103779) | 2.920691 / 6.876477 (-3.955786) | 3.132438 / 2.142072 (0.990366) | 0.697394 / 4.805227 (-4.107834) | 0.158385 / 6.500664 (-6.342280) | 0.072566 / 0.075469 (-0.002903) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.599070 / 1.841788 (-0.242717) | 22.767139 / 8.074308 (14.692831) | 17.053988 / 10.191392 (6.862596) | 0.188414 / 0.680424 (-0.492009) | 0.023409 / 0.534201 (-0.510792) | 0.472092 / 0.579283 (-0.107191) | 0.486107 / 0.434364 (0.051743) | 0.562190 / 0.540337 (0.021852) | 0.791606 / 1.386936 (-0.595330) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#aacbaf45c93f88e8c95924f6224153fb37c3064a \"CML watermark\")\n" ]
Add repo_id to DatasetInfo
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6268/reactions" }
PR_kwDODunzps5bhgs7
{ "diff_url": "https://github.com/huggingface/datasets/pull/6268.diff", "html_url": "https://github.com/huggingface/datasets/pull/6268", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6268.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6268" }
2023-09-29T10:24:55Z
https://api.github.com/repos/huggingface/datasets/issues/6268/comments
```python
from datasets import load_dataset

ds = load_dataset("lhoestq/demo1", split="train")
ds = ds.map(lambda x: {}, num_proc=2).filter(lambda x: True).remove_columns(["id"])
print(ds.repo_id)  # lhoestq/demo1
```

- repo_id is None when the dataset doesn't come from the Hub, e.g. from Dataset.from_dict
- repo_id is set to None when concatenating datasets with different repo ids

related to https://github.com/huggingface/datasets/issues/4129

TODO:
- [ ] discuss if it's ok for now
- [ ] tests
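For illustration, a minimal sketch of the concatenation behavior described above, assuming the `repo_id` attribute this PR adds; the repo names are hypothetical:

```python
from datasets import load_dataset, concatenate_datasets

# hypothetical repos, used purely to illustrate the described behavior
ds_a = load_dataset("user/dataset-a", split="train")
ds_b = load_dataset("user/dataset-b", split="train")

merged = concatenate_datasets([ds_a, ds_b])
# per the PR description, mixing different repo ids drops the attribute
print(merged.repo_id)  # None
```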
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6268/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6268/timeline
open
false
6,268
null
null
null
true
1,916,443,262
https://api.github.com/repos/huggingface/datasets/issues/6267
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6267/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2023-10-26T18:46:08Z
[]
https://github.com/huggingface/datasets/issues/6267
NONE
null
null
null
[ "You can use a `Sequence(ClassLabel(...))` feature type to represent a list of labels, and `cast_column`/`cast` to perform the \"string to label\" conversion (`class_encode_column` does support nested fields), e.g., in your case:\r\n```python\r\nfrom datasets import Dataset, Sequence, ClassLabel\r\ndata = {\r\n 'text': ['one', 'two', 'three', 'four'],\r\n 'labels': [['a', 'b'], ['b'], ['b', 'c'], ['a', 'd']]\r\n}\r\n\r\ndataset = Dataset.from_dict(data)\r\ndataset = dataset.cast_column('labels', Sequence(ClassLabel(names=[\"a\", \"b\", \"c\", \"d\"])))\r\n```", "Great! Can you elaborate on \"class_encode_column does support nested fields\"? Do you mean that there is a way to `class_encode_column` on a Sequence?", "Yes, exactly! This would be a nice contribution, though.", "Sorry, I'm still not following. Are you saying that there currently exists a way to call `class_encode_column` on a `Sequence(ClassLabel)` type? Or that the underlying data structures support it and a contribution of a method to do that would be welcome?", "`class_encode_column ` currently does not support `Sequence(ClassLabel)`. Implementing support for this would be a nice contribution.\r\n\r\nIn the meantime, this limitation can be circumvented by fetching (unique) labels and calling `.cast_column(col, Sequence(ClassLabel(names=labels)))`.", "Ok makes sense, can you take a look at the POC implementation I did [here](https://github.com/huggingface/datasets/commit/15443098e9ce053943172f7ec6fce3769d7dff6e)? Happy to take another pass / submit as a PR but would be helpful if I got a thumbs up that this was directionally correct with respect to implementation / architecture. ", "There is no need to introduce a new type (`MultiLabel`) for this feature. Also, I think we can keep the logic inside a single method instead of separating the two cases.\r\n\r\nMaybe https://github.com/huggingface/datasets/pull/4277 can help with the implementation. We extended `align_labels_with_mapping` to support `Sequence(ClassLabel(...))` in that PR (initially, it only worked with `ClassLabel(...)`)" ]
Multi-label class encoding
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6267/reactions" }
I_kwDODunzps5yOpp-
null
2023-09-27T22:48:08Z
https://api.github.com/repos/huggingface/datasets/issues/6267/comments
### Feature request

I have a multi-label dataset and I'd like to be able to class encode the column and store the mapping directly in the features, just as I can with a single-label column. `class_encode_column` currently does not support multi labels. Here's an example of what I'd like to encode:

```
data = {
    'text': ['one', 'two', 'three', 'four'],
    'labels': [['a', 'b'], ['b'], ['b', 'c'], ['a', 'd']]
}

dataset = Dataset.from_dict(data)
dataset = dataset.class_encode_column('labels')
```

I did some digging into the code base to evaluate the feasibility of this (note I'm very new to this code base), and I noticed that the `ClassLabel` feature is still stored as an underlying raw data type of int, so I thought a `MultiLabel` feature could similarly be stored as a Sequence of ints, thus not requiring significant serialization / conversion work to / from Arrow. I did a POC of this [here](https://github.com/huggingface/datasets/commit/15443098e9ce053943172f7ec6fce3769d7dff6e) and included a simple test case (please excuse all the commented-out tests; going for speed of POC here and didn't want to fight the IDE to debug a single test). In the test I just assert that `num_classes` is the same to show that things are properly serializing, but if you break after loading from disk you'll see the dataset is correct and the dataset feature is as expected.

After digging more I did notice a few issues:

- After loading from disk, the type of the `labels` class is `Sequence`, not `MultiLabel` (though the added `feature` attribute came through). This doesn't happen for `ClassLabel`, but I couldn't find the encode / decode code paths that handle this.
- I subclass `Sequence` in `MultiLabel` to leverage existing serialization, but this misses the custom encode logic that `ClassLabel` has. I'm not sure of the best way to approach this as I haven't fully understood the encode / decode flow for datasets.

I suspect my simple implementation will need some improvement, as it'll require a significant amount of repeated logic to mimic `ClassLabel` behavior.

### Motivation

See above - would like to support multi-label class encodings.

### Your contribution

This would be a big help for us and we're open to contributing, but I'll likely need some guidance on how to implement this to fit the encode / decode flow. Some suggestions on tests would be great too; I'm guessing that in addition to the class encode tests (which I'll need to expand) we'll need encode / decode tests.
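As a stopgap, a minimal sketch of the workaround suggested in the comments above: collect the unique labels yourself and cast the column to `Sequence(ClassLabel(...))`, sidestepping `class_encode_column` entirely:

```python
from datasets import ClassLabel, Dataset, Sequence

data = {
    "text": ["one", "two", "three", "four"],
    "labels": [["a", "b"], ["b"], ["b", "c"], ["a", "d"]],
}
dataset = Dataset.from_dict(data)

# gather every label string that appears in any row
names = sorted({label for row in dataset["labels"] for label in row})

# cast the column so each row becomes a sequence of class-label ints
dataset = dataset.cast_column("labels", Sequence(ClassLabel(names=names)))
print(dataset[0]["labels"])  # [0, 1]
```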
{ "avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4", "events_url": "https://api.github.com/users/jmif/events{/privacy}", "followers_url": "https://api.github.com/users/jmif/followers", "following_url": "https://api.github.com/users/jmif/following{/other_user}", "gists_url": "https://api.github.com/users/jmif/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jmif", "id": 1000442, "login": "jmif", "node_id": "MDQ6VXNlcjEwMDA0NDI=", "organizations_url": "https://api.github.com/users/jmif/orgs", "received_events_url": "https://api.github.com/users/jmif/received_events", "repos_url": "https://api.github.com/users/jmif/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmif/subscriptions", "type": "User", "url": "https://api.github.com/users/jmif" }
https://api.github.com/repos/huggingface/datasets/issues/6267/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6267/timeline
open
false
6,267
null
null
null
false
1,916,334,394
https://api.github.com/repos/huggingface/datasets/issues/6266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6266/events
[]
null
2023-09-28T14:29:24Z
[]
https://github.com/huggingface/datasets/pull/6266
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6266). All of your documentation changes will be reflected on that endpoint.", "On Ubuntu, if `libyaml-dev` is installed, you can install PyYAML 6.0.1 with LibYAML with the following command (as it's automatically detected):\r\n\r\n```bash\r\npip install git+https://github.com/yaml/pyyaml.git@6.0.1\r\n```", "Are the failing tests flaky?", "We use `huggingface_hub`'s RepoCard API instead of these modules to parse the YAML block (notice the deprecations), so the `huggingface_hub` repo is the right place to suggest these changes.\r\n\r\nPersonally, I'm not a fan of these changes, as a single non-standard usage of the `ClassLabel` type is not a sufficient reason to merge them. Also, the dataset in question stores data in a single Parquet file, with the features info embedded in its (schema) metadata, which means the YAML parsing can be skipped while preserving the features by directly loading the Parquet file:\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"parquet\", data_files=\"https://huggingface.co/datasets/HuggingFaceM4/SugarCrepe_swap_obj/resolve/main/data/test-00000-of-00001-ca2ae6017a2336d7.parquet\")\r\n```\r\n\r\nPS: Yes, these tests are flaky. We are working on fixing them.", "Oh, I didn't realize they were deprecated. Thanks for the tip on how to work around this issue!\r\n\r\nFor future reference, the places to change the code in `huggingface_hub` would be:\r\n\r\nhttps://github.com/huggingface/huggingface_hub/blob/89cc69105074f1d071e0471144605f3cdfe1dab3/src/huggingface_hub/repocard.py#L506\r\n\r\nhttps://github.com/huggingface/huggingface_hub/blob/89cc69105074f1d071e0471144605f3cdfe1dab3/src/huggingface_hub/utils/_fixes.py#L34" ]
Use LibYAML with PyYAML if available
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6266/reactions" }
PR_kwDODunzps5bYYb8
{ "diff_url": "https://github.com/huggingface/datasets/pull/6266.diff", "html_url": "https://github.com/huggingface/datasets/pull/6266", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6266.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6266" }
2023-09-27T21:13:36Z
https://api.github.com/repos/huggingface/datasets/issues/6266/comments
PyYAML, the YAML framework used in this library, allows the use of LibYAML to accelerate the methods `load` and `dump`. To use it, a user first needs to install a PyYAML version built with LibYAML (not available on PyPI; it needs to be installed manually). Then, to actually use the accelerated classes, PyYAML suggests importing the LibYAML versions of the `Loader` and `Dumper` and falling back to the default ones when they are unavailable. This PR implements this change. See [PyYAML docs](https://pyyaml.org/wiki/PyYAMLDocumentation) for more info.

This change was motivated by trying to use any of [the SugarCREPE datasets in the Hub](https://huggingface.co/datasets?search=sugarcrepe) provided by [the org HuggingFaceM4](https://huggingface.co/datasets/HuggingFaceM4). Such datasets save a lot of information (~1MB) in the YAML metadata of the `README.md` file, and I noticed this slowed down the data loading process. BTW, I also noticed that computing cache files for them is slow because it hashes an instance of `DatasetInfo`, which in turn holds all this metadata.

Also, I changed two list comprehensions into generator expressions to avoid allocating extra memory unnecessarily. And BTW, there's [an issue in PyYAML suggesting to make this automatic](https://github.com/yaml/pyyaml/issues/437).
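For reference, the import-with-fallback pattern from the PyYAML documentation that this PR applies (a minimal sketch; the actual call sites in `datasets` are elided):

```python
import yaml

try:
    # C-accelerated classes, available only if PyYAML was built against LibYAML
    from yaml import CSafeLoader as SafeLoader, CSafeDumper as SafeDumper
except ImportError:
    # pure-Python fallback shipped with every PyYAML install
    from yaml import SafeLoader, SafeDumper

metadata = yaml.load("pretty_name: SugarCrepe", Loader=SafeLoader)
print(yaml.dump(metadata, Dumper=SafeDumper))
```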
{ "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bryant1410", "id": 3905501, "login": "bryant1410", "node_id": "MDQ6VXNlcjM5MDU1MDE=", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "repos_url": "https://api.github.com/users/bryant1410/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "type": "User", "url": "https://api.github.com/users/bryant1410" }
https://api.github.com/repos/huggingface/datasets/issues/6266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6266/timeline
open
false
6,266
null
null
null
true
1,915,651,566
https://api.github.com/repos/huggingface/datasets/issues/6265
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6265/events
[]
null
2023-09-28T18:34:02Z
[]
https://github.com/huggingface/datasets/pull/6265
COLLABORATOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005896 / 0.011353 (-0.005457) | 0.003642 / 0.011008 (-0.007366) | 0.081917 / 0.038508 (0.043409) | 0.059513 / 0.023109 (0.036404) | 0.341422 / 0.275898 (0.065524) | 0.359278 / 0.323480 (0.035798) | 0.004707 / 0.007986 (-0.003279) | 0.002938 / 0.004328 (-0.001390) | 0.063095 / 0.004250 (0.058845) | 0.051777 / 0.037052 (0.014725) | 0.321114 / 0.258489 (0.062625) | 0.363823 / 0.293841 (0.069982) | 0.027590 / 0.128546 (-0.100957) | 0.007846 / 0.075646 (-0.067800) | 0.261197 / 0.419271 (-0.158074) | 0.045812 / 0.043533 (0.002279) | 0.319787 / 0.255139 (0.064648) | 0.341839 / 0.283200 (0.058640) | 0.021913 / 0.141683 (-0.119770) | 1.397525 / 1.452155 (-0.054630) | 1.495902 / 1.492716 (0.003186) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224815 / 0.018006 (0.206809) | 0.425780 / 0.000490 (0.425290) | 0.006934 / 0.000200 (0.006734) | 0.000225 / 0.000054 (0.000171) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024342 / 0.037411 (-0.013070) | 0.073923 / 0.014526 (0.059398) | 0.082108 / 0.176557 (-0.094448) | 0.143017 / 0.737135 (-0.594119) | 0.083163 / 0.296338 (-0.213175) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398244 / 0.215209 (0.183035) | 3.957688 / 2.077655 (1.880033) | 
1.904615 / 1.504120 (0.400495) | 1.710353 / 1.541195 (0.169158) | 1.798980 / 1.468490 (0.330490) | 0.499307 / 4.584777 (-4.085470) | 3.026734 / 3.745712 (-0.718978) | 2.923940 / 5.269862 (-2.345922) | 1.831870 / 4.565676 (-2.733807) | 0.058551 / 0.424275 (-0.365724) | 0.006403 / 0.007607 (-0.001204) | 0.464164 / 0.226044 (0.238119) | 4.644556 / 2.268929 (2.375628) | 2.341455 / 55.444624 (-53.103169) | 2.004385 / 6.876477 (-4.872092) | 2.051819 / 2.142072 (-0.090253) | 0.585610 / 4.805227 (-4.219617) | 0.124735 / 6.500664 (-6.375929) | 0.061150 / 0.075469 (-0.014319) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224665 / 1.841788 (-0.617122) | 17.476227 / 8.074308 (9.401919) | 13.867617 / 10.191392 (3.676225) | 0.144177 / 0.680424 (-0.536247) | 0.017045 / 0.534201 (-0.517156) | 0.337468 / 0.579283 (-0.241815) | 0.374476 / 0.434364 (-0.059888) | 0.393428 / 0.540337 (-0.146910) | 0.535335 / 1.386936 (-0.851601) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006208 / 0.011353 (-0.005145) | 0.003650 / 0.011008 (-0.007359) | 0.062843 / 0.038508 (0.024335) | 0.062272 / 0.023109 (0.039162) | 0.446336 / 0.275898 (0.170438) | 0.477476 / 0.323480 (0.153996) | 0.004862 / 0.007986 (-0.003124) | 0.002822 / 0.004328 (-0.001506) | 0.063427 / 0.004250 (0.059177) | 0.049023 / 0.037052 (0.011971) | 0.453633 / 0.258489 (0.195144) | 0.486494 / 0.293841 (0.192653) | 0.028634 / 0.128546 (-0.099912) | 0.008187 / 0.075646 (-0.067460) | 0.068846 / 0.419271 (-0.350425) | 0.041104 / 0.043533 (-0.002429) | 0.446646 / 0.255139 (0.191507) | 0.468860 / 0.283200 (0.185660) | 0.020980 / 0.141683 (-0.120703) | 1.455565 / 1.452155 (0.003410) | 1.511142 / 1.492716 (0.018426) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224242 / 0.018006 (0.206236) | 0.408483 / 0.000490 (0.407993) | 0.003495 / 0.000200 (0.003296) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027286 / 0.037411 (-0.010125) | 0.081151 / 0.014526 (0.066625) | 0.096598 / 0.176557 (-0.079959) | 0.146193 / 0.737135 (-0.590942) | 0.092213 / 0.296338 (-0.204125) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463837 / 0.215209 (0.248628) | 4.636820 / 2.077655 (2.559165) | 2.576100 / 1.504120 (1.071980) | 2.396974 / 1.541195 (0.855779) | 2.461526 / 1.468490 (0.993036) | 0.502360 / 4.584777 (-4.082417) | 3.099973 / 3.745712 (-0.645739) | 2.937260 / 5.269862 (-2.332602) | 1.871274 / 4.565676 (-2.694402) | 0.057913 / 0.424275 (-0.366362) | 0.006511 / 0.007607 (-0.001096) | 0.536917 / 0.226044 (0.310873) | 5.396966 / 2.268929 (3.128038) | 3.015646 / 55.444624 (-52.428978) | 2.673793 / 6.876477 (-4.202684) | 2.712376 / 2.142072 (0.570304) | 0.591632 / 4.805227 (-4.213595) | 0.124872 / 6.500664 (-6.375792) | 0.061820 / 0.075469 (-0.013649) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.356828 / 1.841788 (-0.484960) | 18.076995 / 8.074308 (10.002687) | 15.116482 / 10.191392 (4.925090) | 0.151375 / 0.680424 (-0.529049) | 0.017867 / 0.534201 (-0.516334) | 0.335012 / 0.579283 (-0.244271) | 0.384137 / 0.434364 (-0.050226) | 0.397792 / 0.540337 (-0.142546) | 0.551521 / 1.386936 (-0.835415) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#46a0506765d0f92916ed5c37eb19e5fa1a77736a \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009418 / 0.011353 (-0.001935) | 0.005186 / 0.011008 (-0.005822) | 0.112270 / 0.038508 (0.073761) | 0.114856 / 0.023109 (0.091747) | 0.402267 / 0.275898 (0.126369) | 0.445213 / 0.323480 (0.121733) | 0.005588 / 0.007986 (-0.002398) | 0.004315 / 0.004328 (-0.000013) | 0.083561 / 0.004250 (0.079311) | 0.087319 / 0.037052 (0.050267) | 0.400989 / 0.258489 (0.142500) | 0.455636 / 0.293841 (0.161795) | 0.045168 / 0.128546 (-0.083378) | 0.010939 / 0.075646 (-0.064707) | 0.400120 / 0.419271 (-0.019151) | 0.071599 / 0.043533 (0.028066) | 0.418112 / 0.255139 (0.162973) | 0.443889 / 0.283200 (0.160690) | 0.032433 / 0.141683 (-0.109250) | 1.886313 / 1.452155 (0.434159) | 2.012909 / 1.492716 (0.520193) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.306991 / 0.018006 (0.288985) | 0.590426 / 0.000490 (0.589937) | 0.011811 / 0.000200 (0.011611) | 0.000596 / 0.000054 (0.000542) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.042520 / 0.037411 (0.005108) | 0.129808 / 0.014526 (0.115283) | 0.125481 / 0.176557 (-0.051075) | 0.199181 / 0.737135 (-0.537954) | 0.130426 / 0.296338 (-0.165913) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.526455 / 0.215209 (0.311246) | 5.213304 / 2.077655 (3.135649) | 2.643406 / 1.504120 (1.139286) | 2.611214 / 1.541195 (1.070019) | 2.586730 / 1.468490 (1.118240) | 0.639103 / 4.584777 (-3.945674) | 5.197421 / 3.745712 (1.451709) | 4.634642 / 5.269862 (-0.635220) | 2.741079 / 4.565676 (-1.824598) | 0.073064 / 0.424275 (-0.351211) | 0.009441 / 0.007607 (0.001834) | 0.635984 / 0.226044 (0.409940) | 6.283268 / 2.268929 (4.014339) | 3.337205 / 55.444624 (-52.107419) | 3.192362 / 6.876477 (-3.684114) | 2.910367 / 2.142072 (0.768294) | 0.767937 / 4.805227 (-4.037290) | 0.177467 / 6.500664 (-6.323198) | 0.081162 / 0.075469 (0.005693) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.803717 / 1.841788 (-0.038071) | 26.823235 / 8.074308 (18.748927) | 19.714471 / 10.191392 (9.523079) | 0.204048 / 0.680424 (-0.476376) | 0.025992 / 0.534201 (-0.508209) | 0.521438 / 0.579283 (-0.057845) | 0.596524 / 0.434364 (0.162160) | 0.600763 / 0.540337 
(0.060425) | 0.945971 / 1.386936 (-0.440965) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009126 / 0.011353 (-0.002226) | 0.005109 / 0.011008 (-0.005899) | 0.083046 / 0.038508 (0.044538) | 0.115930 / 0.023109 (0.092821) | 0.534311 / 0.275898 (0.258413) | 0.552846 / 0.323480 (0.229366) | 0.007240 / 0.007986 (-0.000746) | 0.004617 / 0.004328 (0.000289) | 0.083927 / 0.004250 (0.079676) | 0.075926 / 0.037052 (0.038873) | 0.534750 / 0.258489 (0.276261) | 0.575122 / 0.293841 (0.281281) | 0.041001 / 0.128546 (-0.087545) | 0.010851 / 0.075646 (-0.064795) | 0.096574 / 0.419271 (-0.322697) | 0.063533 / 0.043533 (0.020001) | 0.546850 / 0.255139 (0.291711) | 0.547122 / 0.283200 (0.263922) | 0.032437 / 0.141683 (-0.109245) | 1.926191 / 1.452155 (0.474036) | 2.029841 / 1.492716 (0.537125) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275582 / 0.018006 (0.257576) | 0.574212 / 0.000490 (0.573722) | 0.006863 / 0.000200 (0.006663) | 0.000236 / 0.000054 (0.000181) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.045340 / 0.037411 (0.007928) | 0.129196 / 0.014526 (0.114670) | 0.136637 / 0.176557 (-0.039920) | 0.200040 / 0.737135 (-0.537096) | 0.136328 / 0.296338 (-0.160011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.612379 / 0.215209 (0.397170) | 5.874664 / 2.077655 (3.797010) | 3.070626 / 1.504120 (1.566506) | 2.999319 / 1.541195 (1.458124) | 3.000571 / 
1.468490 (1.532081) | 0.732119 / 4.584777 (-3.852658) | 5.193226 / 3.745712 (1.447514) | 4.714571 / 5.269862 (-0.555291) | 2.870438 / 4.565676 (-1.695239) | 0.075793 / 0.424275 (-0.348482) | 0.009238 / 0.007607 (0.001631) | 0.695192 / 0.226044 (0.469148) | 6.897996 / 2.268929 (4.629067) | 3.923474 / 55.444624 (-51.521150) | 3.458326 / 6.876477 (-3.418151) | 3.331652 / 2.142072 (1.189579) | 0.821132 / 4.805227 (-3.984095) | 0.182252 / 6.500664 (-6.318412) | 0.084730 / 0.075469 (0.009260) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.919861 / 1.841788 (0.078073) | 27.437228 / 8.074308 (19.362920) | 21.109899 / 10.191392 (10.918507) | 0.245998 / 0.680424 (-0.434426) | 0.025817 / 0.534201 (-0.508384) | 0.517757 / 0.579283 (-0.061526) | 0.576375 / 0.434364 (0.142011) | 0.625283 / 0.540337 (0.084945) | 0.956877 / 1.386936 (-0.430059) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8ddee15a8650a0ea52073477036d8c973da50f11 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008099 / 0.011353 (-0.003254) | 0.004815 / 0.011008 (-0.006194) | 0.099657 / 0.038508 (0.061149) | 0.064737 / 0.023109 (0.041628) | 0.461773 / 0.275898 (0.185875) | 0.444810 / 0.323480 (0.121330) | 0.004247 / 0.007986 (-0.003739) | 0.004956 / 0.004328 (0.000628) | 0.068664 / 0.004250 (0.064414) | 0.052039 / 0.037052 (0.014986) | 0.406750 / 0.258489 (0.148261) | 0.452832 / 0.293841 (0.158991) | 0.044518 / 0.128546 (-0.084028) | 0.013220 / 0.075646 (-0.062426) | 0.317713 / 0.419271 (-0.101558) | 0.061897 / 0.043533 (0.018364) | 0.398664 / 0.255139 (0.143525) | 0.531494 / 0.283200 (0.248294) | 0.064033 / 0.141683 (-0.077650) | 1.590385 / 1.452155 (0.138231) | 1.769918 / 1.492716 (0.277202) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230795 / 0.018006 (0.212789) | 0.568797 / 0.000490 (0.568308) | 0.013498 / 0.000200 
(0.013298) | 0.000448 / 0.000054 (0.000393) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028394 / 0.037411 (-0.009017) | 0.081973 / 0.014526 (0.067447) | 0.097623 / 0.176557 (-0.078934) | 0.158691 / 0.737135 (-0.578445) | 0.101548 / 0.296338 (-0.194791) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.574459 / 0.215209 (0.359249) | 5.709871 / 2.077655 (3.632217) | 2.521460 / 1.504120 (1.017340) | 2.239463 / 1.541195 (0.698268) | 2.195067 / 1.468490 (0.726577) | 0.792390 / 4.584777 (-3.792387) | 4.841665 / 3.745712 (1.095952) | 4.201620 / 5.269862 (-1.068241) | 2.664081 / 4.565676 (-1.901595) | 0.097661 / 0.424275 (-0.326614) | 0.008428 / 0.007607 (0.000821) | 0.698729 / 0.226044 (0.472684) | 6.908867 / 2.268929 (4.639939) | 3.247480 / 55.444624 (-52.197145) | 2.563921 / 6.876477 (-4.312556) | 2.738249 / 2.142072 (0.596177) | 0.972066 / 4.805227 (-3.833161) | 0.191196 / 6.500664 (-6.309468) | 0.064732 / 0.075469 (-0.010737) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.421910 / 1.841788 (-0.419877) | 20.633538 / 8.074308 (12.559230) | 18.054562 / 10.191392 (7.863170) | 0.194125 / 0.680424 (-0.486299) | 0.028097 / 0.534201 (-0.506104) | 0.417857 / 0.579283 (-0.161426) | 0.518758 / 0.434364 (0.084394) | 0.500199 / 0.540337 (-0.040138) | 0.754662 / 1.386936 (-0.632274) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008452 / 0.011353 (-0.002901) | 0.004646 / 0.011008 (-0.006362) | 0.077286 / 0.038508 (0.038778) | 0.072507 / 0.023109 (0.049398) | 0.439580 / 0.275898 (0.163682) | 0.506166 / 0.323480 (0.182686) | 0.006035 / 0.007986 (-0.001950) | 0.003886 / 0.004328 (-0.000442) | 0.075091 / 0.004250 (0.070841) | 0.063163 / 0.037052 (0.026110) | 0.468550 / 0.258489 (0.210061) | 0.523273 / 0.293841 (0.229432) | 0.048728 / 0.128546 (-0.079818) | 0.012991 / 0.075646 (-0.062655) | 0.087964 / 0.419271 (-0.331308) | 0.058920 / 0.043533 (0.015387) | 0.451247 / 0.255139 (0.196108) | 0.489827 / 0.283200 (0.206628) | 0.031164 / 0.141683 (-0.110519) | 1.675504 / 1.452155 (0.223349) | 1.806098 / 1.492716 (0.313382) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253567 / 0.018006 (0.235561) | 0.508971 / 0.000490 (0.508481) | 0.010882 / 0.000200 (0.010682) | 0.000111 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029490 / 0.037411 (-0.007921) | 0.090255 / 0.014526 (0.075729) | 0.110075 / 0.176557 (-0.066482) | 0.159375 / 0.737135 (-0.577760) | 0.109313 / 0.296338 (-0.187025) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.580252 / 0.215209 (0.365043) | 5.911741 / 2.077655 (3.834086) | 2.659405 / 1.504120 (1.155285) | 2.344943 / 1.541195 (0.803749) | 2.390748 / 1.468490 (0.922258) | 0.827823 / 4.584777 (-3.756954) | 4.973544 / 3.745712 (1.227832) | 4.300220 / 5.269862 (-0.969642) | 2.826181 / 4.565676 (-1.739495) | 0.101013 / 0.424275 (-0.323263) | 0.008025 / 0.007607 (0.000418) | 0.728414 / 0.226044 (0.502369) | 7.508045 / 2.268929 (5.239117) | 3.687627 / 55.444624 (-51.756997) | 2.902953 / 6.876477 (-3.973524) | 3.094624 / 2.142072 (0.952551) | 1.054696 / 4.805227 (-3.750531) | 0.212297 / 6.500664 (-6.288367) | 0.070211 / 0.075469 (-0.005258) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.567117 / 1.841788 (-0.274670) | 21.420746 / 8.074308 (13.346438) | 19.857467 / 10.191392 (9.666075) | 0.228554 / 0.680424 (-0.451870) | 0.032278 / 0.534201 (-0.501923) | 0.459966 / 0.579283 (-0.119317) | 0.541219 / 0.434364 (0.106855) | 0.549599 / 0.540337 (0.009261) | 0.731476 / 1.386936 (-0.655460) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0cc77d7f45c73698c31eab4f8cfff901044d0020 \"CML watermark\")\n" ]
Remove `apache_beam` import in `BeamBasedBuilder._save_info`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6265/reactions" }
PR_kwDODunzps5bWDfc
{ "diff_url": "https://github.com/huggingface/datasets/pull/6265.diff", "html_url": "https://github.com/huggingface/datasets/pull/6265", "merged_at": "2023-09-28T18:23:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/6265.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6265" }
2023-09-27T13:56:34Z
https://api.github.com/repos/huggingface/datasets/issues/6265/comments
... to avoid an `ImportError` raised in `BeamBasedBuilder._save_info` when `apache_beam` is not installed (e.g., when downloading the processed version of a dataset from the HF GCS).

Fixes https://github.com/huggingface/datasets/issues/6260
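For readers following along, here is a minimal sketch of the deferred-import pattern the PR title describes. The class and helper names below are illustrative stand-ins, not the actual `datasets` source:

```python
# beam_save_sketch.py -- illustrative stand-in, not the real datasets source.

class BeamBasedBuilderSketch:
    """Hypothetical builder: Beam is needed to *process* data, not to save info."""

    def _save_info(self, path: str) -> None:
        # Defer the apache_beam import so that merely downloading the
        # already-processed dataset (e.g. from the HF GCS) works without Beam.
        try:
            import apache_beam  # noqa: F401  -- optional dependency
            has_beam = True
        except ImportError:
            has_beam = False

        if has_beam:
            # Beam-specific bookkeeping would go here (hypothetical).
            pass
        # The plain save path runs whether or not Beam is installed.
        print(f"writing dataset info to {path}")


BeamBasedBuilderSketch()._save_info("/tmp/demo")  # prints the target path
```

The design point is simply that heavy optional dependencies should be imported at the narrowest scope where they are genuinely needed, so code paths that never touch them keep working on minimal installs.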
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/6265/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6265/timeline
closed
false
6,265
null
2023-09-28T18:23:35Z
null
true
1,914,958,781
https://api.github.com/repos/huggingface/datasets/issues/6264
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6264/events
[]
null
2023-09-27T08:45:24Z
[]
https://github.com/huggingface/datasets/pull/6264
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008356 / 0.011353 (-0.002997) | 0.004553 / 0.011008 (-0.006455) | 0.101025 / 0.038508 (0.062517) | 0.090194 / 0.023109 (0.067085) | 0.427127 / 0.275898 (0.151229) | 0.469116 / 0.323480 (0.145636) | 0.007593 / 0.007986 (-0.000393) | 0.003751 / 0.004328 (-0.000578) | 0.077432 / 0.004250 (0.073182) | 0.082744 / 0.037052 (0.045692) | 0.433638 / 0.258489 (0.175149) | 0.482387 / 0.293841 (0.188546) | 0.040658 / 0.128546 (-0.087888) | 0.009799 / 0.075646 (-0.065848) | 0.345274 / 0.419271 (-0.073998) | 0.076642 / 0.043533 (0.033109) | 0.424417 / 0.255139 (0.169278) | 0.457045 / 0.283200 (0.173846) | 0.033642 / 0.141683 (-0.108041) | 1.765446 / 1.452155 (0.313291) | 1.859279 / 1.492716 (0.366562) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273629 / 0.018006 (0.255623) | 0.505743 / 0.000490 (0.505253) | 0.009300 / 0.000200 (0.009100) | 0.000359 / 0.000054 (0.000305) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032510 / 0.037411 (-0.004901) | 0.099628 / 0.014526 (0.085103) | 0.112904 / 0.176557 (-0.063652) | 0.179118 / 0.737135 (-0.558018) | 0.115946 / 0.296338 (-0.180393) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456431 / 0.215209 (0.241222) | 4.556559 / 2.077655 (2.478904) | 
2.207893 / 1.504120 (0.703773) | 2.024706 / 1.541195 (0.483512) | 2.165424 / 1.468490 (0.696934) | 0.571745 / 4.584777 (-4.013031) | 4.341017 / 3.745712 (0.595305) | 3.980520 / 5.269862 (-1.289342) | 2.333077 / 4.565676 (-2.232599) | 0.067200 / 0.424275 (-0.357075) | 0.008563 / 0.007607 (0.000956) | 0.545294 / 0.226044 (0.319250) | 5.445152 / 2.268929 (3.176224) | 2.740657 / 55.444624 (-52.703968) | 2.370635 / 6.876477 (-4.505842) | 2.451642 / 2.142072 (0.309570) | 0.679385 / 4.805227 (-4.125842) | 0.155967 / 6.500664 (-6.344697) | 0.072812 / 0.075469 (-0.002657) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.494483 / 1.841788 (-0.347305) | 23.673700 / 8.074308 (15.599392) | 16.608529 / 10.191392 (6.417137) | 0.170220 / 0.680424 (-0.510204) | 0.021630 / 0.534201 (-0.512571) | 0.470771 / 0.579283 (-0.108512) | 0.535874 / 0.434364 (0.101510) | 0.550376 / 0.540337 (0.010039) | 0.776633 / 1.386936 (-0.610303) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007899 / 0.011353 (-0.003454) | 0.004581 / 0.011008 (-0.006427) | 0.076520 / 0.038508 (0.038012) | 0.090374 / 0.023109 (0.067265) | 0.495016 / 0.275898 (0.219118) | 0.532384 / 0.323480 (0.208904) | 0.006160 / 0.007986 (-0.001825) | 0.003780 / 0.004328 (-0.000548) | 0.077164 / 0.004250 (0.072914) | 0.064444 / 0.037052 (0.027391) | 0.501642 / 0.258489 (0.243153) | 0.549170 / 0.293841 (0.255329) | 0.038051 / 0.128546 (-0.090495) | 0.010081 / 0.075646 (-0.065565) | 0.083752 / 0.419271 (-0.335520) | 0.061334 / 0.043533 (0.017801) | 0.493502 / 0.255139 (0.238363) | 0.518018 / 0.283200 (0.234818) | 0.029534 / 0.141683 (-0.112149) | 1.929432 / 1.452155 (0.477277) | 1.889985 / 1.492716 (0.397268) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.254802 / 0.018006 (0.236795) | 0.494463 / 0.000490 (0.493974) | 0.005040 / 0.000200 (0.004840) | 0.000120 / 0.000054 (0.000065) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038372 / 0.037411 (0.000960) | 0.112247 / 0.014526 (0.097721) | 0.124365 / 0.176557 (-0.052191) | 0.187142 / 0.737135 (-0.549993) | 0.126070 / 0.296338 (-0.170269) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.513418 / 0.215209 (0.298209) | 5.132267 / 2.077655 (3.054613) | 2.773676 / 1.504120 (1.269556) | 2.576840 / 1.541195 (1.035645) | 2.681729 / 1.468490 (1.213238) | 0.581809 / 4.584777 (-4.002968) | 4.327075 / 3.745712 (0.581363) | 4.040264 / 5.269862 (-1.229598) | 2.436192 / 4.565676 (-2.129484) | 0.067819 / 0.424275 (-0.356456) | 0.008760 / 0.007607 (0.001153) | 0.610765 / 0.226044 (0.384720) | 6.105679 / 2.268929 (3.836750) | 3.341341 / 55.444624 (-52.103284) | 2.926695 / 6.876477 (-3.949781) | 3.017269 / 2.142072 (0.875196) | 0.707289 / 4.805227 (-4.097938) | 0.157379 / 6.500664 (-6.343285) | 0.072549 / 0.075469 (-0.002920) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.666738 / 1.841788 (-0.175050) | 23.698567 / 8.074308 (15.624259) | 17.806437 / 10.191392 (7.615045) | 0.172103 / 0.680424 (-0.508321) | 0.023508 / 0.534201 (-0.510693) | 0.473171 / 0.579283 (-0.106112) | 0.524834 / 0.434364 (0.090470) | 0.562562 / 0.540337 (0.022224) | 0.788667 / 1.386936 (-0.598269) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1e7338259b26b32a095d251d5cdbc779c3573307 \"CML watermark\")\n", "CI 404 errors are unrelated. 
See:\r\n- #6262 ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006657 / 0.011353 (-0.004696) | 0.003975 / 0.011008 (-0.007033) | 0.084614 / 0.038508 (0.046106) | 0.074557 / 0.023109 (0.051448) | 0.309213 / 0.275898 (0.033315) | 0.338245 / 0.323480 (0.014765) | 0.005375 / 0.007986 (-0.002610) | 0.003355 / 0.004328 (-0.000973) | 0.064406 / 0.004250 (0.060156) | 0.061763 / 0.037052 (0.024711) | 0.313405 / 0.258489 (0.054916) | 0.352149 / 0.293841 (0.058308) | 0.031597 / 0.128546 (-0.096949) | 0.008499 / 0.075646 (-0.067147) | 0.289098 / 0.419271 (-0.130174) | 0.054415 / 0.043533 (0.010882) | 0.313210 / 0.255139 (0.058071) | 0.326728 / 0.283200 (0.043528) | 0.024597 / 0.141683 (-0.117086) | 1.449916 / 1.452155 (-0.002239) | 1.526314 / 1.492716 (0.033598) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231435 / 0.018006 (0.213429) | 0.537224 / 0.000490 (0.536734) | 0.007287 / 0.000200 (0.007088) | 0.000227 / 0.000054 (0.000172) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028340 / 0.037411 (-0.009071) | 0.084085 / 0.014526 (0.069560) | 0.428211 / 0.176557 (0.251655) | 0.157360 / 0.737135 (-0.579775) | 0.139470 / 0.296338 (-0.156868) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.389311 / 0.215209 (0.174102) | 3.871329 / 2.077655 (1.793674) | 1.861533 / 1.504120 (0.357413) | 1.688082 / 1.541195 (0.146887) | 
1.804036 / 1.468490 (0.335546) | 0.489154 / 4.584777 (-4.095623) | 3.603843 / 3.745712 (-0.141869) | 3.424868 / 5.269862 (-1.844994) | 2.013525 / 4.565676 (-2.552152) | 0.057387 / 0.424275 (-0.366888) | 0.007274 / 0.007607 (-0.000333) | 0.462340 / 0.226044 (0.236295) | 4.620095 / 2.268929 (2.351167) | 2.326641 / 55.444624 (-53.117984) | 1.990082 / 6.876477 (-4.886395) | 2.037841 / 2.142072 (-0.104232) | 0.581973 / 4.805227 (-4.223254) | 0.135932 / 6.500664 (-6.364732) | 0.061092 / 0.075469 (-0.014377) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249586 / 1.841788 (-0.592202) | 19.036233 / 8.074308 (10.961925) | 14.083365 / 10.191392 (3.891973) | 0.169802 / 0.680424 (-0.510622) | 0.018547 / 0.534201 (-0.515654) | 0.392926 / 0.579283 (-0.186357) | 0.409993 / 0.434364 (-0.024371) | 0.460081 / 0.540337 (-0.080257) | 0.643836 / 1.386936 (-0.743100) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006889 / 0.011353 (-0.004464) | 0.004060 / 0.011008 (-0.006948) | 0.064332 / 0.038508 (0.025824) | 0.077067 / 0.023109 (0.053958) | 0.401235 / 0.275898 (0.125337) | 0.437139 / 0.323480 (0.113659) | 0.005510 / 0.007986 (-0.002476) | 0.003338 / 0.004328 (-0.000991) | 0.064446 / 0.004250 (0.060195) | 0.055537 / 0.037052 (0.018485) | 0.432689 / 0.258489 (0.174200) | 0.460005 / 0.293841 (0.166164) | 0.033122 / 0.128546 (-0.095424) | 0.008637 / 0.075646 (-0.067010) | 0.071088 / 0.419271 (-0.348183) | 0.049024 / 0.043533 (0.005491) | 0.400258 / 0.255139 (0.145119) | 0.419324 / 0.283200 (0.136124) | 0.022050 / 0.141683 (-0.119632) | 1.475744 / 1.452155 (0.023589) | 1.546565 / 1.492716 (0.053848) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226241 / 0.018006 (0.208235) | 0.448574 / 0.000490 (0.448085) | 0.004732 / 0.000200 (0.004533) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033260 / 0.037411 (-0.004151) | 0.092622 / 0.014526 (0.078096) | 0.105501 / 0.176557 (-0.071056) | 0.157981 / 0.737135 (-0.579155) | 0.105993 / 0.296338 (-0.190345) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445716 / 0.215209 (0.230507) | 4.451848 / 2.077655 (2.374194) | 2.404769 / 1.504120 (0.900649) | 2.232594 / 1.541195 (0.691399) | 2.312735 / 1.468490 (0.844245) | 0.491208 / 4.584777 (-4.093569) | 3.561629 / 3.745712 (-0.184083) | 3.444269 / 5.269862 (-1.825592) | 2.060365 / 4.565676 (-2.505311) | 0.057723 / 0.424275 (-0.366552) | 0.007392 / 0.007607 (-0.000215) | 0.526447 / 0.226044 (0.300403) | 5.264307 / 2.268929 (2.995379) | 2.951481 / 55.444624 (-52.493143) | 2.593178 / 6.876477 (-4.283299) | 2.689780 / 2.142072 (0.547707) | 0.588649 / 4.805227 (-4.216579) | 0.133566 / 6.500664 (-6.367098) | 0.060462 / 0.075469 (-0.015008) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.381008 / 1.841788 (-0.460780) | 19.452394 / 8.074308 (11.378086) | 15.255912 / 10.191392 (5.064520) | 0.171043 / 0.680424 (-0.509381) | 0.020395 / 0.534201 (-0.513806) | 0.396429 / 0.579283 (-0.182854) | 0.422820 / 0.434364 (-0.011544) | 0.477305 / 0.540337 (-0.063032) | 0.658274 / 1.386936 (-0.728663) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#faedc670ca896584d0f8edcb1fd9c13d1d6cc903 \"CML watermark\")\n" ]
Temporarily pin tensorflow < 2.14.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6264/reactions" }
PR_kwDODunzps5bTvzh
{ "diff_url": "https://github.com/huggingface/datasets/pull/6264.diff", "html_url": "https://github.com/huggingface/datasets/pull/6264", "merged_at": "2023-09-27T08:36:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/6264.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6264" }
2023-09-27T08:16:06Z
https://api.github.com/repos/huggingface/datasets/issues/6264/comments
Temporarily pin tensorflow < 2.14.0 until a permanent solution is found.

Hot fix for #6263.
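As an illustration of what such a hot fix usually amounts to in a `setup.py`-based project like this one (the variable name, lower bound, and surrounding entries below are assumptions, not the verbatim repository file):

```python
# setup.py (excerpt, illustrative) -- names and bounds are assumptions.
TESTS_REQUIRE = [
    # ... other test dependencies elided ...
    # Temporary upper bound: TF 2.14.0 broke `from tensorflow.python import context`,
    # which the library's temp_seed utility relies on (see issue 6263).
    "tensorflow>=2.3,<2.14.0",
]
```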
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6264/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6264/timeline
closed
false
6,264
null
2023-09-27T08:36:39Z
null
true
1,914,951,043
https://api.github.com/repos/huggingface/datasets/issues/6263
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6263/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2023-09-27T08:36:40Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6263
MEMBER
completed
null
null
[]
CI is broken: ImportError: cannot import name 'context' from 'tensorflow.python'
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6263/reactions" }
I_kwDODunzps5yI9WD
null
2023-09-27T08:12:05Z
https://api.github.com/repos/huggingface/datasets/issues/6263/comments
Python 3.10 CI is broken for `test_py310`. See: https://github.com/huggingface/datasets/actions/runs/6322990957/job/17169678812?pr=6262

```
FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - ImportError: cannot import name 'context' from 'tensorflow.python' (/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/tensorflow/python/__init__.py)
```

```
_________________________ TempSeedTest.test_tensorflow _________________________
[gw1] linux -- Python 3.10.13 /opt/hostedtoolcache/Python/3.10.13/x64/bin/python

self = <tests.test_py_utils.TempSeedTest testMethod=test_tensorflow>

    @require_tf
    def test_tensorflow(self):
        import tensorflow as tf
        from tensorflow.keras import layers

        model = layers.Dense(2)

        def gen_random_output():
            x = tf.random.uniform((1, 3))
            return model(x).numpy()

>       with temp_seed(42, set_tensorflow=True):

tests/test_py_utils.py:155:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/contextlib.py:135: in __enter__
    return next(self.gen)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

seed = 42, set_pytorch = False, set_tensorflow = True

    @contextmanager
    def temp_seed(seed: int, set_pytorch=False, set_tensorflow=False):
        """Temporarily set the random seed. This works for python numpy, pytorch and tensorflow."""
        np_state = np.random.get_state()
        np.random.seed(seed)

        if set_pytorch and config.TORCH_AVAILABLE:
            import torch

            torch_state = torch.random.get_rng_state()
            torch.random.manual_seed(seed)

            if torch.cuda.is_available():
                torch_cuda_states = torch.cuda.get_rng_state_all()
                torch.cuda.manual_seed_all(seed)

        if set_tensorflow and config.TF_AVAILABLE:
            import tensorflow as tf

>           from tensorflow.python import context as tfpycontext
E           ImportError: cannot import name 'context' from 'tensorflow.python' (/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/tensorflow/python/__init__.py)

/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/datasets/utils/py_utils.py:257: ImportError
```
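The failing line reaches into TensorFlow's private `tensorflow.python` namespace, whose re-exports TF 2.14.0 reorganized. A hedged sketch of a version-tolerant import follows; this is one possible workaround, not necessarily a fix that was merged (the thread above only shows the temporary pin):

```python
# Sketch: fall back to the module's long-standing home under the eager
# package when the old re-export is gone. Illustrative, not the merged fix.
try:
    from tensorflow.python import context as tfpycontext  # TF < 2.14
except ImportError:
    from tensorflow.python.eager import context as tfpycontext  # newer layout

tf_context = tfpycontext.context()  # the global eager Context singleton
```

Pinning buys time precisely because workarounds like this depend on private internals that can move again in any release.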
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6263/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6263/timeline
closed
false
6,263
null
2023-09-27T08:36:40Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
1,914,895,459
https://api.github.com/repos/huggingface/datasets/issues/6262
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6262/events
[]
null
2023-09-28T15:39:16Z
[]
https://github.com/huggingface/datasets/pull/6262
MEMBER
null
false
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008220 / 0.011353 (-0.003133) | 0.005560 / 0.011008 (-0.005448) | 0.100147 / 0.038508 (0.061639) | 0.070106 / 0.023109 (0.046996) | 0.411906 / 0.275898 (0.136008) | 0.432825 / 0.323480 (0.109345) | 0.004795 / 0.007986 (-0.003190) | 0.004094 / 0.004328 (-0.000235) | 0.075719 / 0.004250 (0.071468) | 0.067426 / 0.037052 (0.030374) | 0.428531 / 0.258489 (0.170042) | 0.437114 / 0.293841 (0.143273) | 0.045603 / 0.128546 (-0.082943) | 0.013333 / 0.075646 (-0.062313) | 0.353137 / 0.419271 (-0.066134) | 0.067902 / 0.043533 (0.024369) | 0.396633 / 0.255139 (0.141494) | 0.399185 / 0.283200 (0.115985) | 0.036377 / 0.141683 (-0.105306) | 1.624249 / 1.452155 (0.172094) | 1.792575 / 1.492716 (0.299859) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.315847 / 0.018006 (0.297840) | 0.595009 / 0.000490 (0.594519) | 0.018876 / 0.000200 (0.018676) | 0.000613 / 0.000054 (0.000558) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029886 / 0.037411 (-0.007526) | 0.085765 / 0.014526 (0.071239) | 0.108680 / 0.176557 (-0.067877) | 0.174588 / 0.737135 (-0.562548) | 0.104494 / 0.296338 (-0.191844) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.594429 / 0.215209 (0.379220) | 5.912352 / 2.077655 (3.834698) | 2.408501 / 1.504120 (0.904381) | 2.050914 / 1.541195 (0.509720) | 2.199349 / 1.468490 
(0.730859) | 0.813797 / 4.584777 (-3.770980) | 5.169577 / 3.745712 (1.423864) | 4.653951 / 5.269862 (-0.615911) | 2.805423 / 4.565676 (-1.760253) | 0.092278 / 0.424275 (-0.331997) | 0.007394 / 0.007607 (-0.000213) | 0.684029 / 0.226044 (0.457985) | 6.964260 / 2.268929 (4.695331) | 3.108408 / 55.444624 (-52.336217) | 2.470907 / 6.876477 (-4.405569) | 2.460153 / 2.142072 (0.318081) | 0.986445 / 4.805227 (-3.818782) | 0.213069 / 6.500664 (-6.287596) | 0.074061 / 0.075469 (-0.001408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.590732 / 1.841788 (-0.251056) | 23.736918 / 8.074308 (15.662609) | 21.223910 / 10.191392 (11.032518) | 0.236173 / 0.680424 (-0.444251) | 0.030056 / 0.534201 (-0.504145) | 0.489461 / 0.579283 (-0.089822) | 0.607582 / 0.434364 (0.173218) | 0.539889 / 0.540337 (-0.000449) | 0.817942 / 1.386936 (-0.568994) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008042 / 0.011353 (-0.003311) | 0.004836 / 0.011008 (-0.006173) | 0.075434 / 0.038508 (0.036926) | 0.080818 / 0.023109 (0.057709) | 0.474797 / 0.275898 (0.198899) | 0.526168 / 0.323480 (0.202689) | 0.006463 / 0.007986 (-0.001522) | 0.004031 / 0.004328 (-0.000297) | 0.074141 / 0.004250 (0.069891) | 0.068265 / 0.037052 (0.031212) | 0.562550 / 0.258489 (0.304061) | 0.544820 / 0.293841 (0.250979) | 0.047263 / 0.128546 (-0.081283) | 0.014113 / 0.075646 (-0.061534) | 0.086061 / 0.419271 (-0.333210) | 0.062475 / 0.043533 (0.018942) | 0.479912 / 0.255139 (0.224773) | 0.494784 / 0.283200 (0.211584) | 0.035847 / 0.141683 (-0.105836) | 1.726452 / 1.452155 (0.274297) | 1.770113 / 1.492716 (0.277396) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286713 / 0.018006 (0.268707) | 0.609704 / 0.000490 (0.609214) | 0.009342 / 0.000200 (0.009143) | 0.000134 / 0.000054 (0.000080) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035137 / 0.037411 (-0.002275) | 0.099331 / 0.014526 (0.084805) | 0.108971 / 0.176557 (-0.067586) | 0.170952 / 0.737135 (-0.566183) | 0.111736 / 0.296338 (-0.184603) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.617434 / 0.215209 (0.402225) | 6.204351 / 2.077655 (4.126697) | 2.854347 / 1.504120 (1.350227) | 2.557424 / 1.541195 (1.016229) | 2.638173 / 1.468490 (1.169683) | 0.854234 / 4.584777 (-3.730543) | 5.383288 / 3.745712 (1.637576) | 4.698098 / 5.269862 (-0.571763) | 2.903860 / 4.565676 (-1.661817) | 0.094689 / 0.424275 (-0.329586) | 0.007892 / 0.007607 (0.000285) | 0.729420 / 0.226044 (0.503376) | 7.356691 / 2.268929 (5.087763) | 3.708039 / 55.444624 (-51.736585) | 2.979734 / 6.876477 (-3.896743) | 2.978983 / 2.142072 (0.836911) | 1.040554 / 4.805227 (-3.764673) | 0.211246 / 6.500664 (-6.289418) | 0.079880 / 0.075469 (0.004411) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.676057 / 1.841788 (-0.165731) | 23.428443 / 8.074308 (15.354135) | 21.016293 / 10.191392 (10.824901) | 0.260927 / 0.680424 (-0.419497) | 0.030689 / 0.534201 (-0.503512) | 0.495652 / 0.579283 (-0.083632) | 0.622976 / 0.434364 (0.188612) | 0.561175 / 0.540337 (0.020837) | 0.786733 / 1.386936 (-0.600203) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fb621b9630a69643255d25f192fdb011935122b1 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005942 / 0.011353 (-0.005410) | 0.003706 / 0.011008 (-0.007302) | 0.081002 / 0.038508 (0.042493) | 0.056854 / 0.023109 (0.033745) | 0.358668 / 0.275898 (0.082770) | 0.369718 / 0.323480 (0.046238) | 0.005202 / 0.007986 (-0.002784) | 0.002841 / 0.004328 (-0.001487) | 0.062976 / 0.004250 (0.058726) | 0.051308 / 0.037052 (0.014255) | 0.373636 / 0.258489 (0.115147) | 0.390480 / 0.293841 (0.096639) | 0.027480 / 0.128546 (-0.101067) | 0.007960 / 0.075646 (-0.067686) | 0.262719 / 0.419271 (-0.156552) | 0.046488 / 0.043533 (0.002955) | 0.347299 / 0.255139 (0.092160) | 0.393448 / 0.283200 (0.110249) | 0.019445 / 0.141683 (-0.122238) | 1.431314 / 1.452155 (-0.020841) | 1.495578 / 1.492716 (0.002862) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223724 / 0.018006 (0.205718) | 0.416929 / 0.000490 (0.416440) | 0.005253 / 0.000200 (0.005053) | 0.000217 / 0.000054 (0.000163) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023571 / 0.037411 (-0.013841) | 0.073503 / 0.014526 (0.058978) | 0.081366 / 0.176557 (-0.095190) | 0.142716 / 0.737135 (-0.594420) | 0.082612 / 0.296338 (-0.213727) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407319 / 0.215209 (0.192109) | 4.141404 / 2.077655 (2.063749) | 1.910842 / 1.504120 (0.406722) | 1.731694 / 1.541195 (0.190499) | 1.805228 / 1.468490 (0.336738) | 0.497109 / 4.584777 (-4.087668) | 3.107624 / 3.745712 (-0.638088) | 2.890687 / 5.269862 (-2.379174) | 1.795913 / 4.565676 (-2.769763) | 0.057099 / 0.424275 (-0.367176) | 0.006414 / 0.007607 (-0.001194) | 0.482127 / 0.226044 (0.256083) | 4.835158 / 2.268929 (2.566229) | 2.368909 / 55.444624 (-53.075715) | 2.001608 / 6.876477 (-4.874868) | 2.004492 / 2.142072 (-0.137580) | 0.579910 / 4.805227 (-4.225317) | 0.123541 / 6.500664 (-6.377123) | 0.059651 / 0.075469 (-0.015818) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.242364 / 1.841788 (-0.599424) | 16.982676 / 8.074308 (8.908368) | 13.718885 / 10.191392 (3.527493) | 0.132759 / 0.680424 (-0.547665) | 0.017012 / 0.534201 (-0.517189) | 0.333447 / 0.579283 (-0.245836) | 0.360149 / 0.434364 (-0.074215) | 0.385526 / 0.540337 (-0.154811) | 0.536915 / 
1.386936 (-0.850021) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005946 / 0.011353 (-0.005407) | 0.003442 / 0.011008 (-0.007566) | 0.062595 / 0.038508 (0.024087) | 0.058699 / 0.023109 (0.035590) | 0.442626 / 0.275898 (0.166728) | 0.473773 / 0.323480 (0.150293) | 0.004622 / 0.007986 (-0.003364) | 0.002812 / 0.004328 (-0.001516) | 0.064099 / 0.004250 (0.059849) | 0.046784 / 0.037052 (0.009731) | 0.466049 / 0.258489 (0.207560) | 0.487912 / 0.293841 (0.194071) | 0.028372 / 0.128546 (-0.100174) | 0.007992 / 0.075646 (-0.067654) | 0.068151 / 0.419271 (-0.351120) | 0.041010 / 0.043533 (-0.002523) | 0.442331 / 0.255139 (0.187192) | 0.469686 / 0.283200 (0.186487) | 0.019694 / 0.141683 (-0.121989) | 1.467928 / 1.452155 (0.015774) | 1.525635 / 1.492716 (0.032918) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204459 / 0.018006 (0.186453) | 0.407766 / 0.000490 (0.407276) | 0.003898 / 0.000200 (0.003698) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025909 / 0.037411 (-0.011503) | 0.080341 / 0.014526 (0.065816) | 0.088231 / 0.176557 (-0.088325) | 0.144056 / 0.737135 (-0.593079) | 0.089769 / 0.296338 (-0.206569) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.462876 / 0.215209 (0.247667) | 4.625983 / 2.077655 (2.548329) | 2.580079 / 1.504120 (1.075959) | 2.402792 / 1.541195 (0.861597) | 2.424982 / 1.468490 (0.956491) | 
0.503654 / 4.584777 (-4.081123) | 3.178995 / 3.745712 (-0.566717) | 2.956126 / 5.269862 (-2.313735) | 1.847837 / 4.565676 (-2.717840) | 0.057964 / 0.424275 (-0.366311) | 0.006405 / 0.007607 (-0.001202) | 0.536036 / 0.226044 (0.309992) | 5.374416 / 2.268929 (3.105487) | 3.036440 / 55.444624 (-52.408184) | 2.682054 / 6.876477 (-4.194422) | 2.683462 / 2.142072 (0.541390) | 0.592751 / 4.805227 (-4.212477) | 0.124313 / 6.500664 (-6.376351) | 0.061127 / 0.075469 (-0.014342) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.383539 / 1.841788 (-0.458249) | 17.766221 / 8.074308 (9.691913) | 15.306600 / 10.191392 (5.115208) | 0.145035 / 0.680424 (-0.535389) | 0.018078 / 0.534201 (-0.516123) | 0.330102 / 0.579283 (-0.249181) | 0.375380 / 0.434364 (-0.058984) | 0.388531 / 0.540337 (-0.151807) | 0.548720 / 1.386936 (-0.838216) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0082342ac792a05f4a615e4985d1c791e155115a \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006757 / 0.011353 (-0.004596) | 0.004110 / 0.011008 (-0.006898) | 0.084727 / 0.038508 (0.046219) | 0.074328 / 0.023109 (0.051219) | 0.310467 / 0.275898 (0.034569) | 0.343209 / 0.323480 (0.019729) | 0.004228 / 0.007986 (-0.003757) | 0.003400 / 0.004328 (-0.000929) | 0.065546 / 0.004250 (0.061296) | 0.063057 / 0.037052 (0.026005) | 0.315023 / 0.258489 (0.056534) | 0.356395 / 0.293841 (0.062554) | 0.031959 / 0.128546 (-0.096588) | 0.008577 / 0.075646 (-0.067069) | 0.289075 / 0.419271 (-0.130196) | 0.055011 / 0.043533 (0.011478) | 0.308861 / 0.255139 (0.053722) | 0.328691 / 0.283200 (0.045491) | 0.027037 / 0.141683 (-0.114646) | 1.464314 / 1.452155 (0.012159) | 1.549644 / 1.492716 (0.056927) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238330 / 0.018006 (0.220324) | 0.451570 / 0.000490 (0.451080) | 0.010873 / 0.000200 (0.010673) | 
0.000341 / 0.000054 (0.000286) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029909 / 0.037411 (-0.007503) | 0.085222 / 0.014526 (0.070696) | 0.100180 / 0.176557 (-0.076377) | 0.154842 / 0.737135 (-0.582293) | 0.099253 / 0.296338 (-0.197086) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401603 / 0.215209 (0.186394) | 4.009781 / 2.077655 (1.932126) | 2.021807 / 1.504120 (0.517687) | 1.861017 / 1.541195 (0.319822) | 2.009072 / 1.468490 (0.540582) | 0.483798 / 4.584777 (-4.100979) | 3.580394 / 3.745712 (-0.165318) | 3.464587 / 5.269862 (-1.805275) | 2.018400 / 4.565676 (-2.547276) | 0.057134 / 0.424275 (-0.367141) | 0.007303 / 0.007607 (-0.000304) | 0.473627 / 0.226044 (0.247582) | 4.722634 / 2.268929 (2.453706) | 2.490884 / 55.444624 (-52.953741) | 2.121568 / 6.876477 (-4.754909) | 2.200699 / 2.142072 (0.058626) | 0.576728 / 4.805227 (-4.228499) | 0.135633 / 6.500664 (-6.365032) | 0.061625 / 0.075469 (-0.013844) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.250545 / 1.841788 (-0.591243) | 19.167642 / 8.074308 (11.093334) | 14.189891 / 10.191392 (3.998499) | 0.164552 / 0.680424 (-0.515872) | 0.018215 / 0.534201 (-0.515986) | 0.389962 / 0.579283 (-0.189321) | 0.413972 / 0.434364 (-0.020392) | 0.460253 / 0.540337 (-0.080085) | 0.647897 / 1.386936 (-0.739039) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006714 / 0.011353 (-0.004639) | 0.004081 / 0.011008 (-0.006927) | 0.065627 / 0.038508 (0.027119) | 0.077644 / 0.023109 (0.054535) | 0.409950 / 0.275898 (0.134052) | 0.442940 / 0.323480 (0.119460) | 0.005523 / 0.007986 (-0.002463) | 0.003366 / 0.004328 (-0.000962) | 0.065425 / 0.004250 (0.061174) | 0.056222 / 0.037052 (0.019169) | 0.429928 / 0.258489 (0.171439) | 0.457136 / 0.293841 (0.163296) | 0.032356 / 0.128546 (-0.096190) | 0.008676 / 0.075646 (-0.066970) | 0.071785 / 0.419271 (-0.347486) | 0.048458 / 0.043533 (0.004925) | 0.408003 / 0.255139 (0.152864) | 0.433529 / 0.283200 (0.150330) | 0.023232 / 0.141683 (-0.118450) | 1.483640 / 1.452155 (0.031485) | 1.552425 / 1.492716 (0.059709) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282347 / 0.018006 (0.264341) | 0.448742 / 0.000490 (0.448253) | 0.039590 / 0.000200 (0.039390) | 0.000407 / 0.000054 (0.000353) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032516 / 0.037411 (-0.004896) | 0.095269 / 0.014526 (0.080744) | 0.106363 / 0.176557 (-0.070193) | 0.157945 / 0.737135 (-0.579191) | 0.106783 / 0.296338 (-0.189556) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436334 / 0.215209 (0.221125) | 4.348147 / 2.077655 (2.270492) | 2.326830 / 1.504120 (0.822710) | 2.162586 / 1.541195 (0.621391) | 2.257769 / 1.468490 (0.789279) | 0.491677 / 4.584777 (-4.093099) | 3.707385 / 3.745712 (-0.038328) | 3.567147 / 5.269862 (-1.702715) | 2.099451 / 4.565676 (-2.466226) | 0.058486 / 0.424275 (-0.365789) | 0.007324 / 0.007607 (-0.000283) | 0.510962 / 0.226044 (0.284917) | 5.106550 / 2.268929 (2.837622) | 2.785723 / 55.444624 (-52.658901) | 2.452928 / 6.876477 (-4.423548) | 2.545034 / 2.142072 (0.402961) | 0.611124 / 4.805227 (-4.194103) | 0.133503 / 6.500664 (-6.367161) | 0.061118 / 0.075469 (-0.014351) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.386640 / 1.841788 (-0.455148) | 20.485670 / 8.074308 (12.411362) | 15.332223 / 10.191392 (5.140831) | 0.164070 / 0.680424 (-0.516354) | 0.019962 / 0.534201 (-0.514239) | 0.394217 / 0.579283 (-0.185066) | 0.428442 / 0.434364 (-0.005922) | 0.473784 / 0.540337 (-0.066553) | 0.665141 / 1.386936 (-0.721795) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c722eb75a6cc56eac530c44a17ff679ca805aa89 \"CML watermark\")\n", "The CI errors seem unrelated to this PR but I think they need further investigation in another PR.\r\n```\r\nFAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files - KeyError: 'url'\r\n```", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008766 / 0.011353 (-0.002587) | 0.005289 / 0.011008 (-0.005720) | 0.097220 / 0.038508 (0.058712) | 0.072246 / 0.023109 (0.049137) | 0.369359 / 0.275898 (0.093461) | 0.422571 / 0.323480 (0.099091) | 0.004941 / 0.007986 (-0.003044) | 0.006103 / 0.004328 (0.001774) | 0.075828 / 0.004250 (0.071578) | 0.065795 / 0.037052 (0.028743) | 0.412835 / 0.258489 (0.154346) | 0.430062 / 0.293841 (0.136221) | 0.045806 / 0.128546 (-0.082741) | 0.013760 / 0.075646 (-0.061887) | 0.351542 / 0.419271 (-0.067729) | 0.064836 / 0.043533 (0.021304) | 0.370162 / 0.255139 (0.115023) | 0.434949 / 0.283200 (0.151749) | 0.039198 / 0.141683 (-0.102485) | 1.670940 / 1.452155 (0.218785) | 1.809677 / 1.492716 (0.316961) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.295104 / 0.018006 (0.277097) | 0.594584 / 0.000490 (0.594095) | 0.010923 / 0.000200 (0.010723) | 0.000479 / 0.000054 (0.000425) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029174 / 0.037411 (-0.008237) | 0.094637 / 0.014526 (0.080111) | 0.102948 / 0.176557 (-0.073608) | 0.171048 / 0.737135 (-0.566087) | 0.111465 / 0.296338 (-0.184873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | 
shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.582017 / 0.215209 (0.366808) | 5.727008 / 2.077655 (3.649354) | 2.563211 / 1.504120 (1.059091) | 2.308912 / 1.541195 (0.767717) | 2.301258 / 1.468490 (0.832768) | 0.819594 / 4.584777 (-3.765183) | 5.177536 / 3.745712 (1.431824) | 4.473602 / 5.269862 (-0.796260) | 2.743819 / 4.565676 (-1.821857) | 0.090052 / 0.424275 (-0.334223) | 0.007903 / 0.007607 (0.000295) | 0.679142 / 0.226044 (0.453097) | 6.887891 / 2.268929 (4.618962) | 3.337926 / 55.444624 (-52.106699) | 2.659228 / 6.876477 (-4.217249) | 2.641289 / 2.142072 (0.499216) | 0.974829 / 4.805227 (-3.830398) | 0.205775 / 6.500664 (-6.294890) | 0.075268 / 0.075469 (-0.000201) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.500562 / 1.841788 (-0.341226) | 22.688483 / 8.074308 (14.614175) | 19.634878 / 10.191392 (9.443486) | 0.227409 / 0.680424 (-0.453015) | 0.029794 / 0.534201 (-0.504407) | 0.475204 / 0.579283 (-0.104079) | 0.579379 / 0.434364 (0.145016) | 0.541244 / 0.540337 (0.000907) | 0.739187 / 1.386936 (-0.647749) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008641 / 0.011353 (-0.002712) | 0.006139 / 0.011008 (-0.004870) | 0.075048 / 0.038508 (0.036540) | 0.074070 / 0.023109 (0.050961) | 0.508288 / 0.275898 (0.232390) | 0.539770 / 0.323480 (0.216290) | 0.006092 / 0.007986 (-0.001894) | 0.003748 / 0.004328 (-0.000581) | 0.077945 / 0.004250 (0.073695) | 0.056989 / 0.037052 (0.019936) | 0.526889 / 0.258489 (0.268400) | 0.560862 / 0.293841 (0.267021) | 0.046507 / 0.128546 (-0.082040) | 0.013249 / 0.075646 (-0.062397) | 0.088363 / 0.419271 (-0.330908) | 0.058776 / 0.043533 (0.015243) | 0.495869 / 0.255139 (0.240730) | 0.538615 / 0.283200 (0.255415) | 0.034055 / 0.141683 (-0.107628) | 1.658713 / 1.452155 (0.206558) | 1.736599 / 1.492716 (0.243883) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | 
get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288355 / 0.018006 (0.270349) | 0.571481 / 0.000490 (0.570991) | 0.006765 / 0.000200 (0.006565) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031836 / 0.037411 (-0.005575) | 0.101312 / 0.014526 (0.086786) | 0.111433 / 0.176557 (-0.065124) | 0.169599 / 0.737135 (-0.567536) | 0.114595 / 0.296338 (-0.181743) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.645258 / 0.215209 (0.430049) | 6.446653 / 2.077655 (4.368998) | 2.983498 / 1.504120 (1.479379) | 2.573820 / 1.541195 (1.032625) | 2.624286 / 1.468490 (1.155796) | 0.815997 / 4.584777 (-3.768780) | 5.140248 / 3.745712 (1.394536) | 4.636915 / 5.269862 (-0.632947) | 2.866313 / 4.565676 (-1.699364) | 0.096643 / 0.424275 (-0.327633) | 0.008452 / 0.007607 (0.000845) | 0.765837 / 0.226044 (0.539793) | 7.622897 / 2.268929 (5.353968) | 3.796247 / 55.444624 (-51.648378) | 3.019349 / 6.876477 (-3.857128) | 3.034187 / 2.142072 (0.892115) | 1.001682 / 4.805227 (-3.803546) | 0.211841 / 6.500664 (-6.288823) | 0.073351 / 0.075469 (-0.002119) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.740254 / 1.841788 (-0.101534) | 23.465619 / 8.074308 (15.391311) | 21.651670 / 10.191392 (11.460278) | 0.226129 / 0.680424 (-0.454294) | 0.029611 / 0.534201 (-0.504590) | 0.441140 / 0.579283 (-0.138143) | 0.605591 / 0.434364 (0.171227) | 0.552427 / 0.540337 (0.012090) | 0.771975 / 1.386936 (-0.614961) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef5751522c424c758df0647ff9a449b8b0404b6a \"CML watermark\")\n", "> The CI errors seem unrelated to this PR but I think they need further investigation in another PR.\r\n> \r\n> ```\r\n> FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files - KeyError: 'url'\r\n> ```\r\n\r\nWe need to wait for `huggingface_hub`'s next release to fix this (see https://github.com/huggingface/huggingface_hub/pull/1675; 409 error is currently ignored, hence the `KeyError`)\r\n\r\nAlso, we should be able to fix `test_push_dataset_dict_to_hub_overwrite_files` by inserting `gc.collect()` (to drop the \"reference\" to an Arrow file) between the `load_dataset` calls to avoid the `PermissionError` (also reported in https://github.com/huggingface/datasets/issues/3139)\r\n\r\n(Indeed, this can be addressed in subsequent PRs.)\r\n\r\n", "<details>\n<summary>Show 
benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008988 / 0.011353 (-0.002365) | 0.005270 / 0.011008 (-0.005738) | 0.114577 / 0.038508 (0.076068) | 0.091630 / 0.023109 (0.068521) | 0.409217 / 0.275898 (0.133319) | 0.440903 / 0.323480 (0.117424) | 0.005226 / 0.007986 (-0.002760) | 0.004289 / 0.004328 (-0.000040) | 0.082246 / 0.004250 (0.077995) | 0.084926 / 0.037052 (0.047873) | 0.407822 / 0.258489 (0.149333) | 0.440891 / 0.293841 (0.147051) | 0.052225 / 0.128546 (-0.076321) | 0.014218 / 0.075646 (-0.061429) | 0.436994 / 0.419271 (0.017722) | 0.066433 / 0.043533 (0.022901) | 0.413909 / 0.255139 (0.158770) | 0.425729 / 0.283200 (0.142530) | 0.039576 / 0.141683 (-0.102107) | 1.905604 / 1.452155 (0.453449) | 1.907032 / 1.492716 (0.414315) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313662 / 0.018006 (0.295655) | 0.614541 / 0.000490 (0.614051) | 0.015631 / 0.000200 (0.015431) | 0.000507 / 0.000054 (0.000453) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029049 / 0.037411 (-0.008362) | 0.094626 / 0.014526 (0.080100) | 0.104718 / 0.176557 (-0.071838) | 0.187346 / 0.737135 (-0.549790) | 0.108001 / 0.296338 (-0.188337) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.578997 / 0.215209 (0.363788) | 5.815546 / 2.077655 (3.737892) | 2.411301 / 1.504120 (0.907181) | 2.110088 / 1.541195 (0.568893) | 2.147839 / 1.468490 (0.679349) | 0.861285 / 
4.584777 (-3.723492) | 5.264245 / 3.745712 (1.518533) | 4.695786 / 5.269862 (-0.574076) | 2.867522 / 4.565676 (-1.698154) | 0.096523 / 0.424275 (-0.327752) | 0.008777 / 0.007607 (0.001170) | 0.716316 / 0.226044 (0.490272) | 7.257574 / 2.268929 (4.988645) | 3.141502 / 55.444624 (-52.303123) | 2.480604 / 6.876477 (-4.395872) | 2.530031 / 2.142072 (0.387958) | 1.054274 / 4.805227 (-3.750953) | 0.210781 / 6.500664 (-6.289883) | 0.073837 / 0.075469 (-0.001632) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.607689 / 1.841788 (-0.234099) | 23.856780 / 8.074308 (15.782472) | 19.507196 / 10.191392 (9.315804) | 0.232712 / 0.680424 (-0.447712) | 0.027037 / 0.534201 (-0.507164) | 0.466613 / 0.579283 (-0.112670) | 0.571139 / 0.434364 (0.136775) | 0.543109 / 0.540337 (0.002771) | 0.785558 / 1.386936 (-0.601378) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008104 / 0.011353 (-0.003249) | 0.004923 / 0.011008 (-0.006086) | 0.075093 / 0.038508 (0.036585) | 0.075218 / 0.023109 (0.052109) | 0.476615 / 0.275898 (0.200717) | 0.506984 / 0.323480 (0.183504) | 0.006371 / 0.007986 (-0.001614) | 0.004818 / 0.004328 (0.000489) | 0.075634 / 0.004250 (0.071383) | 0.059513 / 0.037052 (0.022461) | 0.523763 / 0.258489 (0.265274) | 0.531858 / 0.293841 (0.238017) | 0.048168 / 0.128546 (-0.080379) | 0.014110 / 0.075646 (-0.061537) | 0.086052 / 0.419271 (-0.333219) | 0.058369 / 0.043533 (0.014836) | 0.475537 / 0.255139 (0.220398) | 0.509429 / 0.283200 (0.226229) | 0.033924 / 0.141683 (-0.107758) | 1.657490 / 1.452155 (0.205336) | 1.762544 / 1.492716 (0.269828) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.263863 / 0.018006 (0.245857) | 0.584468 / 0.000490 (0.583978) | 0.007063 / 0.000200 (0.006863) | 0.000181 / 0.000054 (0.000126) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| 
new / old (diff) | 0.032229 / 0.037411 (-0.005183) | 0.096750 / 0.014526 (0.082224) | 0.117798 / 0.176557 (-0.058758) | 0.173376 / 0.737135 (-0.563760) | 0.117241 / 0.296338 (-0.179098) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.701935 / 0.215209 (0.486726) | 6.544655 / 2.077655 (4.467001) | 3.055531 / 1.504120 (1.551411) | 2.896339 / 1.541195 (1.355144) | 3.013157 / 1.468490 (1.544667) | 0.852989 / 4.584777 (-3.731788) | 5.399355 / 3.745712 (1.653643) | 5.119811 / 5.269862 (-0.150051) | 3.167269 / 4.565676 (-1.398407) | 0.096962 / 0.424275 (-0.327313) | 0.008843 / 0.007607 (0.001236) | 0.776170 / 0.226044 (0.550125) | 7.735093 / 2.268929 (5.466164) | 3.792629 / 55.444624 (-51.651996) | 3.249911 / 6.876477 (-3.626565) | 3.235590 / 2.142072 (1.093517) | 1.046426 / 4.805227 (-3.758801) | 0.239854 / 6.500664 (-6.260810) | 0.100648 / 0.075469 (0.025179) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.774488 / 1.841788 (-0.067300) | 25.646958 / 8.074308 (17.572650) | 23.181577 / 10.191392 (12.990185) | 0.231948 / 0.680424 (-0.448476) | 0.030147 / 0.534201 (-0.504054) | 0.464161 / 0.579283 (-0.115122) | 0.598980 / 0.434364 (0.164616) | 0.571156 / 0.540337 (0.030819) | 0.833221 / 1.386936 (-0.553715) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ad876e8908188dcd56759a35c4da182bf121535a \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006010 / 0.011353 (-0.005343) | 0.003662 / 0.011008 (-0.007346) | 0.079971 / 0.038508 (0.041463) | 0.066790 / 0.023109 (0.043681) | 0.311387 / 0.275898 (0.035489) | 0.346781 / 0.323480 (0.023301) | 0.003500 / 0.007986 (-0.004485) | 0.002831 / 0.004328 (-0.001498) | 0.063238 / 0.004250 (0.058988) | 0.056163 / 0.037052 (0.019110) | 0.317456 / 0.258489 (0.058967) | 0.356106 / 0.293841 (0.062265) | 0.027358 / 0.128546 (-0.101188) | 0.007906 / 0.075646 (-0.067741) | 0.261779 / 0.419271 (-0.157492) | 0.046385 / 0.043533 (0.002852) | 0.312587 / 0.255139 (0.057448) | 0.339513 / 0.283200 (0.056314) | 0.021474 / 0.141683 (-0.120209) | 1.418637 / 1.452155 (-0.033518) | 1.510257 / 1.492716 (0.017540) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211761 / 0.018006 (0.193755) | 0.424387 / 0.000490 (0.423898) | 0.002579 / 0.000200 (0.002379) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024038 / 0.037411 (-0.013374) | 0.072524 / 0.014526 (0.057998) | 0.083443 / 0.176557 (-0.093113) | 0.144835 / 0.737135 (-0.592300) | 0.084754 / 0.296338 (-0.211585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.392423 / 0.215209 (0.177214) | 3.927220 / 2.077655 (1.849565) | 1.877853 / 1.504120 (0.373733) | 1.699275 / 1.541195 (0.158081) | 1.793144 / 1.468490 (0.324654) | 0.503809 / 4.584777 (-4.080968) | 3.052569 / 3.745712 (-0.693143) | 2.907432 / 5.269862 (-2.362429) | 1.811220 / 4.565676 (-2.754457) | 0.057249 / 0.424275 (-0.367026) | 0.006433 / 0.007607 (-0.001174) | 0.463257 / 0.226044 (0.237213) | 4.631038 / 2.268929 (2.362109) | 2.315870 / 55.444624 (-53.128754) | 2.000476 / 6.876477 (-4.876001) | 2.043581 / 2.142072 (-0.098492) | 0.588911 / 4.805227 (-4.216317) | 0.125370 / 6.500664 (-6.375295) | 0.061721 / 0.075469 (-0.013748) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244486 / 1.841788 (-0.597301) | 17.862422 / 8.074308 (9.788114) | 13.890205 / 10.191392 (3.698813) | 0.145467 / 0.680424 (-0.534957) | 0.016856 / 0.534201 (-0.517345) | 0.329357 / 0.579283 (-0.249926) | 0.367550 / 0.434364 (-0.066814) | 0.377541 / 0.540337 (-0.162796) | 0.534087 / 1.386936 (-0.852849) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006030 / 0.011353 (-0.005323) | 0.003650 / 0.011008 (-0.007359) | 0.063300 / 0.038508 (0.024792) | 0.058877 / 0.023109 (0.035767) | 0.454662 / 0.275898 (0.178764) | 0.489362 / 0.323480 (0.165882) | 0.004856 / 0.007986 (-0.003130) | 0.002909 / 0.004328 (-0.001420) | 0.063356 / 0.004250 (0.059105) | 0.047867 / 0.037052 (0.010814) | 0.465461 / 0.258489 (0.206972) | 0.506684 / 0.293841 (0.212843) | 0.028599 / 0.128546 (-0.099947) | 0.008076 / 0.075646 (-0.067570) | 0.068695 / 0.419271 (-0.350576) | 0.041487 / 0.043533 (-0.002045) | 0.448676 / 0.255139 (0.193537) | 0.471206 / 0.283200 (0.188007) | 0.020401 / 0.141683 (-0.121282) | 1.461181 / 1.452155 (0.009026) | 1.517079 / 1.492716 (0.024363) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222827 / 0.018006 (0.204821) | 0.425074 / 0.000490 (0.424585) | 0.004153 / 0.000200 (0.003953) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026980 / 0.037411 (-0.010431) | 0.080786 / 0.014526 (0.066260) | 0.092040 / 0.176557 (-0.084517) | 0.146082 / 0.737135 (-0.591053) | 0.092739 / 0.296338 (-0.203600) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.461663 / 0.215209 (0.246454) | 4.604828 / 2.077655 (2.527173) | 2.566926 / 1.504120 (1.062806) | 2.394419 / 1.541195 (0.853224) | 2.458375 / 1.468490 (0.989885) | 0.505140 / 4.584777 
(-4.079637) | 3.155916 / 3.745712 (-0.589796) | 3.014474 / 5.269862 (-2.255388) | 1.900296 / 4.565676 (-2.665380) | 0.058063 / 0.424275 (-0.366212) | 0.006409 / 0.007607 (-0.001198) | 0.541165 / 0.226044 (0.315120) | 5.410700 / 2.268929 (3.141772) | 3.010239 / 55.444624 (-52.434386) | 2.668103 / 6.876477 (-4.208373) | 2.730418 / 2.142072 (0.588346) | 0.603471 / 4.805227 (-4.201756) | 0.129852 / 6.500664 (-6.370812) | 0.061507 / 0.075469 (-0.013962) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.355272 / 1.841788 (-0.486516) | 18.170088 / 8.074308 (10.095780) | 15.583855 / 10.191392 (5.392463) | 0.146246 / 0.680424 (-0.534178) | 0.018093 / 0.534201 (-0.516108) | 0.331695 / 0.579283 (-0.247588) | 0.380845 / 0.434364 (-0.053519) | 0.388564 / 0.540337 (-0.151774) | 0.551465 / 1.386936 (-0.835471) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#afc3c2b034481a3502f5476186a110cf8613a248 \"CML watermark\")\n" ]
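For illustration, a minimal sketch of the `gc.collect()` workaround mentioned earlier in this comment thread; the repo ID is a hypothetical placeholder, and the exact test being fixed may differ:

```python
import gc

from datasets import load_dataset

ds = load_dataset("user/repo")  # hypothetical repo ID
del ds
# Drop the reference to the memory-mapped Arrow file before reloading,
# avoiding the PermissionError when the cache file is overwritten.
gc.collect()
ds = load_dataset("user/repo")
```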
Fix CI 404 errors
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6262/reactions" }
PR_kwDODunzps5bTh6H
{ "diff_url": "https://github.com/huggingface/datasets/pull/6262.diff", "html_url": "https://github.com/huggingface/datasets/pull/6262", "merged_at": "2023-09-28T15:30:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/6262.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6262" }
2023-09-27T07:40:18Z
https://api.github.com/repos/huggingface/datasets/issues/6262/comments
Currently our CI usually raises 404 errors when trying to delete temporary repositories. See, e.g.: https://github.com/huggingface/datasets/actions/runs/6314980985/job/17146507884 ``` FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files_with_max_shard_size - huggingface_hub.utils._errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-6512fb99-4a52c561752ece3d77eb6d57;2b61cae4-613d-4a73-bbb1-2faf9e32b02d) Repository Not Found for url: https://hub-ci.huggingface.co/api/repos/delete. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated. FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_custom_features_audio - huggingface_hub.utils._errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-6512fbb2-0333dd666d42f0e173c2bb68;dfdc4271-b49b-4008-8c49-f05cf7c1d53d) Repository Not Found for url: https://hub-ci.huggingface.co/api/repos/delete. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated. FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_custom_splits - huggingface_hub.utils._errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-6512fbca-167690694f39770a5b3a444e;baeaa905-0a57-4585-ac97-9aaae12dd47d) Repository Not Found for url: https://hub-ci.huggingface.co/api/repos/delete. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated. ``` I think this can be caused by collisions in temporary repository IDs because we create them in multiprocessing: ```python with temporary_repo(f"{CI_HUB_USER}/test-{int(time.time() * 10e3)}") as ds_name: ``` This can also be caused when there is another issue that does not allow the creation of the repository, thus making it impossible to delete it. This PR tries to fix this issue by increasing the precision of the number on the repository ID: `10e6` instead of `10e3`. Additionally, this PR catches RepositoryNotFoundError.
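As a rough illustration of the two changes described above (the `temporary_repo` fixture name and `CI_HUB_USER` come from the snippet in the description; the cleanup logic is an assumed sketch, not the exact diff):

```python
import time
from contextlib import contextmanager

from huggingface_hub import delete_repo
from huggingface_hub.utils import RepositoryNotFoundError

CI_HUB_USER = "..."  # placeholder for the CI Hub user


@contextmanager
def temporary_repo(repo_id=None):
    # 10e6 instead of 10e3: higher-precision timestamps make ID collisions
    # between temporary repos created in multiprocessing far less likely.
    repo_id = repo_id or f"{CI_HUB_USER}/test-{int(time.time() * 10e6)}"
    try:
        yield repo_id
    finally:
        try:
            delete_repo(repo_id, repo_type="dataset")
        except RepositoryNotFoundError:
            # The repo was never created (or is already gone),
            # so there is nothing to delete.
            pass
```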
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6262/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6262/timeline
closed
false
6,262
null
2023-09-28T15:30:40Z
null
true
1,913,813,178
https://api.github.com/repos/huggingface/datasets/issues/6261
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6261/events
[]
null
2023-10-05T10:23:23Z
[]
https://github.com/huggingface/datasets/issues/6261
NONE
completed
null
null
[ "I believe is due to the fact that doesn't work with .tgz files.", "`JourneyDB/JourneyDB` is a gated dataset, so this error means you are not authenticated to access it, either by using an invalid token or by not agreeing to the terms in the dialog on the dataset page.\r\n\r\n> I believe is due to the fact that doesn't work with .tgz files.\r\n\r\nIndeed, the dataset's data files structure is not supported natively by `datasets`. To load it, one option is to clone the repo (or download it with `huggingface_hub.snapshot_download`) and use `Dataset.from_generator` to process the files.", "> JourneyDB/JourneyDB is a gated dataset, so this error means you are not authenticated to access it, either by using an invalid token or by not agreeing to the terms in the dialog on the dataset page.´\r\n\r\nI did authentication with:\r\n\r\n```\r\nfrom huggingface_hub import notebook_login\r\nnotebook_login()\r\n```\r\n\r\nIsn't that the correct way to do it?\r\n\r\n> Indeed, the dataset's data files structure is not supported natively by datasets. To load it, one option is to clone the repo (or download it with huggingface_hub.snapshot_download) and use Dataset.from_generator to process the files.\r\n\r\nGreat suggestion I will give it a try.", "Have you accepted the terms in the dialog [here](https://huggingface.co/datasets/JourneyDB/JourneyDB)?\r\n\r\nIIRC Kaggle preinstalls an outdated `datasets` version, so it's also a good idea to update it before importing `datasets` (and do the same for `huggingface_hub`)", "Sorry for the late reply. Yes, I did. Thanks for the tip!" ]
Can't load a dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6261/reactions" }
I_kwDODunzps5yEni6
null
2023-09-26T15:46:25Z
https://api.github.com/repos/huggingface/datasets/issues/6261/comments
### Describe the bug Can't seem to load the JourneyDB dataset. It throws the following error: ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) Cell In[15], line 2 1 # If the dataset is gated/private, make sure you have run huggingface-cli login ----> 2 dataset = load_dataset("JourneyDB/JourneyDB", data_files="data", use_auth_token=True) File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1664, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1661 ignore_verifications = ignore_verifications or save_infos 1663 # Create a dataset builder -> 1664 builder_instance = load_dataset_builder( 1665 path=path, 1666 name=name, 1667 data_dir=data_dir, 1668 data_files=data_files, 1669 cache_dir=cache_dir, 1670 features=features, 1671 download_config=download_config, 1672 download_mode=download_mode, 1673 revision=revision, 1674 use_auth_token=use_auth_token, 1675 **config_kwargs, 1676 ) 1678 # Return iterable dataset in case of streaming 1679 if streaming: File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1490, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1488 download_config = download_config.copy() if download_config else DownloadConfig() 1489 download_config.use_auth_token = use_auth_token -> 1490 dataset_module = dataset_module_factory( 1491 path, 1492 revision=revision, 1493 download_config=download_config, 1494 download_mode=download_mode, 1495 data_dir=data_dir, 1496 data_files=data_files, 1497 ) 1499 # Get dataset builder class from the processing script 1500 builder_cls = import_main_class(dataset_module.module_path) File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1238, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1236 raise ConnectionError(f"Couln't reach the Hugging Face Hub for dataset '{path}': {e1}") from None 1237 if isinstance(e1, FileNotFoundError): -> 1238 raise FileNotFoundError( 1239 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. " 1240 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" 1241 ) from None 1242 raise e1 from None 1243 else: FileNotFoundError: Couldn't find a dataset script at /kaggle/working/JourneyDB/JourneyDB/JourneyDB.py or any data file in the same directory. 
Couldn't find 'JourneyDB/JourneyDB' on the Hugging Face Hub either: FileNotFoundError: Unable to find data in dataset repository JourneyDB/JourneyDB with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'] ``` ### Steps to reproduce the bug 1) ``` from huggingface_hub import notebook_login notebook_login() ``` 2) ``` !pip install -q datasets from datasets import load_dataset ``` 3) `dataset = load_dataset("JourneyDB/JourneyDB", data_files="data", use_auth_token=True)` ### Expected behavior Load the dataset ### Environment info Notebook
{ "avatar_url": "https://avatars.githubusercontent.com/u/37955817?v=4", "events_url": "https://api.github.com/users/joaopedrosdmm/events{/privacy}", "followers_url": "https://api.github.com/users/joaopedrosdmm/followers", "following_url": "https://api.github.com/users/joaopedrosdmm/following{/other_user}", "gists_url": "https://api.github.com/users/joaopedrosdmm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/joaopedrosdmm", "id": 37955817, "login": "joaopedrosdmm", "node_id": "MDQ6VXNlcjM3OTU1ODE3", "organizations_url": "https://api.github.com/users/joaopedrosdmm/orgs", "received_events_url": "https://api.github.com/users/joaopedrosdmm/received_events", "repos_url": "https://api.github.com/users/joaopedrosdmm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/joaopedrosdmm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joaopedrosdmm/subscriptions", "type": "User", "url": "https://api.github.com/users/joaopedrosdmm" }
https://api.github.com/repos/huggingface/datasets/issues/6261/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6261/timeline
closed
false
6,261
null
2023-10-05T10:23:22Z
null
false
1,912,593,466
https://api.github.com/repos/huggingface/datasets/issues/6260
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6260/events
[]
null
2023-09-28T18:23:36Z
[]
https://github.com/huggingface/datasets/issues/6260
NONE
completed
null
null
[ "Hi! Unfortunately, the current behavior is to delete the downloaded data when this error happens. So, I've opened a PR that removes the problematic import to avoid losing data due to `apache_beam` not being installed (we host the preprocessed version of `natual_questions` on the HF GCS, so requiring `apache_beam` in that case doesn't make sense)", "Thanks for your reply. I met another question that I set `export HF_DATASETS_CACHE=/data/lxy/.cache` , but each time I run load_datasets, the datasets module still looking for NQ in the wrong default cache dir '/home/lxy/.cache' 。How to avoid this incorrect behavior. I am sure HF_DATASETS_CACHE was set correctly since I use echo & to check it.\r\n![image](https://github.com/huggingface/datasets/assets/88258534/e7029f27-b9f9-496c-8948-6234ef695646)\r\nby the way I delete the file in '/home/lxy/.cache' since I found there has some kb size file seems useless.", "You need to set this variable before the `datasets` import. Then, you can use `import datasets; datasets.config.HF_DATASETS_CACHE` to verify the cache location." ]
REUSE_DATASET_IF_EXISTS doesn't work
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6260/reactions" }
I_kwDODunzps5x_9w6
null
2023-09-26T03:02:16Z
https://api.github.com/repos/huggingface/datasets/issues/6260/comments
### Describe the bug I use the following code to download the natural_questions dataset. Even though I have completely downloaded it, the next time I run this code, a new download procedure starts and overwrites the original /data/lxy/NQ config=datasets.DownloadConfig(resume_download=True,max_retries=100,cache_dir=r'/data/lxy/NQ',download_desc='NQ') data=datasets.load_dataset('natural_questions',cache_dir=r'/data/lxy/NQ',download_config=config,download_mode=DownloadMode.REUSE_DATASET_IF_EXISTS) --- Since I don't have apache_beam installed, it throws an exception. After I pip install apache_beam, the download restarts. ![image](https://github.com/huggingface/datasets/assets/88258534/f28ce7fe-29ea-4348-b87f-e69182a8bd41) ### Steps to reproduce the bug Run these two lines of code: config=datasets.DownloadConfig(resume_download=True,max_retries=100,cache_dir=r'/data/lxy/NQ',download_desc='NQ') data=datasets.load_dataset('natural_questions',cache_dir=r'/data/lxy/NQ',download_config=config,download_mode=DownloadMode.REUSE_DATASET_IF_EXISTS) ### Expected behavior Download behavior should correctly follow DownloadMode ### Environment info - `datasets` version: 2.14.4 - Platform: Linux-3.10.0-1160.88.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.17 - Huggingface_hub version: 0.16.4 - PyArrow version: 11.0.0 - Pandas version: 2.0.3
{ "avatar_url": "https://avatars.githubusercontent.com/u/88258534?v=4", "events_url": "https://api.github.com/users/rangehow/events{/privacy}", "followers_url": "https://api.github.com/users/rangehow/followers", "following_url": "https://api.github.com/users/rangehow/following{/other_user}", "gists_url": "https://api.github.com/users/rangehow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rangehow", "id": 88258534, "login": "rangehow", "node_id": "MDQ6VXNlcjg4MjU4NTM0", "organizations_url": "https://api.github.com/users/rangehow/orgs", "received_events_url": "https://api.github.com/users/rangehow/received_events", "repos_url": "https://api.github.com/users/rangehow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rangehow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rangehow/subscriptions", "type": "User", "url": "https://api.github.com/users/rangehow" }
https://api.github.com/repos/huggingface/datasets/issues/6260/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6260/timeline
closed
false
6,260
null
2023-09-28T18:23:36Z
null
false
1,911,965,758
https://api.github.com/repos/huggingface/datasets/issues/6259
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6259/events
[]
null
2024-03-15T15:22:04Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
https://github.com/huggingface/datasets/issues/6259
NONE
completed
null
null
[ "Thanks for reporting this issue! We should be able to avoid this by making our `glob` patterns more precise. In the meantime, you can load the dataset by directly assigning splits to the data files: \r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"parquet\", data_files={\"train\": \"testing123/train/output_train.parquet\", \"validation\": \"testing123/val/output_val.parquet\"})\r\n```" ]
Duplicated Rows When Loading Parquet Files from Root Directory with Subdirectories
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6259/reactions" }
I_kwDODunzps5x9kg-
null
2023-09-25T17:20:54Z
https://api.github.com/repos/huggingface/datasets/issues/6259/comments
### Describe the bug When parquet files are saved in "train" and "val" subdirectories under a root directory, and datasets are then loaded using `load_dataset("parquet", data_dir="root_directory")`, the resulting dataset has duplicated rows for both the training and validation sets. ### Steps to reproduce the bug 1. Create a root directory, e.g., "testing123". 2. Under "testing123", create two subdirectories: "train" and "val". 3. Create and save a parquet file with 3 unique rows in the "train" subdirectory. 4. Create and save a parquet file with 4 unique rows in the "val" subdirectory. 5. Load the datasets from the root directory using `load_dataset("parquet", data_dir="testing123")` 6. Iterate through the datasets and print the rows Here's a collab reproducing these steps: https://colab.research.google.com/drive/11NEdImnQ3OqJlwKSHRMhr7jCBesNdLY4?usp=sharing ### Expected behavior - Training set should contain 3 unique rows. - Validation set should contain 4 unique rows. ### Environment info - `datasets` version: 2.14.5 - Platform: Linux-5.15.120+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.17.2 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
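For reference, a compact script reproducing the steps above (the column name `x` and the row values are arbitrary assumptions; the file names match the workaround given in the comment):

```python
import os

import pandas as pd
from datasets import load_dataset

# Steps 1-4: root directory with "train" and "val" subdirectories.
os.makedirs("testing123/train", exist_ok=True)
os.makedirs("testing123/val", exist_ok=True)
pd.DataFrame({"x": [1, 2, 3]}).to_parquet("testing123/train/output_train.parquet")
pd.DataFrame({"x": [4, 5, 6, 7]}).to_parquet("testing123/val/output_val.parquet")

# Step 5: load from the root directory.
ds = load_dataset("parquet", data_dir="testing123")

# Step 6: with the overly broad glob patterns, both splits show duplicated rows.
print(ds)
```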
{ "avatar_url": "https://avatars.githubusercontent.com/u/141304309?v=4", "events_url": "https://api.github.com/users/MF-FOOM/events{/privacy}", "followers_url": "https://api.github.com/users/MF-FOOM/followers", "following_url": "https://api.github.com/users/MF-FOOM/following{/other_user}", "gists_url": "https://api.github.com/users/MF-FOOM/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MF-FOOM", "id": 141304309, "login": "MF-FOOM", "node_id": "U_kgDOCGwh9Q", "organizations_url": "https://api.github.com/users/MF-FOOM/orgs", "received_events_url": "https://api.github.com/users/MF-FOOM/received_events", "repos_url": "https://api.github.com/users/MF-FOOM/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MF-FOOM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MF-FOOM/subscriptions", "type": "User", "url": "https://api.github.com/users/MF-FOOM" }
https://api.github.com/repos/huggingface/datasets/issues/6259/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6259/timeline
closed
false
6,259
null
2024-03-15T15:22:04Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
false
1,911,445,373
https://api.github.com/repos/huggingface/datasets/issues/6258
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6258/events
[]
null
2023-09-26T14:55:35Z
[]
https://github.com/huggingface/datasets/pull/6258
CONTRIBUTOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006131 / 0.011353 (-0.005222) | 0.003682 / 0.011008 (-0.007327) | 0.081108 / 0.038508 (0.042600) | 0.061580 / 0.023109 (0.038471) | 0.395880 / 0.275898 (0.119982) | 0.427429 / 0.323480 (0.103949) | 0.003570 / 0.007986 (-0.004416) | 0.003874 / 0.004328 (-0.000455) | 0.063322 / 0.004250 (0.059072) | 0.049742 / 0.037052 (0.012690) | 0.396547 / 0.258489 (0.138058) | 0.434759 / 0.293841 (0.140918) | 0.028137 / 0.128546 (-0.100409) | 0.008103 / 0.075646 (-0.067544) | 0.262504 / 0.419271 (-0.156767) | 0.045944 / 0.043533 (0.002411) | 0.397659 / 0.255139 (0.142520) | 0.416479 / 0.283200 (0.133280) | 0.022870 / 0.141683 (-0.118813) | 1.478280 / 1.452155 (0.026126) | 1.543748 / 1.492716 (0.051031) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228851 / 0.018006 (0.210845) | 0.432845 / 0.000490 (0.432355) | 0.005922 / 0.000200 (0.005722) | 0.000227 / 0.000054 (0.000172) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025545 / 0.037411 (-0.011867) | 0.073506 / 0.014526 (0.058980) | 0.087622 / 0.176557 (-0.088935) | 0.145455 / 0.737135 (-0.591680) | 0.085236 / 0.296338 (-0.211102) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433083 / 0.215209 (0.217874) | 4.323121 / 2.077655 (2.245466) | 
2.297947 / 1.504120 (0.793827) | 2.126405 / 1.541195 (0.585211) | 2.201635 / 1.468490 (0.733145) | 0.509902 / 4.584777 (-4.074875) | 3.116877 / 3.745712 (-0.628835) | 2.892949 / 5.269862 (-2.376912) | 1.866833 / 4.565676 (-2.698844) | 0.058087 / 0.424275 (-0.366189) | 0.006464 / 0.007607 (-0.001143) | 0.503594 / 0.226044 (0.277550) | 5.027634 / 2.268929 (2.758705) | 2.718030 / 55.444624 (-52.726595) | 2.373876 / 6.876477 (-4.502600) | 2.515496 / 2.142072 (0.373423) | 0.602648 / 4.805227 (-4.202579) | 0.126119 / 6.500664 (-6.374545) | 0.060623 / 0.075469 (-0.014846) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.236429 / 1.841788 (-0.605359) | 17.760532 / 8.074308 (9.686224) | 13.970093 / 10.191392 (3.778701) | 0.145455 / 0.680424 (-0.534969) | 0.017110 / 0.534201 (-0.517091) | 0.329649 / 0.579283 (-0.249634) | 0.366942 / 0.434364 (-0.067421) | 0.384418 / 0.540337 (-0.155920) | 0.552330 / 1.386936 (-0.834606) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006302 / 0.011353 (-0.005051) | 0.003677 / 0.011008 (-0.007331) | 0.062836 / 0.038508 (0.024328) | 0.063317 / 0.023109 (0.040207) | 0.449970 / 0.275898 (0.174072) | 0.480903 / 0.323480 (0.157423) | 0.005013 / 0.007986 (-0.002972) | 0.002934 / 0.004328 (-0.001394) | 0.062975 / 0.004250 (0.058724) | 0.051285 / 0.037052 (0.014233) | 0.448417 / 0.258489 (0.189928) | 0.486022 / 0.293841 (0.192181) | 0.029215 / 0.128546 (-0.099332) | 0.008189 / 0.075646 (-0.067457) | 0.068203 / 0.419271 (-0.351068) | 0.041942 / 0.043533 (-0.001591) | 0.445749 / 0.255139 (0.190610) | 0.465442 / 0.283200 (0.182243) | 0.020681 / 0.141683 (-0.121002) | 1.500704 / 1.452155 (0.048549) | 1.550511 / 1.492716 (0.057795) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224922 / 0.018006 (0.206915) | 0.419714 / 0.000490 (0.419224) | 0.003804 / 0.000200 (0.003604) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026924 / 0.037411 (-0.010487) | 0.082400 / 0.014526 (0.067874) | 0.092193 / 0.176557 (-0.084363) | 0.147045 / 0.737135 (-0.590090) | 0.093173 / 0.296338 (-0.203166) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.462510 / 0.215209 (0.247300) | 4.635249 / 2.077655 (2.557594) | 2.627127 / 1.504120 (1.123007) | 2.442879 / 1.541195 (0.901684) | 2.502456 / 1.468490 (1.033966) | 0.506607 / 4.584777 (-4.078170) | 3.127348 / 3.745712 (-0.618364) | 2.901818 / 5.269862 (-2.368044) | 1.906876 / 4.565676 (-2.658801) | 0.058025 / 0.424275 (-0.366250) | 0.006442 / 0.007607 (-0.001165) | 0.534438 / 0.226044 (0.308394) | 5.352481 / 2.268929 (3.083553) | 3.058068 / 55.444624 (-52.386556) | 2.697310 / 6.876477 (-4.179167) | 2.873141 / 2.142072 (0.731069) | 0.594517 / 4.805227 (-4.210710) | 0.125369 / 6.500664 (-6.375295) | 0.061411 / 0.075469 (-0.014058) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369549 / 1.841788 (-0.472238) | 17.933507 / 8.074308 (9.859199) | 14.890107 / 10.191392 (4.698715) | 0.154398 / 0.680424 (-0.526026) | 0.018021 / 0.534201 (-0.516180) | 0.335163 / 0.579283 (-0.244120) | 0.350396 / 0.434364 (-0.083968) | 0.397694 / 0.540337 (-0.142643) | 0.554853 / 1.386936 (-0.832083) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f56fd9d6c877ffa6fb44fb832c13b61227c9cc5b \"CML watermark\")\n" ]
[DOCS] Fix typo: Elasticsearch
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6258/reactions" }
PR_kwDODunzps5bHxHl
{ "diff_url": "https://github.com/huggingface/datasets/pull/6258.diff", "html_url": "https://github.com/huggingface/datasets/pull/6258", "merged_at": "2023-09-26T13:36:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/6258.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6258" }
2023-09-25T12:50:59Z
https://api.github.com/repos/huggingface/datasets/issues/6258/comments
Not ElasticSearch :)
{ "avatar_url": "https://avatars.githubusercontent.com/u/32779855?v=4", "events_url": "https://api.github.com/users/leemthompo/events{/privacy}", "followers_url": "https://api.github.com/users/leemthompo/followers", "following_url": "https://api.github.com/users/leemthompo/following{/other_user}", "gists_url": "https://api.github.com/users/leemthompo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leemthompo", "id": 32779855, "login": "leemthompo", "node_id": "MDQ6VXNlcjMyNzc5ODU1", "organizations_url": "https://api.github.com/users/leemthompo/orgs", "received_events_url": "https://api.github.com/users/leemthompo/received_events", "repos_url": "https://api.github.com/users/leemthompo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leemthompo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leemthompo/subscriptions", "type": "User", "url": "https://api.github.com/users/leemthompo" }
https://api.github.com/repos/huggingface/datasets/issues/6258/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6258/timeline
closed
false
6,258
null
2023-09-26T13:36:40Z
null
true
1,910,741,044
https://api.github.com/repos/huggingface/datasets/issues/6257
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6257/events
[]
null
2023-10-16T13:30:49Z
[]
https://github.com/huggingface/datasets/issues/6257
NONE
completed
null
null
[ "how is your dataset structured? (file types, how many commits and files are you trying to push, etc)", "I succeeded in uploading it after several attempts with an hour gap between each attempt (inconvenient but worked). The final dataset is [here](https://huggingface.co/datasets/yuvalkirstain/pickapic_v2), code and context to the dataset can be found [here](https://github.com/yuvalkirstain/PickScore/).\r\nI can close the issue if this behavior is intended, as most users probably do not need to upload large-scale datasets.", "We could fix this by creating a single commit for all the (Parquet) shards in `push_to_hub` instead of one commit per shard, as we currently do. \r\n\r\n@Wauplin Any updates on the 2-step commit process suggested by you that we need to implement this?", "> Any updates on the 2-step commit process suggested by you that we need to implement this?\r\n\r\nRe-prioritizing this, sorry. Will let you know but probably can be done this week." ]
HfHubHTTPError - exceeded our hourly quotas for action: commit
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6257/reactions" }
I_kwDODunzps5x45g0
null
2023-09-25T06:11:43Z
https://api.github.com/repos/huggingface/datasets/issues/6257/comments
### Describe the bug

I am trying to upload a very large dataset of images and get the following error:

```
File /fsx-multigen/yuvalkirstain/miniconda/envs/pickapic/lib/python3.10/site-packages/huggingface_hub/hf_api.py:2712, in HfApi.create_commit(self, repo_id, operations, commit_message, commit_description, token, repo_type, revision, create_pr, num_threads, parent_commit, run_as_future)
   2710 try:
   2711     commit_resp = get_session().post(url=commit_url, headers=headers, data=data, params=params)
-> 2712     hf_raise_for_status(commit_resp, endpoint_name="commit")
   2713 except RepositoryNotFoundError as e:
   2714     e.append_to_message(_CREATE_COMMIT_NO_REPO_ERROR_MESSAGE)

File /fsx-multigen/yuvalkirstain/miniconda/envs/pickapic/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py:301, in hf_raise_for_status(response, endpoint_name)
    297     raise BadRequestError(message, response=response) from e
    299 # Convert `HTTPError` into a `HfHubHTTPError` to display request information
    300 # as well (request id and/or server error message)
--> 301 raise HfHubHTTPError(str(e), response=response) from e

HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/api/datasets/yuvalkirstain/pickapic_v2/commit/main (Request ID: Root=1-65112399-12d63f7d7f28bfa40a36a0fd)

You have exceeded our hourly quotas for action: commit. We invite you to retry later.
```

This makes it much less convenient to host large datasets on the HF Hub.

### Steps to reproduce the bug

Upload a very large dataset of images.

### Expected behavior

The upload should complete without hitting the hourly commit quota.

### Environment info

- `datasets` version: 2.13.1
- Platform: Linux-5.15.0-1033-aws-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 1.5.3
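A user-side workaround while the quota applies is to catch the 429 and retry after a delay, which matches the reporter's manual "hour gap between attempts" approach. A minimal sketch, assuming `ds` is the already-built `Dataset` and that an hour is enough for the quota window to reset (both assumptions, not confirmed behavior):

```python
import time

from huggingface_hub.utils import HfHubHTTPError


def push_with_retry(ds, repo_id: str, max_retries: int = 5, wait_s: int = 3600):
    """Retry ds.push_to_hub() whenever the hourly commit quota (HTTP 429) is hit."""
    for attempt in range(max_retries):
        try:
            ds.push_to_hub(repo_id)
            return
        except HfHubHTTPError as err:
            # Only retry on rate limiting; re-raise anything else.
            if err.response is not None and err.response.status_code == 429:
                print(f"Quota exceeded (attempt {attempt + 1}), sleeping {wait_s}s")
                time.sleep(wait_s)
            else:
                raise
    raise RuntimeError("push_to_hub kept hitting the commit quota")
```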
{ "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yuvalkirstain", "id": 57996478, "login": "yuvalkirstain", "node_id": "MDQ6VXNlcjU3OTk2NDc4", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "type": "User", "url": "https://api.github.com/users/yuvalkirstain" }
https://api.github.com/repos/huggingface/datasets/issues/6257/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6257/timeline
closed
false
6,257
null
2023-10-16T13:30:48Z
null
false
1,910,275,199
https://api.github.com/repos/huggingface/datasets/issues/6256
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6256/events
[]
null
2023-09-27T13:40:45Z
[]
https://github.com/huggingface/datasets/issues/6256
NONE
null
null
null
[ "Can you share the error message?\r\n\r\nAlso, it would help if you could check whether `huggingface_hub`'s download behaves the same:\r\n```python\r\nfrom huggingface_hub import snapshot_download\r\nsnapshot_download(\"trec\", repo_type=\"dataset\", cache_dir='/path/to/my/dir)\r\n```\r\n\r\nIn the next major release, we aim to switch to `huggingface_hub` for file download/caching, but we could align the `cache_dir`'s `umask` behavior earlier than this if their solution works for your use case." ]
load_dataset() function's cache_dir does not seem to work
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6256/reactions" }
I_kwDODunzps5x3Hx_
null
2023-09-24T15:34:06Z
https://api.github.com/repos/huggingface/datasets/issues/6256/comments
### Describe the bug

`datasets` version: 2.14.5

When trying to run the following command:

```python
trec = load_dataset('trec', split='train[:1000]', cache_dir='/path/to/my/dir')
```

I keep getting an error saying the command does not have permission to the default cache directory on my MacBook Pro. It seems the `cache_dir` parameter cannot change the dataset saving directory from the default; whatever is explained in https://huggingface.co/docs/datasets/cache does not seem to work.

### Steps to reproduce the bug

Run the `load_dataset` call above with a custom `cache_dir`.

### Expected behavior

The dataset should be saved to the directory `cache_dir` points to.

### Environment info

- `datasets` version: 2.14.5
- macOS: Ventura 13.4.1 (c)
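For reference, the two documented ways to redirect the `datasets` cache; a minimal sketch with a placeholder path. If the `cache_dir` argument appears to be ignored, comparing it against the environment-variable route helps narrow the bug down:

```python
import os

# Option 1: global override -- must be set before `datasets` is imported.
os.environ["HF_DATASETS_CACHE"] = "/path/to/my/dir"

from datasets import load_dataset

# Option 2: per-call override.
trec = load_dataset("trec", split="train[:1000]", cache_dir="/path/to/my/dir")
print(trec.cache_files)  # shows where the Arrow files actually ended up
```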
{ "avatar_url": "https://avatars.githubusercontent.com/u/171831?v=4", "events_url": "https://api.github.com/users/andyzhu/events{/privacy}", "followers_url": "https://api.github.com/users/andyzhu/followers", "following_url": "https://api.github.com/users/andyzhu/following{/other_user}", "gists_url": "https://api.github.com/users/andyzhu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/andyzhu", "id": 171831, "login": "andyzhu", "node_id": "MDQ6VXNlcjE3MTgzMQ==", "organizations_url": "https://api.github.com/users/andyzhu/orgs", "received_events_url": "https://api.github.com/users/andyzhu/received_events", "repos_url": "https://api.github.com/users/andyzhu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/andyzhu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andyzhu/subscriptions", "type": "User", "url": "https://api.github.com/users/andyzhu" }
https://api.github.com/repos/huggingface/datasets/issues/6256/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6256/timeline
open
false
6,256
null
null
null
false
1,909,842,977
https://api.github.com/repos/huggingface/datasets/issues/6255
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6255/events
[]
null
2024-01-11T06:32:34Z
[]
https://github.com/huggingface/datasets/pull/6255
MEMBER
null
false
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005905 / 0.011353 (-0.005448) | 0.003623 / 0.011008 (-0.007385) | 0.079616 / 0.038508 (0.041108) | 0.059840 / 0.023109 (0.036730) | 0.392281 / 0.275898 (0.116383) | 0.434539 / 0.323480 (0.111059) | 0.004746 / 0.007986 (-0.003239) | 0.002935 / 0.004328 (-0.001394) | 0.062907 / 0.004250 (0.058657) | 0.048233 / 0.037052 (0.011181) | 0.394170 / 0.258489 (0.135681) | 0.427430 / 0.293841 (0.133589) | 0.027382 / 0.128546 (-0.101164) | 0.007890 / 0.075646 (-0.067756) | 0.259681 / 0.419271 (-0.159591) | 0.044085 / 0.043533 (0.000552) | 0.388640 / 0.255139 (0.133501) | 0.412665 / 0.283200 (0.129465) | 0.021256 / 0.141683 (-0.120427) | 1.485672 / 1.452155 (0.033518) | 1.531410 / 1.492716 (0.038694) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220346 / 0.018006 (0.202340) | 0.425329 / 0.000490 (0.424840) | 0.006224 / 0.000200 (0.006024) | 0.000208 / 0.000054 (0.000153) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024864 / 0.037411 (-0.012547) | 0.072925 / 0.014526 (0.058399) | 0.083711 / 0.176557 (-0.092845) | 0.144213 / 0.737135 (-0.592923) | 0.084201 / 0.296338 (-0.212137) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399467 / 0.215209 (0.184258) | 3.978979 / 2.077655 (1.901325) | 1.916994 / 1.504120 (0.412874) | 1.753098 / 1.541195 (0.211903) | 1.809866 / 1.468490 
(0.341376) | 0.506806 / 4.584777 (-4.077971) | 3.051044 / 3.745712 (-0.694668) | 2.857624 / 5.269862 (-2.412237) | 1.872033 / 4.565676 (-2.693644) | 0.058543 / 0.424275 (-0.365732) | 0.006569 / 0.007607 (-0.001038) | 0.472630 / 0.226044 (0.246586) | 4.724862 / 2.268929 (2.455934) | 2.413068 / 55.444624 (-53.031556) | 2.046910 / 6.876477 (-4.829567) | 2.190455 / 2.142072 (0.048383) | 0.595228 / 4.805227 (-4.210000) | 0.125942 / 6.500664 (-6.374722) | 0.059474 / 0.075469 (-0.015995) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.235927 / 1.841788 (-0.605861) | 17.367803 / 8.074308 (9.293495) | 13.550362 / 10.191392 (3.358970) | 0.131664 / 0.680424 (-0.548760) | 0.016331 / 0.534201 (-0.517870) | 0.331295 / 0.579283 (-0.247988) | 0.367641 / 0.434364 (-0.066723) | 0.382595 / 0.540337 (-0.157742) | 0.540361 / 1.386936 (-0.846575) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006120 / 0.011353 (-0.005233) | 0.003691 / 0.011008 (-0.007318) | 0.062768 / 0.038508 (0.024259) | 0.058045 / 0.023109 (0.034936) | 0.443616 / 0.275898 (0.167718) | 0.473854 / 0.323480 (0.150374) | 0.004710 / 0.007986 (-0.003275) | 0.002915 / 0.004328 (-0.001414) | 0.062922 / 0.004250 (0.058672) | 0.048557 / 0.037052 (0.011505) | 0.446136 / 0.258489 (0.187647) | 0.479235 / 0.293841 (0.185394) | 0.028704 / 0.128546 (-0.099842) | 0.008170 / 0.075646 (-0.067477) | 0.068853 / 0.419271 (-0.350419) | 0.041393 / 0.043533 (-0.002140) | 0.444683 / 0.255139 (0.189544) | 0.466607 / 0.283200 (0.183407) | 0.020890 / 0.141683 (-0.120793) | 1.473745 / 1.452155 (0.021590) | 1.498772 / 1.492716 (0.006055) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216875 / 0.018006 (0.198868) | 0.411700 / 0.000490 (0.411211) | 0.003337 / 0.000200 (0.003137) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027054 / 0.037411 (-0.010357) | 0.080617 / 0.014526 (0.066092) | 0.091052 / 0.176557 (-0.085505) | 0.144126 / 0.737135 (-0.593009) | 0.090123 / 0.296338 (-0.206216) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.461132 / 0.215209 (0.245922) | 4.598662 / 2.077655 (2.521008) | 2.539213 / 1.504120 (1.035093) | 2.362782 / 1.541195 (0.821587) | 2.428648 / 1.468490 (0.960157) | 0.506305 / 4.584777 (-4.078472) | 3.091132 / 3.745712 (-0.654581) | 2.884870 / 5.269862 (-2.384992) | 1.880806 / 4.565676 (-2.684870) | 0.058727 / 0.424275 (-0.365548) | 0.006452 / 0.007607 (-0.001155) | 0.533519 / 0.226044 (0.307474) | 5.346406 / 2.268929 (3.077478) | 2.987920 / 55.444624 (-52.456704) | 2.667591 / 6.876477 (-4.208885) | 2.848696 / 2.142072 (0.706623) | 0.601018 / 4.805227 (-4.204209) | 0.124929 / 6.500664 (-6.375735) | 0.061583 / 0.075469 (-0.013886) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.356825 / 1.841788 (-0.484962) | 17.964503 / 8.074308 (9.890195) | 14.691471 / 10.191392 (4.500079) | 0.132525 / 0.680424 (-0.547899) | 0.018061 / 0.534201 (-0.516140) | 0.335459 / 0.579283 (-0.243824) | 0.378260 / 0.434364 (-0.056104) | 0.390681 / 0.540337 (-0.149657) | 0.547030 / 1.386936 (-0.839906) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8c55213a6c5fcff9b3dacce491caa68eacebe10d \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006624 / 0.011353 (-0.004729) | 0.004039 / 0.011008 (-0.006970) | 0.085862 / 0.038508 (0.047354) | 0.077183 / 0.023109 (0.054074) | 0.319132 / 0.275898 (0.043234) | 0.350818 / 0.323480 (0.027338) | 0.004122 / 0.007986 (-0.003864) | 0.003395 / 0.004328 (-0.000934) | 0.065237 / 0.004250 (0.060987) | 0.056675 / 0.037052 (0.019623) | 0.321040 / 0.258489 (0.062551) | 0.362011 / 0.293841 (0.068170) | 0.030988 / 0.128546 (-0.097559) | 0.008623 / 0.075646 (-0.067023) | 0.289433 / 0.419271 (-0.129839) | 0.052755 / 0.043533 (0.009222) | 0.323291 / 0.255139 (0.068152) | 0.340110 / 0.283200 (0.056911) | 0.026299 / 0.141683 (-0.115383) | 1.509405 / 1.452155 (0.057250) | 1.559993 / 1.492716 (0.067277) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233285 / 0.018006 (0.215279) | 0.451633 / 0.000490 (0.451143) | 0.009954 / 0.000200 (0.009754) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029623 / 0.037411 (-0.007788) | 0.083942 / 0.014526 (0.069416) | 0.097378 / 0.176557 (-0.079178) | 0.152630 / 0.737135 (-0.584506) | 0.098379 / 0.296338 (-0.197959) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.386237 / 0.215209 (0.171028) | 3.850805 / 2.077655 (1.773150) | 1.896032 / 1.504120 (0.391912) | 1.729746 / 1.541195 (0.188551) | 1.867831 / 1.468490 (0.399341) | 0.481496 / 4.584777 (-4.103281) | 3.564432 / 3.745712 (-0.181280) | 3.336084 / 5.269862 (-1.933777) | 2.040944 / 4.565676 (-2.524732) | 0.057247 / 0.424275 (-0.367028) | 0.007275 / 0.007607 (-0.000332) | 0.464600 / 0.226044 (0.238556) | 4.648562 / 2.268929 (2.379634) | 2.394430 / 55.444624 (-53.050195) | 2.029748 / 6.876477 (-4.846728) | 2.280975 / 2.142072 (0.138902) | 0.619073 / 4.805227 (-4.186154) | 0.150504 / 6.500664 (-6.350160) | 0.061206 / 0.075469 (-0.014263) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.267309 / 1.841788 (-0.574479) | 19.062725 / 8.074308 (10.988417) | 14.192565 / 10.191392 (4.001173) | 0.162908 / 0.680424 (-0.517515) | 0.018445 / 0.534201 (-0.515756) | 0.392110 / 0.579283 (-0.187173) | 0.415340 / 0.434364 (-0.019024) | 0.456783 / 0.540337 (-0.083554) | 0.653019 / 
1.386936 (-0.733917) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006995 / 0.011353 (-0.004358) | 0.004027 / 0.011008 (-0.006981) | 0.064124 / 0.038508 (0.025616) | 0.076004 / 0.023109 (0.052895) | 0.401760 / 0.275898 (0.125862) | 0.432339 / 0.323480 (0.108859) | 0.005471 / 0.007986 (-0.002515) | 0.003335 / 0.004328 (-0.000993) | 0.064164 / 0.004250 (0.059913) | 0.058101 / 0.037052 (0.021048) | 0.401698 / 0.258489 (0.143209) | 0.436033 / 0.293841 (0.142192) | 0.032789 / 0.128546 (-0.095757) | 0.008482 / 0.075646 (-0.067165) | 0.070707 / 0.419271 (-0.348565) | 0.048287 / 0.043533 (0.004755) | 0.395501 / 0.255139 (0.140362) | 0.419385 / 0.283200 (0.136186) | 0.024043 / 0.141683 (-0.117640) | 1.503310 / 1.452155 (0.051156) | 1.562160 / 1.492716 (0.069444) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227629 / 0.018006 (0.209623) | 0.457306 / 0.000490 (0.456816) | 0.005835 / 0.000200 (0.005635) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032991 / 0.037411 (-0.004420) | 0.093265 / 0.014526 (0.078739) | 0.106595 / 0.176557 (-0.069961) | 0.158557 / 0.737135 (-0.578578) | 0.106805 / 0.296338 (-0.189533) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436573 / 0.215209 (0.221364) | 4.355777 / 2.077655 (2.278122) | 2.323151 / 1.504120 (0.819031) | 2.164101 / 1.541195 (0.622906) | 2.252808 / 1.468490 (0.784318) | 
0.494902 / 4.584777 (-4.089875) | 3.615073 / 3.745712 (-0.130639) | 3.329738 / 5.269862 (-1.940124) | 2.059137 / 4.565676 (-2.506539) | 0.058384 / 0.424275 (-0.365891) | 0.007330 / 0.007607 (-0.000277) | 0.512326 / 0.226044 (0.286281) | 5.125652 / 2.268929 (2.856724) | 2.861981 / 55.444624 (-52.582644) | 2.500172 / 6.876477 (-4.376305) | 2.715862 / 2.142072 (0.573789) | 0.597299 / 4.805227 (-4.207928) | 0.134346 / 6.500664 (-6.366318) | 0.060396 / 0.075469 (-0.015074) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.353771 / 1.841788 (-0.488017) | 19.334801 / 8.074308 (11.260493) | 14.669875 / 10.191392 (4.478483) | 0.167607 / 0.680424 (-0.512817) | 0.019839 / 0.534201 (-0.514362) | 0.395473 / 0.579283 (-0.183810) | 0.419822 / 0.434364 (-0.014542) | 0.471400 / 0.540337 (-0.068938) | 0.648297 / 1.386936 (-0.738639) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d5a112e7f1ce1635725773d911c825adca7bcfe0 \"CML watermark\")\n", "@mariosasko let me know what you think or if you have better ideas to make it faster", "Yea lazy data files resolution seems a better approach actually" ]
Parallelize builder configs creation
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6255/reactions" }
PR_kwDODunzps5bCioS
{ "diff_url": "https://github.com/huggingface/datasets/pull/6255.diff", "html_url": "https://github.com/huggingface/datasets/pull/6255", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6255.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6255" }
2023-09-23T11:56:20Z
https://api.github.com/repos/huggingface/datasets/issues/6255/comments
Speeds up builder config creation for datasets with lots of configs defined in YAML. E.g. `load_dataset("uonlp/CulturaX", "fr", revision="refs/pr/6")` goes from >1 min to ~15 sec.
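For illustration, the generic shape of such a speedup; this is a hypothetical sketch (the `build_config` helper is made up), not the PR's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor


def build_config(name: str):
    # Hypothetical per-config setup: resolving data files, instantiating a
    # BuilderConfig, etc. Placeholder body for illustration only.
    return {"name": name}


config_names = [f"config_{i}" for i in range(100)]

# Done sequentially, N configs cost N times the per-config latency; a thread
# pool overlaps the mostly I/O-bound work, which is where a >1 min -> ~15 sec
# improvement for many-config datasets can come from.
with ThreadPoolExecutor(max_workers=16) as pool:
    configs = list(pool.map(build_config, config_names))
```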
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6255/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6255/timeline
closed
false
6,255
null
2023-09-26T15:44:19Z
null
true
1,909,672,104
https://api.github.com/repos/huggingface/datasets/issues/6254
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6254/events
[]
null
2023-10-03T14:42:53Z
[]
https://github.com/huggingface/datasets/issues/6254
NONE
completed
null
null
[ "Answered on the forum: https://discuss.huggingface.co/t/dataset-from-generator-cost-much-more-time-in-vscode-debugging-mode-then-running-mode/56005/2" ]
Dataset.from_generator() costs much more time in vscode debugging mode than in running mode
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6254/reactions" }
I_kwDODunzps5x00io
null
2023-09-23T02:07:26Z
https://api.github.com/repos/huggingface/datasets/issues/6254/comments
### Describe the bug

Hey there, I'm using `Dataset.from_generator()` to convert a torch dataset to a Hugging Face `Dataset`. However, when I debug my code in VS Code, I find that it runs really slowly on `Dataset.from_generator()`, which can even take 20 times longer than running the script in the terminal.

### Steps to reproduce the bug

I wrote a simple test script:

```python
import time
from typing import Callable

from torch.utils.data import Dataset as TorchDataset
from datasets import Dataset as HFDataset


class SimpleDataset(TorchDataset):
    def __init__(self, data):
        self.data = data
        self.keys = list(data[0].keys())

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        sample = self.data[index]
        return {key: sample[key] for key in self.keys}


def TorchDataset2HuggingfaceDataset(torch_dataset: TorchDataset, cache_dir: str = None) -> HFDataset:
    """Convert a torch dataset to a Hugging Face dataset."""
    generator: Callable = lambda: (sample for sample in torch_dataset)
    return HFDataset.from_generator(generator, cache_dir=cache_dir)


if __name__ == '__main__':
    data = [
        {'id': 1, 'name': 'Alice'},
        {'id': 2, 'name': 'Bob'},
        {'id': 3, 'name': 'Charlie'},
    ]
    torch_dataset = SimpleDataset(data)
    start_time = time.time()
    huggingface_dataset = TorchDataset2HuggingfaceDataset(torch_dataset)
    end_time = time.time()
    print("time: ", end_time - start_time)
    print(huggingface_dataset)
```

### Expected behavior

This test on my machine reports a running time of 0.086 s in the terminal, while the running time in debugging mode in VS Code is 0.25 s, which I think is much longer than expected. I'd like to know whether anything is wrong in the code or whether this is just an effect of debugging. I have traced the code, and this is the function where it gets stuck:

```python
def create_config_id(
    self,
    config_kwargs: dict,
    custom_features: Optional[Features] = None,
) -> str:
    ...
    # stuck on this line
    suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
```

### Environment info

- `datasets` version: 2.12.0
- Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
- Python version: 3.11.3
- Huggingface_hub version: 0.17.2
- PyArrow version: 11.0.0
- Pandas version: 2.0.1
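To confirm where the time goes, the hashing step singled out in the trace can be timed in isolation; a minimal sketch using `Hasher` from `datasets.fingerprint` (the same hasher the quoted `create_config_id` calls) — run it once in the terminal and once under the debugger and compare:

```python
import time

from datasets.fingerprint import Hasher


def a_generator():
    yield {"id": 1, "name": "Alice"}


# Hasher.hash serializes the object (for a generator function this includes
# its code) and hashes the bytes; debuggers that instrument every executed
# frame can slow this step down disproportionately.
start = time.time()
digest = Hasher.hash(a_generator)
print(f"hash={digest} took {time.time() - start:.4f}s")
```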
{ "avatar_url": "https://avatars.githubusercontent.com/u/56437469?v=4", "events_url": "https://api.github.com/users/dontnet-wuenze/events{/privacy}", "followers_url": "https://api.github.com/users/dontnet-wuenze/followers", "following_url": "https://api.github.com/users/dontnet-wuenze/following{/other_user}", "gists_url": "https://api.github.com/users/dontnet-wuenze/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dontnet-wuenze", "id": 56437469, "login": "dontnet-wuenze", "node_id": "MDQ6VXNlcjU2NDM3NDY5", "organizations_url": "https://api.github.com/users/dontnet-wuenze/orgs", "received_events_url": "https://api.github.com/users/dontnet-wuenze/received_events", "repos_url": "https://api.github.com/users/dontnet-wuenze/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dontnet-wuenze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dontnet-wuenze/subscriptions", "type": "User", "url": "https://api.github.com/users/dontnet-wuenze" }
https://api.github.com/repos/huggingface/datasets/issues/6254/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6254/timeline
closed
false
6,254
null
2023-10-03T14:42:53Z
null
false
1,906,618,910
https://api.github.com/repos/huggingface/datasets/issues/6253
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6253/events
[]
null
2023-09-21T14:16:44Z
[]
https://github.com/huggingface/datasets/pull/6253
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006591 / 0.011353 (-0.004762) | 0.003991 / 0.011008 (-0.007017) | 0.085197 / 0.038508 (0.046689) | 0.080312 / 0.023109 (0.057202) | 0.342026 / 0.275898 (0.066128) | 0.370749 / 0.323480 (0.047269) | 0.004124 / 0.007986 (-0.003861) | 0.003413 / 0.004328 (-0.000916) | 0.064363 / 0.004250 (0.060113) | 0.055920 / 0.037052 (0.018868) | 0.340667 / 0.258489 (0.082178) | 0.380138 / 0.293841 (0.086297) | 0.031115 / 0.128546 (-0.097431) | 0.008511 / 0.075646 (-0.067135) | 0.289065 / 0.419271 (-0.130207) | 0.052266 / 0.043533 (0.008734) | 0.343808 / 0.255139 (0.088669) | 0.353578 / 0.283200 (0.070378) | 0.024006 / 0.141683 (-0.117676) | 1.490322 / 1.452155 (0.038168) | 1.591133 / 1.492716 (0.098417) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234718 / 0.018006 (0.216712) | 0.447023 / 0.000490 (0.446533) | 0.009343 / 0.000200 (0.009143) | 0.000259 / 0.000054 (0.000204) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030466 / 0.037411 (-0.006945) | 0.083367 / 0.014526 (0.068841) | 0.100532 / 0.176557 (-0.076024) | 0.158018 / 0.737135 (-0.579117) | 0.098280 / 0.296338 (-0.198059) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408501 / 0.215209 (0.193292) | 4.066937 / 2.077655 (1.989282) | 
2.034029 / 1.504120 (0.529909) | 1.842982 / 1.541195 (0.301788) | 1.987319 / 1.468490 (0.518829) | 0.492126 / 4.584777 (-4.092651) | 3.554027 / 3.745712 (-0.191685) | 3.289023 / 5.269862 (-1.980839) | 2.069796 / 4.565676 (-2.495880) | 0.057930 / 0.424275 (-0.366346) | 0.007308 / 0.007607 (-0.000299) | 0.482596 / 0.226044 (0.256552) | 4.830714 / 2.268929 (2.561785) | 2.506787 / 55.444624 (-52.937838) | 2.163498 / 6.876477 (-4.712979) | 2.389135 / 2.142072 (0.247062) | 0.597538 / 4.805227 (-4.207689) | 0.134268 / 6.500664 (-6.366396) | 0.061189 / 0.075469 (-0.014280) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.245328 / 1.841788 (-0.596460) | 19.145151 / 8.074308 (11.070843) | 14.742121 / 10.191392 (4.550729) | 0.144749 / 0.680424 (-0.535675) | 0.018433 / 0.534201 (-0.515768) | 0.391867 / 0.579283 (-0.187416) | 0.416555 / 0.434364 (-0.017809) | 0.454341 / 0.540337 (-0.085997) | 0.646833 / 1.386936 (-0.740103) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006669 / 0.011353 (-0.004684) | 0.004031 / 0.011008 (-0.006978) | 0.064347 / 0.038508 (0.025839) | 0.076857 / 0.023109 (0.053748) | 0.415864 / 0.275898 (0.139966) | 0.468615 / 0.323480 (0.145135) | 0.005383 / 0.007986 (-0.002603) | 0.003314 / 0.004328 (-0.001015) | 0.064829 / 0.004250 (0.060578) | 0.057182 / 0.037052 (0.020129) | 0.417055 / 0.258489 (0.158566) | 0.472725 / 0.293841 (0.178884) | 0.031938 / 0.128546 (-0.096608) | 0.008564 / 0.075646 (-0.067082) | 0.070649 / 0.419271 (-0.348623) | 0.047439 / 0.043533 (0.003906) | 0.409589 / 0.255139 (0.154450) | 0.433700 / 0.283200 (0.150500) | 0.024132 / 0.141683 (-0.117551) | 1.500825 / 1.452155 (0.048670) | 1.592059 / 1.492716 (0.099343) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225652 / 0.018006 (0.207646) | 0.444188 / 0.000490 (0.443698) | 0.004581 / 0.000200 (0.004381) | 0.000104 / 0.000054 (0.000050) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033272 / 0.037411 (-0.004139) | 0.096833 / 0.014526 (0.082307) | 0.107134 / 0.176557 (-0.069422) | 0.159299 / 0.737135 (-0.577836) | 0.107533 / 0.296338 (-0.188806) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429100 / 0.215209 (0.213890) | 4.281051 / 2.077655 (2.203396) | 2.318713 / 1.504120 (0.814593) | 2.165645 / 1.541195 (0.624451) | 2.250224 / 1.468490 (0.781734) | 0.495791 / 4.584777 (-4.088986) | 3.591953 / 3.745712 (-0.153760) | 3.303426 / 5.269862 (-1.966436) | 2.076861 / 4.565676 (-2.488816) | 0.058369 / 0.424275 (-0.365906) | 0.007387 / 0.007607 (-0.000220) | 0.501270 / 0.226044 (0.275225) | 5.014987 / 2.268929 (2.746059) | 2.800951 / 55.444624 (-52.643673) | 2.464316 / 6.876477 (-4.412161) | 2.685259 / 2.142072 (0.543187) | 0.584797 / 4.805227 (-4.220430) | 0.131889 / 6.500664 (-6.368775) | 0.061021 / 0.075469 (-0.014448) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.366982 / 1.841788 (-0.474806) | 19.820376 / 8.074308 (11.746068) | 14.968664 / 10.191392 (4.777272) | 0.165344 / 0.680424 (-0.515080) | 0.019956 / 0.534201 (-0.514245) | 0.395843 / 0.579283 (-0.183441) | 0.420854 / 0.434364 (-0.013510) | 0.465065 / 0.540337 (-0.075272) | 0.651531 / 1.386936 (-0.735405) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#980ca0e13300f5392cd87189d5afd5942927afc7 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005974 / 0.011353 (-0.005379) | 0.003714 / 0.011008 (-0.007294) | 0.080049 / 0.038508 (0.041541) | 0.061233 / 0.023109 (0.038124) | 0.317187 / 0.275898 (0.041289) | 0.352725 / 0.323480 (0.029245) | 0.004867 / 0.007986 (-0.003119) | 0.002953 / 0.004328 (-0.001376) | 0.063156 / 0.004250 (0.058905) | 0.046752 / 0.037052 (0.009700) | 0.320171 / 0.258489 (0.061682) | 0.367572 / 0.293841 (0.073731) | 0.027253 / 0.128546 (-0.101293) | 0.008100 / 0.075646 (-0.067546) | 0.261206 / 0.419271 (-0.158066) | 0.044581 / 0.043533 (0.001048) | 0.331169 / 0.255139 (0.076030) | 0.348719 / 0.283200 (0.065519) | 0.021397 / 0.141683 (-0.120286) | 1.528315 / 1.452155 (0.076160) | 1.533789 / 1.492716 (0.041073) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233336 / 0.018006 (0.215329) | 0.416866 / 0.000490 (0.416376) | 0.008805 / 0.000200 (0.008605) | 0.000240 / 0.000054 (0.000186) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024754 / 0.037411 (-0.012657) | 0.073311 / 0.014526 (0.058785) | 0.085419 / 0.176557 (-0.091138) | 0.146380 / 0.737135 (-0.590756) | 0.085545 / 0.296338 (-0.210793) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431426 / 0.215209 (0.216217) | 4.315899 / 2.077655 (2.238244) | 2.232492 / 1.504120 (0.728372) | 2.064174 / 1.541195 (0.522979) | 2.158982 / 1.468490 (0.690492) | 0.499375 / 4.584777 (-4.085402) | 3.093259 / 3.745712 (-0.652454) | 2.848260 / 5.269862 (-2.421601) | 1.853097 / 4.565676 (-2.712579) | 0.057143 / 0.424275 (-0.367132) | 0.006349 / 0.007607 (-0.001258) | 0.507747 / 0.226044 (0.281702) | 5.078872 / 2.268929 (2.809944) | 2.717697 / 55.444624 (-52.726927) | 2.363564 / 6.876477 (-4.512913) | 2.485756 / 2.142072 (0.343684) | 0.595888 / 4.805227 (-4.209340) | 0.127285 / 6.500664 (-6.373379) | 0.060639 / 0.075469 (-0.014830) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.219287 / 1.841788 (-0.622501) | 17.300038 / 8.074308 (9.225730) | 13.747230 / 10.191392 (3.555838) | 0.144841 / 0.680424 (-0.535583) | 0.016587 / 0.534201 (-0.517614) | 0.336891 / 0.579283 (-0.242392) | 0.376128 / 0.434364 (-0.058236) | 0.385749 / 0.540337 
(-0.154588) | 0.552218 / 1.386936 (-0.834718) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006477 / 0.011353 (-0.004876) | 0.003709 / 0.011008 (-0.007299) | 0.064708 / 0.038508 (0.026200) | 0.062627 / 0.023109 (0.039518) | 0.444721 / 0.275898 (0.168823) | 0.477825 / 0.323480 (0.154345) | 0.004890 / 0.007986 (-0.003096) | 0.002896 / 0.004328 (-0.001432) | 0.063781 / 0.004250 (0.059530) | 0.050488 / 0.037052 (0.013436) | 0.453466 / 0.258489 (0.194977) | 0.483303 / 0.293841 (0.189462) | 0.028814 / 0.128546 (-0.099732) | 0.008207 / 0.075646 (-0.067440) | 0.070140 / 0.419271 (-0.349131) | 0.041487 / 0.043533 (-0.002045) | 0.454599 / 0.255139 (0.199460) | 0.468374 / 0.283200 (0.185174) | 0.019758 / 0.141683 (-0.121925) | 1.437542 / 1.452155 (-0.014613) | 1.507965 / 1.492716 (0.015249) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223358 / 0.018006 (0.205352) | 0.413824 / 0.000490 (0.413334) | 0.004593 / 0.000200 (0.004393) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026278 / 0.037411 (-0.011134) | 0.081992 / 0.014526 (0.067466) | 0.089969 / 0.176557 (-0.086587) | 0.143668 / 0.737135 (-0.593467) | 0.091273 / 0.296338 (-0.205066) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.461198 / 0.215209 (0.245989) | 4.615398 / 2.077655 (2.537743) | 2.552291 / 1.504120 (1.048171) | 2.373789 / 1.541195 (0.832595) | 
2.431591 / 1.468490 (0.963101) | 0.507683 / 4.584777 (-4.077094) | 3.148771 / 3.745712 (-0.596941) | 2.849118 / 5.269862 (-2.420744) | 1.883001 / 4.565676 (-2.682675) | 0.059423 / 0.424275 (-0.364852) | 0.006463 / 0.007607 (-0.001144) | 0.535129 / 0.226044 (0.309085) | 5.362870 / 2.268929 (3.093941) | 3.016548 / 55.444624 (-52.428076) | 2.666205 / 6.876477 (-4.210271) | 2.821396 / 2.142072 (0.679324) | 0.606596 / 4.805227 (-4.198631) | 0.125991 / 6.500664 (-6.374673) | 0.063566 / 0.075469 (-0.011903) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.364771 / 1.841788 (-0.477017) | 18.000713 / 8.074308 (9.926404) | 14.840330 / 10.191392 (4.648937) | 0.144770 / 0.680424 (-0.535653) | 0.018060 / 0.534201 (-0.516141) | 0.334470 / 0.579283 (-0.244813) | 0.387386 / 0.434364 (-0.046978) | 0.398743 / 0.540337 (-0.141595) | 0.555437 / 1.386936 (-0.831499) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5b974c9af6b45b6ebdbbf4b3418f25506c1c0618 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006491 / 0.011353 (-0.004862) | 0.004058 / 0.011008 (-0.006950) | 0.084462 / 0.038508 (0.045954) | 0.072310 / 0.023109 (0.049201) | 0.352458 / 0.275898 (0.076560) | 0.385829 / 0.323480 (0.062350) | 0.003978 / 0.007986 (-0.004007) | 0.003455 / 0.004328 (-0.000873) | 0.064070 / 0.004250 (0.059819) | 0.055577 / 0.037052 (0.018525) | 0.361288 / 0.258489 (0.102799) | 0.400147 / 0.293841 (0.106306) | 0.030785 / 0.128546 (-0.097761) | 0.008676 / 0.075646 (-0.066971) | 0.287481 / 0.419271 (-0.131791) | 0.052643 / 0.043533 (0.009110) | 0.354670 / 0.255139 (0.099531) | 0.382322 / 0.283200 (0.099122) | 0.025657 / 0.141683 (-0.116026) | 1.486798 / 1.452155 (0.034643) | 1.588439 / 1.492716 (0.095723) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.240881 / 0.018006 (0.222875) | 0.463997 / 0.000490 (0.463507) | 
0.009688 / 0.000200 (0.009488) | 0.000601 / 0.000054 (0.000546) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029071 / 0.037411 (-0.008340) | 0.083077 / 0.014526 (0.068551) | 0.119857 / 0.176557 (-0.056699) | 0.153387 / 0.737135 (-0.583749) | 0.132162 / 0.296338 (-0.164177) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383822 / 0.215209 (0.168613) | 3.828572 / 2.077655 (1.750918) | 1.877629 / 1.504120 (0.373509) | 1.708757 / 1.541195 (0.167562) | 1.771658 / 1.468490 (0.303168) | 0.482439 / 4.584777 (-4.102338) | 3.496247 / 3.745712 (-0.249466) | 3.282055 / 5.269862 (-1.987807) | 2.053069 / 4.565676 (-2.512607) | 0.056626 / 0.424275 (-0.367649) | 0.007338 / 0.007607 (-0.000269) | 0.461257 / 0.226044 (0.235213) | 4.605326 / 2.268929 (2.336397) | 2.408365 / 55.444624 (-53.036260) | 1.986550 / 6.876477 (-4.889926) | 2.225220 / 2.142072 (0.083148) | 0.601301 / 4.805227 (-4.203927) | 0.132217 / 6.500664 (-6.368447) | 0.061217 / 0.075469 (-0.014252) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.268706 / 1.841788 (-0.573081) | 18.892026 / 8.074308 (10.817717) | 14.093892 / 10.191392 (3.902500) | 0.162483 / 0.680424 (-0.517941) | 0.018372 / 0.534201 (-0.515829) | 0.391901 / 0.579283 (-0.187382) | 0.401578 / 0.434364 (-0.032786) | 0.456741 / 0.540337 (-0.083596) | 0.646760 / 1.386936 (-0.740176) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006657 / 0.011353 (-0.004696) | 0.003981 / 0.011008 (-0.007027) | 0.066126 / 0.038508 (0.027617) | 0.072673 / 0.023109 (0.049564) | 0.409970 / 0.275898 (0.134072) | 0.430797 / 0.323480 (0.107317) | 0.005477 / 0.007986 (-0.002508) | 0.003362 / 0.004328 (-0.000966) | 0.065532 / 0.004250 (0.061282) | 0.056018 / 0.037052 (0.018966) | 0.406676 / 0.258489 (0.148187) | 0.438516 / 0.293841 (0.144675) | 0.032795 / 0.128546 (-0.095751) | 0.008580 / 0.075646 (-0.067066) | 0.072692 / 0.419271 (-0.346579) | 0.048110 / 0.043533 (0.004577) | 0.396826 / 0.255139 (0.141687) | 0.418442 / 0.283200 (0.135242) | 0.023269 / 0.141683 (-0.118414) | 1.499438 / 1.452155 (0.047283) | 1.568842 / 1.492716 (0.076126) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218729 / 0.018006 (0.200723) | 0.450771 / 0.000490 (0.450281) | 0.004996 / 0.000200 (0.004796) | 0.000086 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031484 / 0.037411 (-0.005928) | 0.092927 / 0.014526 (0.078401) | 0.107849 / 0.176557 (-0.068707) | 0.156658 / 0.737135 (-0.580478) | 0.106373 / 0.296338 (-0.189965) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434658 / 0.215209 (0.219449) | 4.336386 / 2.077655 (2.258731) | 2.322577 / 1.504120 (0.818457) | 2.149505 / 1.541195 (0.608310) | 2.201967 / 1.468490 (0.733476) | 0.496994 / 4.584777 (-4.087783) | 3.533065 / 3.745712 (-0.212647) | 3.235750 / 5.269862 (-2.034112) | 2.034854 / 4.565676 (-2.530823) | 0.058258 / 0.424275 (-0.366017) | 0.007260 / 0.007607 (-0.000347) | 0.509115 / 0.226044 (0.283071) | 5.088427 / 2.268929 (2.819499) | 2.793551 / 55.444624 (-52.651073) | 2.430588 / 6.876477 (-4.445889) | 2.625998 / 2.142072 (0.483926) | 0.611676 / 4.805227 (-4.193552) | 0.133343 / 6.500664 (-6.367321) | 0.059888 / 0.075469 (-0.015581) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.377292 / 1.841788 (-0.464496) | 19.214299 / 8.074308 (11.139991) | 14.629146 / 10.191392 (4.437754) | 0.171283 / 0.680424 (-0.509141) | 0.020348 / 0.534201 (-0.513853) | 0.397823 / 0.579283 (-0.181461) | 0.411590 / 0.434364 (-0.022774) | 0.470850 / 0.540337 (-0.069487) | 0.658667 / 1.386936 (-0.728269) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a1e1867e932f14233244fb25713f3c94c46ff50a \"CML watermark\")\n" ]
Check builder cls default config name in inspect
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6253/reactions" }
PR_kwDODunzps5a3s__
{ "diff_url": "https://github.com/huggingface/datasets/pull/6253.diff", "html_url": "https://github.com/huggingface/datasets/pull/6253", "merged_at": "2023-09-21T14:08:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/6253.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6253" }
2023-09-21T10:15:32Z
https://api.github.com/repos/huggingface/datasets/issues/6253/comments
Fix https://github.com/huggingface/datasets-server/issues/1812. That issue was caused by the following inconsistency: ```ipython In [1]: from datasets import * In [2]: inspect.get_dataset_config_names("aakanksha/udpos") Out[2]: ['default'] In [3]: load_dataset_builder("aakanksha/udpos").config.name Out[3]: 'en' ```
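A minimal sketch of how one might verify the fix from user code, assuming access to the Hub; the repo id is the one cited in the PR description, and the assertion expresses the expected post-fix behavior:

```python
from datasets import get_dataset_config_names, load_dataset_builder

repo_id = "aakanksha/udpos"  # dataset cited in the PR description
config_names = get_dataset_config_names(repo_id)
default_name = load_dataset_builder(repo_id).config.name

# After this fix, inspect reports the builder's real default config
# ('en' here) instead of a spurious 'default' entry.
assert default_name in config_names
```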
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6253/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6253/timeline
closed
false
6,253
null
2023-09-21T14:08:00Z
null
true
1,906,375,378
https://api.github.com/repos/huggingface/datasets/issues/6252
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6252/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-03-19T15:29:43Z
[]
https://github.com/huggingface/datasets/issues/6252
NONE
completed
null
{ "closed_at": null, "closed_issues": 3, "created_at": "2023-02-13T16:22:42Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }, "description": "Next major release", "due_on": null, "html_url": "https://github.com/huggingface/datasets/milestone/10", "id": 9038583, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels", "node_id": "MI_kwDODunzps4Aier3", "number": 10, "open_issues": 5, "state": "open", "title": "3.0", "updated_at": "2024-06-28T06:51:30Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/10" }
[ "Indeed, it makes sense to do this by default. \r\n\r\nIn the meantime, you can use `.with_transform` to transpose the images when accessing them:\r\n\r\n```python\r\nimport PIL.ImageOps\r\n\r\ndef exif_transpose_transform(batch):\r\n batch[\"image\"] = [PIL.ImageOps.exif_transpose(image) for image in batch[\"image\"]]\r\n return batch\r\n\r\ndataset = dataset.with_transform(exif_transpose_transform)\r\n```", "This operation sets some `Image` attributes to `None` (`.format`, `.filename`, etc.), causing our tests to fail, so I think we should wait for Datasets 3.0 to make this change. In version 3.0, storing image paths will be replaced by embedding image bytes, so there will be fewer instances where we use the `.filename` attribute." ]
exif_transpose not done to Image (PIL problem)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6252/reactions" }
I_kwDODunzps5xoPrS
null
2023-09-21T08:11:46Z
https://api.github.com/repos/huggingface/datasets/issues/6252/comments
### Feature request I noticed that some of my images loaded with PIL carry EXIF metadata that can rotate them on loading. Since datasets.features.Image uses PIL for loading, the loaded image may be rotated (width and height inverted), so for tasks such as object detection and LayoutLM this can create inconsistencies between input bboxes and input images. For now there is no option in datasets.features.Image to handle that. We need to do the following when preparing examples (for training, test or inference): ``` from PIL import Image, ImageOps pil = ImageOps.exif_transpose(pil) ``` reference: https://stackoverflow.com/a/63950647/5720150 Is it possible to apply this by default in datasets.features.Image, or to add an option to do the ImageOps.exif_transpose? Thank you ### Motivation Prevent rotated data caused by EXIF metadata that may affect object detection tasks. ### Your contribution Changing datasets.features.Image; I can help with that.
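As an interim workaround until such an option lands, a minimal sketch (assuming the split has an `image` column of type `Image`) that bakes the EXIF orientation into the dataset once with `map`, complementing the lazy `with_transform` approach suggested in the comments:

```python
from PIL import ImageOps

def fix_orientation(batch):
    # Apply the EXIF orientation tag so width/height match the pixel data.
    batch["image"] = [ImageOps.exif_transpose(img) for img in batch["image"]]
    return batch

# Rewrites the stored images once, unlike with_transform which is applied on access.
dataset = dataset.map(fix_orientation, batched=True)
```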
{ "avatar_url": "https://avatars.githubusercontent.com/u/108274349?v=4", "events_url": "https://api.github.com/users/rhajou/events{/privacy}", "followers_url": "https://api.github.com/users/rhajou/followers", "following_url": "https://api.github.com/users/rhajou/following{/other_user}", "gists_url": "https://api.github.com/users/rhajou/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rhajou", "id": 108274349, "login": "rhajou", "node_id": "U_kgDOBnQirQ", "organizations_url": "https://api.github.com/users/rhajou/orgs", "received_events_url": "https://api.github.com/users/rhajou/received_events", "repos_url": "https://api.github.com/users/rhajou/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rhajou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rhajou/subscriptions", "type": "User", "url": "https://api.github.com/users/rhajou" }
https://api.github.com/repos/huggingface/datasets/issues/6252/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6252/timeline
closed
false
6,252
null
2024-03-19T15:29:43Z
null
false
1,904,418,426
https://api.github.com/repos/huggingface/datasets/issues/6251
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6251/events
[]
null
2023-09-27T06:37:03Z
[]
https://github.com/huggingface/datasets/pull/6251
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "This function reads an entire Arrow table in one go, which is not ideal memory-wise, so I don't think we should encourage using this function, considering we want to keep RAM usage as low as possible in the streaming mode. \r\n\r\n(Note that Parquet files are compressed, meaning the loaded table can be significantly larger than the size in Parquet.)\r\n\r\nInstead, we should suggest the authors to use:\r\n```python\r\nwith open(doc_path, \"rb\") as f:\r\n parquet_file = pq.ParquetFile(f)\r\n for batch in parquet_file.iter_batches():\r\n pa_table = pa.Table.from_batches([batch])\r\n yield idx, pa_table\r\n idx += 1\r\n```", "@mariosasko I think the potential problem you evoke is independent of whether or not we support streaming mode:\r\n- if the user's script with `read_table` works in non-streaming mode, it will also work in streaming mode after this PR\r\n\r\nIn fact, what we should suggest instead is to follow the scriptless approach, so that our `parquet` packaged module is used, with all the optimizations implemented. But this approach is not possible in all cases, and some use cases need to implement a script. And if they have small Parquet files and use `read_table`, I think we should support streaming.\r\n\r\nIn summary, let me clarify the goal and the scope of this PR:\r\n- a user needs using a loading script\r\n- their files are small enough so that they use `read_table`\r\n- their loading script works in non-streaming mode\r\n- therefore, this PR allows loading their dataset in streaming mode as well", "Yes, the no-script approach with metadata configs makes the most sense.\r\n\r\n> their files are small enough so that they use read_table\r\n\r\nSome of the Parquet files in that repo are larger than 1 GB ...\r\n\r\nAlso, I'd wait for more instances of people using the `read_table` function on the Hub before merging this PR.", "@mariosasko, yes, this solution is not specifically for the \"uonlp/CulturaX\" dataset, but for other use cases as I explained above: indeed, they finally removed the use of `read_table` because their data files are too large.\r\n\r\n> Also, I'd wait for more instances of people using the `read_table` function on the Hub before merging this PR.\r\n\r\nDo you know how many datasets are currently using `read_table`?", "> Do you know how many datasets are currently using read_table?\r\n\r\nZero (based on the script that checks the script contents of the public Hub datasets). ", "I see... Thanks! :hugs: ", "@mariosasko thanks for pointing the script! 
:hugs: \r\n\r\nHowever, I have found some Hub datasets that are using `read_table`, e.g.:\r\n- https://huggingface.co/datasets/jglaser/protein_ligand_contacts\r\n- https://huggingface.co/datasets/AresEkb/prof_standards_sbert_large_mt_nlu_ru\r\n- https://huggingface.co/datasets/victorcosta/pt_legislation\r\n- https://huggingface.co/datasets/jglaser/binding_affinity\r\n- https://huggingface.co/datasets/jglaser/pdbbind_complexes\r\n- https://huggingface.co/datasets/victorcosta/ria_pt__proems_format", "I'm merging this PR as discussed in private.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008267 / 0.011353 (-0.003086) | 0.005813 / 0.011008 (-0.005195) | 0.108802 / 0.038508 (0.070294) | 0.093996 / 0.023109 (0.070886) | 0.403115 / 0.275898 (0.127217) | 0.457299 / 0.323480 (0.133819) | 0.006277 / 0.007986 (-0.001709) | 0.004701 / 0.004328 (0.000373) | 0.080700 / 0.004250 (0.076449) | 0.077906 / 0.037052 (0.040854) | 0.409972 / 0.258489 (0.151483) | 0.477707 / 0.293841 (0.183867) | 0.041816 / 0.128546 (-0.086731) | 0.011250 / 0.075646 (-0.064397) | 0.390634 / 0.419271 (-0.028637) | 0.065361 / 0.043533 (0.021828) | 0.404501 / 0.255139 (0.149362) | 0.448162 / 0.283200 (0.164962) | 0.032823 / 0.141683 (-0.108860) | 1.899892 / 1.452155 (0.447737) | 2.044561 / 1.492716 (0.551844) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241093 / 0.018006 (0.223086) | 0.482111 / 0.000490 (0.481622) | 0.005505 / 0.000200 (0.005305) | 0.000094 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034861 / 0.037411 (-0.002551) | 0.109296 / 0.014526 (0.094770) | 0.127594 / 0.176557 (-0.048962) | 0.191815 / 0.737135 (-0.545320) | 0.122630 / 0.296338 (-0.173709) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch 
numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452194 / 0.215209 (0.236985) | 4.486200 / 2.077655 (2.408545) | 2.155635 / 1.504120 (0.651515) | 2.004569 / 1.541195 (0.463374) | 2.142570 / 1.468490 (0.674080) | 0.561488 / 4.584777 (-4.023289) | 4.381102 / 3.745712 (0.635390) | 3.914920 / 5.269862 (-1.354942) | 2.474271 / 4.565676 (-2.091406) | 0.067528 / 0.424275 (-0.356747) | 0.008723 / 0.007607 (0.001116) | 0.536077 / 0.226044 (0.310033) | 5.342050 / 2.268929 (3.073122) | 2.735747 / 55.444624 (-52.708877) | 2.353938 / 6.876477 (-4.522539) | 2.442878 / 2.142072 (0.300805) | 0.685404 / 4.805227 (-4.119823) | 0.156657 / 6.500664 (-6.344007) | 0.071714 / 0.075469 (-0.003755) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.562852 / 1.841788 (-0.278935) | 24.538203 / 8.074308 (16.463895) | 16.857777 / 10.191392 (6.666385) | 0.184221 / 0.680424 (-0.496203) | 0.021688 / 0.534201 (-0.512513) | 0.470700 / 0.579283 (-0.108583) | 0.470593 / 0.434364 (0.036229) | 0.645066 / 0.540337 (0.104729) | 0.756075 / 1.386936 (-0.630861) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009486 / 0.011353 (-0.001867) | 0.004694 / 0.011008 (-0.006314) | 0.080216 / 0.038508 (0.041708) | 0.093479 / 0.023109 (0.070369) | 0.537353 / 0.275898 (0.261455) | 0.551631 / 0.323480 (0.228151) | 0.007373 / 0.007986 (-0.000613) | 0.004044 / 0.004328 (-0.000285) | 0.075301 / 0.004250 (0.071051) | 0.069408 / 0.037052 (0.032355) | 0.527962 / 0.258489 (0.269473) | 0.559423 / 0.293841 (0.265582) | 0.039351 / 0.128546 (-0.089195) | 0.010801 / 0.075646 (-0.064845) | 0.092803 / 0.419271 (-0.326468) | 0.058876 / 0.043533 (0.015343) | 0.513742 / 0.255139 (0.258603) | 0.574666 / 0.283200 (0.291466) | 0.030277 / 
0.141683 (-0.111406) | 1.884936 / 1.452155 (0.432782) | 2.008260 / 1.492716 (0.515543) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242162 / 0.018006 (0.224156) | 0.467400 / 0.000490 (0.466910) | 0.005348 / 0.000200 (0.005148) | 0.000103 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038022 / 0.037411 (0.000611) | 0.108239 / 0.014526 (0.093713) | 0.121514 / 0.176557 (-0.055042) | 0.184951 / 0.737135 (-0.552184) | 0.123138 / 0.296338 (-0.173200) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.558587 / 0.215209 (0.343377) | 5.740312 / 2.077655 (3.662657) | 3.172164 / 1.504120 (1.668044) | 2.852908 / 1.541195 (1.311713) | 2.894435 / 1.468490 (1.425945) | 0.586399 / 4.584777 (-3.998378) | 4.498342 / 3.745712 (0.752630) | 4.000569 / 5.269862 (-1.269292) | 2.610988 / 4.565676 (-1.954688) | 0.068415 / 0.424275 (-0.355860) | 0.008602 / 0.007607 (0.000994) | 0.614731 / 0.226044 (0.388686) | 6.068158 / 2.268929 (3.799229) | 3.301070 / 55.444624 (-52.143554) | 2.868034 / 6.876477 (-4.008443) | 2.959072 / 2.142072 (0.816999) | 0.684174 / 4.805227 (-4.121053) | 0.154099 / 6.500664 (-6.346565) | 0.070641 / 0.075469 (-0.004828) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.835667 / 1.841788 (-0.006120) | 24.981645 / 8.074308 (16.907337) | 17.218517 / 10.191392 (7.027125) | 0.197055 / 0.680424 (-0.483368) | 0.025465 / 0.534201 (-0.508736) | 0.523498 / 0.579283 (-0.055785) | 0.528268 / 0.434364 (0.093904) | 0.599518 / 0.540337 (0.059180) | 0.887206 / 1.386936 (-0.499730) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#dd786d3b8dc94f1ab717327e88f65879b525091d \"CML watermark\")\n" ]
Support streaming datasets with pyarrow.parquet.read_table
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6251/reactions" }
PR_kwDODunzps5awQsy
{ "diff_url": "https://github.com/huggingface/datasets/pull/6251.diff", "html_url": "https://github.com/huggingface/datasets/pull/6251", "merged_at": "2023-09-27T06:26:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/6251.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6251" }
2023-09-20T08:07:02Z
https://api.github.com/repos/huggingface/datasets/issues/6251/comments
Support streaming datasets with `pyarrow.parquet.read_table`. See: https://huggingface.co/datasets/uonlp/CulturaX/discussions/2 CC: @AndreaFrancis
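A minimal sketch of the kind of loading-script code this PR enables in streaming mode, assuming an Arrow-based builder whose Parquet files are small enough to read whole; `_generate_tables` is the standard builder hook, and the file list is hypothetical:

```python
import pyarrow.parquet as pq

def _generate_tables(self, files):
    # With this PR, read_table also works on streaming-wrapped file objects,
    # matching the pattern already used with pq.ParquetFile in the comments above.
    for idx, path in enumerate(files):
        with open(path, "rb") as f:
            yield idx, pq.read_table(f)
```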
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6251/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6251/timeline
closed
false
6,251
null
2023-09-27T06:26:24Z
null
true
1,901,390,945
https://api.github.com/repos/huggingface/datasets/issues/6247
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6247/events
[]
null
2023-09-19T18:51:49Z
[]
https://github.com/huggingface/datasets/pull/6247
CONTRIBUTOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008892 / 0.011353 (-0.002461) | 0.005140 / 0.011008 (-0.005868) | 0.110951 / 0.038508 (0.072442) | 0.086159 / 0.023109 (0.063050) | 0.391117 / 0.275898 (0.115218) | 0.440884 / 0.323480 (0.117404) | 0.006562 / 0.007986 (-0.001423) | 0.003711 / 0.004328 (-0.000618) | 0.081848 / 0.004250 (0.077598) | 0.063187 / 0.037052 (0.026135) | 0.369771 / 0.258489 (0.111282) | 0.447685 / 0.293841 (0.153844) | 0.046623 / 0.128546 (-0.081923) | 0.014024 / 0.075646 (-0.061622) | 0.418556 / 0.419271 (-0.000715) | 0.064660 / 0.043533 (0.021127) | 0.379416 / 0.255139 (0.124277) | 0.415800 / 0.283200 (0.132600) | 0.036899 / 0.141683 (-0.104784) | 1.710280 / 1.452155 (0.258125) | 1.932326 / 1.492716 (0.439610) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.311351 / 0.018006 (0.293345) | 0.621121 / 0.000490 (0.620631) | 0.013677 / 0.000200 (0.013477) | 0.000543 / 0.000054 (0.000488) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031310 / 0.037411 (-0.006102) | 0.099546 / 0.014526 (0.085020) | 0.122100 / 0.176557 (-0.054457) | 0.186477 / 0.737135 (-0.550659) | 0.116634 / 0.296338 (-0.179704) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.574639 / 0.215209 (0.359430) | 5.976678 / 2.077655 (3.899023) | 
2.535482 / 1.504120 (1.031362) | 2.248873 / 1.541195 (0.707678) | 2.361696 / 1.468490 (0.893205) | 0.866700 / 4.584777 (-3.718077) | 5.298018 / 3.745712 (1.552306) | 4.753240 / 5.269862 (-0.516622) | 3.124698 / 4.565676 (-1.440979) | 0.101852 / 0.424275 (-0.322423) | 0.009117 / 0.007607 (0.001510) | 0.723730 / 0.226044 (0.497685) | 7.172649 / 2.268929 (4.903720) | 3.400410 / 55.444624 (-52.044214) | 2.626619 / 6.876477 (-4.249857) | 2.948692 / 2.142072 (0.806620) | 0.991589 / 4.805227 (-3.813638) | 0.208902 / 6.500664 (-6.291762) | 0.076172 / 0.075469 (0.000703) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.621880 / 1.841788 (-0.219907) | 22.735673 / 8.074308 (14.661365) | 20.376990 / 10.191392 (10.185598) | 0.232219 / 0.680424 (-0.448204) | 0.028616 / 0.534201 (-0.505585) | 0.455725 / 0.579283 (-0.123558) | 0.562796 / 0.434364 (0.128432) | 0.545344 / 0.540337 (0.005007) | 0.759440 / 1.386936 (-0.627496) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009845 / 0.011353 (-0.001508) | 0.005289 / 0.011008 (-0.005719) | 0.083117 / 0.038508 (0.044609) | 0.098467 / 0.023109 (0.075357) | 0.532345 / 0.275898 (0.256447) | 0.571000 / 0.323480 (0.247520) | 0.007223 / 0.007986 (-0.000763) | 0.004442 / 0.004328 (0.000114) | 0.081710 / 0.004250 (0.077459) | 0.071132 / 0.037052 (0.034080) | 0.540093 / 0.258489 (0.281604) | 0.582244 / 0.293841 (0.288403) | 0.048509 / 0.128546 (-0.080038) | 0.013897 / 0.075646 (-0.061749) | 0.092579 / 0.419271 (-0.326692) | 0.073409 / 0.043533 (0.029876) | 0.537369 / 0.255139 (0.282230) | 0.551403 / 0.283200 (0.268203) | 0.038847 / 0.141683 (-0.102835) | 1.940848 / 1.452155 (0.488693) | 2.045597 / 1.492716 (0.552881) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.303883 / 0.018006 (0.285877) | 0.600237 / 0.000490 (0.599748) | 0.006030 / 0.000200 (0.005830) | 0.000124 / 0.000054 (0.000070) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036633 / 0.037411 (-0.000778) | 0.105853 / 0.014526 (0.091327) | 0.126289 / 0.176557 (-0.050267) | 0.190022 / 0.737135 (-0.547113) | 0.123251 / 0.296338 (-0.173087) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.711893 / 0.215209 (0.496684) | 6.979781 / 2.077655 (4.902126) | 3.491514 / 1.504120 (1.987394) | 3.268077 / 1.541195 (1.726882) | 3.241777 / 1.468490 (1.773287) | 0.875913 / 4.584777 (-3.708864) | 5.458421 / 3.745712 (1.712709) | 4.818355 / 5.269862 (-0.451507) | 3.256046 / 4.565676 (-1.309631) | 0.095000 / 0.424275 (-0.329275) | 0.009072 / 0.007607 (0.001465) | 0.818468 / 0.226044 (0.592424) | 8.027702 / 2.268929 (5.758773) | 4.363234 / 55.444624 (-51.081390) | 3.695269 / 6.876477 (-3.181207) | 3.902601 / 2.142072 (1.760528) | 1.039007 / 4.805227 (-3.766220) | 0.212050 / 6.500664 (-6.288614) | 0.081438 / 0.075469 (0.005969) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.746945 / 1.841788 (-0.094842) | 25.274283 / 8.074308 (17.199975) | 23.514717 / 10.191392 (13.323325) | 0.232580 / 0.680424 (-0.447843) | 0.032083 / 0.534201 (-0.502118) | 0.482873 / 0.579283 (-0.096410) | 0.585730 / 0.434364 (0.151366) | 0.602066 / 0.540337 (0.061729) | 0.796391 / 1.386936 (-0.590546) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0d7cb68fe37dbfd81e5f82e19d8f9847c337788d \"CML watermark\")\n" ]
Update create_dataset.mdx
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6247/reactions" }
PR_kwDODunzps5amAQ1
{ "diff_url": "https://github.com/huggingface/datasets/pull/6247.diff", "html_url": "https://github.com/huggingface/datasets/pull/6247", "merged_at": "2023-09-19T18:40:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/6247.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6247" }
2023-09-18T17:06:29Z
https://api.github.com/repos/huggingface/datasets/issues/6247/comments
Modified, as AudioFolder and ImageFolder are not in the Datasets library. Changed ``` from datasets import AudioFolder ``` and ```from datasets import ImageFolder``` to ```from datasets import load_dataset```, since the former fail with: ``` cannot import name 'AudioFolder' from 'datasets' (/home/eswardivi/miniconda3/envs/Hugformers/lib/python3.10/site-packages/datasets/__init__.py) ```
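For context, a minimal sketch of the intended usage: the folder builders are addressed by name through `load_dataset` rather than imported as classes (the `data_dir` paths are placeholders):

```python
from datasets import load_dataset

# ImageFolder / AudioFolder are loader names, not importable classes:
image_ds = load_dataset("imagefolder", data_dir="path/to/images")
audio_ds = load_dataset("audiofolder", data_dir="path/to/audio")
```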
{ "avatar_url": "https://avatars.githubusercontent.com/u/76403422?v=4", "events_url": "https://api.github.com/users/EswarDivi/events{/privacy}", "followers_url": "https://api.github.com/users/EswarDivi/followers", "following_url": "https://api.github.com/users/EswarDivi/following{/other_user}", "gists_url": "https://api.github.com/users/EswarDivi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/EswarDivi", "id": 76403422, "login": "EswarDivi", "node_id": "MDQ6VXNlcjc2NDAzNDIy", "organizations_url": "https://api.github.com/users/EswarDivi/orgs", "received_events_url": "https://api.github.com/users/EswarDivi/received_events", "repos_url": "https://api.github.com/users/EswarDivi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/EswarDivi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EswarDivi/subscriptions", "type": "User", "url": "https://api.github.com/users/EswarDivi" }
https://api.github.com/repos/huggingface/datasets/issues/6247/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6247/timeline
closed
false
6,247
null
2023-09-19T18:40:10Z
null
true
1,899,848,414
https://api.github.com/repos/huggingface/datasets/issues/6246
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6246/events
[]
null
2023-09-18T16:20:09Z
[]
https://github.com/huggingface/datasets/issues/6246
NONE
completed
null
null
[ "I think it's an issue with the code.\r\n\r\nSpecifically:\r\n```python\r\ndataset = dataset['train'].add_column(\"/workspace/data\", new_column)\r\n```\r\n\r\nNow `dataset` is the train set with a new column. \r\nTo fix this, you can do:\r\n\r\n```python\r\ndataset['train'] = dataset['train'].add_column(\"/workspace/data\", new_column)\r\n```", "> I think it's an issue with the code.\r\n> \r\n> Specifically:\r\n> \r\n> ```python\r\n> dataset = dataset['train'].add_column(\"/workspace/data\", new_column)\r\n> ```\r\n> \r\n> Now `dataset` is the train set with a new column. To fix this, you can do:\r\n> \r\n> ```python\r\n> dataset['train'] = dataset['train'].add_column(\"/workspace/data\", new_column)\r\n> ```\r\n\r\nThanks for your response, but i can not access mask images, please let me know why the problem still persists. Here is the notebook for reference: https://colab.research.google.com/drive/10lZ_zLtU4itYVmIVTvIEVbjfOtCZaAZy?usp=sharing ", "I think there is a slight misunderstanding.\r\n```python\r\nnew_column = [\"mask\"] * len(dataset[\"train\"])\r\ndataset['train'] = dataset['train'].add_column(\"/workspace/data\", new_column)\r\n```\r\n\r\nadds a column with the string `mask` to your dataset.\r\nIf you're trying to load the images `\"mask_{idx}.png\"` in your dataset, you could try:\r\n\r\n```\r\nfrom datasets import Image\r\n\r\ndataset['train'] = dataset['train'].map(lambda u, idx: {'mask': f\"/workspace/data/mask_{idx}.png\", with_indices=True).cast_column(\"mask\", Image())\r\n```\r\n\r\nWhat this does is that it adds a column to your dataset name `mask` with the path to the mask, then it cast the column as an `Image` feature.\r\n\r\nThis [link](https://huggingface.co/docs/datasets/v2.5.1/en/image_load) explains how to load images.\r\n\r\nHope this helps!", "> I think there is a slight misunderstanding.\r\n> \r\n> ```python\r\n> new_column = [\"mask\"] * len(dataset[\"train\"])\r\n> dataset['train'] = dataset['train'].add_column(\"/workspace/data\", new_column)\r\n> ```\r\n> \r\n> adds a column with the string `mask` to your dataset. If you're trying to load the images `\"mask_{idx}.png\"` in your dataset, you could try:\r\n> \r\n> ```\r\n> from datasets import Image\r\n> \r\n> dataset['train'] = dataset['train'].map(lambda u, idx: {'mask': f\"/workspace/data/mask_{idx}.png\", with_indices=True).cast_column(\"mask\", Image())\r\n> ```\r\n> \r\n> What this does is that it adds a column to your dataset name `mask` with the path to the mask, then it cast the column as an `Image` feature.\r\n> \r\n> This [link](https://huggingface.co/docs/datasets/v2.5.1/en/image_load) explains how to load images.\r\n> \r\n> Hope this helps!\r\n\r\nThank you very much, this is really helpful...\r\ni made some changes for it to work:\r\n```\r\ndataset['train'] = dataset['train'].map(lambda u, idx: {'mask': f\"/content/data/mask_{idx}.png\"}, with_indices=True).cast_column(\"mask\", Image())\r\n```\r\nThanks Again @Dref360 " ]
Add new column to dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6246/reactions" }
I_kwDODunzps5xPWLe
null
2023-09-17T16:59:48Z
https://api.github.com/repos/huggingface/datasets/issues/6246/comments
### Describe the bug ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [<ipython-input-9-bd197b36b6a0>](https://localhost:8080/#) in <cell line: 1>() ----> 1 dataset['train']['/workspace/data'] 3 frames [/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in _check_valid_column_key(key, columns) 518 def _check_valid_column_key(key: str, columns: List[str]) -> None: 519 if key not in columns: --> 520 raise KeyError(f"Column {key} not in the dataset. Current columns in the dataset: {columns}") 521 522 KeyError: "Column train not in the dataset. Current columns in the dataset: ['image', '/workspace/data']" ``` ### Steps to reproduce the bug please find the notebook for reference: https://colab.research.google.com/drive/10lZ_zLtU4itYVmIVTvIEVbjfOtCZaAZy?usp=sharing ### Expected behavior add column to the dataset ### Environment info colab pro
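Pulling the thread's corrections together, a minimal sketch of the working pattern; the path and column name are placeholders, and the key points are assigning `add_column`'s result back to the split and indexing the split, not the `DatasetDict`:

```python
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="/content/data")  # placeholder path
new_column = ["mask"] * len(dataset["train"])
# Assign the result back to the split instead of overwriting `dataset`:
dataset["train"] = dataset["train"].add_column("mask_path", new_column)

# Index the split first, then the column:
print(dataset["train"]["mask_path"][:3])
```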
{ "avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4", "events_url": "https://api.github.com/users/andysingal/events{/privacy}", "followers_url": "https://api.github.com/users/andysingal/followers", "following_url": "https://api.github.com/users/andysingal/following{/other_user}", "gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/andysingal", "id": 20493493, "login": "andysingal", "node_id": "MDQ6VXNlcjIwNDkzNDkz", "organizations_url": "https://api.github.com/users/andysingal/orgs", "received_events_url": "https://api.github.com/users/andysingal/received_events", "repos_url": "https://api.github.com/users/andysingal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andysingal/subscriptions", "type": "User", "url": "https://api.github.com/users/andysingal" }
https://api.github.com/repos/huggingface/datasets/issues/6246/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6246/timeline
closed
false
6,246
null
2023-09-18T16:20:09Z
null
false
1,898,861,422
https://api.github.com/repos/huggingface/datasets/issues/6244
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6244/events
[]
null
2023-09-26T15:41:38Z
[]
https://github.com/huggingface/datasets/pull/6244
COLLABORATOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006410 / 0.011353 (-0.004943) | 0.003995 / 0.011008 (-0.007013) | 0.083585 / 0.038508 (0.045076) | 0.074285 / 0.023109 (0.051176) | 0.307163 / 0.275898 (0.031265) | 0.344691 / 0.323480 (0.021212) | 0.004277 / 0.007986 (-0.003708) | 0.004192 / 0.004328 (-0.000136) | 0.065156 / 0.004250 (0.060905) | 0.056774 / 0.037052 (0.019721) | 0.315483 / 0.258489 (0.056994) | 0.361911 / 0.293841 (0.068070) | 0.030454 / 0.128546 (-0.098092) | 0.008600 / 0.075646 (-0.067047) | 0.286692 / 0.419271 (-0.132579) | 0.052354 / 0.043533 (0.008821) | 0.308997 / 0.255139 (0.053858) | 0.337847 / 0.283200 (0.054647) | 0.022459 / 0.141683 (-0.119224) | 1.482758 / 1.452155 (0.030604) | 1.572853 / 1.492716 (0.080137) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288603 / 0.018006 (0.270597) | 0.632903 / 0.000490 (0.632413) | 0.013702 / 0.000200 (0.013502) | 0.000284 / 0.000054 (0.000230) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028448 / 0.037411 (-0.008964) | 0.082441 / 0.014526 (0.067916) | 0.099048 / 0.176557 (-0.077508) | 0.154370 / 0.737135 (-0.582765) | 0.146143 / 0.296338 (-0.150195) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399250 / 0.215209 (0.184040) | 3.986683 / 2.077655 (1.909028) | 
(1.099724) | 2.736800 / 1.468490 (1.268310) | 0.582813 / 4.584777 (-4.001964) | 4.246269 / 3.745712 (0.500557) | 3.891161 / 5.269862 (-1.378701) | 2.445392 / 4.565676 (-2.120285) | 0.068943 / 0.424275 (-0.355332) | 0.009248 / 0.007607 (0.001641) | 0.604859 / 0.226044 (0.378815) | 6.030660 / 2.268929 (3.761731) | 3.409778 / 55.444624 (-52.034846) | 2.990488 / 6.876477 (-3.885988) | 3.281317 / 2.142072 (1.139245) | 0.697705 / 4.805227 (-4.107523) | 0.159502 / 6.500664 (-6.341162) | 0.072471 / 0.075469 (-0.002999) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.625428 / 1.841788 (-0.216360) | 23.602509 / 8.074308 (15.528201) | 18.091474 / 10.191392 (7.900082) | 0.172816 / 0.680424 (-0.507608) | 0.023708 / 0.534201 (-0.510493) | 0.473768 / 0.579283 (-0.105515) | 0.493713 / 0.434364 (0.059349) | 0.566326 / 0.540337 (0.025989) | 0.788670 / 1.386936 (-0.598266) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1ee2359c17ccb35b57e195f2bfe8478f49630039 \"CML watermark\")\n", "> Thanks. Any comment on my comment below?\r\n> \r\n> >Maybe we should update the docstring of get_data_patterns accordingly? Currently it only gives examples of outputs with ** not in a single path segment (i.e. not with a / as prefix or suffix).\r\n\r\nYea right we need to update it indeed, the outputs are the ones from older versions of fsspec, and from older patterns that we don't use anymore.\r\n\r\nIn general in docstrings I also think we should encourage users to use `**/*` instead of `**` (which has a behavior that is unique to fsspec)", "Also just noticed that `KEYWORDS_IN_DIR_NAME_BASE_PATTERNS` seems to include `KEYWORDS_IN_FILENAME_BASE_PATTERNS`. 
I guess we can try to remove the filename one in another PR to remove this redundancy \r\n\r\n(noticed this by checking that the data pattern is the same for both the dir name and filename examples in the get_data_patterns docstring)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006922 / 0.011353 (-0.004431) | 0.004459 / 0.011008 (-0.006549) | 0.084742 / 0.038508 (0.046234) | 0.089002 / 0.023109 (0.065893) | 0.310886 / 0.275898 (0.034988) | 0.340518 / 0.323480 (0.017038) | 0.007011 / 0.007986 (-0.000975) | 0.004566 / 0.004328 (0.000237) | 0.067260 / 0.004250 (0.063009) | 0.066349 / 0.037052 (0.029297) | 0.324029 / 0.258489 (0.065540) | 0.373785 / 0.293841 (0.079944) | 0.031780 / 0.128546 (-0.096766) | 0.009208 / 0.075646 (-0.066438) | 0.288871 / 0.419271 (-0.130401) | 0.054548 / 0.043533 (0.011015) | 0.313344 / 0.255139 (0.058205) | 0.336430 / 0.283200 (0.053231) | 0.029037 / 0.141683 (-0.112646) | 1.483797 / 1.452155 (0.031642) | 1.581884 / 1.492716 (0.089167) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.370520 / 0.018006 (0.352514) | 0.796720 / 0.000490 (0.796230) | 0.009329 / 0.000200 (0.009129) | 0.000109 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033002 / 0.037411 (-0.004410) | 0.083442 / 0.014526 (0.068916) | 0.106468 / 0.176557 (-0.070088) | 0.165315 / 0.737135 (-0.571820) | 0.103048 / 0.296338 (-0.193291) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.386800 / 0.215209 (0.171591) | 3.843312 / 2.077655 (1.765658) | 1.848953 / 1.504120 (0.344834) | 1.679508 / 1.541195 (0.138313) | 1.733578 / 1.468490 (0.265088) | 0.488455 / 4.584777 (-4.096322) | 3.613594 / 3.745712 (-0.132118) | 3.533334 / 5.269862 (-1.736528) | 2.176216 / 4.565676 (-2.389460) | 0.056915 / 0.424275 (-0.367360) | 0.007349 / 0.007607 (-0.000258) | 0.465132 / 0.226044 (0.239088) | 4.638479 / 2.268929 (2.369550) | 2.354741 / 55.444624 (-53.089883) | 1.991777 / 6.876477 (-4.884700) | 2.249823 / 2.142072 (0.107751) | 0.582748 / 4.805227 (-4.222480) | 0.133829 / 6.500664 (-6.366835) | 0.060949 / 0.075469 (-0.014520) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.252027 / 1.841788 (-0.589760) | 20.660234 / 8.074308 (12.585926) | 14.328496 / 10.191392 (4.137104) | 0.164872 / 0.680424 (-0.515552) | 0.018867 / 0.534201 (-0.515334) | 0.392850 / 0.579283 (-0.186433) | 0.425684 / 0.434364 (-0.008679) | 0.461776 / 0.540337 (-0.078562) | 0.663688 / 1.386936 (-0.723248) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007010 / 0.011353 (-0.004343) | 0.004791 / 0.011008 (-0.006217) | 0.064738 / 0.038508 (0.026230) | 0.088648 / 0.023109 (0.065539) | 0.418106 / 0.275898 (0.142208) | 0.446767 / 0.323480 (0.123287) | 0.006761 / 0.007986 (-0.001224) | 0.004649 / 0.004328 (0.000320) | 0.066345 / 0.004250 (0.062094) | 0.068326 / 0.037052 (0.031274) | 0.423426 / 0.258489 (0.164937) | 0.463160 / 0.293841 (0.169319) | 0.032689 / 0.128546 (-0.095858) | 0.009299 / 0.075646 (-0.066347) | 0.071321 / 0.419271 (-0.347951) | 0.048752 / 0.043533 (0.005219) | 0.418932 / 0.255139 (0.163793) | 0.440673 / 0.283200 (0.157473) | 0.027898 / 0.141683 (-0.113785) | 1.531860 / 1.452155 (0.079705) | 1.620456 / 1.492716 (0.127739) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.354917 / 0.018006 (0.336911) | 0.792432 / 0.000490 (0.791943) | 0.006626 / 0.000200 (0.006426) | 0.000124 / 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036190 / 0.037411 (-0.001222) | 0.093052 / 0.014526 (0.078526) | 0.111927 / 0.176557 (-0.064629) | 0.165571 / 0.737135 (-0.571564) | 0.112159 / 0.296338 (-0.184180) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437798 / 0.215209 (0.222589) | 4.367166 / 2.077655 (2.289511) | 2.343292 / 1.504120 (0.839172) | 2.169298 / 1.541195 (0.628103) | 2.224471 / 1.468490 (0.755981) | 0.487317 / 4.584777 (-4.097460) | 3.627825 / 3.745712 (-0.117887) | 3.500914 / 5.269862 (-1.768947) | 2.175862 / 4.565676 (-2.389815) | 0.057975 / 0.424275 (-0.366300) | 0.007509 / 0.007607 (-0.000098) | 0.517389 / 0.226044 (0.291345) | 5.169694 / 2.268929 (2.900766) | 2.850993 / 55.444624 (-52.593631) | 2.473111 / 6.876477 (-4.403366) | 2.746731 / 2.142072 (0.604659) | 0.586597 / 4.805227 (-4.218630) | 0.134082 / 6.500664 (-6.366582) | 0.061035 / 0.075469 (-0.014434) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.375186 / 1.841788 (-0.466602) | 20.960817 / 8.074308 (12.886509) | 15.035071 / 10.191392 (4.843679) | 0.169494 / 0.680424 (-0.510930) | 0.020654 / 0.534201 (-0.513547) | 0.398047 / 0.579283 (-0.181236) | 0.438117 / 0.434364 (0.003753) | 0.483896 / 0.540337 (-0.056441) | 0.690728 / 1.386936 (-0.696208) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3e7fc64af912e5fcdcf949ed09d954332f0ae94a \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | 
read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006892 / 0.011353 (-0.004461) | 0.004087 / 0.011008 (-0.006921) | 0.084695 / 0.038508 (0.046187) | 0.078084 / 0.023109 (0.054975) | 0.322976 / 0.275898 (0.047078) | 0.355332 / 0.323480 (0.031852) | 0.004235 / 0.007986 (-0.003750) | 0.003450 / 0.004328 (-0.000879) | 0.065355 / 0.004250 (0.061104) | 0.058593 / 0.037052 (0.021541) | 0.335761 / 0.258489 (0.077272) | 0.370392 / 0.293841 (0.076551) | 0.031720 / 0.128546 (-0.096827) | 0.008611 / 0.075646 (-0.067036) | 0.288213 / 0.419271 (-0.131059) | 0.053374 / 0.043533 (0.009842) | 0.321863 / 0.255139 (0.066724) | 0.341587 / 0.283200 (0.058387) | 0.025694 / 0.141683 (-0.115989) | 1.470502 / 1.452155 (0.018348) | 1.565068 / 1.492716 (0.072352) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231063 / 0.018006 (0.213057) | 0.464996 / 0.000490 (0.464506) | 0.007316 / 0.000200 (0.007116) | 0.000288 / 0.000054 (0.000233) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029244 / 0.037411 (-0.008167) | 0.086303 / 0.014526 (0.071777) | 0.097281 / 0.176557 (-0.079276) | 0.153552 / 0.737135 (-0.583583) | 0.098488 / 0.296338 (-0.197850) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382753 / 0.215209 (0.167544) | 3.826503 / 2.077655 (1.748848) | 1.848439 / 1.504120 (0.344319) | 1.688519 / 1.541195 (0.147324) | 1.787867 / 1.468490 (0.319377) | 0.489708 / 4.584777 (-4.095069) | 3.576780 / 3.745712 (-0.168932) | 3.341536 / 5.269862 (-1.928325) | 2.108787 / 4.565676 (-2.456889) | 0.057409 / 0.424275 (-0.366866) | 0.007325 / 0.007607 (-0.000282) | 0.459536 / 0.226044 (0.233492) | 4.590609 / 2.268929 (2.321681) | 2.313005 / 55.444624 (-53.131620) | 1.972389 / 6.876477 (-4.904087) | 2.218511 / 2.142072 (0.076439) | 0.613817 / 4.805227 (-4.191410) | 0.133846 / 6.500664 (-6.366818) | 0.062190 / 0.075469 (-0.013279) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.279860 / 1.841788 (-0.561928) | 19.549777 / 8.074308 (11.475469) | 14.225844 / 10.191392 (4.034452) 
| 0.164682 / 0.680424 (-0.515741) | 0.018321 / 0.534201 (-0.515880) | 0.389874 / 0.579283 (-0.189409) | 0.408597 / 0.434364 (-0.025767) | 0.454327 / 0.540337 (-0.086011) | 0.645571 / 1.386936 (-0.741365) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007021 / 0.011353 (-0.004332) | 0.004119 / 0.011008 (-0.006889) | 0.065393 / 0.038508 (0.026885) | 0.085005 / 0.023109 (0.061896) | 0.412221 / 0.275898 (0.136323) | 0.438266 / 0.323480 (0.114786) | 0.005594 / 0.007986 (-0.002392) | 0.003499 / 0.004328 (-0.000829) | 0.065053 / 0.004250 (0.060802) | 0.060608 / 0.037052 (0.023555) | 0.413938 / 0.258489 (0.155449) | 0.446192 / 0.293841 (0.152351) | 0.032232 / 0.128546 (-0.096314) | 0.008617 / 0.075646 (-0.067029) | 0.071296 / 0.419271 (-0.347976) | 0.048756 / 0.043533 (0.005223) | 0.404977 / 0.255139 (0.149838) | 0.426801 / 0.283200 (0.143602) | 0.023650 / 0.141683 (-0.118033) | 1.526928 / 1.452155 (0.074773) | 1.627504 / 1.492716 (0.134787) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224318 / 0.018006 (0.206312) | 0.469717 / 0.000490 (0.469227) | 0.005539 / 0.000200 (0.005339) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034240 / 0.037411 (-0.003171) | 0.096449 / 0.014526 (0.081923) | 0.107309 / 0.176557 (-0.069247) | 0.160246 / 0.737135 (-0.576889) | 0.107595 / 0.296338 (-0.188743) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| 
new / old (diff) | 0.434266 / 0.215209 (0.219057) | 4.325571 / 2.077655 (2.247916) | 2.324066 / 1.504120 (0.819946) | 2.140238 / 1.541195 (0.599044) | 2.244593 / 1.468490 (0.776103) | 0.486259 / 4.584777 (-4.098518) | 3.644120 / 3.745712 (-0.101592) | 3.372330 / 5.269862 (-1.897531) | 2.074779 / 4.565676 (-2.490897) | 0.057154 / 0.424275 (-0.367121) | 0.007304 / 0.007607 (-0.000303) | 0.516944 / 0.226044 (0.290899) | 5.174300 / 2.268929 (2.905372) | 2.816269 / 55.444624 (-52.628356) | 2.462943 / 6.876477 (-4.413534) | 2.735851 / 2.142072 (0.593779) | 0.589028 / 4.805227 (-4.216200) | 0.131804 / 6.500664 (-6.368860) | 0.060173 / 0.075469 (-0.015296) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.354540 / 1.841788 (-0.487248) | 20.436511 / 8.074308 (12.362203) | 15.541981 / 10.191392 (5.350589) | 0.168399 / 0.680424 (-0.512025) | 0.020716 / 0.534201 (-0.513485) | 0.396275 / 0.579283 (-0.183008) | 0.427232 / 0.434364 (-0.007132) | 0.475121 / 0.540337 (-0.065216) | 0.648579 / 1.386936 (-0.738357) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4fa138fc0d9aa1536194fd46566840e698ccde03 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009071 / 0.011353 (-0.002282) | 0.005820 / 0.011008 (-0.005188) | 0.119974 / 0.038508 (0.081466) | 0.092145 / 0.023109 (0.069036) | 0.445349 / 0.275898 (0.169451) | 0.442488 / 0.323480 (0.119008) | 0.005352 / 0.007986 (-0.002634) | 0.004332 / 0.004328 (0.000003) | 0.084397 / 0.004250 (0.080147) | 0.064624 / 0.037052 (0.027572) | 0.430938 / 0.258489 (0.172448) | 0.503574 / 0.293841 (0.209733) | 0.047900 / 0.128546 (-0.080647) | 0.014237 / 0.075646 (-0.061409) | 0.366145 / 0.419271 (-0.053127) | 0.066344 / 0.043533 (0.022811) | 0.424582 / 0.255139 (0.169443) | 0.451845 / 0.283200 (0.168646) | 0.041409 / 0.141683 (-0.100274) | 1.886998 / 1.452155 (0.434843) | 2.011676 / 1.492716 (0.518960) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | 
get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.301008 / 0.018006 (0.283001) | 0.608670 / 0.000490 (0.608180) | 0.011963 / 0.000200 (0.011763) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031996 / 0.037411 (-0.005415) | 0.102274 / 0.014526 (0.087748) | 0.121437 / 0.176557 (-0.055120) | 0.181647 / 0.737135 (-0.555489) | 0.121634 / 0.296338 (-0.174704) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.597070 / 0.215209 (0.381861) | 5.973808 / 2.077655 (3.896154) | 2.486345 / 1.504120 (0.982225) | 2.125395 / 1.541195 (0.584201) | 2.270864 / 1.468490 (0.802374) | 0.880031 / 4.584777 (-3.704746) | 5.396522 / 3.745712 (1.650809) | 4.702005 / 5.269862 (-0.567857) | 3.023087 / 4.565676 (-1.542589) | 0.097093 / 0.424275 (-0.327182) | 0.008457 / 0.007607 (0.000850) | 0.712164 / 0.226044 (0.486120) | 7.112867 / 2.268929 (4.843938) | 3.364509 / 55.444624 (-52.080115) | 2.646953 / 6.876477 (-4.229524) | 2.795967 / 2.142072 (0.653894) | 1.067182 / 4.805227 (-3.738046) | 0.218297 / 6.500664 (-6.282368) | 0.071720 / 0.075469 (-0.003750) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.640477 / 1.841788 (-0.201311) | 24.875163 / 8.074308 (16.800855) | 22.125706 / 10.191392 (11.934314) | 0.247267 / 0.680424 (-0.433157) | 0.033717 / 0.534201 (-0.500484) | 0.492422 / 0.579283 (-0.086862) | 0.578323 / 0.434364 (0.143959) | 0.579503 / 0.540337 (0.039165) | 0.816721 / 1.386936 (-0.570215) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated 
after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009372 / 0.011353 (-0.001981) | 0.005449 / 0.011008 (-0.005559) | 0.095371 / 0.038508 (0.056863) | 0.086320 / 0.023109 (0.063211) | 0.539573 / 0.275898 (0.263675) | 0.580338 / 0.323480 (0.256858) | 0.007028 / 0.007986 (-0.000958) | 0.004196 / 0.004328 (-0.000133) | 0.082710 / 0.004250 (0.078460) | 0.064336 / 0.037052 (0.027284) | 0.521490 / 0.258489 (0.263001) | 0.567942 / 0.293841 (0.274101) | 0.049659 / 0.128546 (-0.078887) | 0.017297 / 0.075646 (-0.058350) | 0.093874 / 0.419271 (-0.325398) | 0.061664 / 0.043533 (0.018131) | 0.524476 / 0.255139 (0.269337) | 0.563255 / 0.283200 (0.280055) | 0.039990 / 0.141683 (-0.101693) | 1.854438 / 1.452155 (0.402283) | 1.819321 / 1.492716 (0.326605) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.298817 / 0.018006 (0.280811) | 0.629381 / 0.000490 (0.628891) | 0.006259 / 0.000200 (0.006059) | 0.000690 / 0.000054 (0.000635) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.041009 / 0.037411 (0.003598) | 0.123845 / 0.014526 (0.109319) | 0.138606 / 0.176557 (-0.037951) | 0.215042 / 0.737135 (-0.522093) | 0.129572 / 0.296338 (-0.166767) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.668823 / 0.215209 (0.453614) | 6.596762 / 2.077655 (4.519108) | 3.275429 / 1.504120 (1.771309) | 2.921747 / 1.541195 (1.380553) | 2.963748 / 1.468490 (1.495258) | 0.897588 / 4.584777 (-3.687188) | 5.683618 / 3.745712 (1.937906) | 5.051102 / 5.269862 (-0.218760) | 3.178855 / 4.565676 (-1.386822) | 0.107446 / 0.424275 (-0.316829) | 0.008967 / 0.007607 (0.001360) | 0.785577 / 0.226044 (0.559532) | 8.236556 / 2.268929 (5.967628) | 3.914725 / 55.444624 (-51.529899) | 3.129068 / 6.876477 (-3.747409) | 3.368383 / 2.142072 (1.226310) | 1.004307 / 4.805227 (-3.800920) | 0.204788 / 6.500664 (-6.295876) | 0.078250 / 0.075469 (0.002780) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.778574 / 1.841788 (-0.063213) | 25.583659 / 8.074308 (17.509351) | 23.505866 / 10.191392 (13.314474) | 0.228759 / 0.680424 (-0.451665) | 0.038348 / 0.534201 (-0.495853) | 0.468980 / 0.579283 (-0.110303) | 
0.630194 / 0.434364 (0.195830) | 0.587535 / 0.540337 (0.047198) | 0.831761 / 1.386936 (-0.555175) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#68f4f847f3248f02fc99458310d9d786906d7a6f \"CML watermark\")\n", "I've addressed the comments. Let me know if it looks all good now :)", "Actually just found out that the current `**/*[-._ 0-9/]train[-._ 0-9/]**` doesn't match `data/train.csv` in bash (but does match in fsspec right now).\r\n\r\nSo there might be a risk that this pattern breaks in the future no ?", "@lhoestq `fsspec` has tests to check their specific (non-posix) behavior, so I think merging in the current state is fine. And if they make a breaking change in the future, we can align the patterns once again :) ", "Yea after more thoughts I also think it's fine. Feel free to merge !", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006920 / 0.011353 (-0.004433) | 0.004182 / 0.011008 (-0.006826) | 0.084629 / 0.038508 (0.046121) | 0.086052 / 0.023109 (0.062943) | 0.326062 / 0.275898 (0.050164) | 0.344190 / 0.323480 (0.020710) | 0.005393 / 0.007986 (-0.002593) | 0.003410 / 0.004328 (-0.000918) | 0.064327 / 0.004250 (0.060076) | 0.056556 / 0.037052 (0.019504) | 0.319255 / 0.258489 (0.060766) | 0.357943 / 0.293841 (0.064102) | 0.032097 / 0.128546 (-0.096450) | 0.008778 / 0.075646 (-0.066868) | 0.291057 / 0.419271 (-0.128215) | 0.053225 / 0.043533 (0.009692) | 0.307713 / 0.255139 (0.052574) | 0.350058 / 0.283200 (0.066858) | 0.024380 / 0.141683 (-0.117303) | 1.459482 / 1.452155 (0.007328) | 1.555711 / 1.492716 (0.062994) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239487 / 0.018006 (0.221480) | 0.467604 / 0.000490 (0.467114) | 0.010742 / 0.000200 (0.010542) | 0.000285 / 0.000054 (0.000230) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029394 / 0.037411 (-0.008018) | 0.087404 / 0.014526 (0.072879) | 0.098701 / 0.176557 (-0.077855) | 0.154145 / 0.737135 (-0.582990) | 0.099726 / 0.296338 (-0.196612) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 
5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.389008 / 0.215209 (0.173799) | 3.873165 / 2.077655 (1.795510) | 1.860676 / 1.504120 (0.356556) | 1.679668 / 1.541195 (0.138474) | 1.782347 / 1.468490 (0.313857) | 0.489469 / 4.584777 (-4.095308) | 3.678706 / 3.745712 (-0.067006) | 3.404076 / 5.269862 (-1.865785) | 2.110972 / 4.565676 (-2.454704) | 0.057478 / 0.424275 (-0.366797) | 0.007443 / 0.007607 (-0.000164) | 0.464780 / 0.226044 (0.238736) | 4.643606 / 2.268929 (2.374678) | 2.355744 / 55.444624 (-53.088881) | 1.993992 / 6.876477 (-4.882485) | 2.245520 / 2.142072 (0.103447) | 0.592773 / 4.805227 (-4.212454) | 0.135369 / 6.500664 (-6.365295) | 0.062478 / 0.075469 (-0.012991) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257537 / 1.841788 (-0.584251) | 19.828010 / 8.074308 (11.753702) | 14.709260 / 10.191392 (4.517868) | 0.168359 / 0.680424 (-0.512065) | 0.018907 / 0.534201 (-0.515294) | 0.397223 / 0.579283 (-0.182060) | 0.421760 / 0.434364 (-0.012604) | 0.464597 / 0.540337 (-0.075740) | 0.665905 / 1.386936 (-0.721031) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007247 / 0.011353 (-0.004106) | 0.004104 / 0.011008 (-0.006904) | 0.065008 / 0.038508 (0.026500) | 0.083485 / 0.023109 (0.060376) | 0.399808 / 0.275898 (0.123910) | 0.433374 / 0.323480 (0.109894) | 0.005453 / 0.007986 (-0.002532) | 0.003479 / 0.004328 (-0.000850) | 0.065126 / 0.004250 (0.060876) | 0.059945 / 0.037052 (0.022893) | 0.402018 / 
0.258489 (0.143529) | 0.437927 / 0.293841 (0.144086) | 0.032654 / 0.128546 (-0.095892) | 0.008717 / 0.075646 (-0.066929) | 0.071737 / 0.419271 (-0.347534) | 0.048903 / 0.043533 (0.005370) | 0.402107 / 0.255139 (0.146968) | 0.417602 / 0.283200 (0.134402) | 0.024821 / 0.141683 (-0.116862) | 1.474471 / 1.452155 (0.022316) | 1.559571 / 1.492716 (0.066855) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232010 / 0.018006 (0.214003) | 0.460768 / 0.000490 (0.460278) | 0.005250 / 0.000200 (0.005050) | 0.000109 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033839 / 0.037411 (-0.003573) | 0.101617 / 0.014526 (0.087091) | 0.107984 / 0.176557 (-0.068573) | 0.160923 / 0.737135 (-0.576212) | 0.110367 / 0.296338 (-0.185971) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433087 / 0.215209 (0.217878) | 4.324100 / 2.077655 (2.246445) | 2.312937 / 1.504120 (0.808817) | 2.159903 / 1.541195 (0.618708) | 2.240235 / 1.468490 (0.771745) | 0.500659 / 4.584777 (-4.084118) | 3.743801 / 3.745712 (-0.001911) | 3.441350 / 5.269862 (-1.828512) | 2.141370 / 4.565676 (-2.424306) | 0.059078 / 0.424275 (-0.365197) | 0.007468 / 0.007607 (-0.000139) | 0.508108 / 0.226044 (0.282064) | 5.076738 / 2.268929 (2.807809) | 2.825939 / 55.444624 (-52.618685) | 2.467762 / 6.876477 (-4.408715) | 2.705079 / 2.142072 (0.563006) | 0.603363 / 4.805227 (-4.201864) | 0.136267 / 6.500664 (-6.364397) | 0.062887 / 0.075469 (-0.012582) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.359344 / 1.841788 (-0.482443) | 20.581510 / 8.074308 (12.507202) | 15.534489 / 10.191392 (5.343097) | 0.192068 / 0.680424 (-0.488356) | 0.020831 / 0.534201 (-0.513370) | 0.403330 / 0.579283 (-0.175953) | 0.429536 / 0.434364 (-0.004828) | 0.479906 / 0.540337 (-0.060431) | 0.674170 / 1.386936 (-0.712766) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#33ac74c2df928dece49ca2cf25e14172896b442e \"CML watermark\")\n" ]
Add support for `fsspec>=2023.9.0`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6244/reactions" }
PR_kwDODunzps5adtD3
{ "diff_url": "https://github.com/huggingface/datasets/pull/6244.diff", "html_url": "https://github.com/huggingface/datasets/pull/6244", "merged_at": "2023-09-26T15:32:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/6244.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6244" }
2023-09-15T17:58:25Z
https://api.github.com/repos/huggingface/datasets/issues/6244/comments
Fix #6214
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/6244/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6244/timeline
closed
false
6,244
null
2023-09-26T15:32:51Z
null
true
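
The comment thread of PR 6244 above turns on a subtlety of fsspec globbing: `**` can match within a single path segment (a behavior the reviewers note is unique to fsspec), which is why they suggest encouraging `**/*` over `**` in the `get_data_patterns` docstring, and why `**/*[-._ 0-9/]train[-._ 0-9/]**` matches `data/train.csv` under fsspec but not under bash. Below is a minimal sketch of that behavior, not taken from the PR itself; it assumes only that `fsspec>=2023.9.0` is installed, and the use of the `memory` filesystem and the exact match results are illustrative assumptions.

```python
import fsspec

# In-memory filesystem with a single file, used only for illustration.
fs = fsspec.filesystem("memory")
fs.pipe_file("/data/train.csv", b"a,b\n1,2\n")

# The split pattern discussed in the PR thread. In fsspec's (non-POSIX)
# glob, `**` may match within a single path segment, so this can match
# "/data/train.csv" even though a bash glob of the same pattern would not.
print(fs.glob("**/*[-._ 0-9/]train[-._ 0-9/]**"))

# The reviewers' recommendation: prefer `**/*`, whose meaning (any entry at
# any depth) is stable across fsspec versions and POSIX shells.
print(fs.glob("**/*"))
print(fs.glob("**"))  # fsspec-specific; its behavior has varied across releases
```

Running a sketch like this against different fsspec releases is a quick way to check whether the patterns produced by `get_data_patterns` still resolve the same files after an fsspec upgrade, which is the risk raised in the thread before merging.
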
1,898,532,784
https://api.github.com/repos/huggingface/datasets/issues/6243
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6243/events
[]
null
2023-09-19T18:02:21Z
[]
https://github.com/huggingface/datasets/pull/6243
COLLABORATOR
null
false
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006784 / 0.011353 (-0.004569) | 0.004051 / 0.011008 (-0.006957) | 0.083790 / 0.038508 (0.045282) | 0.081219 / 0.023109 (0.058110) | 0.313195 / 0.275898 (0.037297) | 0.336954 / 0.323480 (0.013475) | 0.004324 / 0.007986 (-0.003662) | 0.004516 / 0.004328 (0.000188) | 0.065051 / 0.004250 (0.060801) | 0.057647 / 0.037052 (0.020595) | 0.316675 / 0.258489 (0.058186) | 0.357936 / 0.293841 (0.064095) | 0.030980 / 0.128546 (-0.097566) | 0.008844 / 0.075646 (-0.066802) | 0.287027 / 0.419271 (-0.132245) | 0.052130 / 0.043533 (0.008597) | 0.308125 / 0.255139 (0.052986) | 0.337345 / 0.283200 (0.054145) | 0.025781 / 0.141683 (-0.115902) | 1.466161 / 1.452155 (0.014006) | 1.565824 / 1.492716 (0.073108) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.299112 / 0.018006 (0.281106) | 0.640520 / 0.000490 (0.640030) | 0.008846 / 0.000200 (0.008647) | 0.000273 / 0.000054 (0.000219) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029853 / 0.037411 (-0.007559) | 0.081697 / 0.014526 (0.067172) | 0.099110 / 0.176557 (-0.077447) | 0.155864 / 0.737135 (-0.581271) | 0.098749 / 0.296338 (-0.197590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.385722 / 0.215209 (0.170512) | 3.851490 / 2.077655 (1.773835) | 1.851995 / 1.504120 (0.347875) | 1.660398 / 1.541195 (0.119204) | 1.769370 / 1.468490 
(0.300879) | 0.481523 / 4.584777 (-4.103254) | 3.550449 / 3.745712 (-0.195263) | 3.424782 / 5.269862 (-1.845079) | 2.106470 / 4.565676 (-2.459206) | 0.056500 / 0.424275 (-0.367775) | 0.007891 / 0.007607 (0.000284) | 0.465564 / 0.226044 (0.239520) | 4.662892 / 2.268929 (2.393964) | 2.305424 / 55.444624 (-53.139201) | 1.980524 / 6.876477 (-4.895953) | 2.218423 / 2.142072 (0.076350) | 0.584662 / 4.805227 (-4.220565) | 0.132325 / 6.500664 (-6.368340) | 0.060773 / 0.075469 (-0.014696) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254261 / 1.841788 (-0.587527) | 19.479805 / 8.074308 (11.405497) | 14.222687 / 10.191392 (4.031295) | 0.149829 / 0.680424 (-0.530595) | 0.018630 / 0.534201 (-0.515571) | 0.395284 / 0.579283 (-0.183999) | 0.413385 / 0.434364 (-0.020978) | 0.462931 / 0.540337 (-0.077406) | 0.645359 / 1.386936 (-0.741577) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006991 / 0.011353 (-0.004362) | 0.004306 / 0.011008 (-0.006702) | 0.065213 / 0.038508 (0.026705) | 0.082442 / 0.023109 (0.059332) | 0.411294 / 0.275898 (0.135396) | 0.452176 / 0.323480 (0.128696) | 0.005802 / 0.007986 (-0.002183) | 0.003556 / 0.004328 (-0.000772) | 0.066163 / 0.004250 (0.061913) | 0.060680 / 0.037052 (0.023628) | 0.416975 / 0.258489 (0.158486) | 0.456353 / 0.293841 (0.162512) | 0.033584 / 0.128546 (-0.094963) | 0.008687 / 0.075646 (-0.066959) | 0.071300 / 0.419271 (-0.347972) | 0.049382 / 0.043533 (0.005849) | 0.409329 / 0.255139 (0.154190) | 0.434829 / 0.283200 (0.151629) | 0.022966 / 0.141683 (-0.118716) | 1.493847 / 1.452155 (0.041692) | 1.582372 / 1.492716 (0.089656) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280578 / 0.018006 (0.262572) | 0.538122 / 0.000490 (0.537632) | 0.004515 / 0.000200 (0.004315) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033383 / 0.037411 (-0.004028) | 0.093426 / 0.014526 (0.078901) | 0.109314 / 0.176557 (-0.067242) | 0.162349 / 0.737135 (-0.574786) | 0.109849 / 0.296338 (-0.186490) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431073 / 0.215209 (0.215864) | 4.311942 / 2.077655 (2.234287) | 2.291170 / 1.504120 (0.787051) | 2.132266 / 1.541195 (0.591072) | 2.236526 / 1.468490 (0.768036) | 0.492001 / 4.584777 (-4.092776) | 3.523013 / 3.745712 (-0.222699) | 3.413481 / 5.269862 (-1.856381) | 2.112979 / 4.565676 (-2.452698) | 0.058654 / 0.424275 (-0.365621) | 0.007729 / 0.007607 (0.000121) | 0.512027 / 0.226044 (0.285982) | 5.125264 / 2.268929 (2.856336) | 2.836281 / 55.444624 (-52.608344) | 2.447253 / 6.876477 (-4.429224) | 2.711908 / 2.142072 (0.569835) | 0.592598 / 4.805227 (-4.212629) | 0.134837 / 6.500664 (-6.365827) | 0.059813 / 0.075469 (-0.015656) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.373464 / 1.841788 (-0.468323) | 20.548983 / 8.074308 (12.474675) | 14.799833 / 10.191392 (4.608441) | 0.168601 / 0.680424 (-0.511823) | 0.020358 / 0.534201 (-0.513843) | 0.398790 / 0.579283 (-0.180494) | 0.416921 / 0.434364 (-0.017443) | 0.480542 / 0.540337 (-0.059795) | 0.645062 / 1.386936 (-0.741874) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#afd6fc193a91cb0461c8bf3b64db6943c23de846 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008616 / 0.011353 (-0.002737) | 0.004957 / 0.011008 (-0.006051) | 0.102629 / 0.038508 (0.064121) | 0.080492 / 0.023109 (0.057383) | 0.461817 / 0.275898 (0.185919) | 0.487964 / 0.323480 (0.164484) | 0.006336 / 0.007986 (-0.001649) | 0.004607 / 0.004328 (0.000278) | 0.074311 / 0.004250 (0.070061) | 0.060368 / 0.037052 (0.023315) | 0.458076 / 0.258489 (0.199587) | 0.493028 / 0.293841 (0.199187) | 0.044153 / 0.128546 (-0.084394) | 0.014066 / 0.075646 (-0.061581) | 0.369848 / 0.419271 (-0.049424) | 0.061690 / 0.043533 (0.018157) | 0.439728 / 0.255139 (0.184590) | 0.484706 / 0.283200 (0.201506) | 0.034657 / 0.141683 (-0.107026) | 1.710591 / 1.452155 (0.258437) | 1.900225 / 1.492716 (0.407509) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.308837 / 0.018006 (0.290831) | 0.579561 / 0.000490 (0.579072) | 0.010163 / 0.000200 (0.009963) | 0.000613 / 0.000054 (0.000558) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028108 / 0.037411 (-0.009303) | 0.085072 / 0.014526 (0.070546) | 0.103375 / 0.176557 (-0.073182) | 0.173765 / 0.737135 (-0.563371) | 0.102460 / 0.296338 (-0.193879) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.602642 / 0.215209 (0.387433) | 5.582537 / 2.077655 (3.504882) | 2.405553 / 1.504120 (0.901434) | 2.057298 / 1.541195 (0.516103) | 2.223787 / 1.468490 (0.755297) | 0.846138 / 4.584777 (-3.738638) | 5.290306 / 3.745712 (1.544594) | 4.836066 / 5.269862 (-0.433795) | 2.951901 / 4.565676 (-1.613775) | 0.099432 / 0.424275 (-0.324843) | 0.009198 / 0.007607 (0.001591) | 0.731370 / 0.226044 (0.505325) | 6.663026 / 2.268929 (4.394098) | 3.200932 / 55.444624 (-52.243692) | 2.486654 / 6.876477 (-4.389823) | 2.833195 / 2.142072 (0.691123) | 0.989481 / 4.805227 (-3.815746) | 0.205176 / 6.500664 (-6.295488) | 0.073760 / 0.075469 (-0.001709) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.745494 / 1.841788 (-0.096294) | 24.649294 / 8.074308 (16.574986) | 22.312182 / 10.191392 (12.120790) | 0.245207 / 0.680424 (-0.435217) | 0.031971 / 0.534201 (-0.502230) | 0.495179 / 0.579283 (-0.084104) | 0.603233 / 0.434364 (0.168869) | 0.560906 / 0.540337 (0.020569) | 0.788292 / 
1.386936 (-0.598644) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008922 / 0.011353 (-0.002431) | 0.005203 / 0.011008 (-0.005805) | 0.074414 / 0.038508 (0.035906) | 0.077552 / 0.023109 (0.054443) | 0.547217 / 0.275898 (0.271319) | 0.625298 / 0.323480 (0.301818) | 0.006135 / 0.007986 (-0.001851) | 0.004163 / 0.004328 (-0.000165) | 0.078014 / 0.004250 (0.073764) | 0.064484 / 0.037052 (0.027431) | 0.562356 / 0.258489 (0.303867) | 0.643613 / 0.293841 (0.349772) | 0.050155 / 0.128546 (-0.078391) | 0.013665 / 0.075646 (-0.061981) | 0.090224 / 0.419271 (-0.329048) | 0.063852 / 0.043533 (0.020319) | 0.560914 / 0.255139 (0.305775) | 0.591531 / 0.283200 (0.308331) | 0.036491 / 0.141683 (-0.105192) | 1.670898 / 1.452155 (0.218743) | 1.783924 / 1.492716 (0.291208) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.312764 / 0.018006 (0.294758) | 0.611116 / 0.000490 (0.610626) | 0.006367 / 0.000200 (0.006167) | 0.000130 / 0.000054 (0.000075) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033967 / 0.037411 (-0.003445) | 0.101550 / 0.014526 (0.087025) | 0.116953 / 0.176557 (-0.059604) | 0.180061 / 0.737135 (-0.557075) | 0.115220 / 0.296338 (-0.181118) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.642110 / 0.215209 (0.426901) | 6.361381 / 2.077655 (4.283727) | 2.948175 / 1.504120 (1.444055) | 2.633935 / 1.541195 (1.092740) | 2.822150 / 1.468490 (1.353660) | 
0.931412 / 4.584777 (-3.653365) | 5.428540 / 3.745712 (1.682828) | 4.672920 / 5.269862 (-0.596941) | 3.102046 / 4.565676 (-1.463630) | 0.100825 / 0.424275 (-0.323450) | 0.009464 / 0.007607 (0.001857) | 0.774102 / 0.226044 (0.548058) | 7.715003 / 2.268929 (5.446074) | 3.987807 / 55.444624 (-51.456817) | 3.089129 / 6.876477 (-3.787347) | 3.333247 / 2.142072 (1.191174) | 1.012427 / 4.805227 (-3.792800) | 0.200662 / 6.500664 (-6.300002) | 0.072422 / 0.075469 (-0.003047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.680364 / 1.841788 (-0.161424) | 24.484576 / 8.074308 (16.410268) | 21.920990 / 10.191392 (11.729598) | 0.218604 / 0.680424 (-0.461820) | 0.035818 / 0.534201 (-0.498383) | 0.470648 / 0.579283 (-0.108635) | 0.585108 / 0.434364 (0.150744) | 0.539152 / 0.540337 (-0.001185) | 0.763999 / 1.386936 (-0.622937) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cfed1d09ed6c680085624d96eb99bfb2b0b27599 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006304 / 0.011353 (-0.005049) | 0.003884 / 0.011008 (-0.007125) | 0.084847 / 0.038508 (0.046339) | 0.069372 / 0.023109 (0.046263) | 0.318876 / 0.275898 (0.042978) | 0.344733 / 0.323480 (0.021253) | 0.005139 / 0.007986 (-0.002847) | 0.003203 / 0.004328 (-0.001125) | 0.065758 / 0.004250 (0.061507) | 0.054189 / 0.037052 (0.017137) | 0.317475 / 0.258489 (0.058986) | 0.359310 / 0.293841 (0.065469) | 0.030639 / 0.128546 (-0.097908) | 0.008657 / 0.075646 (-0.066989) | 0.289127 / 0.419271 (-0.130144) | 0.052344 / 0.043533 (0.008811) | 0.316122 / 0.255139 (0.060983) | 0.338339 / 0.283200 (0.055140) | 0.022677 / 0.141683 (-0.119006) | 1.551629 / 1.452155 (0.099474) | 1.617917 / 1.492716 (0.125201) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231067 / 0.018006 (0.213061) | 0.450559 / 0.000490 (0.450070) | 0.008484 / 0.000200 (0.008284) | 0.000234 
/ 0.000054 (0.000179) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027054 / 0.037411 (-0.010357) | 0.081560 / 0.014526 (0.067034) | 0.094162 / 0.176557 (-0.082395) | 0.148583 / 0.737135 (-0.588552) | 0.093596 / 0.296338 (-0.202742) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.388616 / 0.215209 (0.173407) | 3.874905 / 2.077655 (1.797251) | 1.915845 / 1.504120 (0.411725) | 1.746410 / 1.541195 (0.205215) | 1.828789 / 1.468490 (0.360299) | 0.483270 / 4.584777 (-4.101506) | 3.489157 / 3.745712 (-0.256555) | 3.190086 / 5.269862 (-2.079776) | 1.978023 / 4.565676 (-2.587653) | 0.056290 / 0.424275 (-0.367985) | 0.007585 / 0.007607 (-0.000022) | 0.467051 / 0.226044 (0.241007) | 4.665971 / 2.268929 (2.397043) | 2.418550 / 55.444624 (-53.026075) | 2.048338 / 6.876477 (-4.828139) | 2.225275 / 2.142072 (0.083203) | 0.576601 / 4.805227 (-4.228626) | 0.131960 / 6.500664 (-6.368704) | 0.060177 / 0.075469 (-0.015292) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249797 / 1.841788 (-0.591991) | 18.552939 / 8.074308 (10.478631) | 14.016616 / 10.191392 (3.825224) | 0.162869 / 0.680424 (-0.517555) | 0.018105 / 0.534201 (-0.516096) | 0.394838 / 0.579283 (-0.184445) | 0.403378 / 0.434364 (-0.030986) | 0.460931 / 0.540337 (-0.079407) | 0.637365 / 1.386936 (-0.749571) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006497 / 0.011353 (-0.004856) | 0.003928 / 0.011008 (-0.007080) | 0.063958 / 0.038508 (0.025450) | 0.069609 / 0.023109 (0.046500) | 0.401599 / 0.275898 (0.125701) | 0.428128 / 0.323480 (0.104648) | 0.005296 / 0.007986 (-0.002689) | 0.003332 / 0.004328 (-0.000996) | 0.063903 / 0.004250 (0.059652) | 0.056303 / 0.037052 (0.019250) | 0.400704 / 0.258489 (0.142214) | 0.435982 / 0.293841 (0.142141) | 0.032434 / 0.128546 (-0.096112) | 0.008570 / 0.075646 (-0.067077) | 0.070788 / 0.419271 (-0.348483) | 0.048252 / 0.043533 (0.004719) | 0.403269 / 0.255139 (0.148130) | 0.419796 / 0.283200 (0.136596) | 0.022598 / 0.141683 (-0.119085) | 1.481627 / 1.452155 (0.029472) | 1.578388 / 1.492716 (0.085672) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224552 / 0.018006 (0.206546) | 0.444059 / 0.000490 (0.443570) | 0.003757 / 0.000200 (0.003557) | 0.000225 / 0.000054 (0.000171) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032173 / 0.037411 (-0.005239) | 0.092562 / 0.014526 (0.078036) | 0.104972 / 0.176557 (-0.071584) | 0.156467 / 0.737135 (-0.580669) | 0.104274 / 0.296338 (-0.192065) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441693 / 0.215209 (0.226484) | 4.400217 / 2.077655 (2.322562) | 2.393862 / 1.504120 (0.889742) | 2.281178 / 1.541195 (0.739983) | 2.339895 / 1.468490 (0.871405) | 0.488734 / 4.584777 (-4.096043) | 3.523352 / 3.745712 (-0.222360) | 3.216761 / 5.269862 (-2.053101) | 2.007553 / 4.565676 (-2.558123) | 0.058050 / 0.424275 (-0.366225) | 0.007566 / 0.007607 (-0.000041) | 0.515439 / 0.226044 (0.289394) | 5.155086 / 2.268929 (2.886157) | 2.864958 / 55.444624 (-52.579666) | 2.592460 / 6.876477 (-4.284016) | 2.800449 / 2.142072 (0.658376) | 0.588441 / 4.805227 (-4.216786) | 0.131589 / 6.500664 (-6.369075) | 0.059075 / 0.075469 (-0.016394) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.353889 / 1.841788 (-0.487898) | 18.938285 / 8.074308 (10.863977) | 14.937141 / 10.191392 (4.745749) | 0.168811 / 0.680424 (-0.511613) | 0.020118 / 0.534201 (-0.514083) | 0.394791 / 0.579283 (-0.184492) | 0.414434 / 0.434364 (-0.019930) | 0.466821 / 0.540337 (-0.073517) | 0.629894 / 1.386936 (-0.757042) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#23921b08390db7dbb3186a8de40dc49a4066da76 \"CML watermark\")\n", "CI failures are unrelated", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005959 / 0.011353 (-0.005394) | 0.004164 / 0.011008 (-0.006844) | 0.082336 / 0.038508 (0.043828) | 0.070344 / 0.023109 (0.047234) | 0.348032 / 0.275898 (0.072134) | 0.366328 / 0.323480 (0.042848) | 0.003882 / 0.007986 (-0.004104) | 0.003619 / 0.004328 (-0.000709) | 0.063343 / 0.004250 (0.059093) | 0.056617 / 0.037052 (0.019564) | 0.351625 / 0.258489 (0.093136) | 0.395839 / 0.293841 (0.101998) | 0.030842 / 0.128546 (-0.097704) | 0.008363 / 0.075646 (-0.067284) | 0.300535 / 0.419271 (-0.118737) | 0.053303 / 0.043533 (0.009770) | 0.354782 / 0.255139 (0.099643) | 0.364918 / 0.283200 (0.081719) | 0.025365 / 0.141683 (-0.116318) | 1.555009 / 1.452155 (0.102854) | 1.597443 / 1.492716 (0.104727) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239808 / 0.018006 (0.221801) | 0.488164 / 0.000490 (0.487675) | 0.013183 / 0.000200 (0.012983) | 0.000483 / 0.000054 (0.000429) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027938 / 0.037411 (-0.009473) | 0.078521 / 0.014526 (0.063995) | 0.095498 / 0.176557 (-0.081059) | 0.150884 / 0.737135 (-0.586251) | 0.097577 / 0.296338 (-0.198762) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) 
| 0.384546 / 0.215209 (0.169337) | 4.037707 / 2.077655 (1.960053) | 1.940321 / 1.504120 (0.436201) | 1.716741 / 1.541195 (0.175546) | 1.837200 / 1.468490 (0.368710) | 0.502112 / 4.584777 (-4.082665) | 3.770452 / 3.745712 (0.024740) | 3.325691 / 5.269862 (-1.944171) | 2.015622 / 4.565676 (-2.550055) | 0.056246 / 0.424275 (-0.368029) | 0.007320 / 0.007607 (-0.000287) | 0.445553 / 0.226044 (0.219509) | 4.567233 / 2.268929 (2.298304) | 2.319531 / 55.444624 (-53.125093) | 1.968664 / 6.876477 (-4.907813) | 2.122349 / 2.142072 (-0.019724) | 0.573688 / 4.805227 (-4.231540) | 0.131410 / 6.500664 (-6.369254) | 0.062767 / 0.075469 (-0.012702) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255244 / 1.841788 (-0.586543) | 19.042480 / 8.074308 (10.968172) | 13.935342 / 10.191392 (3.743950) | 0.161259 / 0.680424 (-0.519165) | 0.020582 / 0.534201 (-0.513619) | 0.391365 / 0.579283 (-0.187918) | 0.417462 / 0.434364 (-0.016902) | 0.473121 / 0.540337 (-0.067216) | 0.674768 / 1.386936 (-0.712168) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006299 / 0.011353 (-0.005054) | 0.003969 / 0.011008 (-0.007040) | 0.063558 / 0.038508 (0.025050) | 0.073847 / 0.023109 (0.050738) | 0.407064 / 0.275898 (0.131166) | 0.440695 / 0.323480 (0.117215) | 0.005783 / 0.007986 (-0.002203) | 0.003517 / 0.004328 (-0.000812) | 0.065721 / 0.004250 (0.061470) | 0.056390 / 0.037052 (0.019338) | 0.419019 / 0.258489 (0.160530) | 0.450721 / 0.293841 (0.156880) | 0.034094 / 0.128546 (-0.094452) | 0.008594 / 0.075646 (-0.067052) | 0.069254 / 0.419271 (-0.350017) | 0.049218 / 0.043533 (0.005685) | 0.413312 / 0.255139 (0.158173) | 0.439454 / 0.283200 (0.156255) | 0.021481 / 0.141683 (-0.120202) | 1.517536 / 1.452155 (0.065382) | 1.530532 / 1.492716 (0.037815) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235392 / 0.018006 (0.217386) | 0.477371 / 0.000490 (0.476881) | 0.007070 / 0.000200 (0.006870) | 
0.000132 / 0.000054 (0.000077) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031909 / 0.037411 (-0.005502) | 0.092459 / 0.014526 (0.077933) | 0.105795 / 0.176557 (-0.070761) | 0.157745 / 0.737135 (-0.579390) | 0.104187 / 0.296338 (-0.192152) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424385 / 0.215209 (0.209176) | 4.445371 / 2.077655 (2.367716) | 2.423639 / 1.504120 (0.919519) | 2.188167 / 1.541195 (0.646972) | 2.171023 / 1.468490 (0.702532) | 0.483566 / 4.584777 (-4.101211) | 3.825702 / 3.745712 (0.079990) | 3.276350 / 5.269862 (-1.993512) | 2.063075 / 4.565676 (-2.502602) | 0.061628 / 0.424275 (-0.362647) | 0.008176 / 0.007607 (0.000569) | 0.506697 / 0.226044 (0.280653) | 5.067924 / 2.268929 (2.798995) | 2.785567 / 55.444624 (-52.659057) | 2.457340 / 6.876477 (-4.419137) | 2.599646 / 2.142072 (0.457574) | 0.581550 / 4.805227 (-4.223677) | 0.131712 / 6.500664 (-6.368952) | 0.058776 / 0.075469 (-0.016693) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.356639 / 1.841788 (-0.485148) | 20.103463 / 8.074308 (12.029155) | 14.481010 / 10.191392 (4.289618) | 0.162870 / 0.680424 (-0.517554) | 0.023197 / 0.534201 (-0.511004) | 0.413042 / 0.579283 (-0.166241) | 0.427494 / 0.434364 (-0.006870) | 0.508457 / 0.540337 (-0.031880) | 0.662412 / 1.386936 (-0.724524) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#05fe5c06d42f84408b933c2809acb9b7449cbbb3 \"CML watermark\")\n" ]
Fix cast from fixed size list to variable size list
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6243/reactions" }
PR_kwDODunzps5aclIy
{ "diff_url": "https://github.com/huggingface/datasets/pull/6243.diff", "html_url": "https://github.com/huggingface/datasets/pull/6243", "merged_at": "2023-09-19T17:53:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/6243.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6243" }
2023-09-15T14:23:33Z
https://api.github.com/repos/huggingface/datasets/issues/6243/comments
Fix #6242
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/6243/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6243/timeline
closed
false
6,243
null
2023-09-19T17:53:17Z
null
true
1,896,899,123
https://api.github.com/repos/huggingface/datasets/issues/6242
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6242/events
[]
null
2023-09-19T17:53:18Z
[]
https://github.com/huggingface/datasets/issues/6242
MEMBER
completed
null
null
[ "While this issue may seem specific, it led to a silent problem in my workflow that took days to diagnose. If this feature is not intended to be supported, an error should be raised when encountering this configuration to prevent such issues.", "Thanks for reporting! This is a MRE:\r\n\r\n```python\r\nimport pyarrow as pa\r\nfrom datasets.table import cast_array_to_feature\r\nfrom datasets import Sequence, Value\r\ndata = [\r\n [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],\r\n [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]],\r\n]\r\narr = pa.array(data, pa.list_(pa.list_(pa.float32(), 3)))\r\ncast_array_to_feature(arr, Sequence(Sequence(Value(\"float32\"))))\r\n```\r\n\r\nI've opened a PR with a fix." ]
Data alteration when loading dataset with unspecified inner sequence length
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6242/reactions" }
I_kwDODunzps5xEGIz
null
2023-09-14T16:12:45Z
https://api.github.com/repos/huggingface/datasets/issues/6242/comments
### Describe the bug When a dataset saved with a specified inner sequence length is loaded without specifying that length, the original data is altered and becomes inconsistent. ### Steps to reproduce the bug ```python from datasets import Dataset, Features, Value, Sequence, load_dataset # Repository ID repo_id = "my_repo_id" # Define features with a specific length of 3 for each inner sequence specified_features = Features({"key": Sequence(Sequence(Value("float32"), length=3))}) # Create a dataset with the specified features data = [ [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]], ] dataset = Dataset.from_dict({"key": data}, features=specified_features) # Push the dataset to the hub dataset.push_to_hub(repo_id) # Define features without specifying the length unspecified_features = Features({"key": Sequence(Sequence(Value("float32")))}) # Load the dataset from the hub with this new feature definition dataset = load_dataset(f"qgallouedec/{repo_id}", split="train", features=unspecified_features) # The obtained data is altered print(dataset.to_dict()) # {'key': [[[1.0], [2.0]], [[3.0], [4.0]]]} ``` ### Expected behavior ```python print(dataset.to_dict()) # {'key': [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]]} ``` ### Environment info - `datasets` version: 2.14.4 - Platform: Linux-6.2.0-32-generic-x86_64-with-glibc2.35 - Python version: 3.9.12 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
{ "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/qgallouedec", "id": 45557362, "login": "qgallouedec", "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "repos_url": "https://api.github.com/users/qgallouedec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "type": "User", "url": "https://api.github.com/users/qgallouedec" }
https://api.github.com/repos/huggingface/datasets/issues/6242/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6242/timeline
closed
false
6,242
null
2023-09-19T17:53:18Z
null
false
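For context on the record above (and the PR #6243 that closes it): the reported corruption comes down to how a fixed-size list array is converted to a variable-size list. Below is a minimal sketch of the offsets computation a correct conversion has to perform, assuming only `pyarrow` and `numpy` are installed; the `fixed`/`variable` names are illustrative, and this is not the PR's actual code.

```python
import numpy as np
import pyarrow as pa

# Data stored as a fixed-size list: every inner list has length 3.
fixed = pa.array(
    [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],
    type=pa.list_(pa.float32(), 3),
)

# A variable-size list needs explicit offsets; for a fixed-size list of
# size n they must be 0, n, 2n, ... Reusing the flat child values with
# these offsets preserves the data instead of altering it.
offsets = pa.array(np.arange(len(fixed) + 1) * fixed.type.list_size, pa.int32())
variable = pa.ListArray.from_arrays(offsets, fixed.values)

print(variable.to_pylist())  # [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
```

Getting these offsets wrong is exactly what produces the `{'key': [[[1.0], [2.0]], [[3.0], [4.0]]]}`-style output reported in the issue.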
1,896,429,694
https://api.github.com/repos/huggingface/datasets/issues/6241
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6241/events
[]
null
2023-09-15T15:57:10Z
[]
https://github.com/huggingface/datasets/pull/6241
COLLABORATOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006753 / 0.011353 (-0.004600) | 0.004027 / 0.011008 (-0.006982) | 0.084200 / 0.038508 (0.045692) | 0.072233 / 0.023109 (0.049124) | 0.361535 / 0.275898 (0.085637) | 0.386196 / 0.323480 (0.062716) | 0.004047 / 0.007986 (-0.003939) | 0.003416 / 0.004328 (-0.000912) | 0.064724 / 0.004250 (0.060474) | 0.055740 / 0.037052 (0.018688) | 0.360422 / 0.258489 (0.101933) | 0.399230 / 0.293841 (0.105389) | 0.031537 / 0.128546 (-0.097009) | 0.008630 / 0.075646 (-0.067016) | 0.289652 / 0.419271 (-0.129620) | 0.052881 / 0.043533 (0.009348) | 0.359538 / 0.255139 (0.104399) | 0.379410 / 0.283200 (0.096211) | 0.024539 / 0.141683 (-0.117144) | 1.470891 / 1.452155 (0.018736) | 1.578879 / 1.492716 (0.086163) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239200 / 0.018006 (0.221194) | 0.462100 / 0.000490 (0.461610) | 0.009055 / 0.000200 (0.008856) | 0.000406 / 0.000054 (0.000352) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028736 / 0.037411 (-0.008675) | 0.088051 / 0.014526 (0.073525) | 0.098101 / 0.176557 (-0.078456) | 0.152399 / 0.737135 (-0.584737) | 0.098776 / 0.296338 (-0.197563) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401761 / 0.215209 (0.186552) | 4.014143 / 2.077655 (1.936488) | 
2.033255 / 1.504120 (0.529135) | 1.855347 / 1.541195 (0.314152) | 1.996144 / 1.468490 (0.527654) | 0.488545 / 4.584777 (-4.096232) | 3.712030 / 3.745712 (-0.033682) | 3.439725 / 5.269862 (-1.830137) | 2.119289 / 4.565676 (-2.446388) | 0.057523 / 0.424275 (-0.366752) | 0.007780 / 0.007607 (0.000173) | 0.479522 / 0.226044 (0.253477) | 4.798218 / 2.268929 (2.529290) | 2.543816 / 55.444624 (-52.900809) | 2.180392 / 6.876477 (-4.696085) | 2.427195 / 2.142072 (0.285122) | 0.602071 / 4.805227 (-4.203156) | 0.133450 / 6.500664 (-6.367214) | 0.061975 / 0.075469 (-0.013494) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.250040 / 1.841788 (-0.591748) | 19.532327 / 8.074308 (11.458019) | 14.200298 / 10.191392 (4.008906) | 0.165165 / 0.680424 (-0.515259) | 0.018326 / 0.534201 (-0.515875) | 0.389788 / 0.579283 (-0.189495) | 0.419301 / 0.434364 (-0.015063) | 0.452645 / 0.540337 (-0.087693) | 0.643409 / 1.386936 (-0.743527) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007040 / 0.011353 (-0.004313) | 0.004157 / 0.011008 (-0.006851) | 0.065439 / 0.038508 (0.026931) | 0.083210 / 0.023109 (0.060101) | 0.406707 / 0.275898 (0.130809) | 0.442759 / 0.323480 (0.119279) | 0.006321 / 0.007986 (-0.001665) | 0.003684 / 0.004328 (-0.000645) | 0.064517 / 0.004250 (0.060266) | 0.060676 / 0.037052 (0.023624) | 0.413395 / 0.258489 (0.154906) | 0.446776 / 0.293841 (0.152935) | 0.032542 / 0.128546 (-0.096004) | 0.008614 / 0.075646 (-0.067033) | 0.071760 / 0.419271 (-0.347511) | 0.049646 / 0.043533 (0.006113) | 0.402409 / 0.255139 (0.147270) | 0.422775 / 0.283200 (0.139575) | 0.024846 / 0.141683 (-0.116836) | 1.522915 / 1.452155 (0.070761) | 1.566518 / 1.492716 (0.073802) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234478 / 0.018006 (0.216472) | 0.461318 / 0.000490 (0.460828) | 0.006304 / 0.000200 (0.006105) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036904 / 0.037411 (-0.000508) | 0.102144 / 0.014526 (0.087619) | 0.108985 / 0.176557 (-0.067572) | 0.162609 / 0.737135 (-0.574526) | 0.110295 / 0.296338 (-0.186044) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438735 / 0.215209 (0.223526) | 4.377602 / 2.077655 (2.299948) | 2.375305 / 1.504120 (0.871185) | 2.215877 / 1.541195 (0.674682) | 2.317468 / 1.468490 (0.848978) | 0.495137 / 4.584777 (-4.089640) | 3.726323 / 3.745712 (-0.019389) | 3.493785 / 5.269862 (-1.776077) | 2.177891 / 4.565676 (-2.387785) | 0.058975 / 0.424275 (-0.365300) | 0.007897 / 0.007607 (0.000290) | 0.514063 / 0.226044 (0.288019) | 5.132714 / 2.268929 (2.863786) | 2.914125 / 55.444624 (-52.530499) | 2.532912 / 6.876477 (-4.343564) | 2.776438 / 2.142072 (0.634365) | 0.624831 / 4.805227 (-4.180396) | 0.135023 / 6.500664 (-6.365641) | 0.062040 / 0.075469 (-0.013429) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.359970 / 1.841788 (-0.481818) | 20.816464 / 8.074308 (12.742156) | 16.103544 / 10.191392 (5.912152) | 0.149120 / 0.680424 (-0.531304) | 0.020279 / 0.534201 (-0.513922) | 0.408727 / 0.579283 (-0.170556) | 0.436191 / 0.434364 (0.001827) | 0.485056 / 0.540337 (-0.055281) | 0.737727 / 1.386936 (-0.649209) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d15280f435b7e27c9350a0cc37a07dbc5e2ea9ca \"CML watermark\")\n", "CI failures are unrelated", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008102 / 0.011353 (-0.003251) | 0.004886 / 0.011008 (-0.006123) | 0.090482 / 0.038508 (0.051974) | 0.071594 / 0.023109 (0.048485) | 0.428678 / 0.275898 (0.152780) | 0.442179 / 0.323480 (0.118699) | 0.004329 / 0.007986 (-0.003657) | 0.003756 / 0.004328 (-0.000573) | 0.087125 / 0.004250 (0.082874) | 0.055159 / 0.037052 (0.018107) | 0.437646 / 0.258489 (0.179157) | 0.446665 / 0.293841 (0.152824) | 0.046402 / 0.128546 (-0.082145) | 0.014248 / 0.075646 (-0.061398) | 0.331401 / 0.419271 (-0.087871) | 0.062010 / 0.043533 (0.018478) | 0.434774 / 0.255139 (0.179635) | 0.441063 / 0.283200 (0.157863) | 0.037424 / 0.141683 (-0.104258) | 1.720276 / 1.452155 (0.268121) | 1.731491 / 1.492716 (0.238775) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.302935 / 0.018006 (0.284929) | 0.590556 / 0.000490 (0.590067) | 0.014473 / 0.000200 (0.014274) | 0.000712 / 0.000054 (0.000658) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031289 / 0.037411 (-0.006122) | 0.091175 / 0.014526 (0.076649) | 0.112895 / 0.176557 (-0.063661) | 0.199558 / 0.737135 (-0.537577) | 0.113397 / 0.296338 (-0.182942) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.571586 / 0.215209 (0.356377) | 5.706894 / 2.077655 (3.629240) | 2.512701 / 1.504120 (1.008581) | 2.151705 / 1.541195 (0.610510) | 2.252738 / 1.468490 (0.784248) | 0.857524 / 4.584777 (-3.727253) | 5.189027 / 3.745712 (1.443315) | 4.464979 / 5.269862 (-0.804882) | 2.787486 / 4.565676 (-1.778190) | 0.090161 / 0.424275 (-0.334115) | 0.008649 / 0.007607 (0.001042) | 0.703367 / 0.226044 (0.477322) | 7.128971 / 2.268929 (4.860043) | 3.437475 / 55.444624 (-52.007149) | 2.562291 / 6.876477 (-4.314186) | 2.753419 / 2.142072 (0.611346) | 0.981964 / 4.805227 (-3.823263) | 0.194533 / 6.500664 (-6.306131) | 0.069659 / 0.075469 (-0.005810) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.510356 / 1.841788 (-0.331431) | 22.414117 / 8.074308 (14.339809) | 20.325418 / 10.191392 (10.134025) | 0.226823 / 0.680424 (-0.453601) | 0.029123 / 0.534201 (-0.505078) | 0.454656 / 0.579283 (-0.124627) | 0.559588 / 0.434364 (0.125224) | 
0.547386 / 0.540337 (0.007048) | 0.770169 / 1.386936 (-0.616767) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010167 / 0.011353 (-0.001186) | 0.005164 / 0.011008 (-0.005844) | 0.094897 / 0.038508 (0.056388) | 0.078027 / 0.023109 (0.054918) | 0.474442 / 0.275898 (0.198544) | 0.503362 / 0.323480 (0.179882) | 0.006988 / 0.007986 (-0.000998) | 0.005369 / 0.004328 (0.001041) | 0.079547 / 0.004250 (0.075297) | 0.059382 / 0.037052 (0.022329) | 0.468759 / 0.258489 (0.210270) | 0.566780 / 0.293841 (0.272939) | 0.050791 / 0.128546 (-0.077755) | 0.013191 / 0.075646 (-0.062455) | 0.086086 / 0.419271 (-0.333186) | 0.060399 / 0.043533 (0.016866) | 0.492985 / 0.255139 (0.237846) | 0.509139 / 0.283200 (0.225940) | 0.034537 / 0.141683 (-0.107146) | 1.699166 / 1.452155 (0.247011) | 1.789781 / 1.492716 (0.297065) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278776 / 0.018006 (0.260769) | 0.615877 / 0.000490 (0.615387) | 0.009062 / 0.000200 (0.008862) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032931 / 0.037411 (-0.004481) | 0.094796 / 0.014526 (0.080270) | 0.126697 / 0.176557 (-0.049859) | 0.168172 / 0.737135 (-0.568963) | 0.113906 / 0.296338 (-0.182433) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.602378 / 0.215209 (0.387169) | 5.987708 / 2.077655 (3.910054) | 2.800339 / 1.504120 (1.296219) | 2.474127 / 1.541195 
(0.932932) | 2.502387 / 1.468490 (1.033897) | 0.808147 / 4.584777 (-3.776630) | 5.212691 / 3.745712 (1.466979) | 4.479452 / 5.269862 (-0.790409) | 2.831960 / 4.565676 (-1.733717) | 0.086777 / 0.424275 (-0.337498) | 0.009492 / 0.007607 (0.001885) | 0.716848 / 0.226044 (0.490803) | 7.099904 / 2.268929 (4.830975) | 3.794708 / 55.444624 (-51.649916) | 2.859826 / 6.876477 (-4.016650) | 3.109673 / 2.142072 (0.967600) | 0.936776 / 4.805227 (-3.868451) | 0.195152 / 6.500664 (-6.305512) | 0.074184 / 0.075469 (-0.001285) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.585419 / 1.841788 (-0.256369) | 22.420377 / 8.074308 (14.346068) | 20.761533 / 10.191392 (10.570141) | 0.228480 / 0.680424 (-0.451943) | 0.030944 / 0.534201 (-0.503257) | 0.444717 / 0.579283 (-0.134566) | 0.579632 / 0.434364 (0.145268) | 0.521669 / 0.540337 (-0.018669) | 0.748274 / 1.386936 (-0.638662) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#94e07965a400e6901f12e6f0f25c7090656c828c \"CML watermark\")\n" ]
Remove unused global variables in `audio.py`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6241/reactions" }
PR_kwDODunzps5aVfl-
{ "diff_url": "https://github.com/huggingface/datasets/pull/6241.diff", "html_url": "https://github.com/huggingface/datasets/pull/6241", "merged_at": "2023-09-15T15:46:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/6241.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6241" }
2023-09-14T12:06:32Z
https://api.github.com/repos/huggingface/datasets/issues/6241/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/6241/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6241/timeline
closed
false
6,241
null
2023-09-15T15:46:07Z
null
true
1,895,723,888
https://api.github.com/repos/huggingface/datasets/issues/6240
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6240/events
[]
null
2023-09-14T23:54:42Z
[]
https://github.com/huggingface/datasets/issues/6240
NONE
completed
null
null
[ "What type of dataset are you using in this script? `torch.utils.data.Dataset` or `datasets.Dataset`? Please share the `datasets` package version if it's the latter. Otherwise, it's better to move this issue to the `accelerate` repo.", "Very sorry, I thought I had a repo in `accelerate!`\r\nI will close this issue and repo the issue in the appropriate place." ]
Dataloader stuck on multiple GPUs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6240/reactions" }
I_kwDODunzps5w_nNw
null
2023-09-14T05:30:30Z
https://api.github.com/repos/huggingface/datasets/issues/6240/comments
### Describe the bug I am trying to fine-tune CLIP with my code. When I run it on multiple GPUs using accelerate, I encounter the following phenomenon. - The validation dataloader gets stuck in the 2nd epoch, but only on multi-GPU. Specifically, once the "for inputs in valid_loader:" loop finishes, it does not proceed to the next step. The train_loader loop completes, and both train and valid work correctly in the first epoch. The accelerate command used is as follows. `accelerate launch --multi_gpu --num_processes=2 {script_name.py} {--arg1} {--arg2} ...` - This does not happen when a single GPU is used. `CUDA_VISIBLE_DEVICES="0" accelerate launch {script_name.py} --arg1 --arg2 ...` - Setting num_workers=0 in the dataloader did not change the result. ### Steps to reproduce the bug 1. The code for fine-tuning regular CLIP was updated for accelerate. 2. Run the code with the accelerate command as `accelerate launch --multi_gpu --num_processes=2 {script_name.py} {--arg1} {--arg2} ...` and the above problem will occur. 3. With `CUDA_VISIBLE_DEVICES="0" accelerate launch {script_name.py} --arg1 --arg2 ...`, it works fine. ### Expected behavior It should end normally, as when run on a single GPU. ### Environment info Since `datasets-cli env` did not work, the environment is described below. - OS: Ubuntu 22.04 with Docker - Docker: 24.0.5, build ced0996 - Python: 3.10.12 - torch==2.0.1 - accelerate==0.21.0 - transformers==4.33.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/40049003?v=4", "events_url": "https://api.github.com/users/kuri54/events{/privacy}", "followers_url": "https://api.github.com/users/kuri54/followers", "following_url": "https://api.github.com/users/kuri54/following{/other_user}", "gists_url": "https://api.github.com/users/kuri54/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kuri54", "id": 40049003, "login": "kuri54", "node_id": "MDQ6VXNlcjQwMDQ5MDAz", "organizations_url": "https://api.github.com/users/kuri54/orgs", "received_events_url": "https://api.github.com/users/kuri54/received_events", "repos_url": "https://api.github.com/users/kuri54/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kuri54/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kuri54/subscriptions", "type": "User", "url": "https://api.github.com/users/kuri54" }
https://api.github.com/repos/huggingface/datasets/issues/6240/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6240/timeline
closed
false
6,240
null
2023-09-14T23:54:42Z
null
false
1,895,349,382
https://api.github.com/repos/huggingface/datasets/issues/6239
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6239/events
[]
null
2023-09-15T14:32:10Z
[]
https://github.com/huggingface/datasets/issues/6239
NONE
completed
null
null
[ "I think this is the same issue as https://github.com/huggingface/datasets/issues/4776. Maybe installing `ffmpeg` can fix it:\r\n```python\r\nadd-apt-repository -y ppa:savoury1/ffmpeg4\r\napt-get -qq install -y ffmpeg\r\n```\r\n\r\nHowever, the best solution is to use a newer version of `datasets`. In the recent releases, we've replaced `torchaudio` with `soundfile`, which is easier to install and faster.", "@mariosasko \r\nThanks for your help" ]
Load local audio data doesn't work
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6239/reactions" }
I_kwDODunzps5w-LyG
null
2023-09-13T22:30:01Z
https://api.github.com/repos/huggingface/datasets/issues/6239/comments
### Describe the bug I get a RuntimeError from the following code: ```python audio_dataset = Dataset.from_dict({"audio": ["/kaggle/input/bengaliai-speech/train_mp3s/000005f3362c.mp3"]}).cast_column("audio", Audio()) audio_dataset[0] ``` ### Traceback <details> ```python RuntimeError Traceback (most recent call last) Cell In[33], line 1 ----> 1 train_dataset[0] File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1764, in Dataset.__getitem__(self, key) 1762 def __getitem__(self, key): # noqa: F811 1763 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" -> 1764 return self._getitem( 1765 key, 1766 ) File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1749, in Dataset._getitem(self, key, decoded, **kwargs) 1747 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs) 1748 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) -> 1749 formatted_output = format_table( 1750 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 1751 ) 1752 return formatted_output File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns) 530 python_formatter = PythonFormatter(features=None) 531 if format_columns is None: --> 532 return formatter(pa_table, query_type=query_type) 533 elif query_type == "column": 534 if key in format_columns: File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type) 279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]: 280 if query_type == "row": --> 281 return self.format_row(pa_table) 282 elif query_type == "column": 283 return self.format_column(pa_table) File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:312, in PythonFormatter.format_row(self, pa_table) 310 row = self.python_arrow_extractor().extract_row(pa_table) 311 if self.decoded: --> 312 row = self.python_features_decoder.decode_row(row) 313 return row File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:221, in PythonFeaturesDecoder.decode_row(self, row) 220 def decode_row(self, row: dict) -> dict: --> 221 return self.features.decode_example(row) if self.features else row File /opt/conda/lib/python3.10/site-packages/datasets/features/features.py:1386, in Features.decode_example(self, example) 1376 def decode_example(self, example: dict): 1377 """Decode example with custom feature decoding. 1378 1379 Args: (...) 1383 :obj:`dict[str, Any]` 1384 """ -> 1386 return { 1387 column_name: decode_nested_example(feature, value) 1388 if self._column_requires_decoding[column_name] 1389 else value 1390 for column_name, (feature, value) in zip_dict( 1391 {key: value for key, value in self.items() if key in example}, example 1392 ) 1393 } File /opt/conda/lib/python3.10/site-packages/datasets/features/features.py:1387, in <dictcomp>(.0) 1376 def decode_example(self, example: dict): 1377 """Decode example with custom feature decoding. 1378 1379 Args: (...) 
1383 :obj:`dict[str, Any]` 1384 """ 1386 return { -> 1387 column_name: decode_nested_example(feature, value) 1388 if self._column_requires_decoding[column_name] 1389 else value 1390 for column_name, (feature, value) in zip_dict( 1391 {key: value for key, value in self.items() if key in example}, example 1392 ) 1393 } File /opt/conda/lib/python3.10/site-packages/datasets/features/features.py:1087, in decode_nested_example(schema, obj) 1085 # Object with special decoding: 1086 elif isinstance(schema, (Audio, Image)): -> 1087 return schema.decode_example(obj) if obj is not None else None 1088 return obj File /opt/conda/lib/python3.10/site-packages/datasets/features/audio.py:103, in Audio.decode_example(self, value) 101 raise ValueError(f"An audio sample should have one of 'path' or 'bytes' but both are None in {value}.") 102 elif path is not None and path.endswith("mp3"): --> 103 array, sampling_rate = self._decode_mp3(file if file else path) 104 elif path is not None and path.endswith("opus"): 105 if file: File /opt/conda/lib/python3.10/site-packages/datasets/features/audio.py:241, in Audio._decode_mp3(self, path_or_file) 238 except RuntimeError as err: 239 raise ImportError("To support decoding 'mp3' audio files, please install 'sox'.") from err --> 241 array, sampling_rate = torchaudio.load(path_or_file, format="mp3") 242 if self.sampling_rate and self.sampling_rate != sampling_rate: 243 if not hasattr(self, "_resampler") or self._resampler.orig_freq != sampling_rate: File /opt/conda/lib/python3.10/site-packages/torchaudio/backend/sox_io_backend.py:256, in load(filepath, frame_offset, num_frames, normalize, channels_first, format) 254 if ret is not None: 255 return ret --> 256 return _fallback_load(filepath, frame_offset, num_frames, normalize, channels_first, format) File /opt/conda/lib/python3.10/site-packages/torchaudio/backend/sox_io_backend.py:30, in _fail_load(filepath, frame_offset, num_frames, normalize, channels_first, format) 22 def _fail_load( 23 filepath: str, 24 frame_offset: int = 0, (...) 28 format: Optional[str] = None, 29 ) -> Tuple[torch.Tensor, int]: ---> 30 raise RuntimeError("Failed to load audio from {}".format(filepath)) RuntimeError: Failed to load audio from /kaggle/input/bengaliai-speech/train_mp3s/000005f3362c.mp3 ``` </details> ### Steps to reproduce the bug 1. Create a custom dataset using local files of type mp3. 2. Try to read the first audio item. ### Expected behavior Expected output ```python audio_dataset[0]["audio"] {'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414, 0. , 0. ], dtype=float32), 'path': 'path/to/audio_1', 'sampling_rate': 16000} ``` ### Environment info N/A
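Following the fix suggested in the comments (upgrading `datasets` so that mp3 decoding goes through `soundfile` rather than `torchaudio`), a minimal re-check could look like the sketch below; the file path is the reporter's and purely illustrative:

```python
# pip install -U datasets soundfile
from datasets import Dataset, Audio

audio_dataset = Dataset.from_dict(
    {"audio": ["/kaggle/input/bengaliai-speech/train_mp3s/000005f3362c.mp3"]}
).cast_column("audio", Audio(sampling_rate=16_000))

sample = audio_dataset[0]["audio"]  # decoding happens on access
print(sample["sampling_rate"], sample["array"].shape)
```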
{ "avatar_url": "https://avatars.githubusercontent.com/u/554032?v=4", "events_url": "https://api.github.com/users/abodacs/events{/privacy}", "followers_url": "https://api.github.com/users/abodacs/followers", "following_url": "https://api.github.com/users/abodacs/following{/other_user}", "gists_url": "https://api.github.com/users/abodacs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abodacs", "id": 554032, "login": "abodacs", "node_id": "MDQ6VXNlcjU1NDAzMg==", "organizations_url": "https://api.github.com/users/abodacs/orgs", "received_events_url": "https://api.github.com/users/abodacs/received_events", "repos_url": "https://api.github.com/users/abodacs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abodacs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abodacs/subscriptions", "type": "User", "url": "https://api.github.com/users/abodacs" }
https://api.github.com/repos/huggingface/datasets/issues/6239/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6239/timeline
closed
false
6,239
null
2023-09-15T14:32:10Z
null
false
1,895,207,828
https://api.github.com/repos/huggingface/datasets/issues/6238
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6238/events
[]
null
2023-09-17T07:05:07Z
[]
https://github.com/huggingface/datasets/issues/6238
NONE
completed
null
null
[ "`filter` treats the function's output as a (selection) mask - `True` keeps the sample, and `False` drops it. In your case, `bool(0)` evaluates to `False`, so dropping the first sample is the correct behavior.", "Oh gosh! 🤦 I totally misunderstood the API! My apologies!" ]
`dataset.filter` ALWAYS removes the first item from the dataset when using batched=True
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6238/reactions" }
I_kwDODunzps5w9pOU
null
2023-09-13T20:20:37Z
https://api.github.com/repos/huggingface/datasets/issues/6238/comments
### Describe the bug If you call batched=True when calling `filter`, the first item is _always_ filtered out, regardless of the filter condition. ### Steps to reproduce the bug Here's a minimal example: ```python from datasets import Dataset def filter_batch_always_true(batch, indices): print("First index being passed into this filter function: ", indices[0]) return indices # Keep all indices data = {"value": list(range(10))} dataset = Dataset.from_dict(data) filtered_dataset = dataset.filter(filter_batch_always_true, with_indices=True, batched=True) print("Length of original dataset: ", len(dataset)) print("Length of filtered_dataset: ", len(filtered_dataset)) print("Is equal to original? ", len(filtered_dataset) == len(dataset)) print("First item of filtered dataset: ", filtered_dataset[0]) print("Last item of filtered dataset: ", filtered_dataset[-1]) ``` prints: ``` First index being passed into this filter function: 0 Length of original dataset: 10 Length of filtered_dataset: 9 Is equal to original? False First item of filtered dataset: {'value': 1} Last item of filtered dataset: {'value': 9} ``` ### Expected behavior Filter should respect the filter condition. ### Environment info - `datasets` version: 2.14.4 - Platform: macOS-13.5-arm64-arm-64bit - Python version: 3.9.18 - Huggingface_hub version: 0.17.1 - PyArrow version: 10.0.1 - Pandas version: 2.0.2
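As the comment explains, `filter` expects one boolean per example rather than a list of indices; a corrected version of the reporter's function that really keeps every row (including index 0) would be:

```python
from datasets import Dataset

def keep_everything(batch, indices):
    # one boolean per example; True keeps the row
    return [True] * len(indices)

dataset = Dataset.from_dict({"value": list(range(10))})
filtered = dataset.filter(keep_everything, with_indices=True, batched=True)
assert len(filtered) == len(dataset)  # index 0 is kept
```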
{ "avatar_url": "https://avatars.githubusercontent.com/u/1330693?v=4", "events_url": "https://api.github.com/users/Taytay/events{/privacy}", "followers_url": "https://api.github.com/users/Taytay/followers", "following_url": "https://api.github.com/users/Taytay/following{/other_user}", "gists_url": "https://api.github.com/users/Taytay/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Taytay", "id": 1330693, "login": "Taytay", "node_id": "MDQ6VXNlcjEzMzA2OTM=", "organizations_url": "https://api.github.com/users/Taytay/orgs", "received_events_url": "https://api.github.com/users/Taytay/received_events", "repos_url": "https://api.github.com/users/Taytay/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Taytay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Taytay/subscriptions", "type": "User", "url": "https://api.github.com/users/Taytay" }
https://api.github.com/repos/huggingface/datasets/issues/6238/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6238/timeline
closed
false
6,238
null
2023-09-17T07:05:07Z
null
false
1,893,822,321
https://api.github.com/repos/huggingface/datasets/issues/6237
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6237/events
[]
null
2023-09-19T21:54:58Z
[]
https://github.com/huggingface/datasets/issues/6237
NONE
completed
null
null
[ "[This](https://huggingface.co/docs/datasets/nlp_process#map) is the most performant way to tokenize a dataset (`batched=True, num_proc=None, return_tensors=\"np\"`) \r\n\r\nIf`tokenizer.is_fast` returns `True`, `num_proc` must be `None/1` to benefit from the fast tokenizers' parallelism (the fast tokenizers are implemented in Rust, and Rust multi-threading doesn't work well with Python multi-processing)" ]
Tokenization with multiple workers is too slow
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6237/reactions" }
I_kwDODunzps5w4W9x
null
2023-09-13T06:18:34Z
https://api.github.com/repos/huggingface/datasets/issues/6237/comments
I am trying to tokenize a few million documents with multiple workers, but the tokenization process is taking forever. Code snippet: ``` raw_datasets.map( encode_function, batched=False, num_proc=args.preprocessing_num_workers, load_from_cache_file=not args.overwrite_cache, remove_columns=[name for name in raw_datasets["train"].column_names if name not in ["input_ids", "labels", "attention_mask"]], desc="Tokenizing data", ) ``` Details: ``` transformers==4.28.0.dev0 datasets==4.28.0.dev0 preprocessing_num_workers==48 ``` tokenizer == decapoda-research/llama-7b-hf
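Per the comment above, a sketch of the faster setup, batched mapping with a fast tokenizer and no Python multiprocessing, might look like this; the model and dataset names are illustrative, not taken from the report:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
assert tokenizer.is_fast  # Rust tokenizer: parallel on its own

dataset = load_dataset("imdb", split="train")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True),
    batched=True,    # pass whole batches to the tokenizer
    num_proc=None,   # avoid Python multiprocessing with fast tokenizers
    remove_columns=["text"],
)
```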
{ "avatar_url": "https://avatars.githubusercontent.com/u/25720695?v=4", "events_url": "https://api.github.com/users/macabdul9/events{/privacy}", "followers_url": "https://api.github.com/users/macabdul9/followers", "following_url": "https://api.github.com/users/macabdul9/following{/other_user}", "gists_url": "https://api.github.com/users/macabdul9/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/macabdul9", "id": 25720695, "login": "macabdul9", "node_id": "MDQ6VXNlcjI1NzIwNjk1", "organizations_url": "https://api.github.com/users/macabdul9/orgs", "received_events_url": "https://api.github.com/users/macabdul9/received_events", "repos_url": "https://api.github.com/users/macabdul9/repos", "site_admin": false, "starred_url": "https://api.github.com/users/macabdul9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/macabdul9/subscriptions", "type": "User", "url": "https://api.github.com/users/macabdul9" }
https://api.github.com/repos/huggingface/datasets/issues/6237/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6237/timeline
closed
false
6,237
null
2023-09-19T21:54:58Z
null
false
1,893,648,480
https://api.github.com/repos/huggingface/datasets/issues/6236
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6236/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2023-09-18T01:11:21Z
[]
https://github.com/huggingface/datasets/issues/6236
NONE
null
null
null
[ "cc @Rocketknight1 ", "Hey! You can implement this yourself, just:\r\n\r\n1) Create the dataset with `to_tf_dataset()` with `shuffle=False`\r\n2) Add an `unbatch()` at the end (or use batch_size=1)\r\n3) Add a `shuffle()` to the resulting dataset with your desired buffer size\r\n4) Add a `batch()` at the end again to re-batch your dataset.\r\n\r\nNote that the way we construct datasets in `to_tf_dataset()`, we don't actually shuffle the entire dataset in-memory, using `tf.data.Dataset.shuffle()`! Instead, we shuffle an index array and then load from the dataset with that. This means that shuffling with `tf.data.Dataset.shuffle()` will probably be slower and use more memory than our approach - I don't think adding the option for smaller shuffle buffers will actually save you memory on this!", "Thanks for your reply! @Rocketknight1 \r\n\"We don't actually shuffle the entire dataset in-memory, using tf.data.Dataset.shuffle()! Instead, we shuffle an index array and then load from the dataset with that.\"\r\nIn such case, there will be random access to dataset data during shuffling. When the dataset is large, the performance can be X10 times slow. I have tried many ways with to_tf_dataset() trying to achieve comparable performance with tf.data.Dataset().shuffle(buffer_size).batch(). But the performance with to_tf_dataset() is still slow. \r\n" ]
Support buffer shuffle for to_tf_dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6236/reactions" }
I_kwDODunzps5w3shg
null
2023-09-13T03:19:44Z
https://api.github.com/repos/huggingface/datasets/issues/6236/comments
### Feature request I'm using to_tf_dataset to convert a large dataset to tf.data.Dataset and Keras fit to train a model. Currently, to_tf_dataset only supports a full-size shuffle, which can be very slow on large datasets. tf.data.Dataset supports buffered shuffling by default. shuffle( buffer_size, seed=None, reshuffle_each_iteration=None, name=None ) ### Motivation I'm very frustrated to find that loading with shuffling is very slow on large datasets. It seems impossible to shuffle before training with Keras on a big dataset. ### Your contribution NA
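A sketch of the four-step workaround from the comments (build the dataset unshuffled, unbatch, shuffle with a buffer, re-batch) might look like this; the dataset and column names are placeholders:

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
tf_ds = ds.to_tf_dataset(columns=["label"], batch_size=64, shuffle=False)

buffered = (
    tf_ds.unbatch()    # back to individual examples
    .shuffle(10_000)   # buffer shuffle instead of a full-dataset shuffle
    .batch(64)         # re-batch for Keras fit()
)
```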
{ "avatar_url": "https://avatars.githubusercontent.com/u/7635551?v=4", "events_url": "https://api.github.com/users/EthanRock/events{/privacy}", "followers_url": "https://api.github.com/users/EthanRock/followers", "following_url": "https://api.github.com/users/EthanRock/following{/other_user}", "gists_url": "https://api.github.com/users/EthanRock/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/EthanRock", "id": 7635551, "login": "EthanRock", "node_id": "MDQ6VXNlcjc2MzU1NTE=", "organizations_url": "https://api.github.com/users/EthanRock/orgs", "received_events_url": "https://api.github.com/users/EthanRock/received_events", "repos_url": "https://api.github.com/users/EthanRock/repos", "site_admin": false, "starred_url": "https://api.github.com/users/EthanRock/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EthanRock/subscriptions", "type": "User", "url": "https://api.github.com/users/EthanRock" }
https://api.github.com/repos/huggingface/datasets/issues/6236/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6236/timeline
open
false
6,236
null
null
null
false
1,893,337,083
https://api.github.com/repos/huggingface/datasets/issues/6235
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6235/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2023-09-12T21:51:08Z
[]
https://github.com/huggingface/datasets/issues/6235
NONE
null
null
null
[]
Support multiprocessing for download/extract nestedly
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6235/reactions" }
I_kwDODunzps5w2gf7
null
2023-09-12T21:51:08Z
https://api.github.com/repos/huggingface/datasets/issues/6235/comments
### Feature request Multiprocessing for download/extract is currently not applied nestedly. For example, when processing SlimPajama, there are only 3 processes (for train/test/val), while there are many files inside these 3 folders ``` Downloading data files #0: 0%| | 0/1 [00:00<?, ?obj/s] Downloading data files #1: 0%| | 0/1 [00:00<?, ?obj/s] Downloading data files #2: 0%| | 0/1 [00:00<?, ?obj/s] Extracting data files #0: 0%| | 0/1 [00:00<?, ?obj/s] Extracting data files #1: 0%| | 0/1 [00:00<?, ?obj/s] Extracting data files #2: 0%| | 0/1 [00:00<?, ?obj/s] ``` ### Motivation Speed up dataset loading. ### Your contribution I can help test the feature
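For reference, recent `datasets` releases expose a `num_proc` argument on `load_dataset` that parallelizes download and preparation, but, as the progress bars above suggest, the work is split at a coarse granularity (one job per split here) rather than per file, which is what this request asks for; a minimal invocation would be:

```python
from datasets import load_dataset

# num_proc parallelizes download/prepare jobs, but not nestedly:
# here that means one job per split, not one per file inside a split
ds = load_dataset("cerebras/SlimPajama-627B", num_proc=3)
```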
{ "avatar_url": "https://avatars.githubusercontent.com/u/22725729?v=4", "events_url": "https://api.github.com/users/hgt312/events{/privacy}", "followers_url": "https://api.github.com/users/hgt312/followers", "following_url": "https://api.github.com/users/hgt312/following{/other_user}", "gists_url": "https://api.github.com/users/hgt312/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hgt312", "id": 22725729, "login": "hgt312", "node_id": "MDQ6VXNlcjIyNzI1NzI5", "organizations_url": "https://api.github.com/users/hgt312/orgs", "received_events_url": "https://api.github.com/users/hgt312/received_events", "repos_url": "https://api.github.com/users/hgt312/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hgt312/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hgt312/subscriptions", "type": "User", "url": "https://api.github.com/users/hgt312" }
https://api.github.com/repos/huggingface/datasets/issues/6235/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6235/timeline
open
false
6,235
null
null
null
false
1,891,804,286
https://api.github.com/repos/huggingface/datasets/issues/6233
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6233/events
[]
null
2023-09-13T18:20:50Z
[]
https://github.com/huggingface/datasets/pull/6233
CONTRIBUTOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008370 / 0.011353 (-0.002983) | 0.004674 / 0.011008 (-0.006334) | 0.103912 / 0.038508 (0.065404) | 0.101668 / 0.023109 (0.078559) | 0.417945 / 0.275898 (0.142047) | 0.454805 / 0.323480 (0.131325) | 0.004763 / 0.007986 (-0.003223) | 0.003934 / 0.004328 (-0.000394) | 0.078446 / 0.004250 (0.074196) | 0.068383 / 0.037052 (0.031331) | 0.415100 / 0.258489 (0.156611) | 0.475272 / 0.293841 (0.181431) | 0.036884 / 0.128546 (-0.091662) | 0.010097 / 0.075646 (-0.065549) | 0.354962 / 0.419271 (-0.064309) | 0.062688 / 0.043533 (0.019155) | 0.420643 / 0.255139 (0.165504) | 0.446504 / 0.283200 (0.163304) | 0.029075 / 0.141683 (-0.112608) | 1.791517 / 1.452155 (0.339363) | 1.859820 / 1.492716 (0.367104) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246929 / 0.018006 (0.228923) | 0.519593 / 0.000490 (0.519103) | 0.006848 / 0.000200 (0.006648) | 0.000168 / 0.000054 (0.000114) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035179 / 0.037411 (-0.002232) | 0.115582 / 0.014526 (0.101057) | 0.128235 / 0.176557 (-0.048321) | 0.187123 / 0.737135 (-0.550012) | 0.120862 / 0.296338 (-0.175477) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463406 / 0.215209 (0.248197) | 4.615517 / 2.077655 (2.537863) | 
2.250513 / 1.504120 (0.746393) | 2.061226 / 1.541195 (0.520032) | 2.189938 / 1.468490 (0.721448) | 0.582984 / 4.584777 (-4.001793) | 4.299464 / 3.745712 (0.553751) | 4.037274 / 5.269862 (-1.232588) | 2.608967 / 4.565676 (-1.956710) | 0.068944 / 0.424275 (-0.355331) | 0.009501 / 0.007607 (0.001894) | 0.567436 / 0.226044 (0.341392) | 5.662738 / 2.268929 (3.393809) | 2.849094 / 55.444624 (-52.595530) | 2.461013 / 6.876477 (-4.415464) | 2.663245 / 2.142072 (0.521172) | 0.704528 / 4.805227 (-4.100699) | 0.163583 / 6.500664 (-6.337081) | 0.075719 / 0.075469 (0.000250) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.604743 / 1.841788 (-0.237044) | 24.512054 / 8.074308 (16.437746) | 17.870939 / 10.191392 (7.679547) | 0.199188 / 0.680424 (-0.481236) | 0.023820 / 0.534201 (-0.510381) | 0.487520 / 0.579283 (-0.091763) | 0.512543 / 0.434364 (0.078179) | 0.575138 / 0.540337 (0.034801) | 0.759863 / 1.386936 (-0.627073) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010516 / 0.011353 (-0.000837) | 0.004779 / 0.011008 (-0.006229) | 0.078482 / 0.038508 (0.039974) | 0.108533 / 0.023109 (0.085424) | 0.498692 / 0.275898 (0.222794) | 0.534698 / 0.323480 (0.211218) | 0.007624 / 0.007986 (-0.000362) | 0.003938 / 0.004328 (-0.000391) | 0.077317 / 0.004250 (0.073067) | 0.078056 / 0.037052 (0.041004) | 0.493648 / 0.258489 (0.235159) | 0.540891 / 0.293841 (0.247050) | 0.040377 / 0.128546 (-0.088169) | 0.010155 / 0.075646 (-0.065491) | 0.084384 / 0.419271 (-0.334888) | 0.061419 / 0.043533 (0.017886) | 0.494474 / 0.255139 (0.239335) | 0.524656 / 0.283200 (0.241456) | 0.029052 / 0.141683 (-0.112631) | 1.794584 / 1.452155 (0.342429) | 1.939987 / 1.492716 (0.447270) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.377404 / 0.018006 (0.359398) | 0.516562 / 0.000490 (0.516072) | 0.109555 / 0.000200 (0.109356) | 0.001126 / 0.000054 (0.001071) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039793 / 0.037411 (0.002382) | 0.123001 / 0.014526 (0.108475) | 0.127536 / 0.176557 (-0.049021) | 0.191681 / 0.737135 (-0.545455) | 0.128590 / 0.296338 (-0.167748) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.513689 / 0.215209 (0.298480) | 5.135114 / 2.077655 (3.057459) | 2.797885 / 1.504120 (1.293765) | 2.715332 / 1.541195 (1.174137) | 2.746437 / 1.468490 (1.277947) | 0.596480 / 4.584777 (-3.988297) | 4.382013 / 3.745712 (0.636301) | 3.965956 / 5.269862 (-1.303906) | 2.545206 / 4.565676 (-2.020471) | 0.069620 / 0.424275 (-0.354655) | 0.009321 / 0.007607 (0.001714) | 0.612424 / 0.226044 (0.386379) | 6.107037 / 2.268929 (3.838109) | 3.447246 / 55.444624 (-51.997379) | 3.073262 / 6.876477 (-3.803215) | 3.280185 / 2.142072 (1.138113) | 0.704776 / 4.805227 (-4.100451) | 0.160488 / 6.500664 (-6.340176) | 0.075730 / 0.075469 (0.000261) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.697035 / 1.841788 (-0.144753) | 24.766118 / 8.074308 (16.691809) | 18.476699 / 10.191392 (8.285307) | 0.176594 / 0.680424 (-0.503830) | 0.024249 / 0.534201 (-0.509952) | 0.478743 / 0.579283 (-0.100541) | 0.518774 / 0.434364 (0.084410) | 0.581498 / 0.540337 (0.041161) | 0.797784 / 1.386936 (-0.589152) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#722cea0f4929ff4ffcdbb7ca6b72cba229b9701a \"CML watermark\")\n" ]
Update README.md
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6233/reactions" }
PR_kwDODunzps5aF3kd
{ "diff_url": "https://github.com/huggingface/datasets/pull/6233.diff", "html_url": "https://github.com/huggingface/datasets/pull/6233", "merged_at": "2023-09-13T18:10:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/6233.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6233" }
2023-09-12T06:53:06Z
https://api.github.com/repos/huggingface/datasets/issues/6233/comments
Fixed a typo.
{ "avatar_url": "https://avatars.githubusercontent.com/u/95188570?v=4", "events_url": "https://api.github.com/users/NinoRisteski/events{/privacy}", "followers_url": "https://api.github.com/users/NinoRisteski/followers", "following_url": "https://api.github.com/users/NinoRisteski/following{/other_user}", "gists_url": "https://api.github.com/users/NinoRisteski/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NinoRisteski", "id": 95188570, "login": "NinoRisteski", "node_id": "U_kgDOBax2Wg", "organizations_url": "https://api.github.com/users/NinoRisteski/orgs", "received_events_url": "https://api.github.com/users/NinoRisteski/received_events", "repos_url": "https://api.github.com/users/NinoRisteski/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NinoRisteski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NinoRisteski/subscriptions", "type": "User", "url": "https://api.github.com/users/NinoRisteski" }
https://api.github.com/repos/huggingface/datasets/issues/6233/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6233/timeline
closed
false
6,233
null
2023-09-13T18:10:04Z
null
true
1,891,109,762
https://api.github.com/repos/huggingface/datasets/issues/6232
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6232/events
[]
null
2023-09-15T18:07:56Z
[]
https://github.com/huggingface/datasets/pull/6232
CONTRIBUTOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "CI errors are unrelated", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006681 / 0.011353 (-0.004672) | 0.004132 / 0.011008 (-0.006876) | 0.085045 / 0.038508 (0.046536) | 0.077680 / 0.023109 (0.054571) | 0.382042 / 0.275898 (0.106144) | 0.412932 / 0.323480 (0.089452) | 0.005339 / 0.007986 (-0.002646) | 0.003408 / 0.004328 (-0.000921) | 0.065280 / 0.004250 (0.061030) | 0.055732 / 0.037052 (0.018680) | 0.400231 / 0.258489 (0.141742) | 0.432497 / 0.293841 (0.138656) | 0.031532 / 0.128546 (-0.097014) | 0.008721 / 0.075646 (-0.066925) | 0.289612 / 0.419271 (-0.129660) | 0.053089 / 0.043533 (0.009556) | 0.383300 / 0.255139 (0.128161) | 0.401204 / 0.283200 (0.118004) | 0.023582 / 0.141683 (-0.118100) | 1.493854 / 1.452155 (0.041699) | 1.583497 / 1.492716 (0.090781) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239163 / 0.018006 (0.221157) | 0.469555 / 0.000490 (0.469065) | 0.008325 / 0.000200 (0.008125) | 0.000113 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028975 / 0.037411 (-0.008436) | 0.084195 / 0.014526 (0.069669) | 0.189394 / 0.176557 (0.012837) | 0.158010 / 0.737135 (-0.579125) | 0.097502 / 0.296338 (-0.198837) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383085 / 0.215209 (0.167876) | 3.827030 / 
2.077655 (1.749375) | 1.872279 / 1.504120 (0.368159) | 1.705808 / 1.541195 (0.164613) | 1.833706 / 1.468490 (0.365216) | 0.484744 / 4.584777 (-4.100033) | 3.658221 / 3.745712 (-0.087491) | 3.398462 / 5.269862 (-1.871399) | 2.064974 / 4.565676 (-2.500703) | 0.057740 / 0.424275 (-0.366535) | 0.007926 / 0.007607 (0.000319) | 0.465358 / 0.226044 (0.239314) | 4.652951 / 2.268929 (2.384022) | 2.328390 / 55.444624 (-53.116235) | 2.000606 / 6.876477 (-4.875870) | 2.268391 / 2.142072 (0.126318) | 0.586537 / 4.805227 (-4.218690) | 0.134749 / 6.500664 (-6.365915) | 0.061276 / 0.075469 (-0.014193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.337913 / 1.841788 (-0.503875) | 20.232122 / 8.074308 (12.157814) | 14.478579 / 10.191392 (4.287187) | 0.167545 / 0.680424 (-0.512878) | 0.018745 / 0.534201 (-0.515456) | 0.401209 / 0.579283 (-0.178074) | 0.425748 / 0.434364 (-0.008616) | 0.462539 / 0.540337 (-0.077798) | 0.652446 / 1.386936 (-0.734490) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007159 / 0.011353 (-0.004194) | 0.004091 / 0.011008 (-0.006917) | 0.066202 / 0.038508 (0.027694) | 0.083096 / 0.023109 (0.059987) | 0.402160 / 0.275898 (0.126261) | 0.440565 / 0.323480 (0.117085) | 0.005757 / 0.007986 (-0.002228) | 0.003445 / 0.004328 (-0.000884) | 0.065498 / 0.004250 (0.061248) | 0.059787 / 0.037052 (0.022735) | 0.407017 / 0.258489 (0.148528) | 0.448270 / 0.293841 (0.154429) | 0.033606 / 0.128546 (-0.094941) | 0.008744 / 0.075646 (-0.066902) | 0.072902 / 0.419271 (-0.346369) | 0.050144 / 0.043533 (0.006611) | 0.401069 / 0.255139 (0.145930) | 0.426389 / 0.283200 (0.143189) | 0.023297 / 0.141683 (-0.118386) | 1.506152 / 1.452155 (0.053998) | 1.570211 / 1.492716 (0.077495) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235759 / 0.018006 (0.217753) | 0.488410 / 0.000490 (0.487921) | 0.004587 / 0.000200 (0.004387) | 0.000115 / 0.000054 (0.000060) |\n\n### 
Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034123 / 0.037411 (-0.003289) | 0.102163 / 0.014526 (0.087638) | 0.110892 / 0.176557 (-0.065664) | 0.166000 / 0.737135 (-0.571135) | 0.110845 / 0.296338 (-0.185494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431397 / 0.215209 (0.216188) | 4.291540 / 2.077655 (2.213885) | 2.298248 / 1.504120 (0.794128) | 2.134752 / 1.541195 (0.593557) | 2.207913 / 1.468490 (0.739423) | 0.490607 / 4.584777 (-4.094170) | 3.683078 / 3.745712 (-0.062635) | 3.314266 / 5.269862 (-1.955596) | 2.059488 / 4.565676 (-2.506188) | 0.057876 / 0.424275 (-0.366399) | 0.007696 / 0.007607 (0.000089) | 0.512186 / 0.226044 (0.286142) | 5.124071 / 2.268929 (2.855142) | 2.803913 / 55.444624 (-52.640711) | 2.428558 / 6.876477 (-4.447919) | 2.655207 / 2.142072 (0.513135) | 0.584589 / 4.805227 (-4.220638) | 0.133518 / 6.500664 (-6.367146) | 0.060729 / 0.075469 (-0.014740) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.352916 / 1.841788 (-0.488872) | 20.249632 / 8.074308 (12.175323) | 15.283079 / 10.191392 (5.091686) | 0.157601 / 0.680424 (-0.522823) | 0.019650 / 0.534201 (-0.514551) | 0.396398 / 0.579283 (-0.182885) | 0.430111 / 0.434364 (-0.004252) | 0.480627 / 0.540337 (-0.059710) | 0.642165 / 1.386936 (-0.744771) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9b21e181b642bd55b3ef68c1948bfbcd388136d6 \"CML watermark\")\n" ]
Improve error message for missing function parameters
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6232/reactions" }
PR_kwDODunzps5aDhhK
{ "diff_url": "https://github.com/huggingface/datasets/pull/6232.diff", "html_url": "https://github.com/huggingface/datasets/pull/6232", "merged_at": "2023-09-15T17:59:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/6232.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6232" }
2023-09-11T19:11:58Z
https://api.github.com/repos/huggingface/datasets/issues/6232/comments
The error message in the fingerprint module was missing the f-string `f` prefix, so the message raised from fingerprint.py, line 469 was literally "function {func} is missing parameters {fingerprint_names} in signature.", with the placeholders left uninterpolated. This has been fixed.
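A tiny illustration of the bug (the variable values are made up): without the `f` prefix the placeholders are emitted verbatim, with it they are interpolated:

```python
func, fingerprint_names = "my_map_fn", ["new_fingerprint"]

broken = "function {func} is missing parameters {fingerprint_names} in signature."
print(broken)  # function {func} is missing parameters {fingerprint_names} in signature.

fixed = f"function {func} is missing parameters {fingerprint_names} in signature."
print(fixed)   # function my_map_fn is missing parameters ['new_fingerprint'] in signature.
```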
{ "avatar_url": "https://avatars.githubusercontent.com/u/4016832?v=4", "events_url": "https://api.github.com/users/suavemint/events{/privacy}", "followers_url": "https://api.github.com/users/suavemint/followers", "following_url": "https://api.github.com/users/suavemint/following{/other_user}", "gists_url": "https://api.github.com/users/suavemint/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/suavemint", "id": 4016832, "login": "suavemint", "node_id": "MDQ6VXNlcjQwMTY4MzI=", "organizations_url": "https://api.github.com/users/suavemint/orgs", "received_events_url": "https://api.github.com/users/suavemint/received_events", "repos_url": "https://api.github.com/users/suavemint/repos", "site_admin": false, "starred_url": "https://api.github.com/users/suavemint/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/suavemint/subscriptions", "type": "User", "url": "https://api.github.com/users/suavemint" }
https://api.github.com/repos/huggingface/datasets/issues/6232/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6232/timeline
closed
false
6,232
null
2023-09-15T17:59:02Z
null
true
1,890,863,249
https://api.github.com/repos/huggingface/datasets/issues/6231
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6231/events
[]
null
2023-09-26T11:19:36Z
[]
https://github.com/huggingface/datasets/pull/6231
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6231). All of your documentation changes will be reflected on that endpoint.", "realized that this pr is still not merged, @lhoestq maybe you can take a look at it? ", "I think https://github.com/huggingface/datasets/pull/6218 fixed the issue (a bit differently though)", "ah actually nope, let me check", "@lhoestq yeah the pr you're referencing doesn't fix the problem when two semantically analogous configs occur in datasets_info.json, i suggest to rewrite the legacy one if it exists during .push_to_hub", "Only the old versions of `datasets` use the JSON file over the README and they can only load one config so the name doesn't really matter.\r\n\r\nThat's why I chose to load the info from the JSON no matter the name (no check to see if it's \"username--dataset_name\") in my previous PR.\r\n\r\nI think you can remove the old info without even checking the name. In this case maybe no need to update load.py ", "(also minor: not checking the name makes it more robust to dataset renaming)", "@lhoestq okay makes sense... so you think it's not a problem that in some cases we might end up with `dataset_infos.json` having two keys in it?", "> @lhoestq okay makes sense... so you think it's not a problem that in some cases we might end up with dataset_infos.json having two keys in it?\r\n\r\nIdeally they should have only one config no ? Since old versions of `datasets` simply load the first config in the JSON.\r\nWe can overwrite it with the new default one (and no matter the name of the outdated config in the JSON)\r\n\r\n" ]
Overwrite legacy default config name in `dataset_infos.json` in packaged datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6231/reactions" }
PR_kwDODunzps5aCr8_
{ "diff_url": "https://github.com/huggingface/datasets/pull/6231.diff", "html_url": "https://github.com/huggingface/datasets/pull/6231", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6231.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6231" }
2023-09-11T16:27:09Z
https://api.github.com/repos/huggingface/datasets/issues/6231/comments
Currently, if we push data as the default config with `.push_to_hub` to a repo that has a legacy `dataset_infos.json` file containing a legacy default config name like `{username}--{dataset_name}`, a new key `"default"` is added to `dataset_infos.json` alongside the legacy one. I think the legacy one should be dropped in this case. Also, in `load.py` I suggest checking whether a legacy config name is indeed a legacy config name, because after this fix it might not be the case (this check was first introduced in https://github.com/huggingface/datasets/pull/6218)
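To make the failure mode concrete, a hypothetical `dataset_infos.json` after such a push would carry both keys; a quick way to inspect it and, per the discussion, drop the outdated entry:

```python
import json

with open("dataset_infos.json") as f:
    infos = json.load(f)

print(list(infos))  # e.g. ['username--dataset_name', 'default'], two entries for the same data

# keep only the newly pushed default config, as suggested in the comments
infos = {"default": infos["default"]}
with open("dataset_infos.json", "w") as f:
    json.dump(infos, f)
```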
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
https://api.github.com/repos/huggingface/datasets/issues/6231/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6231/timeline
open
false
6,231
null
null
null
true
1,890,521,006
https://api.github.com/repos/huggingface/datasets/issues/6230
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6230/events
[]
null
2023-09-13T18:21:28Z
[]
https://github.com/huggingface/datasets/pull/6230
COLLABORATOR
null
false
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005894 / 0.011353 (-0.005459) | 0.003621 / 0.011008 (-0.007387) | 0.080446 / 0.038508 (0.041938) | 0.056800 / 0.023109 (0.033691) | 0.326485 / 0.275898 (0.050587) | 0.376207 / 0.323480 (0.052727) | 0.004640 / 0.007986 (-0.003346) | 0.002795 / 0.004328 (-0.001533) | 0.062815 / 0.004250 (0.058565) | 0.045761 / 0.037052 (0.008709) | 0.341417 / 0.258489 (0.082928) | 0.373129 / 0.293841 (0.079288) | 0.027226 / 0.128546 (-0.101321) | 0.007873 / 0.075646 (-0.067774) | 0.261737 / 0.419271 (-0.157535) | 0.044648 / 0.043533 (0.001115) | 0.320195 / 0.255139 (0.065056) | 0.381892 / 0.283200 (0.098692) | 0.020431 / 0.141683 (-0.121252) | 1.405332 / 1.452155 (-0.046823) | 1.455592 / 1.492716 (-0.037125) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191539 / 0.018006 (0.173533) | 0.423655 / 0.000490 (0.423165) | 0.002741 / 0.000200 (0.002541) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023952 / 0.037411 (-0.013459) | 0.073387 / 0.014526 (0.058861) | 0.083746 / 0.176557 (-0.092810) | 0.144977 / 0.737135 (-0.592159) | 0.083808 / 0.296338 (-0.212530) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436228 / 0.215209 (0.221019) | 4.370510 / 2.077655 (2.292855) | 2.340426 / 1.504120 (0.836306) | 2.202215 / 1.541195 (0.661021) | 2.258528 / 1.468490 
(0.790037) | 0.503455 / 4.584777 (-4.081322) | 3.043695 / 3.745712 (-0.702017) | 2.784033 / 5.269862 (-2.485829) | 1.847956 / 4.565676 (-2.717721) | 0.057702 / 0.424275 (-0.366573) | 0.006703 / 0.007607 (-0.000904) | 0.510628 / 0.226044 (0.284583) | 5.101890 / 2.268929 (2.832961) | 2.816469 / 55.444624 (-52.628155) | 2.474220 / 6.876477 (-4.402257) | 2.617851 / 2.142072 (0.475779) | 0.593585 / 4.805227 (-4.211642) | 0.125895 / 6.500664 (-6.374769) | 0.062170 / 0.075469 (-0.013299) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.238792 / 1.841788 (-0.602996) | 18.096417 / 8.074308 (10.022108) | 13.548778 / 10.191392 (3.357386) | 0.144878 / 0.680424 (-0.535546) | 0.016644 / 0.534201 (-0.517557) | 0.334556 / 0.579283 (-0.244728) | 0.343680 / 0.434364 (-0.090684) | 0.383093 / 0.540337 (-0.157244) | 0.525075 / 1.386936 (-0.861861) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006125 / 0.011353 (-0.005228) | 0.003668 / 0.011008 (-0.007340) | 0.062650 / 0.038508 (0.024142) | 0.058882 / 0.023109 (0.035772) | 0.454643 / 0.275898 (0.178745) | 0.486659 / 0.323480 (0.163179) | 0.005558 / 0.007986 (-0.002427) | 0.002858 / 0.004328 (-0.001471) | 0.062603 / 0.004250 (0.058353) | 0.049701 / 0.037052 (0.012649) | 0.455903 / 0.258489 (0.197413) | 0.491544 / 0.293841 (0.197703) | 0.028581 / 0.128546 (-0.099965) | 0.008040 / 0.075646 (-0.067607) | 0.068314 / 0.419271 (-0.350957) | 0.040637 / 0.043533 (-0.002896) | 0.450288 / 0.255139 (0.195149) | 0.476330 / 0.283200 (0.193131) | 0.018989 / 0.141683 (-0.122693) | 1.455122 / 1.452155 (0.002967) | 1.496941 / 1.492716 (0.004225) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227382 / 0.018006 (0.209376) | 0.432637 / 0.000490 (0.432147) | 0.002727 / 0.000200 (0.002527) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026125 / 0.037411 (-0.011286) | 0.081342 / 0.014526 (0.066817) | 0.091227 / 0.176557 (-0.085329) | 0.145175 / 0.737135 (-0.591960) | 0.091988 / 0.296338 (-0.204351) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.454293 / 0.215209 (0.239083) | 4.537912 / 2.077655 (2.460257) | 2.489146 / 1.504120 (0.985026) | 2.307166 / 1.541195 (0.765971) | 2.380866 / 1.468490 (0.912376) | 0.509015 / 4.584777 (-4.075762) | 3.111069 / 3.745712 (-0.634644) | 2.839181 / 5.269862 (-2.430681) | 1.874630 / 4.565676 (-2.691047) | 0.058540 / 0.424275 (-0.365735) | 0.006693 / 0.007607 (-0.000914) | 0.528408 / 0.226044 (0.302363) | 5.285802 / 2.268929 (3.016874) | 2.952090 / 55.444624 (-52.492534) | 2.591496 / 6.876477 (-4.284980) | 2.741080 / 2.142072 (0.599007) | 0.595610 / 4.805227 (-4.209617) | 0.124387 / 6.500664 (-6.376277) | 0.061032 / 0.075469 (-0.014437) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.365816 / 1.841788 (-0.475972) | 18.684534 / 8.074308 (10.610226) | 14.540438 / 10.191392 (4.349046) | 0.146793 / 0.680424 (-0.533631) | 0.018165 / 0.534201 (-0.516036) | 0.333794 / 0.579283 (-0.245489) | 0.345533 / 0.434364 (-0.088830) | 0.384453 / 0.540337 (-0.155885) | 0.529104 / 1.386936 (-0.857832) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6c884967dd5f4e8aa3d1f3c2e3a414ae53afe261 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006121 / 0.011353 (-0.005232) | 0.003683 / 0.011008 (-0.007325) | 0.083329 / 0.038508 (0.044821) | 0.063350 / 0.023109 (0.040241) | 0.329959 / 0.275898 (0.054061) | 0.396111 / 0.323480 (0.072631) | 0.003554 / 0.007986 (-0.004432) | 0.002907 / 0.004328 (-0.001421) | 0.064152 / 0.004250 (0.059902) | 0.049182 / 0.037052 (0.012130) | 0.343862 / 0.258489 (0.085373) | 0.414568 / 0.293841 (0.120727) | 0.027157 / 0.128546 (-0.101389) | 0.007957 / 0.075646 (-0.067689) | 0.261868 / 0.419271 (-0.157404) | 0.044938 / 0.043533 (0.001405) | 0.318470 / 0.255139 (0.063331) | 0.393319 / 0.283200 (0.110119) | 0.022848 / 0.141683 (-0.118835) | 1.419916 / 1.452155 (-0.032238) | 1.508783 / 1.492716 (0.016067) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200530 / 0.018006 (0.182523) | 0.433586 / 0.000490 (0.433097) | 0.002063 / 0.000200 (0.001863) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024803 / 0.037411 (-0.012609) | 0.075894 / 0.014526 (0.061368) | 0.086488 / 0.176557 (-0.090069) | 0.149058 / 0.737135 (-0.588077) | 0.087046 / 0.296338 (-0.209292) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390771 / 0.215209 (0.175562) | 3.886178 / 2.077655 (1.808523) | 1.868626 / 1.504120 (0.364506) | 1.708532 / 1.541195 (0.167338) | 1.788491 / 1.468490 (0.320001) | 0.505706 / 4.584777 (-4.079071) | 3.062094 / 3.745712 (-0.683618) | 2.898559 / 5.269862 (-2.371302) | 1.901225 / 4.565676 (-2.664452) | 0.058366 / 0.424275 (-0.365909) | 0.006851 / 0.007607 (-0.000756) | 0.465382 / 0.226044 (0.239337) | 4.650187 / 2.268929 (2.381258) | 2.316152 / 55.444624 (-53.128472) | 1.989597 / 6.876477 (-4.886879) | 2.169266 / 2.142072 (0.027194) | 0.593257 / 4.805227 (-4.211970) | 0.126440 / 6.500664 (-6.374224) | 0.062227 / 0.075469 (-0.013242) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.283591 / 1.841788 (-0.558197) | 18.384667 / 8.074308 (10.310358) | 14.079611 / 10.191392 (3.888219) | 0.150453 / 0.680424 (-0.529971) | 0.017100 / 0.534201 (-0.517101) | 0.330503 / 0.579283 (-0.248780) | 0.348134 / 0.434364 (-0.086230) | 0.385726 / 0.540337 (-0.154612) | 0.529147 / 
1.386936 (-0.857789) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006168 / 0.011353 (-0.005185) | 0.003801 / 0.011008 (-0.007208) | 0.063168 / 0.038508 (0.024660) | 0.062331 / 0.023109 (0.039221) | 0.448321 / 0.275898 (0.172423) | 0.484416 / 0.323480 (0.160937) | 0.004827 / 0.007986 (-0.003159) | 0.002848 / 0.004328 (-0.001480) | 0.062736 / 0.004250 (0.058486) | 0.049128 / 0.037052 (0.012075) | 0.449276 / 0.258489 (0.190787) | 0.499035 / 0.293841 (0.205194) | 0.028577 / 0.128546 (-0.099969) | 0.008114 / 0.075646 (-0.067532) | 0.068297 / 0.419271 (-0.350974) | 0.040835 / 0.043533 (-0.002698) | 0.453556 / 0.255139 (0.198417) | 0.475420 / 0.283200 (0.192220) | 0.020292 / 0.141683 (-0.121390) | 1.472226 / 1.452155 (0.020071) | 1.523809 / 1.492716 (0.031093) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230662 / 0.018006 (0.212655) | 0.439697 / 0.000490 (0.439207) | 0.009899 / 0.000200 (0.009699) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026418 / 0.037411 (-0.010993) | 0.082188 / 0.014526 (0.067662) | 0.091039 / 0.176557 (-0.085518) | 0.146646 / 0.737135 (-0.590489) | 0.091693 / 0.296338 (-0.204645) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.462086 / 0.215209 (0.246877) | 4.620925 / 2.077655 (2.543271) | 2.539234 / 1.504120 (1.035114) | 2.371178 / 1.541195 (0.829983) | 2.440538 / 1.468490 (0.972048) | 
0.511047 / 4.584777 (-4.073730) | 3.082088 / 3.745712 (-0.663624) | 2.918162 / 5.269862 (-2.351700) | 1.899651 / 4.565676 (-2.666025) | 0.059003 / 0.424275 (-0.365272) | 0.006746 / 0.007607 (-0.000861) | 0.537863 / 0.226044 (0.311819) | 5.382355 / 2.268929 (3.113426) | 3.060091 / 55.444624 (-52.384534) | 2.754969 / 6.876477 (-4.121507) | 2.863156 / 2.142072 (0.721084) | 0.606888 / 4.805227 (-4.198339) | 0.127448 / 6.500664 (-6.373216) | 0.062975 / 0.075469 (-0.012494) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336065 / 1.841788 (-0.505722) | 19.019902 / 8.074308 (10.945594) | 15.057979 / 10.191392 (4.866587) | 0.160646 / 0.680424 (-0.519778) | 0.018340 / 0.534201 (-0.515861) | 0.341664 / 0.579283 (-0.237619) | 0.356536 / 0.434364 (-0.077828) | 0.393974 / 0.540337 (-0.146363) | 0.546036 / 1.386936 (-0.840900) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fd04e445bd36d7eb4af4d5a6b8519ab8e306ecf5 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007220 / 0.011353 (-0.004132) | 0.004537 / 0.011008 (-0.006471) | 0.087333 / 0.038508 (0.048825) | 0.095637 / 0.023109 (0.072528) | 0.323819 / 0.275898 (0.047921) | 0.358838 / 0.323480 (0.035358) | 0.005910 / 0.007986 (-0.002076) | 0.003781 / 0.004328 (-0.000548) | 0.064565 / 0.004250 (0.060315) | 0.062818 / 0.037052 (0.025766) | 0.322595 / 0.258489 (0.064106) | 0.371865 / 0.293841 (0.078024) | 0.031667 / 0.128546 (-0.096880) | 0.009068 / 0.075646 (-0.066579) | 0.290574 / 0.419271 (-0.128697) | 0.054618 / 0.043533 (0.011085) | 0.314708 / 0.255139 (0.059569) | 0.336647 / 0.283200 (0.053447) | 0.027070 / 0.141683 (-0.114613) | 1.500640 / 1.452155 (0.048485) | 1.586775 / 1.492716 (0.094059) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.294461 / 0.018006 (0.276455) | 0.580125 / 0.000490 (0.579635) | 0.008165 / 0.000200 (0.007965) | 
0.000320 / 0.000054 (0.000266) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032352 / 0.037411 (-0.005059) | 0.092187 / 0.014526 (0.077661) | 0.104993 / 0.176557 (-0.071564) | 0.162738 / 0.737135 (-0.574397) | 0.103242 / 0.296338 (-0.193096) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396732 / 0.215209 (0.181523) | 3.955049 / 2.077655 (1.877394) | 1.876762 / 1.504120 (0.372642) | 1.698477 / 1.541195 (0.157282) | 1.847086 / 1.468490 (0.378596) | 0.488306 / 4.584777 (-4.096471) | 3.658922 / 3.745712 (-0.086790) | 3.559050 / 5.269862 (-1.710812) | 2.187363 / 4.565676 (-2.378313) | 0.059795 / 0.424275 (-0.364480) | 0.008966 / 0.007607 (0.001359) | 0.474212 / 0.226044 (0.248168) | 4.732540 / 2.268929 (2.463611) | 2.466370 / 55.444624 (-52.978254) | 2.112105 / 6.876477 (-4.764372) | 2.414624 / 2.142072 (0.272552) | 0.595447 / 4.805227 (-4.209780) | 0.136705 / 6.500664 (-6.363959) | 0.062267 / 0.075469 (-0.013202) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266518 / 1.841788 (-0.575270) | 21.009975 / 8.074308 (12.935666) | 14.823960 / 10.191392 (4.632568) | 0.165630 / 0.680424 (-0.514793) | 0.018499 / 0.534201 (-0.515702) | 0.396720 / 0.579283 (-0.182563) | 0.424807 / 0.434364 (-0.009557) | 0.463326 / 0.540337 (-0.077011) | 0.653132 / 1.386936 (-0.733804) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007789 / 0.011353 (-0.003564) | 0.004720 / 0.011008 (-0.006288) | 0.066656 / 0.038508 (0.028148) | 0.094219 / 0.023109 (0.071109) | 0.414965 / 0.275898 (0.139067) | 0.454808 / 0.323480 (0.131328) | 0.006088 / 0.007986 (-0.001898) | 0.003980 / 0.004328 (-0.000349) | 0.066048 / 0.004250 (0.061797) | 0.065875 / 0.037052 (0.028823) | 0.419994 / 0.258489 (0.161505) | 0.462001 / 0.293841 (0.168160) | 0.033534 / 0.128546 (-0.095013) | 0.009010 / 0.075646 (-0.066636) | 0.072778 / 0.419271 (-0.346493) | 0.049834 / 0.043533 (0.006301) | 0.411003 / 0.255139 (0.155864) | 0.430918 / 0.283200 (0.147718) | 0.025664 / 0.141683 (-0.116019) | 1.526771 / 1.452155 (0.074616) | 1.634767 / 1.492716 (0.142051) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271180 / 0.018006 (0.253174) | 0.576704 / 0.000490 (0.576214) | 0.004362 / 0.000200 (0.004162) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035648 / 0.037411 (-0.001763) | 0.102407 / 0.014526 (0.087881) | 0.111613 / 0.176557 (-0.064944) | 0.166173 / 0.737135 (-0.570962) | 0.113371 / 0.296338 (-0.182967) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436031 / 0.215209 (0.220822) | 4.347071 / 2.077655 (2.269416) | 2.366937 / 1.504120 (0.862817) | 2.216356 / 1.541195 (0.675161) | 2.335933 / 1.468490 (0.867443) | 0.490484 / 4.584777 (-4.094293) | 3.730656 / 3.745712 (-0.015056) | 3.497248 / 5.269862 (-1.772613) | 2.215729 / 4.565676 (-2.349947) | 0.057905 / 0.424275 (-0.366370) | 0.007983 / 0.007607 (0.000376) | 0.510413 / 0.226044 (0.284369) | 5.114502 / 2.268929 (2.845574) | 2.871599 / 55.444624 (-52.573026) | 2.537514 / 6.876477 (-4.338962) | 2.819135 / 2.142072 (0.677063) | 0.588397 / 4.805227 (-4.216830) | 0.134665 / 6.500664 (-6.365999) | 0.063349 / 0.075469 (-0.012120) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.352962 / 1.841788 (-0.488826) | 21.628664 / 8.074308 (13.554356) | 15.962105 / 10.191392 (5.770713) | 0.167781 / 0.680424 (-0.512643) | 0.020965 / 0.534201 (-0.513236) | 0.402809 / 0.579283 (-0.176474) | 0.435153 / 0.434364 (0.000789) | 0.481394 / 0.540337 (-0.058944) | 0.658068 / 1.386936 (-0.728868) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#12adf38b90fde8e2a4e46fcbb023ee23b5c4e98c \"CML watermark\")\n" ]
Don't skip hidden files in `dl_manager.iter_files` when they are given as input
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6230/reactions" }
PR_kwDODunzps5aBh6L
{ "diff_url": "https://github.com/huggingface/datasets/pull/6230.diff", "html_url": "https://github.com/huggingface/datasets/pull/6230", "merged_at": "2023-09-13T18:12:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/6230.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6230" }
2023-09-11T13:29:19Z
https://api.github.com/repos/huggingface/datasets/issues/6230/comments
Required for `load_dataset(<format>, data_files=["path/to/.hidden_file"])` to work as expected.
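For illustration, a minimal sketch of the call this change is meant to support — `"csv"` and the file path are placeholders; before this fix, `dl_manager.iter_files` would silently drop a file whose name starts with a dot even when it was passed explicitly:

```python
from datasets import load_dataset

# A hidden file listed explicitly in data_files should be loaded,
# not filtered out as a hidden file by dl_manager.iter_files.
ds = load_dataset("csv", data_files=["path/to/.hidden_file"])
```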
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/6230/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6230/timeline
closed
false
6,230
null
2023-09-13T18:12:09Z
null
true
1,889,050,954
https://api.github.com/repos/huggingface/datasets/issues/6229
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6229/events
[]
null
2023-09-20T16:11:53Z
[]
https://github.com/huggingface/datasets/issues/6229
NONE
completed
null
null
[ "From what I see, `MMSegInferencer` supports NumPy arrays, so replace the line `image_path = example['image']` with `image_path = np.array(example['image'])` to fix the issue (`example[\"image\"]` is a `PIL.Image` object). ", "> From what I see, `MMSegInferencer` supports NumPy arrays, so replace the line `image_path = example['image']` with `image_path = np.array(example['image'])` to fix the issue (`example[\"image\"]` is a `PIL.Image` object).\r\n\r\nThanks @mariosasko for your reply...\r\ni tried :\r\n```\r\n# Define a function to apply the code to each image in the dataset\r\ndef process_image(image_path):\r\n print(\"Processing image:\", image_path)\r\n result = inferencer(image_path)['predictions']\r\n mask = np.where(result == 12, 255, 0).astype('uint8')\r\n return Image.fromarray(mask)\r\n\r\n# Process and save masks for each image in the dataset\r\nfor idx, example in enumerate(dataset['train']):\r\n image_path = np.array(example['image'])\r\n mask_image = process_image(image_path)\r\n mask_image.save(f\"mask_{idx}.png\")\r\n```\r\nand got\r\n```\r\nProcessing image: [[[202 165 87]\r\n [203 166 88]\r\n [207 168 91]\r\n ...\r\n [243 205 122]\r\n [244 202 120]\r\n [242 200 118]]\r\n\r\n [[202 165 87]\r\n [203 166 88]\r\n [207 168 91]\r\n ...\r\n [244 206 123]\r\n [245 203 121]\r\n [243 201 119]]\r\n\r\n [[203 164 87]\r\n [204 165 88]\r\n [207 168 91]\r\n ...\r\n [245 207 126]\r\n [246 204 122]\r\n [245 203 121]]\r\n\r\n ...\r\n\r\n [[154 123 56]\r\n [155 124 57]\r\n [158 125 56]\r\n ...\r\n [ 3 3 1]\r\n [ 3 3 1]\r\n [ 3 3 1]]\r\n\r\n [[154 123 56]\r\n [154 123 56]\r\n [155 124 57]\r\n ...\r\n [ 2 2 0]\r\n [ 2 2 0]\r\n [ 2 2 0]]\r\n\r\n [[152 121 54]\r\n [152 121 54]\r\n [153 122 55]\r\n ...\r\n [ 2 2 0]\r\n [ 2 2 0]\r\n [ 2 2 0]]]\r\nInference ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ \r\nProcessing image: [[[ 39 44 40]\r\n [ 39 44 40]\r\n [ 39 43 44]\r\n ...\r\n [187 185 164]\r\n [208 204 175]\r\n [203 198 166]]\r\n\r\n [[ 42 47 43]\r\n [ 40 45 41]\r\n [ 40 44 45]\r\n ...\r\n [188 186 165]\r\n [202 198 169]\r\n [201 196 164]]\r\n\r\n [[ 41 46 42]\r\n [ 39 44 40]\r\n [ 40 44 45]\r\n ...\r\n [187 184 165]\r\n [197 193 166]\r\n [201 196 166]]\r\n\r\n ...\r\n\r\n [[ 29 27 30]\r\n [ 28 26 29]\r\n [ 25 23 26]\r\n ...\r\n [ 48 33 28]\r\n [ 44 31 25]\r\n [ 39 26 20]]\r\n\r\n [[ 34 29 33]\r\n [ 32 27 31]\r\n [ 29 24 28]\r\n ...\r\n [ 30 17 11]\r\n [ 36 23 15]\r\n [ 41 28 20]]\r\n\r\n [[ 35 30 34]\r\n [ 33 28 32]\r\n [ 28 23 27]\r\n ...\r\n [ 28 15 9]\r\n [ 41 28 20]\r\n [ 46 33 25]]]\r\nInference ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ \r\nProcessing image: [[[ 65 53 55]\r\n [ 65 53 55]\r\n [ 51 39 41]\r\n ...\r\n [133 127 111]\r\n [150 141 124]\r\n [133 124 107]]\r\n\r\n [[ 58 45 52]\r\n [ 61 48 55]\r\n [ 51 38 45]\r\n ...\r\n [148 141 123]\r\n [178 169 152]\r\n [144 135 118]]\r\n\r\n [[ 79 66 83]\r\n [ 73 60 77]\r\n [ 65 51 66]\r\n ...\r\n [140 131 114]\r\n [142 133 116]\r\n [147 136 118]]\r\n\r\n ...\r\n\r\n [[132 122 133]\r\n [ 95 85 94]\r\n [ 61 51 60]\r\n ...\r\n [ 39 28 42]\r\n [ 46 36 45]\r\n [ 25 16 21]]\r\n\r\n [[150 143 151]\r\n [114 107 115]\r\n [ 64 54 63]\r\n ...\r\n [ 47 35 47]\r\n [ 38 27 35]\r\n [140 129 133]]\r\n\r\n [[145 138 146]\r\n [115 108 116]\r\n [ 69 59 67]\r\n ...\r\n [ 31 19 31]\r\n [128 117 123]\r\n [196 185 189]]]\r\nInference ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ \r\nProcessing image: [[[159 151 140]\r\n [171 163 152]\r\n [161 148 142]\r\n ...\r\n [198 184 171]\r\n [189 175 162]\r\n [183 169 156]]\r\n\r\n [[128 118 106]\r\n [138 128 116]\r\n [138 125 
116]\r\n ...\r\n [200 186 173]\r\n [190 176 163]\r\n [187 173 160]]\r\n\r\n [[165 153 137]\r\n [170 158 142]\r\n [174 162 148]\r\n ...\r\n [200 187 171]\r\n [188 175 159]\r\n [182 169 153]]\r\n```\r\nHowever , when trying to add to:\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('Andyrasika/cat_kingdom')\r\ndataset\r\n```\r\ni did \r\n```\r\nnew_column = [\"mask\"] * len(dataset[\"train\"])\r\nnew_column\r\ndataset = dataset.add_column(\"/workspace/data\", new_column)\r\n\r\nprint(dataset)\r\n```\r\ngot error:\r\n```\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\nCell In[11], line 3\r\n 1 new_column = [\"mask\"] * len(dataset[\"train\"])\r\n 2 new_column\r\n----> 3 dataset = dataset.add_column(\"/workspace/data\", new_column)\r\n 5 print(dataset)\r\n\r\nAttributeError: 'DatasetDict' object has no attribute 'add_column'\r\n```", "https://github.com/huggingface/datasets/issues/6246 resolved the `add_column` error, so I'm closing this issue :) " ]
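As a sketch of the workaround behind the linked issue: `add_column` is a method of `Dataset`, not `DatasetDict`, so the column has to be added to each split separately (the column name and values below are illustrative, not from the original thread):

```python
from datasets import load_dataset

dataset = load_dataset("Andyrasika/cat_kingdom")

# DatasetDict has no add_column, so call it on each split's Dataset instead.
for split in dataset:
    new_column = ["mask"] * len(dataset[split])
    dataset[split] = dataset[split].add_column("mask_label", new_column)

print(dataset)
```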
Apply inference on all images in the dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6229/reactions" }
I_kwDODunzps5wmKFK
null
2023-09-10T08:36:12Z
https://api.github.com/repos/huggingface/datasets/issues/6229/comments
### Describe the bug ``` --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) Cell In[14], line 11 9 for idx, example in enumerate(dataset['train']): 10 image_path = example['image'] ---> 11 mask_image = process_image(image_path) 12 mask_image.save(f"mask_{idx}.png") Cell In[14], line 4, in process_image(image_path) 2 def process_image(image_path): 3 print("Processing image:", image_path) ----> 4 result = inferencer(image_path)['predictions'] 5 mask = np.where(result == 12, 255, 0).astype('uint8') 6 return Image.fromarray(mask) File /usr/local/lib/python3.10/dist-packages/mmseg/apis/mmseg_inferencer.py:183, in MMSegInferencer.__call__(self, inputs, return_datasamples, batch_size, show, wait_time, out_dir, img_out_dir, pred_out_dir, **kwargs) 180 pred_out_dir = '' 181 img_out_dir = '' --> 183 return super().__call__( 184 inputs=inputs, 185 return_datasamples=return_datasamples, 186 batch_size=batch_size, 187 show=show, 188 wait_time=wait_time, 189 img_out_dir=img_out_dir, 190 pred_out_dir=pred_out_dir, 191 **kwargs) File /usr/local/lib/python3.10/dist-packages/mmengine/infer/infer.py:221, in BaseInferencer.__call__(self, inputs, return_datasamples, batch_size, **kwargs) 218 inputs = self.preprocess( 219 ori_inputs, batch_size=batch_size, **preprocess_kwargs) 220 preds = [] --> 221 for data in (track(inputs, description='Inference') 222 if self.show_progress else inputs): 223 preds.extend(self.forward(data, **forward_kwargs)) 224 visualization = self.visualize( 225 ori_inputs, preds, 226 **visualize_kwargs) # type: ignore # noqa: E501 File /usr/local/lib/python3.10/dist-packages/rich/progress.py:168, in track(sequence, description, total, auto_refresh, console, transient, get_time, refresh_per_second, style, complete_style, finished_style, pulse_style, update_period, disable, show_speed) 157 progress = Progress( 158 *columns, 159 auto_refresh=auto_refresh, (...) 164 disable=disable, 165 ) 167 with progress: --> 168 yield from progress.track( 169 sequence, total=total, description=description, update_period=update_period 170 ) File /usr/local/lib/python3.10/dist-packages/rich/progress.py:1210, in Progress.track(self, sequence, total, task_id, description, update_period) 1208 if self.live.auto_refresh: 1209 with _TrackThread(self, task_id, update_period) as track_thread: -> 1210 for value in sequence: 1211 yield value 1212 track_thread.completed += 1 File /usr/local/lib/python3.10/dist-packages/mmengine/infer/infer.py:291, in BaseInferencer.preprocess(self, inputs, batch_size, **kwargs) 266 """Process the inputs into a model-feedable format. 267 268 Customize your preprocess by overriding this method. Preprocess should (...) 287 Any: Data processed by the ``pipeline`` and ``collate_fn``. 
288 """ 289 chunked_data = self._get_chunk_data( 290 map(self.pipeline, inputs), batch_size) --> 291 yield from map(self.collate_fn, chunked_data) File /usr/local/lib/python3.10/dist-packages/mmengine/infer/infer.py:588, in BaseInferencer._get_chunk_data(self, inputs, chunk_size) 586 chunk_data = [] 587 for _ in range(chunk_size): --> 588 processed_data = next(inputs_iter) 589 chunk_data.append(processed_data) 590 yield chunk_data File /usr/local/lib/python3.10/dist-packages/mmcv/transforms/base.py:12, in BaseTransform.__call__(self, results) 9 def __call__(self, 10 results: Dict) -> Optional[Union[Dict, Tuple[List, List]]]: ---> 12 return self.transform(results) File /usr/local/lib/python3.10/dist-packages/mmcv/transforms/wrappers.py:88, in Compose.transform(self, results) 79 """Call function to apply transforms sequentially. 80 81 Args: (...) 85 dict or None: Transformed results. 86 """ 87 for t in self.transforms: ---> 88 results = t(results) # type: ignore 89 if results is None: 90 return None File /usr/local/lib/python3.10/dist-packages/mmcv/transforms/base.py:12, in BaseTransform.__call__(self, results) 9 def __call__(self, 10 results: Dict) -> Optional[Union[Dict, Tuple[List, List]]]: ---> 12 return self.transform(results) File /usr/local/lib/python3.10/dist-packages/mmseg/datasets/transforms/loading.py:496, in InferencerLoader.transform(self, single_input) 494 inputs = single_input 495 else: --> 496 raise NotImplementedError 498 if 'img' in inputs: 499 return self.from_ndarray(inputs) NotImplementedError: ```` ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset('Andyrasika/cat_kingdom') dataset from mmseg.apis import MMSegInferencer checkpoint_name = 'segformer_mit-b5_8xb2-160k_ade20k-640x640' inferencer = MMSegInferencer(model=checkpoint_name) # Define a function to apply the code to each image in the dataset def process_image(image_path): print("Processing image:", image_path) result = inferencer(image_path)['predictions'] mask = np.where(result == 12, 255, 0).astype('uint8') return Image.fromarray(mask) # Process and save masks for each image in the dataset for idx, example in enumerate(dataset['train']): image_path = example['image'] mask_image = process_image(image_path) mask_image.save(f"mask_{idx}.png") ``` ### Expected behavior create a separate column with masks in the dataset and further shows as a separate column in hub ### Environment info jupyter notebook RTX 3090
{ "avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4", "events_url": "https://api.github.com/users/andysingal/events{/privacy}", "followers_url": "https://api.github.com/users/andysingal/followers", "following_url": "https://api.github.com/users/andysingal/following{/other_user}", "gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/andysingal", "id": 20493493, "login": "andysingal", "node_id": "MDQ6VXNlcjIwNDkzNDkz", "organizations_url": "https://api.github.com/users/andysingal/orgs", "received_events_url": "https://api.github.com/users/andysingal/received_events", "repos_url": "https://api.github.com/users/andysingal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andysingal/subscriptions", "type": "User", "url": "https://api.github.com/users/andysingal" }
https://api.github.com/repos/huggingface/datasets/issues/6229/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6229/timeline
closed
false
6,229
null
2023-09-20T16:11:52Z
null
false
1,887,959,311
https://api.github.com/repos/huggingface/datasets/issues/6228
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6228/events
[]
null
2023-09-08T18:02:49Z
[]
https://github.com/huggingface/datasets/pull/6228
COLLABORATOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009443 / 0.011353 (-0.001910) | 0.005274 / 0.011008 (-0.005734) | 0.105950 / 0.038508 (0.067441) | 0.079947 / 0.023109 (0.056837) | 0.414248 / 0.275898 (0.138350) | 0.440611 / 0.323480 (0.117131) | 0.006779 / 0.007986 (-0.001206) | 0.004301 / 0.004328 (-0.000028) | 0.080616 / 0.004250 (0.076366) | 0.061425 / 0.037052 (0.024372) | 0.418460 / 0.258489 (0.159971) | 0.468108 / 0.293841 (0.174267) | 0.051090 / 0.128546 (-0.077456) | 0.014133 / 0.075646 (-0.061513) | 0.376121 / 0.419271 (-0.043151) | 0.070715 / 0.043533 (0.027182) | 0.415435 / 0.255139 (0.160296) | 0.457925 / 0.283200 (0.174725) | 0.053653 / 0.141683 (-0.088030) | 1.872681 / 1.452155 (0.420527) | 1.961187 / 1.492716 (0.468470) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255829 / 0.018006 (0.237823) | 0.574224 / 0.000490 (0.573735) | 0.007597 / 0.000200 (0.007397) | 0.000098 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032562 / 0.037411 (-0.004849) | 0.097528 / 0.014526 (0.083003) | 0.113487 / 0.176557 (-0.063070) | 0.185670 / 0.737135 (-0.551465) | 0.118909 / 0.296338 (-0.177430) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.611441 / 0.215209 (0.396232) | 5.908576 / 2.077655 (3.830921) | 
2.586758 / 1.504120 (1.082638) | 2.310199 / 1.541195 (0.769004) | 2.333396 / 1.468490 (0.864906) | 0.900884 / 4.584777 (-3.683893) | 5.438304 / 3.745712 (1.692591) | 4.806611 / 5.269862 (-0.463250) | 2.970631 / 4.565676 (-1.595046) | 0.097861 / 0.424275 (-0.326414) | 0.009873 / 0.007607 (0.002266) | 0.739553 / 0.226044 (0.513509) | 7.104953 / 2.268929 (4.836024) | 3.150128 / 55.444624 (-52.294497) | 2.469552 / 6.876477 (-4.406924) | 2.709206 / 2.142072 (0.567133) | 0.983081 / 4.805227 (-3.822147) | 0.205150 / 6.500664 (-6.295514) | 0.075947 / 0.075469 (0.000478) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631255 / 1.841788 (-0.210532) | 24.213679 / 8.074308 (16.139370) | 21.514481 / 10.191392 (11.323089) | 0.220360 / 0.680424 (-0.460063) | 0.031663 / 0.534201 (-0.502538) | 0.516029 / 0.579283 (-0.063254) | 0.591461 / 0.434364 (0.157097) | 0.612398 / 0.540337 (0.072061) | 0.807609 / 1.386936 (-0.579328) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009443 / 0.011353 (-0.001910) | 0.005510 / 0.011008 (-0.005498) | 0.085722 / 0.038508 (0.047214) | 0.076256 / 0.023109 (0.053146) | 0.604248 / 0.275898 (0.328349) | 0.596222 / 0.323480 (0.272742) | 0.006786 / 0.007986 (-0.001200) | 0.004135 / 0.004328 (-0.000193) | 0.085934 / 0.004250 (0.081683) | 0.065890 / 0.037052 (0.028838) | 0.592080 / 0.258489 (0.333591) | 0.624560 / 0.293841 (0.330719) | 0.048200 / 0.128546 (-0.080346) | 0.015477 / 0.075646 (-0.060169) | 0.097042 / 0.419271 (-0.322230) | 0.060513 / 0.043533 (0.016981) | 0.557171 / 0.255139 (0.302032) | 0.582057 / 0.283200 (0.298858) | 0.035678 / 0.141683 (-0.106005) | 1.894947 / 1.452155 (0.442792) | 1.956652 / 1.492716 (0.463936) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268927 / 0.018006 (0.250921) | 0.566086 / 0.000490 (0.565597) | 0.007190 / 0.000200 (0.006990) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.042090 / 0.037411 (0.004679) | 0.109618 / 0.014526 (0.095092) | 0.126588 / 0.176557 (-0.049968) | 0.200426 / 0.737135 (-0.536709) | 0.127032 / 0.296338 (-0.169306) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.669773 / 0.215209 (0.454564) | 6.453417 / 2.077655 (4.375763) | 3.119147 / 1.504120 (1.615027) | 2.818632 / 1.541195 (1.277437) | 2.930880 / 1.468490 (1.462390) | 0.922164 / 4.584777 (-3.662612) | 5.769564 / 3.745712 (2.023852) | 4.885108 / 5.269862 (-0.384754) | 3.041640 / 4.565676 (-1.524037) | 0.100186 / 0.424275 (-0.324090) | 0.009417 / 0.007607 (0.001810) | 0.783138 / 0.226044 (0.557094) | 8.113361 / 2.268929 (5.844432) | 4.018630 / 55.444624 (-51.425995) | 3.246772 / 6.876477 (-3.629704) | 3.520690 / 2.142072 (1.378618) | 1.063686 / 4.805227 (-3.741541) | 0.218667 / 6.500664 (-6.281997) | 0.084169 / 0.075469 (0.008700) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.791949 / 1.841788 (-0.049839) | 23.148341 / 8.074308 (15.074033) | 23.321125 / 10.191392 (13.129733) | 0.245391 / 0.680424 (-0.435032) | 0.031911 / 0.534201 (-0.502290) | 0.470707 / 0.579283 (-0.108576) | 0.608195 / 0.434364 (0.173832) | 0.559590 / 0.540337 (0.019253) | 0.786007 / 1.386936 (-0.600929) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8e071f565cc0801f73f7f34fba92dc30a43946a9 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008428 / 0.011353 (-0.002925) | 0.004064 / 0.011008 (-0.006944) | 0.088421 / 0.038508 (0.049913) | 0.078042 / 0.023109 (0.054933) | 0.306356 / 0.275898 (0.030458) | 0.349766 / 0.323480 (0.026286) | 0.004086 / 0.007986 (-0.003900) | 0.003900 / 0.004328 (-0.000428) | 0.068379 / 0.004250 (0.064129) | 0.056214 / 0.037052 (0.019161) | 0.310211 / 0.258489 (0.051722) | 0.363692 / 0.293841 (0.069851) | 0.050421 / 0.128546 (-0.078125) | 0.011661 / 0.075646 (-0.063985) | 0.298400 / 0.419271 (-0.120871) | 0.063503 / 0.043533 (0.019970) | 0.339799 / 0.255139 (0.084660) | 0.359479 / 0.283200 (0.076279) | 0.039265 / 0.141683 (-0.102418) | 1.390578 / 1.452155 (-0.061576) | 1.573333 / 1.492716 (0.080617) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260442 / 0.018006 (0.242436) | 0.560390 / 0.000490 (0.559900) | 0.003926 / 0.000200 (0.003726) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025809 / 0.037411 (-0.011602) | 0.081902 / 0.014526 (0.067376) | 0.093655 / 0.176557 (-0.082901) | 0.149432 / 0.737135 (-0.587703) | 0.099059 / 0.296338 (-0.197279) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.505644 / 0.215209 (0.290435) | 5.108292 / 2.077655 (3.030638) | 2.121689 / 1.504120 (0.617569) | 1.846576 / 1.541195 (0.305381) | 1.836587 / 1.468490 (0.368097) | 0.708088 / 4.584777 (-3.876689) | 4.562630 / 3.745712 (0.816918) | 3.934747 / 5.269862 (-1.335115) | 2.453409 / 4.565676 (-2.112267) | 0.081908 / 0.424275 (-0.342367) | 0.012996 / 0.007607 (0.005389) | 0.636588 / 0.226044 (0.410544) | 6.361086 / 2.268929 (4.092157) | 2.911681 / 55.444624 (-52.532943) | 2.271809 / 6.876477 (-4.604667) | 2.670327 / 2.142072 (0.528254) | 0.943688 / 4.805227 (-3.861539) | 0.191677 / 6.500664 (-6.308988) | 0.066008 / 0.075469 (-0.009461) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.400139 / 1.841788 (-0.441648) | 21.896198 / 8.074308 (13.821890) | 17.853604 / 10.191392 (7.662212) | 0.226603 / 0.680424 (-0.453821) | 0.026682 / 0.534201 (-0.507518) | 0.460131 / 0.579283 (-0.119152) | 0.536790 / 0.434364 (0.102427) | 0.492913 / 0.540337 
(-0.047424) | 0.724290 / 1.386936 (-0.662646) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007795 / 0.011353 (-0.003557) | 0.009045 / 0.011008 (-0.001963) | 0.085480 / 0.038508 (0.046972) | 0.071881 / 0.023109 (0.048772) | 0.514520 / 0.275898 (0.238622) | 0.569762 / 0.323480 (0.246282) | 0.006126 / 0.007986 (-0.001859) | 0.004153 / 0.004328 (-0.000175) | 0.072150 / 0.004250 (0.067900) | 0.056511 / 0.037052 (0.019458) | 0.484097 / 0.258489 (0.225607) | 0.532673 / 0.293841 (0.238832) | 0.040974 / 0.128546 (-0.087572) | 0.012071 / 0.075646 (-0.063575) | 0.102608 / 0.419271 (-0.316663) | 0.052893 / 0.043533 (0.009360) | 0.485832 / 0.255139 (0.230693) | 0.530479 / 0.283200 (0.247280) | 0.031556 / 0.141683 (-0.110127) | 1.737508 / 1.452155 (0.285354) | 1.834637 / 1.492716 (0.341921) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.423314 / 0.018006 (0.405308) | 0.614163 / 0.000490 (0.613673) | 0.052784 / 0.000200 (0.052584) | 0.000206 / 0.000054 (0.000151) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031728 / 0.037411 (-0.005684) | 0.088048 / 0.014526 (0.073522) | 0.105759 / 0.176557 (-0.070798) | 0.181433 / 0.737135 (-0.555703) | 0.103133 / 0.296338 (-0.193205) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.659710 / 0.215209 (0.444501) | 5.876378 / 2.077655 (3.798723) | 2.899444 / 1.504120 (1.395324) | 2.871592 / 1.541195 (1.330397) | 2.861205 
/ 1.468490 (1.392715) | 0.879452 / 4.584777 (-3.705325) | 5.395988 / 3.745712 (1.650275) | 4.548359 / 5.269862 (-0.721502) | 2.946601 / 4.565676 (-1.619076) | 0.099832 / 0.424275 (-0.324443) | 0.008958 / 0.007607 (0.001351) | 0.778480 / 0.226044 (0.552435) | 7.672282 / 2.268929 (5.403354) | 3.963701 / 55.444624 (-51.480923) | 3.154950 / 6.876477 (-3.721527) | 3.351070 / 2.142072 (1.208997) | 1.059459 / 4.805227 (-3.745768) | 0.212035 / 6.500664 (-6.288629) | 0.076941 / 0.075469 (0.001472) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.639813 / 1.841788 (-0.201975) | 24.807517 / 8.074308 (16.733208) | 20.662500 / 10.191392 (10.471108) | 0.244486 / 0.680424 (-0.435937) | 0.032335 / 0.534201 (-0.501866) | 0.470896 / 0.579283 (-0.108387) | 0.581561 / 0.434364 (0.147197) | 0.495158 / 0.540337 (-0.045179) | 0.788350 / 1.386936 (-0.598586) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#99641ced2e08a28cb876f483babcdd43f7dd76d2 \"CML watermark\")\n" ]
Remove RGB -> BGR image conversion in Object Detection tutorial
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6228/reactions" }
PR_kwDODunzps5Z5HZi
{ "diff_url": "https://github.com/huggingface/datasets/pull/6228.diff", "html_url": "https://github.com/huggingface/datasets/pull/6228", "merged_at": "2023-09-08T17:52:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/6228.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6228" }
2023-09-08T16:09:13Z
https://api.github.com/repos/huggingface/datasets/issues/6228/comments
Fix #6225
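For context, a brief sketch of why the conversion could be dropped: images decoded by `datasets` are `PIL.Image` objects, whose channel order is RGB (unlike OpenCV's BGR), so the RGB -> BGR round-trip in the tutorial added nothing (`transform` below is a stand-in for the tutorial's augmentation pipeline):

```python
import numpy as np

# PIL images use RGB channel order, so a decoded example can be passed
# straight into an RGB-based pipeline with no channel reordering.
image = np.array(example["image"])  # H x W x 3, RGB
out = transform(image=image)        # `transform`: hypothetical pipeline
```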
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/6228/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6228/timeline
closed
false
6,228
null
2023-09-08T17:52:16Z
null
true
1,887,462,591
https://api.github.com/repos/huggingface/datasets/issues/6226
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6226/events
[]
null
2023-09-08T12:29:21Z
[]
https://github.com/huggingface/datasets/pull/6226
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005920 / 0.011353 (-0.005433) | 0.003623 / 0.011008 (-0.007385) | 0.079283 / 0.038508 (0.040775) | 0.058325 / 0.023109 (0.035216) | 0.313733 / 0.275898 (0.037835) | 0.360790 / 0.323480 (0.037310) | 0.004653 / 0.007986 (-0.003332) | 0.002876 / 0.004328 (-0.001452) | 0.062137 / 0.004250 (0.057886) | 0.045084 / 0.037052 (0.008031) | 0.328569 / 0.258489 (0.070079) | 0.368965 / 0.293841 (0.075124) | 0.027085 / 0.128546 (-0.101461) | 0.008051 / 0.075646 (-0.067595) | 0.260222 / 0.419271 (-0.159050) | 0.045477 / 0.043533 (0.001944) | 0.315344 / 0.255139 (0.060205) | 0.348215 / 0.283200 (0.065015) | 0.021352 / 0.141683 (-0.120331) | 1.432200 / 1.452155 (-0.019955) | 1.509217 / 1.492716 (0.016501) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199843 / 0.018006 (0.181837) | 0.427925 / 0.000490 (0.427435) | 0.002903 / 0.000200 (0.002703) | 0.000067 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023121 / 0.037411 (-0.014291) | 0.072451 / 0.014526 (0.057925) | 0.083260 / 0.176557 (-0.093296) | 0.142879 / 0.737135 (-0.594257) | 0.084053 / 0.296338 (-0.212286) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.394922 / 0.215209 (0.179713) | 3.956111 / 2.077655 (1.878456) | 
1.926411 / 1.504120 (0.422291) | 1.743840 / 1.541195 (0.202646) | 1.776957 / 1.468490 (0.308467) | 0.502134 / 4.584777 (-4.082643) | 3.001721 / 3.745712 (-0.743991) | 2.852496 / 5.269862 (-2.417365) | 1.862794 / 4.565676 (-2.702883) | 0.057544 / 0.424275 (-0.366731) | 0.006751 / 0.007607 (-0.000856) | 0.470619 / 0.226044 (0.244575) | 4.696674 / 2.268929 (2.427746) | 2.326545 / 55.444624 (-53.118080) | 1.980888 / 6.876477 (-4.895589) | 2.139172 / 2.142072 (-0.002901) | 0.590256 / 4.805227 (-4.214971) | 0.125815 / 6.500664 (-6.374849) | 0.061000 / 0.075469 (-0.014469) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.261948 / 1.841788 (-0.579839) | 18.317473 / 8.074308 (10.243165) | 13.810883 / 10.191392 (3.619491) | 0.146180 / 0.680424 (-0.534244) | 0.016701 / 0.534201 (-0.517500) | 0.330731 / 0.579283 (-0.248552) | 0.345103 / 0.434364 (-0.089261) | 0.374449 / 0.540337 (-0.165889) | 0.522463 / 1.386936 (-0.864473) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006217 / 0.011353 (-0.005136) | 0.003678 / 0.011008 (-0.007331) | 0.062321 / 0.038508 (0.023813) | 0.059256 / 0.023109 (0.036147) | 0.444501 / 0.275898 (0.168603) | 0.475881 / 0.323480 (0.152401) | 0.004863 / 0.007986 (-0.003123) | 0.002916 / 0.004328 (-0.001412) | 0.062197 / 0.004250 (0.057946) | 0.048449 / 0.037052 (0.011396) | 0.443680 / 0.258489 (0.185191) | 0.484570 / 0.293841 (0.190729) | 0.028694 / 0.128546 (-0.099852) | 0.008096 / 0.075646 (-0.067550) | 0.068347 / 0.419271 (-0.350924) | 0.041031 / 0.043533 (-0.002502) | 0.443907 / 0.255139 (0.188768) | 0.469888 / 0.283200 (0.186689) | 0.020237 / 0.141683 (-0.121445) | 1.438484 / 1.452155 (-0.013671) | 1.512652 / 1.492716 (0.019936) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243118 / 0.018006 (0.225111) | 0.416797 / 0.000490 (0.416308) | 0.010421 / 0.000200 (0.010221) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026191 / 0.037411 (-0.011220) | 0.080881 / 0.014526 (0.066355) | 0.093207 / 0.176557 (-0.083349) | 0.146708 / 0.737135 (-0.590428) | 0.091676 / 0.296338 (-0.204663) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.461475 / 0.215209 (0.246266) | 4.617351 / 2.077655 (2.539696) | 2.564369 / 1.504120 (1.060249) | 2.393263 / 1.541195 (0.852068) | 2.447343 / 1.468490 (0.978853) | 0.508764 / 4.584777 (-4.076013) | 3.075460 / 3.745712 (-0.670252) | 2.884683 / 5.269862 (-2.385179) | 1.866432 / 4.565676 (-2.699244) | 0.058759 / 0.424275 (-0.365516) | 0.006591 / 0.007607 (-0.001016) | 0.537718 / 0.226044 (0.311674) | 5.378709 / 2.268929 (3.109781) | 3.006751 / 55.444624 (-52.437873) | 2.666653 / 6.876477 (-4.209824) | 2.847559 / 2.142072 (0.705486) | 0.596878 / 4.805227 (-4.208350) | 0.125073 / 6.500664 (-6.375591) | 0.061345 / 0.075469 (-0.014124) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.349066 / 1.841788 (-0.492721) | 18.684735 / 8.074308 (10.610427) | 15.128142 / 10.191392 (4.936750) | 0.149254 / 0.680424 (-0.531170) | 0.017911 / 0.534201 (-0.516290) | 0.344057 / 0.579283 (-0.235226) | 0.363474 / 0.434364 (-0.070890) | 0.399425 / 0.540337 (-0.140912) | 0.549329 / 1.386936 (-0.837607) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e675a2396efb5204a4553721001f3b46aa4cc334 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005843 / 0.011353 (-0.005510) | 0.003549 / 0.011008 (-0.007460) | 0.082318 / 0.038508 (0.043810) | 0.056835 / 0.023109 (0.033726) | 0.312968 / 0.275898 (0.037070) | 0.345918 / 0.323480 (0.022438) | 0.003239 / 0.007986 (-0.004747) | 0.002762 / 0.004328 (-0.001567) | 0.062362 / 0.004250 (0.058111) | 0.045934 / 0.037052 (0.008882) | 0.317035 / 0.258489 (0.058546) | 0.358473 / 0.293841 (0.064632) | 0.027311 / 0.128546 (-0.101235) | 0.007994 / 0.075646 (-0.067652) | 0.261565 / 0.419271 (-0.157706) | 0.044942 / 0.043533 (0.001410) | 0.313092 / 0.255139 (0.057953) | 0.339021 / 0.283200 (0.055821) | 0.021555 / 0.141683 (-0.120127) | 1.421232 / 1.452155 (-0.030923) | 1.487597 / 1.492716 (-0.005119) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206432 / 0.018006 (0.188425) | 0.421932 / 0.000490 (0.421442) | 0.002825 / 0.000200 (0.002625) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022795 / 0.037411 (-0.014616) | 0.072666 / 0.014526 (0.058140) | 0.082779 / 0.176557 (-0.093778) | 0.142320 / 0.737135 (-0.594815) | 0.083343 / 0.296338 (-0.212995) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.394227 / 0.215209 (0.179018) | 3.931858 / 2.077655 (1.854203) | 1.909953 / 1.504120 (0.405833) | 1.711298 / 1.541195 (0.170104) | 1.745816 / 1.468490 (0.277326) | 0.503670 / 4.584777 (-4.081107) | 3.053677 / 3.745712 (-0.692035) | 2.802597 / 5.269862 (-2.467264) | 1.825315 / 4.565676 (-2.740362) | 0.057741 / 0.424275 (-0.366534) | 0.006581 / 0.007607 (-0.001027) | 0.463597 / 0.226044 (0.237552) | 4.638821 / 2.268929 (2.369893) | 2.301266 / 55.444624 (-53.143358) | 1.967111 / 6.876477 (-4.909365) | 2.097756 / 2.142072 (-0.044317) | 0.589840 / 4.805227 (-4.215387) | 0.125538 / 6.500664 (-6.375126) | 0.061203 / 0.075469 (-0.014266) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291815 / 1.841788 (-0.549973) | 17.997040 / 8.074308 (9.922732) | 13.616252 / 10.191392 (3.424860) | 0.137349 / 0.680424 (-0.543075) | 0.016626 / 0.534201 (-0.517575) | 0.329611 / 0.579283 (-0.249672) | 0.346592 / 0.434364 (-0.087772) | 0.379521 / 0.540337 
(-0.160817) | 0.528058 / 1.386936 (-0.858878) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006073 / 0.011353 (-0.005280) | 0.003594 / 0.011008 (-0.007414) | 0.062537 / 0.038508 (0.024029) | 0.057503 / 0.023109 (0.034394) | 0.449427 / 0.275898 (0.173529) | 0.482729 / 0.323480 (0.159249) | 0.004690 / 0.007986 (-0.003295) | 0.002901 / 0.004328 (-0.001428) | 0.062421 / 0.004250 (0.058171) | 0.046405 / 0.037052 (0.009353) | 0.456578 / 0.258489 (0.198089) | 0.492268 / 0.293841 (0.198427) | 0.028283 / 0.128546 (-0.100263) | 0.008028 / 0.075646 (-0.067618) | 0.067885 / 0.419271 (-0.351387) | 0.041273 / 0.043533 (-0.002260) | 0.449870 / 0.255139 (0.194731) | 0.472305 / 0.283200 (0.189106) | 0.018556 / 0.141683 (-0.123127) | 1.449016 / 1.452155 (-0.003138) | 1.490839 / 1.492716 (-0.001877) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226569 / 0.018006 (0.208563) | 0.417106 / 0.000490 (0.416616) | 0.002784 / 0.000200 (0.002584) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025803 / 0.037411 (-0.011608) | 0.081084 / 0.014526 (0.066559) | 0.091851 / 0.176557 (-0.084706) | 0.143982 / 0.737135 (-0.593153) | 0.090511 / 0.296338 (-0.205827) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463664 / 0.215209 (0.248454) | 4.634528 / 2.077655 (2.556874) | 2.574739 / 1.504120 (1.070619) | 2.412857 / 1.541195 (0.871662) | 
2.442858 / 1.468490 (0.974368) | 0.511990 / 4.584777 (-4.072787) | 3.070345 / 3.745712 (-0.675367) | 2.842290 / 5.269862 (-2.427571) | 1.846727 / 4.565676 (-2.718950) | 0.058852 / 0.424275 (-0.365424) | 0.006624 / 0.007607 (-0.000983) | 0.539616 / 0.226044 (0.313571) | 5.410784 / 2.268929 (3.141856) | 3.065593 / 55.444624 (-52.379031) | 2.677930 / 6.876477 (-4.198547) | 2.817548 / 2.142072 (0.675476) | 0.602672 / 4.805227 (-4.202555) | 0.125689 / 6.500664 (-6.374975) | 0.062007 / 0.075469 (-0.013462) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.335336 / 1.841788 (-0.506452) | 18.310099 / 8.074308 (10.235791) | 14.818452 / 10.191392 (4.627060) | 0.154001 / 0.680424 (-0.526423) | 0.017892 / 0.534201 (-0.516309) | 0.345989 / 0.579283 (-0.233294) | 0.352108 / 0.434364 (-0.082256) | 0.394333 / 0.540337 (-0.146004) | 0.547680 / 1.386936 (-0.839256) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d058d6e9b849acb5bc61d7df597a94253b487eb6 \"CML watermark\")\n" ]
Add push_to_hub with multiple configs docs
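As context for this docs PR, here is a minimal sketch of the feature it documents: pushing several configurations (subsets) of a dataset to a single Hub repo via the `config_name` argument of `push_to_hub`. The repo id `username/my_dataset` and the toy data are placeholders, not taken from the PR itself.

```python
# Sketch: one Hub repo, multiple named configs via push_to_hub(config_name=...).
from datasets import Dataset, load_dataset

english = Dataset.from_dict({"text": ["hello", "world"]})
french = Dataset.from_dict({"text": ["bonjour", "monde"]})

# Each call writes its data under a separate config of the same repo.
english.push_to_hub("username/my_dataset", config_name="en")
french.push_to_hub("username/my_dataset", config_name="fr")

# A specific config can then be loaded by name.
ds = load_dataset("username/my_dataset", "fr")
```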
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6226/reactions" }
PR_kwDODunzps5Z3arq
{ "diff_url": "https://github.com/huggingface/datasets/pull/6226.diff", "html_url": "https://github.com/huggingface/datasets/pull/6226", "merged_at": "2023-09-08T12:20:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/6226.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6226" }
2023-09-08T11:08:55Z
https://api.github.com/repos/huggingface/datasets/issues/6226/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6226/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6226/timeline
closed
false
6,226
null
2023-09-08T12:20:51Z
null
true
1,887,054,320
https://api.github.com/repos/huggingface/datasets/issues/6225
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6225/events
[]
null
2023-09-08T17:52:18Z
[]
https://github.com/huggingface/datasets/issues/6225
NONE
completed
null
null
[ "Good catch!" ]
Conversion from RGB to BGR in Object Detection tutorial
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6225/reactions" }
I_kwDODunzps5weinw
null
2023-09-08T06:49:19Z
https://api.github.com/repos/huggingface/datasets/issues/6225/comments
The [tutorial](https://huggingface.co/docs/datasets/main/en/object_detection) mentions the necessity of converting the input image from RGB to BGR: > albumentations expects the image to be in BGR format, not RGB, so you’ll have to convert the image before applying the transform. [Link to tutorial](https://github.com/huggingface/datasets/blob/0a068dbf3b446417ffd89d32857608394ec699e6/docs/source/object_detection.mdx#L77) However, the relevant albumentations tutorials [on channels conversion](https://albumentations.ai/docs/examples/example/#read-the-image-from-the-disk-and-convert-it-from-the-bgr-color-space-to-the-rgb-color-space) and [on boxes](https://albumentations.ai/docs/examples/example_bboxes/) imply that this is no longer the case. I suggest removing this outdated conversion from the tutorial.
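To make the issue's claim concrete, here is a minimal sketch (not the tutorial's exact code) showing albumentations applied directly to an RGB numpy array with bounding boxes, with no RGB-to-BGR conversion step. The image size and box coordinates are made up for illustration.

```python
# Sketch: albumentations works on an RGB array as-is; no cv2.cvtColor needed.
import albumentations as A
import numpy as np

transform = A.Compose(
    [A.Resize(480, 480), A.HorizontalFlip(p=1.0)],
    bbox_params=A.BboxParams(format="coco", label_fields=["category"]),
)

image = np.zeros((640, 640, 3), dtype=np.uint8)  # RGB image, no conversion
out = transform(image=image, bboxes=[[20, 30, 100, 80]], category=[0])
print(out["image"].shape, out["bboxes"])
```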
{ "avatar_url": "https://avatars.githubusercontent.com/u/33297401?v=4", "events_url": "https://api.github.com/users/samokhinv/events{/privacy}", "followers_url": "https://api.github.com/users/samokhinv/followers", "following_url": "https://api.github.com/users/samokhinv/following{/other_user}", "gists_url": "https://api.github.com/users/samokhinv/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/samokhinv", "id": 33297401, "login": "samokhinv", "node_id": "MDQ6VXNlcjMzMjk3NDAx", "organizations_url": "https://api.github.com/users/samokhinv/orgs", "received_events_url": "https://api.github.com/users/samokhinv/received_events", "repos_url": "https://api.github.com/users/samokhinv/repos", "site_admin": false, "starred_url": "https://api.github.com/users/samokhinv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samokhinv/subscriptions", "type": "User", "url": "https://api.github.com/users/samokhinv" }
https://api.github.com/repos/huggingface/datasets/issues/6225/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6225/timeline
closed
false
6,225
null
2023-09-08T17:52:17Z
null
false
1,886,043,692
https://api.github.com/repos/huggingface/datasets/issues/6224
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6224/events
[]
null
2023-09-07T15:46:10Z
[]
https://github.com/huggingface/datasets/pull/6224
COLLABORATOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009450 / 0.011353 (-0.001903) | 0.007339 / 0.011008 (-0.003669) | 0.110150 / 0.038508 (0.071641) | 0.087794 / 0.023109 (0.064685) | 0.472099 / 0.275898 (0.196201) | 0.476622 / 0.323480 (0.153142) | 0.005057 / 0.007986 (-0.002929) | 0.005262 / 0.004328 (0.000933) | 0.103059 / 0.004250 (0.098808) | 0.069815 / 0.037052 (0.032763) | 0.489377 / 0.258489 (0.230888) | 0.547087 / 0.293841 (0.253247) | 0.048883 / 0.128546 (-0.079663) | 0.019192 / 0.075646 (-0.056454) | 0.410865 / 0.419271 (-0.008407) | 0.076215 / 0.043533 (0.032682) | 0.484825 / 0.255139 (0.229686) | 0.519035 / 0.283200 (0.235835) | 0.042030 / 0.141683 (-0.099653) | 1.909630 / 1.452155 (0.457475) | 2.120869 / 1.492716 (0.628153) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267600 / 0.018006 (0.249594) | 0.619135 / 0.000490 (0.618645) | 0.005897 / 0.000200 (0.005697) | 0.000142 / 0.000054 (0.000087) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033265 / 0.037411 (-0.004146) | 0.104476 / 0.014526 (0.089950) | 0.129199 / 0.176557 (-0.047358) | 0.196898 / 0.737135 (-0.540238) | 0.118852 / 0.296338 (-0.177487) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.598908 / 0.215209 (0.383699) | 6.263096 / 2.077655 (4.185441) | 2.672134 
/ 1.504120 (1.168014) | 2.428706 / 1.541195 (0.887511) | 2.431651 / 1.468490 (0.963161) | 0.918465 / 4.584777 (-3.666312) | 5.667857 / 3.745712 (1.922145) | 5.113696 / 5.269862 (-0.156166) | 3.276805 / 4.565676 (-1.288872) | 0.101829 / 0.424275 (-0.322446) | 0.010224 / 0.007607 (0.002617) | 0.741547 / 0.226044 (0.515502) | 7.517002 / 2.268929 (5.248073) | 3.546353 / 55.444624 (-51.898272) | 2.845956 / 6.876477 (-4.030521) | 3.172777 / 2.142072 (1.030705) | 1.153485 / 4.805227 (-3.651742) | 0.225758 / 6.500664 (-6.274906) | 0.084333 / 0.075469 (0.008864) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.704645 / 1.841788 (-0.137143) | 27.044110 / 8.074308 (18.969801) | 24.653837 / 10.191392 (14.462445) | 0.235452 / 0.680424 (-0.444971) | 0.029285 / 0.534201 (-0.504916) | 0.576122 / 0.579283 (-0.003161) | 0.626263 / 0.434364 (0.191899) | 0.600201 / 0.540337 (0.059864) | 0.838406 / 1.386936 (-0.548530) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.013754 / 0.011353 (0.002401) | 0.005954 / 0.011008 (-0.005054) | 0.089766 / 0.038508 (0.051258) | 0.096126 / 0.023109 (0.073017) | 0.556455 / 0.275898 (0.280557) | 0.579302 / 0.323480 (0.255822) | 0.009222 / 0.007986 (0.001236) | 0.006128 / 0.004328 (0.001800) | 0.099725 / 0.004250 (0.095475) | 0.075642 / 0.037052 (0.038589) | 0.556645 / 0.258489 (0.298156) | 0.615898 / 0.293841 (0.322057) | 0.057728 / 0.128546 (-0.070818) | 0.016746 / 0.075646 (-0.058900) | 0.098053 / 0.419271 (-0.321219) | 0.066676 / 0.043533 (0.023143) | 0.534156 / 0.255139 (0.279017) | 0.590020 / 0.283200 (0.306820) | 0.038782 / 0.141683 (-0.102901) | 1.952301 / 1.452155 (0.500146) | 2.104255 / 1.492716 (0.611539) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.305945 / 0.018006 (0.287939) | 0.643915 / 0.000490 (0.643426) | 0.006268 / 0.000200 (0.006068) | 0.000156 / 0.000054 (0.000102) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039891 / 0.037411 (0.002479) | 0.117888 / 0.014526 (0.103363) | 0.134230 / 0.176557 (-0.042326) | 0.212544 / 0.737135 (-0.524591) | 0.128858 / 0.296338 (-0.167480) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.718165 / 0.215209 (0.502955) | 7.023867 / 2.077655 (4.946212) | 3.391344 / 1.504120 (1.887224) | 3.021248 / 1.541195 (1.480053) | 3.010217 / 1.468490 (1.541727) | 0.932608 / 4.584777 (-3.652169) | 5.787536 / 3.745712 (2.041824) | 5.221305 / 5.269862 (-0.048557) | 3.282552 / 4.565676 (-1.283125) | 0.105486 / 0.424275 (-0.318789) | 0.009800 / 0.007607 (0.002193) | 0.839358 / 0.226044 (0.613314) | 8.279712 / 2.268929 (6.010784) | 4.118466 / 55.444624 (-51.326158) | 3.407738 / 6.876477 (-3.468739) | 3.632538 / 2.142072 (1.490466) | 1.109673 / 4.805227 (-3.695555) | 0.216541 / 6.500664 (-6.284123) | 0.094031 / 0.075469 (0.018562) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.983979 / 1.841788 (0.142191) | 27.125882 / 8.074308 (19.051573) | 24.714002 / 10.191392 (14.522610) | 0.264417 / 0.680424 (-0.416007) | 0.034783 / 0.534201 (-0.499418) | 0.533304 / 0.579283 (-0.045979) | 0.647798 / 0.434364 (0.213434) | 0.588680 / 0.540337 (0.048343) | 0.854250 / 1.386936 (-0.532686) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#491604b46b1fd8d6fd1b7531f7917ccd657665a6 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006664 / 0.011353 (-0.004689) | 0.004164 / 0.011008 (-0.006844) | 0.085192 / 0.038508 (0.046684) | 0.073578 / 0.023109 (0.050469) | 0.356379 / 0.275898 (0.080481) | 0.389381 / 0.323480 (0.065902) | 0.005527 / 0.007986 (-0.002459) | 0.003488 / 0.004328 (-0.000840) | 0.065640 / 0.004250 (0.061390) | 0.055013 / 0.037052 (0.017960) | 0.358002 / 0.258489 (0.099513) | 0.400663 / 0.293841 (0.106822) | 0.030937 / 0.128546 (-0.097609) | 0.008838 / 0.075646 (-0.066808) | 0.287488 / 0.419271 (-0.131784) | 0.051503 / 0.043533 (0.007971) | 0.353945 / 0.255139 (0.098806) | 0.388778 / 0.283200 (0.105579) | 0.023346 / 0.141683 (-0.118337) | 1.479621 / 1.452155 (0.027466) | 1.559164 / 1.492716 (0.066448) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245160 / 0.018006 (0.227154) | 0.561890 / 0.000490 (0.561400) | 0.004339 / 0.000200 (0.004139) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028460 / 0.037411 (-0.008952) | 0.082046 / 0.014526 (0.067520) | 0.098005 / 0.176557 (-0.078552) | 0.154171 / 0.737135 (-0.582965) | 0.097632 / 0.296338 (-0.198707) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.389993 / 0.215209 (0.174784) | 3.893287 / 2.077655 (1.815632) | 1.885668 / 1.504120 (0.381549) | 1.715055 / 1.541195 (0.173860) | 1.778008 / 1.468490 (0.309518) | 0.482818 / 4.584777 (-4.101959) | 3.572153 / 3.745712 (-0.173559) | 3.267666 / 5.269862 (-2.002196) | 2.088394 / 4.565676 (-2.477282) | 0.056961 / 0.424275 (-0.367314) | 0.007784 / 0.007607 (0.000177) | 0.466586 / 0.226044 (0.240542) | 4.652505 / 2.268929 (2.383576) | 2.491392 / 55.444624 (-52.953233) | 2.127600 / 6.876477 (-4.748877) | 2.296778 / 2.142072 (0.154705) | 0.582332 / 4.805227 (-4.222895) | 0.134372 / 6.500664 (-6.366292) | 0.061737 / 0.075469 (-0.013732) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253647 / 1.841788 (-0.588140) | 19.802353 / 8.074308 (11.728045) | 14.262815 / 10.191392 (4.071423) | 0.169489 / 0.680424 (-0.510935) | 0.018108 / 0.534201 (-0.516093) | 0.391711 / 0.579283 (-0.187572) | 0.406169 / 0.434364 (-0.028195) | 0.456728 / 0.540337 
(-0.083609) | 0.633538 / 1.386936 (-0.753398) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006661 / 0.011353 (-0.004692) | 0.004181 / 0.011008 (-0.006827) | 0.064945 / 0.038508 (0.026437) | 0.073965 / 0.023109 (0.050856) | 0.406549 / 0.275898 (0.130651) | 0.441568 / 0.323480 (0.118089) | 0.005579 / 0.007986 (-0.002407) | 0.003523 / 0.004328 (-0.000805) | 0.065270 / 0.004250 (0.061019) | 0.055596 / 0.037052 (0.018544) | 0.407701 / 0.258489 (0.149212) | 0.444609 / 0.293841 (0.150768) | 0.031749 / 0.128546 (-0.096797) | 0.008680 / 0.075646 (-0.066966) | 0.071154 / 0.419271 (-0.348117) | 0.047376 / 0.043533 (0.003843) | 0.406409 / 0.255139 (0.151270) | 0.420477 / 0.283200 (0.137278) | 0.023707 / 0.141683 (-0.117976) | 1.484516 / 1.452155 (0.032361) | 1.568493 / 1.492716 (0.075777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266534 / 0.018006 (0.248528) | 0.573806 / 0.000490 (0.573316) | 0.006247 / 0.000200 (0.006048) | 0.000165 / 0.000054 (0.000110) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033436 / 0.037411 (-0.003976) | 0.091947 / 0.014526 (0.077421) | 0.105556 / 0.176557 (-0.071000) | 0.162094 / 0.737135 (-0.575041) | 0.107879 / 0.296338 (-0.188459) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429126 / 0.215209 (0.213917) | 4.281329 / 2.077655 (2.203675) | 2.295406 / 1.504120 (0.791286) | 2.123336 / 1.541195 (0.582141) | 2.190804 
/ 1.468490 (0.722314) | 0.492972 / 4.584777 (-4.091805) | 3.638485 / 3.745712 (-0.107227) | 3.304576 / 5.269862 (-1.965285) | 2.063694 / 4.565676 (-2.501983) | 0.058549 / 0.424275 (-0.365726) | 0.007591 / 0.007607 (-0.000016) | 0.504268 / 0.226044 (0.278223) | 5.031990 / 2.268929 (2.763061) | 2.773173 / 55.444624 (-52.671451) | 2.430789 / 6.876477 (-4.445688) | 2.699900 / 2.142072 (0.557828) | 0.593220 / 4.805227 (-4.212007) | 0.133710 / 6.500664 (-6.366954) | 0.059840 / 0.075469 (-0.015629) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.351158 / 1.841788 (-0.490629) | 20.176310 / 8.074308 (12.102002) | 14.933202 / 10.191392 (4.741810) | 0.169920 / 0.680424 (-0.510503) | 0.020156 / 0.534201 (-0.514045) | 0.397440 / 0.579283 (-0.181843) | 0.409395 / 0.434364 (-0.024969) | 0.471066 / 0.540337 (-0.069271) | 0.642670 / 1.386936 (-0.744266) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cf90ca7fbfd9c4639cc3faf0a349eb26490e38fc \"CML watermark\")\n" ]
Ignore `dataset_info.json` in data files resolution
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6224/reactions" }
PR_kwDODunzps5Zym3j
{ "diff_url": "https://github.com/huggingface/datasets/pull/6224.diff", "html_url": "https://github.com/huggingface/datasets/pull/6224", "merged_at": "2023-09-07T15:37:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/6224.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6224" }
2023-09-07T14:43:51Z
https://api.github.com/repos/huggingface/datasets/issues/6224/comments
`save_to_disk` creates this file, and so does [`HuggingFaceDatasetSaver`](https://github.com/gradio-app/gradio/blob/26fef8c7f85a006c7e25cdbed1792df19c512d02/gradio/flagging.py#L214), so ignoring it is needed to avoid issues such as [this one](https://discord.com/channels/879548962464493619/1149295819938349107/1149295819938349107).
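As a hypothetical illustration of the failure mode this PR avoids (the directory layout and file names below are made up, not taken from the PR): a directory that mixes plain data files with a `dataset_info.json` written by `save_to_disk`-style tooling. Before this fix, generic data files resolution could pick up `dataset_info.json` as if it were data; with the fix, it is excluded and only the real data files are loaded.

```python
# Sketch: dataset_info.json sitting next to data files should not be
# treated as a data file by load_dataset's data files resolution.
import json, os
from datasets import load_dataset

os.makedirs("flagged", exist_ok=True)
with open("flagged/data.csv", "w") as f:
    f.write("text,label\nhello,0\nworld,1\n")
with open("flagged/dataset_info.json", "w") as f:
    json.dump({"description": "metadata, not data"}, f)

# With dataset_info.json ignored during resolution, only data.csv is read.
ds = load_dataset("flagged")
print(ds)
```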
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/6224/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6224/timeline
closed
false
6,224
null
2023-09-07T15:37:20Z
null
true
1,885,710,696
https://api.github.com/repos/huggingface/datasets/issues/6223
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6223/events
[]
null
2023-09-13T22:32:31Z
[]
https://github.com/huggingface/datasets/pull/6223
CONTRIBUTOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006757 / 0.011353 (-0.004596) | 0.004233 / 0.011008 (-0.006775) | 0.084123 / 0.038508 (0.045614) | 0.077513 / 0.023109 (0.054404) | 0.357024 / 0.275898 (0.081126) | 0.392956 / 0.323480 (0.069476) | 0.005408 / 0.007986 (-0.002577) | 0.003363 / 0.004328 (-0.000966) | 0.064395 / 0.004250 (0.060145) | 0.054711 / 0.037052 (0.017659) | 0.367287 / 0.258489 (0.108798) | 0.402934 / 0.293841 (0.109093) | 0.031845 / 0.128546 (-0.096701) | 0.008646 / 0.075646 (-0.067000) | 0.288740 / 0.419271 (-0.130531) | 0.053171 / 0.043533 (0.009638) | 0.360711 / 0.255139 (0.105572) | 0.388707 / 0.283200 (0.105507) | 0.025321 / 0.141683 (-0.116361) | 1.500684 / 1.452155 (0.048529) | 1.585747 / 1.492716 (0.093030) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207329 / 0.018006 (0.189323) | 0.465304 / 0.000490 (0.464814) | 0.003229 / 0.000200 (0.003029) | 0.000080 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028752 / 0.037411 (-0.008659) | 0.085327 / 0.014526 (0.070802) | 0.332210 / 0.176557 (0.155653) | 0.178779 / 0.737135 (-0.558356) | 0.097765 / 0.296338 (-0.198573) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403710 / 0.215209 (0.188501) | 4.027069 / 2.077655 (1.949414) | 2.053451 
/ 1.504120 (0.549331) | 1.906647 / 1.541195 (0.365452) | 1.992507 / 1.468490 (0.524017) | 0.490203 / 4.584777 (-4.094574) | 3.696569 / 3.745712 (-0.049143) | 3.319919 / 5.269862 (-1.949943) | 2.072794 / 4.565676 (-2.492883) | 0.057893 / 0.424275 (-0.366383) | 0.007723 / 0.007607 (0.000116) | 0.485400 / 0.226044 (0.259355) | 4.842891 / 2.268929 (2.573963) | 2.578949 / 55.444624 (-52.865675) | 2.229217 / 6.876477 (-4.647259) | 2.468017 / 2.142072 (0.325945) | 0.595236 / 4.805227 (-4.209992) | 0.135641 / 6.500664 (-6.365023) | 0.061232 / 0.075469 (-0.014237) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.307059 / 1.841788 (-0.534729) | 20.108581 / 8.074308 (12.034273) | 14.438985 / 10.191392 (4.247593) | 0.168878 / 0.680424 (-0.511545) | 0.018208 / 0.534201 (-0.515993) | 0.395986 / 0.579283 (-0.183297) | 0.427440 / 0.434364 (-0.006924) | 0.459917 / 0.540337 (-0.080421) | 0.631379 / 1.386936 (-0.755557) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007002 / 0.011353 (-0.004351) | 0.004120 / 0.011008 (-0.006888) | 0.064817 / 0.038508 (0.026309) | 0.081297 / 0.023109 (0.058188) | 0.405598 / 0.275898 (0.129700) | 0.442360 / 0.323480 (0.118880) | 0.005475 / 0.007986 (-0.002511) | 0.003483 / 0.004328 (-0.000845) | 0.064750 / 0.004250 (0.060499) | 0.058111 / 0.037052 (0.021059) | 0.410154 / 0.258489 (0.151665) | 0.445137 / 0.293841 (0.151296) | 0.033314 / 0.128546 (-0.095232) | 0.008747 / 0.075646 (-0.066899) | 0.071595 / 0.419271 (-0.347676) | 0.048894 / 0.043533 (0.005361) | 0.409162 / 0.255139 (0.154023) | 0.428877 / 0.283200 (0.145677) | 0.024127 / 0.141683 (-0.117556) | 1.521369 / 1.452155 (0.069214) | 1.573505 / 1.492716 (0.080789) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233199 / 0.018006 (0.215193) | 0.455619 / 0.000490 (0.455129) | 0.003688 / 0.000200 (0.003488) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033186 / 0.037411 (-0.004225) | 0.100528 / 0.014526 (0.086003) | 0.105617 / 0.176557 (-0.070940) | 0.159437 / 0.737135 (-0.577698) | 0.108064 / 0.296338 (-0.188274) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.435509 / 0.215209 (0.220300) | 4.339920 / 2.077655 (2.262265) | 2.368983 / 1.504120 (0.864863) | 2.211761 / 1.541195 (0.670566) | 2.301701 / 1.468490 (0.833211) | 0.495144 / 4.584777 (-4.089633) | 3.768882 / 3.745712 (0.023170) | 3.348940 / 5.269862 (-1.920922) | 2.081142 / 4.565676 (-2.484534) | 0.058184 / 0.424275 (-0.366091) | 0.007597 / 0.007607 (-0.000010) | 0.508806 / 0.226044 (0.282762) | 5.089226 / 2.268929 (2.820297) | 2.851930 / 55.444624 (-52.592694) | 2.512144 / 6.876477 (-4.364332) | 2.724461 / 2.142072 (0.582388) | 0.593446 / 4.805227 (-4.211781) | 0.134908 / 6.500664 (-6.365756) | 0.060811 / 0.075469 (-0.014658) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.362279 / 1.841788 (-0.479508) | 20.548216 / 8.074308 (12.473908) | 15.179181 / 10.191392 (4.987789) | 0.170249 / 0.680424 (-0.510175) | 0.020772 / 0.534201 (-0.513429) | 0.398737 / 0.579283 (-0.180546) | 0.441487 / 0.434364 (0.007124) | 0.480096 / 0.540337 (-0.060242) | 0.645825 / 1.386936 (-0.741111) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a6fb8b9a833afb25311da395c6e0d9bf770ca2c7 \"CML watermark\")\n" ]
Update README.md
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6223/reactions" }
PR_kwDODunzps5Zxd5c
{ "diff_url": "https://github.com/huggingface/datasets/pull/6223.diff", "html_url": "https://github.com/huggingface/datasets/pull/6223", "merged_at": "2023-09-13T22:23:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/6223.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6223" }
2023-09-07T11:33:20Z
https://api.github.com/repos/huggingface/datasets/issues/6223/comments
fixed a few typos
{ "avatar_url": "https://avatars.githubusercontent.com/u/95188570?v=4", "events_url": "https://api.github.com/users/NinoRisteski/events{/privacy}", "followers_url": "https://api.github.com/users/NinoRisteski/followers", "following_url": "https://api.github.com/users/NinoRisteski/following{/other_user}", "gists_url": "https://api.github.com/users/NinoRisteski/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NinoRisteski", "id": 95188570, "login": "NinoRisteski", "node_id": "U_kgDOBax2Wg", "organizations_url": "https://api.github.com/users/NinoRisteski/orgs", "received_events_url": "https://api.github.com/users/NinoRisteski/received_events", "repos_url": "https://api.github.com/users/NinoRisteski/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NinoRisteski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NinoRisteski/subscriptions", "type": "User", "url": "https://api.github.com/users/NinoRisteski" }
https://api.github.com/repos/huggingface/datasets/issues/6223/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6223/timeline
closed
false
6,223
null
2023-09-13T22:23:42Z
null
true
1,884,875,510
https://api.github.com/repos/huggingface/datasets/issues/6222
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6222/events
[]
null
2023-10-03T14:18:41Z
[]
https://github.com/huggingface/datasets/pull/6222
CONTRIBUTOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006655 / 0.011353 (-0.004698) | 0.004115 / 0.011008 (-0.006893) | 0.083895 / 0.038508 (0.045387) | 0.072770 / 0.023109 (0.049661) | 0.311401 / 0.275898 (0.035503) | 0.341079 / 0.323480 (0.017599) | 0.005488 / 0.007986 (-0.002497) | 0.003530 / 0.004328 (-0.000799) | 0.064691 / 0.004250 (0.060441) | 0.053096 / 0.037052 (0.016044) | 0.314969 / 0.258489 (0.056480) | 0.358245 / 0.293841 (0.064404) | 0.030789 / 0.128546 (-0.097757) | 0.008868 / 0.075646 (-0.066779) | 0.288022 / 0.419271 (-0.131249) | 0.052092 / 0.043533 (0.008559) | 0.310061 / 0.255139 (0.054922) | 0.345369 / 0.283200 (0.062170) | 0.024100 / 0.141683 (-0.117582) | 1.520573 / 1.452155 (0.068418) | 1.593750 / 1.492716 (0.101033) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242520 / 0.018006 (0.224514) | 0.567963 / 0.000490 (0.567473) | 0.003183 / 0.000200 (0.002983) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029473 / 0.037411 (-0.007939) | 0.083012 / 0.014526 (0.068486) | 0.262386 / 0.176557 (0.085830) | 0.155131 / 0.737135 (-0.582004) | 0.099880 / 0.296338 (-0.196458) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382388 / 0.215209 (0.167179) | 3.816538 / 2.077655 (1.738884) | 1.863422 
/ 1.504120 (0.359302) | 1.694652 / 1.541195 (0.153457) | 1.738738 / 1.468490 (0.270248) | 0.477073 / 4.584777 (-4.107704) | 3.539244 / 3.745712 (-0.206468) | 3.238469 / 5.269862 (-2.031392) | 2.026154 / 4.565676 (-2.539523) | 0.056111 / 0.424275 (-0.368164) | 0.007615 / 0.007607 (0.000008) | 0.460620 / 0.226044 (0.234576) | 4.596383 / 2.268929 (2.327455) | 2.348645 / 55.444624 (-53.095979) | 1.977465 / 6.876477 (-4.899011) | 2.222828 / 2.142072 (0.080755) | 0.588065 / 4.805227 (-4.217162) | 0.132175 / 6.500664 (-6.368489) | 0.061322 / 0.075469 (-0.014147) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260623 / 1.841788 (-0.581164) | 19.976475 / 8.074308 (11.902167) | 14.346488 / 10.191392 (4.155096) | 0.145614 / 0.680424 (-0.534810) | 0.018309 / 0.534201 (-0.515892) | 0.393644 / 0.579283 (-0.185639) | 0.405355 / 0.434364 (-0.029009) | 0.458355 / 0.540337 (-0.081982) | 0.630147 / 1.386936 (-0.756789) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006769 / 0.011353 (-0.004584) | 0.004172 / 0.011008 (-0.006836) | 0.064863 / 0.038508 (0.026355) | 0.076831 / 0.023109 (0.053722) | 0.419391 / 0.275898 (0.143493) | 0.439912 / 0.323480 (0.116432) | 0.006249 / 0.007986 (-0.001737) | 0.003571 / 0.004328 (-0.000757) | 0.064877 / 0.004250 (0.060626) | 0.056023 / 0.037052 (0.018971) | 0.419899 / 0.258489 (0.161410) | 0.459334 / 0.293841 (0.165493) | 0.032217 / 0.128546 (-0.096329) | 0.008628 / 0.075646 (-0.067019) | 0.071089 / 0.419271 (-0.348183) | 0.047463 / 0.043533 (0.003930) | 0.414961 / 0.255139 (0.159822) | 0.431408 / 0.283200 (0.148209) | 0.022406 / 0.141683 (-0.119277) | 1.511890 / 1.452155 (0.059735) | 1.580268 / 1.492716 (0.087551) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280805 / 0.018006 (0.262799) | 0.553766 / 0.000490 (0.553276) | 0.006155 / 0.000200 (0.005955) | 0.000102 / 0.000054 (0.000047) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032980 / 0.037411 (-0.004431) | 0.092981 / 0.014526 (0.078456) | 0.108820 / 0.176557 (-0.067737) | 0.161709 / 0.737135 (-0.575426) | 0.109772 / 0.296338 (-0.186566) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433659 / 0.215209 (0.218450) | 4.328577 / 2.077655 (2.250923) | 2.316899 / 1.504120 (0.812779) | 2.142645 / 1.541195 (0.601451) | 2.245518 / 1.468490 (0.777028) | 0.489448 / 4.584777 (-4.095329) | 3.630074 / 3.745712 (-0.115638) | 3.322749 / 5.269862 (-1.947112) | 2.062307 / 4.565676 (-2.503370) | 0.058153 / 0.424275 (-0.366122) | 0.007453 / 0.007607 (-0.000154) | 0.507234 / 0.226044 (0.281190) | 5.071830 / 2.268929 (2.802902) | 2.839374 / 55.444624 (-52.605250) | 2.429583 / 6.876477 (-4.446893) | 2.671940 / 2.142072 (0.529868) | 0.588256 / 4.805227 (-4.216972) | 0.135135 / 6.500664 (-6.365530) | 0.060963 / 0.075469 (-0.014506) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.337462 / 1.841788 (-0.504326) | 20.292912 / 8.074308 (12.218604) | 14.871809 / 10.191392 (4.680417) | 0.169214 / 0.680424 (-0.511209) | 0.020450 / 0.534201 (-0.513751) | 0.397094 / 0.579283 (-0.182189) | 0.411623 / 0.434364 (-0.022741) | 0.471560 / 0.540337 (-0.068777) | 0.647293 / 1.386936 (-0.739643) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0a068dbf3b446417ffd89d32857608394ec699e6 \"CML watermark\")\n" ]
Fix typo in Audio dataset documentation
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6222/reactions" }
PR_kwDODunzps5Zup2f
{ "diff_url": "https://github.com/huggingface/datasets/pull/6222.diff", "html_url": "https://github.com/huggingface/datasets/pull/6222", "merged_at": "2023-09-07T15:39:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/6222.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6222" }
2023-09-06T23:17:24Z
https://api.github.com/repos/huggingface/datasets/issues/6222/comments
There is a typo in the section of the documentation dedicated to creating an audio dataset: the dataset builder class is incorrectly suffixed with `Config` (see the naming sketch after this record). https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia/blob/main/librivox-indonesia.py#L59
{ "avatar_url": "https://avatars.githubusercontent.com/u/3224332?v=4", "events_url": "https://api.github.com/users/prassanna-ravishankar/events{/privacy}", "followers_url": "https://api.github.com/users/prassanna-ravishankar/followers", "following_url": "https://api.github.com/users/prassanna-ravishankar/following{/other_user}", "gists_url": "https://api.github.com/users/prassanna-ravishankar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/prassanna-ravishankar", "id": 3224332, "login": "prassanna-ravishankar", "node_id": "MDQ6VXNlcjMyMjQzMzI=", "organizations_url": "https://api.github.com/users/prassanna-ravishankar/orgs", "received_events_url": "https://api.github.com/users/prassanna-ravishankar/received_events", "repos_url": "https://api.github.com/users/prassanna-ravishankar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/prassanna-ravishankar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prassanna-ravishankar/subscriptions", "type": "User", "url": "https://api.github.com/users/prassanna-ravishankar" }
https://api.github.com/repos/huggingface/datasets/issues/6222/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6222/timeline
closed
false
6,222
null
2023-09-07T15:39:09Z
null
true
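The fix above is a one-word change, but the naming convention it restores is easy to get backwards. The sketch below is illustrative only: the class names are loosely modeled on the linked `librivox-indonesia` script, not copied from it, and the split/example logic is stubbed out. It shows the convention in question: only the `BuilderConfig` subclass carries the `Config` suffix, never the builder itself.

```python
import datasets


class LibrivoxIndonesiaConfig(datasets.BuilderConfig):
    """The configuration class conventionally carries the `Config` suffix."""


class LibrivoxIndonesia(datasets.GeneratorBasedBuilder):
    """The builder class takes the bare dataset name.

    Suffixing the builder with `Config` as well is the typo the PR fixes
    in the documentation.
    """

    BUILDER_CONFIGS = [LibrivoxIndonesiaConfig(name="all")]

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"audio": datasets.Audio(sampling_rate=16_000)}
            )
        )

    def _split_generators(self, dl_manager):
        # Stubbed out: a real script would download and extract archives here.
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"files": []}
            )
        ]

    def _generate_examples(self, files):
        # Stubbed out: a real script would yield (key, example) pairs here.
        yield from ()
```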
1,884,324,631
https://api.github.com/repos/huggingface/datasets/issues/6221
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6221/events
[]
null
2023-09-06T18:32:07Z
[]
https://github.com/huggingface/datasets/issues/6221
COLLABORATOR
null
null
null
[ "Not a fan of pickling this sort of stuff either.\r\nNote that users can also share the code in their dataset documentation." ]
Support saving datasets with custom formatting
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6221/reactions" }
I_kwDODunzps5wUIMX
null
2023-09-06T16:03:32Z
https://api.github.com/repos/huggingface/datasets/issues/6221/comments
Requested in https://discuss.huggingface.co/t/using-set-transform-on-a-dataset-leads-to-an-exception/53036. I am not sure supporting this is a good idea, for the following reason: >For this to work, we would have to pickle a custom transform, which means the transform and the objects it references need to be serializable. Also, deserializing these bytes would make `load_from_disk` unsafe. A sketch of the pattern in question follows this record. @lhoestq WDYT?
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/6221/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6221/timeline
open
false
6,221
null
null
null
false
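For context, here is a minimal sketch of the pattern behind this request; the toy data, the `upper` transform, and the `tmp_ds` path are all illustrative. `set_transform` attaches a plain Python function that formats rows on access, and because that function cannot be serialized into the metadata written by `save_to_disk`, saving a dataset that still carries the transform is what the linked thread reports as failing. Dropping the format before saving and re-applying the transform after loading works around it.

```python
from datasets import Dataset, load_from_disk

ds = Dataset.from_dict({"text": ["a", "b"]})

def upper(batch):
    # On-the-fly transform: applied at access time, never baked into Arrow.
    return {"text": [t.upper() for t in batch["text"]]}

ds.set_transform(upper)
print(ds[0])  # {'text': 'A'}

# Calling ds.save_to_disk("tmp_ds") at this point is what the forum thread
# reports as raising, since the attached transform is not serializable.
# Workaround: drop the transform before saving, re-apply it after loading.
ds.reset_format()
ds.save_to_disk("tmp_ds")

reloaded = load_from_disk("tmp_ds")
reloaded.set_transform(upper)
print(reloaded[0])  # {'text': 'A'} again, but only because it was re-applied
```

This also illustrates the safety concern in the body above: since the transform is arbitrary code, persisting and automatically restoring it would mean `load_from_disk` executes whatever was serialized.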
1,884,285,980
https://api.github.com/repos/huggingface/datasets/issues/6220
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6220/events
[]
null
2023-09-06T15:52:33Z
[]
https://github.com/huggingface/datasets/pull/6220
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6220). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005950 / 0.011353 (-0.005403) | 0.003578 / 0.011008 (-0.007431) | 0.079327 / 0.038508 (0.040819) | 0.057862 / 0.023109 (0.034752) | 0.317288 / 0.275898 (0.041390) | 0.358210 / 0.323480 (0.034730) | 0.004685 / 0.007986 (-0.003301) | 0.002879 / 0.004328 (-0.001450) | 0.062355 / 0.004250 (0.058105) | 0.045093 / 0.037052 (0.008041) | 0.322520 / 0.258489 (0.064031) | 0.367114 / 0.293841 (0.073273) | 0.027233 / 0.128546 (-0.101313) | 0.007941 / 0.075646 (-0.067705) | 0.260511 / 0.419271 (-0.158761) | 0.044355 / 0.043533 (0.000822) | 0.332993 / 0.255139 (0.077854) | 0.351363 / 0.283200 (0.068163) | 0.020784 / 0.141683 (-0.120899) | 1.429044 / 1.452155 (-0.023111) | 1.489355 / 1.492716 (-0.003362) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.180903 / 0.018006 (0.162897) | 0.421566 / 0.000490 (0.421077) | 0.003259 / 0.000200 (0.003059) | 0.000068 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023765 / 0.037411 (-0.013646) | 0.072815 / 0.014526 (0.058289) | 0.084592 / 0.176557 (-0.091965) | 0.143556 / 0.737135 (-0.593579) | 0.083591 / 0.296338 (-0.212748) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new 
/ old (diff) | 0.401896 / 0.215209 (0.186687) | 4.006344 / 2.077655 (1.928689) | 2.092280 / 1.504120 (0.588160) | 1.937828 / 1.541195 (0.396633) | 2.026901 / 1.468490 (0.558411) | 0.499999 / 4.584777 (-4.084778) | 3.008715 / 3.745712 (-0.736997) | 2.789735 / 5.269862 (-2.480127) | 1.827319 / 4.565676 (-2.738358) | 0.057413 / 0.424275 (-0.366862) | 0.006716 / 0.007607 (-0.000891) | 0.473061 / 0.226044 (0.247016) | 4.733256 / 2.268929 (2.464327) | 2.403922 / 55.444624 (-53.040702) | 2.017466 / 6.876477 (-4.859011) | 2.209710 / 2.142072 (0.067638) | 0.590813 / 4.805227 (-4.214414) | 0.124760 / 6.500664 (-6.375904) | 0.060976 / 0.075469 (-0.014494) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.229172 / 1.841788 (-0.612616) | 17.924644 / 8.074308 (9.850336) | 13.697347 / 10.191392 (3.505955) | 0.128258 / 0.680424 (-0.552166) | 0.016780 / 0.534201 (-0.517421) | 0.329301 / 0.579283 (-0.249982) | 0.344527 / 0.434364 (-0.089837) | 0.379482 / 0.540337 (-0.160855) | 0.513851 / 1.386936 (-0.873085) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005962 / 0.011353 (-0.005391) | 0.003613 / 0.011008 (-0.007396) | 0.062428 / 0.038508 (0.023920) | 0.058151 / 0.023109 (0.035042) | 0.452926 / 0.275898 (0.177027) | 0.489740 / 0.323480 (0.166260) | 0.006137 / 0.007986 (-0.001848) | 0.002890 / 0.004328 (-0.001438) | 0.062880 / 0.004250 (0.058629) | 0.046175 / 0.037052 (0.009123) | 0.452416 / 0.258489 (0.193927) | 0.486047 / 0.293841 (0.192206) | 0.028517 / 0.128546 (-0.100029) | 0.008102 / 0.075646 (-0.067544) | 0.068251 / 0.419271 (-0.351020) | 0.040569 / 0.043533 (-0.002964) | 0.461306 / 0.255139 (0.206167) | 0.477675 / 0.283200 (0.194475) | 0.020944 / 0.141683 (-0.120739) | 1.414300 / 1.452155 (-0.037855) | 1.502108 / 1.492716 (0.009391) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217786 / 0.018006 (0.199780) | 0.410757 / 0.000490 (0.410267) | 0.002981 / 0.000200 
(0.002781) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026846 / 0.037411 (-0.010565) | 0.080098 / 0.014526 (0.065572) | 0.090591 / 0.176557 (-0.085965) | 0.144674 / 0.737135 (-0.592461) | 0.091287 / 0.296338 (-0.205052) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.458224 / 0.215209 (0.243015) | 4.590541 / 2.077655 (2.512886) | 2.511251 / 1.504120 (1.007131) | 2.329165 / 1.541195 (0.787970) | 2.379187 / 1.468490 (0.910696) | 0.507485 / 4.584777 (-4.077292) | 3.135011 / 3.745712 (-0.610701) | 2.805913 / 5.269862 (-2.463948) | 1.851382 / 4.565676 (-2.714295) | 0.057981 / 0.424275 (-0.366294) | 0.006557 / 0.007607 (-0.001050) | 0.532496 / 0.226044 (0.306452) | 5.348802 / 2.268929 (3.079874) | 2.993379 / 55.444624 (-52.451245) | 2.636372 / 6.876477 (-4.240104) | 2.753219 / 2.142072 (0.611147) | 0.591989 / 4.805227 (-4.213238) | 0.126691 / 6.500664 (-6.373973) | 0.062359 / 0.075469 (-0.013110) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.345498 / 1.841788 (-0.496290) | 18.335767 / 8.074308 (10.261458) | 15.115449 / 10.191392 (4.924057) | 0.147382 / 0.680424 (-0.533041) | 0.017729 / 0.534201 (-0.516472) | 0.334337 / 0.579283 (-0.244946) | 0.359035 / 0.434364 (-0.075329) | 0.386319 / 0.540337 (-0.154019) | 0.536378 / 1.386936 (-0.850558) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f2b028fd83d74e7701e7b8f2d87e740a989505a7 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated 
after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009136 / 0.011353 (-0.002216) | 0.005567 / 0.011008 (-0.005442) | 0.120320 / 0.038508 (0.081812) | 0.078082 / 0.023109 (0.054973) | 0.405579 / 0.275898 (0.129681) | 0.459714 / 0.323480 (0.136234) | 0.006327 / 0.007986 (-0.001659) | 0.007187 / 0.004328 (0.002859) | 0.084373 / 0.004250 (0.080122) | 0.059727 / 0.037052 (0.022675) | 0.418918 / 0.258489 (0.160429) | 0.486767 / 0.293841 (0.192927) | 0.047715 / 0.128546 (-0.080831) | 0.014417 / 0.075646 (-0.061229) | 0.379847 / 0.419271 (-0.039425) | 0.067472 / 0.043533 (0.023939) | 0.419304 / 0.255139 (0.164166) | 0.466260 / 0.283200 (0.183060) | 0.036872 / 0.141683 (-0.104811) | 1.876273 / 1.452155 (0.424119) | 2.043856 / 1.492716 (0.551140) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296266 / 0.018006 (0.278260) | 0.601843 / 0.000490 (0.601354) | 0.005663 / 0.000200 (0.005463) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033272 / 0.037411 (-0.004139) | 0.098839 / 0.014526 (0.084313) | 0.124658 / 0.176557 (-0.051899) | 0.190226 / 0.737135 (-0.546909) | 0.119288 / 0.296338 (-0.177051) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.600878 / 0.215209 (0.385668) | 6.011749 / 2.077655 (3.934095) | 2.611809 / 1.504120 (1.107689) | 2.314985 / 1.541195 (0.773790) | 2.398988 / 1.468490 (0.930498) | 0.835577 / 4.584777 (-3.749200) | 5.482848 / 3.745712 (1.737136) | 4.965393 / 5.269862 (-0.304469) | 3.082420 / 4.565676 (-1.483256) | 0.098048 / 0.424275 (-0.326227) | 0.009148 / 0.007607 (0.001541) | 0.725721 / 0.226044 (0.499676) | 7.297429 / 2.268929 (5.028501) | 3.558050 / 55.444624 (-51.886575) | 2.815884 / 6.876477 (-4.060593) | 3.094103 / 2.142072 (0.952031) | 1.023617 / 4.805227 (-3.781610) | 0.222453 / 6.500664 (-6.278211) | 0.081707 / 0.075469 (0.006238) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.788327 / 1.841788 (-0.053461) | 25.285829 / 8.074308 (17.211521) | 21.878811 / 10.191392 (11.687419) | 0.215494 / 0.680424 (-0.464930) | 0.032050 / 0.534201 (-0.502151) | 0.505210 / 0.579283 (-0.074073) | 
0.623545 / 0.434364 (0.189181) | 0.583342 / 0.540337 (0.043005) | 0.826497 / 1.386936 (-0.560439) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009640 / 0.011353 (-0.001713) | 0.005479 / 0.011008 (-0.005529) | 0.088940 / 0.038508 (0.050432) | 0.084186 / 0.023109 (0.061077) | 0.552290 / 0.275898 (0.276392) | 0.583296 / 0.323480 (0.259816) | 0.006999 / 0.007986 (-0.000987) | 0.004597 / 0.004328 (0.000269) | 0.089407 / 0.004250 (0.085157) | 0.067210 / 0.037052 (0.030157) | 0.554968 / 0.258489 (0.296479) | 0.595635 / 0.293841 (0.301794) | 0.052245 / 0.128546 (-0.076301) | 0.015914 / 0.075646 (-0.059733) | 0.097037 / 0.419271 (-0.322235) | 0.063954 / 0.043533 (0.020421) | 0.533752 / 0.255139 (0.278614) | 0.573789 / 0.283200 (0.290589) | 0.036526 / 0.141683 (-0.105157) | 1.867713 / 1.452155 (0.415558) | 1.996901 / 1.492716 (0.504185) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.414967 / 0.018006 (0.396961) | 0.632367 / 0.000490 (0.631877) | 0.064061 / 0.000200 (0.063861) | 0.000565 / 0.000054 (0.000510) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035953 / 0.037411 (-0.001458) | 0.112603 / 0.014526 (0.098077) | 0.126227 / 0.176557 (-0.050330) | 0.196881 / 0.737135 (-0.540255) | 0.127635 / 0.296338 (-0.168704) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.674735 / 0.215209 (0.459526) | 6.614578 / 2.077655 (4.536923) | 3.208198 / 1.504120 
(1.704078) | 2.870412 / 1.541195 (1.329217) | 2.979358 / 1.468490 (1.510868) | 0.872589 / 4.584777 (-3.712187) | 5.501771 / 3.745712 (1.756059) | 4.865191 / 5.269862 (-0.404671) | 3.075281 / 4.565676 (-1.490396) | 0.098048 / 0.424275 (-0.326227) | 0.009121 / 0.007607 (0.001514) | 0.801639 / 0.226044 (0.575595) | 8.062040 / 2.268929 (5.793111) | 3.996693 / 55.444624 (-51.447931) | 3.343770 / 6.876477 (-3.532706) | 3.555977 / 2.142072 (1.413904) | 1.035050 / 4.805227 (-3.770177) | 0.227552 / 6.500664 (-6.273112) | 0.097733 / 0.075469 (0.022264) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.897210 / 1.841788 (0.055422) | 25.762459 / 8.074308 (17.688151) | 22.771290 / 10.191392 (12.579898) | 0.252650 / 0.680424 (-0.427773) | 0.032534 / 0.534201 (-0.501667) | 0.521047 / 0.579283 (-0.058236) | 0.620850 / 0.434364 (0.186486) | 0.612750 / 0.540337 (0.072413) | 0.837486 / 1.386936 (-0.549451) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1f522e5bdd73c45f7ba0a03f2ecd4e7de7351f2e \"CML watermark\")\n" ]
Set dev version
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6220/reactions" }
PR_kwDODunzps5ZspRb
{ "diff_url": "https://github.com/huggingface/datasets/pull/6220.diff", "html_url": "https://github.com/huggingface/datasets/pull/6220", "merged_at": "2023-09-06T15:41:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/6220.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6220" }
2023-09-06T15:40:33Z
https://api.github.com/repos/huggingface/datasets/issues/6220/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6220/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6220/timeline
closed
false
6,220
null
2023-09-06T15:41:13Z
null
true
1,884,244,334
https://api.github.com/repos/huggingface/datasets/issues/6219
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6219/events
[]
null
2023-09-06T15:46:20Z
[]
https://github.com/huggingface/datasets/pull/6219
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6219). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009523 / 0.011353 (-0.001830) | 0.005105 / 0.011008 (-0.005903) | 0.122664 / 0.038508 (0.084156) | 0.084688 / 0.023109 (0.061579) | 0.412057 / 0.275898 (0.136159) | 0.449690 / 0.323480 (0.126210) | 0.006627 / 0.007986 (-0.001358) | 0.004150 / 0.004328 (-0.000178) | 0.082079 / 0.004250 (0.077829) | 0.065289 / 0.037052 (0.028237) | 0.432934 / 0.258489 (0.174445) | 0.492068 / 0.293841 (0.198227) | 0.048317 / 0.128546 (-0.080229) | 0.015582 / 0.075646 (-0.060064) | 0.372050 / 0.419271 (-0.047222) | 0.070649 / 0.043533 (0.027116) | 0.431754 / 0.255139 (0.176615) | 0.473349 / 0.283200 (0.190149) | 0.037293 / 0.141683 (-0.104390) | 1.807537 / 1.452155 (0.355382) | 1.923073 / 1.492716 (0.430357) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271214 / 0.018006 (0.253208) | 0.592961 / 0.000490 (0.592471) | 0.004062 / 0.000200 (0.003862) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034766 / 0.037411 (-0.002645) | 0.093014 / 0.014526 (0.078488) | 0.131332 / 0.176557 (-0.045225) | 0.188110 / 0.737135 (-0.549025) | 0.117617 / 0.296338 (-0.178722) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.668223 / 0.215209 (0.453013) | 6.707031 / 2.077655 (4.629376) | 3.040178 / 1.504120 (1.536058) | 2.641776 / 1.541195 (1.100581) | 2.524057 / 1.468490 (1.055567) | 0.893592 / 4.584777 (-3.691185) | 5.535848 / 3.745712 (1.790136) | 4.867067 / 5.269862 (-0.402794) | 2.999933 / 4.565676 (-1.565743) | 0.103602 / 0.424275 (-0.320673) | 0.008887 / 0.007607 (0.001280) | 0.822214 / 0.226044 (0.596169) | 8.028476 / 2.268929 (5.759547) | 3.708895 / 55.444624 (-51.735730) | 2.858314 / 6.876477 (-4.018163) | 3.101727 / 2.142072 (0.959655) | 1.083136 / 4.805227 (-3.722091) | 0.219588 / 6.500664 (-6.281076) | 0.080151 / 0.075469 (0.004682) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.645819 / 1.841788 (-0.195969) | 24.407887 / 8.074308 (16.333579) | 22.371901 / 10.191392 (12.180509) | 0.219557 / 0.680424 (-0.460867) | 0.037867 / 0.534201 (-0.496334) | 0.484136 / 0.579283 (-0.095147) | 0.620546 / 0.434364 (0.186182) | 0.562272 / 0.540337 (0.021934) | 0.774256 / 1.386936 (-0.612680) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009381 / 0.011353 (-0.001972) | 0.005565 / 0.011008 (-0.005444) | 0.091057 / 0.038508 (0.052549) | 0.078085 / 0.023109 (0.054975) | 0.538929 / 0.275898 (0.263031) | 0.555155 / 0.323480 (0.231675) | 0.007007 / 0.007986 (-0.000978) | 0.004268 / 0.004328 (-0.000060) | 0.086618 / 0.004250 (0.082368) | 0.064117 / 0.037052 (0.027065) | 0.523788 / 0.258489 (0.265299) | 0.586451 / 0.293841 (0.292610) | 0.050804 / 0.128546 (-0.077742) | 0.013964 / 0.075646 (-0.061682) | 0.096008 / 0.419271 (-0.323263) | 0.062242 / 0.043533 (0.018709) | 0.530398 / 0.255139 (0.275259) | 0.568527 / 0.283200 (0.285327) | 0.032456 / 0.141683 (-0.109227) | 1.894975 / 1.452155 (0.442820) | 2.084172 / 1.492716 (0.591455) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.295539 / 0.018006 (0.277533) | 0.588804 / 0.000490 (0.588314) | 0.006445 / 0.000200 
(0.006245) | 0.000113 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033965 / 0.037411 (-0.003447) | 0.111743 / 0.014526 (0.097217) | 0.128805 / 0.176557 (-0.047752) | 0.185013 / 0.737135 (-0.552123) | 0.129400 / 0.296338 (-0.166938) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.749784 / 0.215209 (0.534575) | 7.091075 / 2.077655 (5.013420) | 3.424517 / 1.504120 (1.920397) | 3.069103 / 1.541195 (1.527908) | 3.122431 / 1.468490 (1.653941) | 0.949277 / 4.584777 (-3.635500) | 5.648731 / 3.745712 (1.903019) | 4.937684 / 5.269862 (-0.332178) | 3.198027 / 4.565676 (-1.367650) | 0.100289 / 0.424275 (-0.323987) | 0.009411 / 0.007607 (0.001803) | 0.862604 / 0.226044 (0.636559) | 8.615410 / 2.268929 (6.346482) | 4.306428 / 55.444624 (-51.138196) | 3.591404 / 6.876477 (-3.285073) | 3.823899 / 2.142072 (1.681827) | 1.108006 / 4.805227 (-3.697221) | 0.215330 / 6.500664 (-6.285334) | 0.080755 / 0.075469 (0.005286) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.774914 / 1.841788 (-0.066873) | 25.360983 / 8.074308 (17.286675) | 23.624044 / 10.191392 (13.432652) | 0.226887 / 0.680424 (-0.453537) | 0.032625 / 0.534201 (-0.501576) | 0.499730 / 0.579283 (-0.079553) | 0.647819 / 0.434364 (0.213455) | 0.592239 / 0.540337 (0.051901) | 0.805751 / 1.386936 (-0.581185) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0daa82428a0529478801574bcc68e1ed32051f3a \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008656 / 0.011353 (-0.002697) | 0.005545 / 0.011008 (-0.005463) | 0.107936 / 0.038508 (0.069428) | 0.077436 / 0.023109 (0.054327) | 0.391412 / 0.275898 (0.115514) | 0.452811 / 0.323480 (0.129331) | 0.004883 / 0.007986 (-0.003103) | 0.005125 / 0.004328 (0.000796) | 0.080006 / 0.004250 (0.075755) | 0.054425 / 0.037052 (0.017373) | 0.399667 / 0.258489 (0.141178) | 0.458099 / 0.293841 (0.164258) | 0.047302 / 0.128546 (-0.081244) | 0.014153 / 0.075646 (-0.061493) | 0.337281 / 0.419271 (-0.081991) | 0.062153 / 0.043533 (0.018620) | 0.399927 / 0.255139 (0.144788) | 0.407186 / 0.283200 (0.123987) | 0.036759 / 0.141683 (-0.104924) | 1.825935 / 1.452155 (0.373780) | 1.852238 / 1.492716 (0.359522) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274163 / 0.018006 (0.256157) | 0.615624 / 0.000490 (0.615134) | 0.003782 / 0.000200 (0.003582) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026386 / 0.037411 (-0.011026) | 0.101151 / 0.014526 (0.086625) | 0.106115 / 0.176557 (-0.070442) | 0.161253 / 0.737135 (-0.575882) | 0.108861 / 0.296338 (-0.187478) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.587079 / 0.215209 (0.371870) | 6.141743 / 2.077655 (4.064089) | 2.727199 / 1.504120 (1.223079) | 2.526827 / 1.541195 (0.985632) | 2.598321 / 1.468490 (1.129831) | 0.904706 / 4.584777 (-3.680071) | 5.227742 / 3.745712 (1.482030) | 4.621627 / 5.269862 (-0.648234) | 2.931792 / 4.565676 (-1.633885) | 0.089538 / 0.424275 (-0.334737) | 0.008281 / 0.007607 (0.000674) | 0.675773 / 0.226044 (0.449729) | 7.212869 / 2.268929 (4.943941) | 3.541569 / 55.444624 (-51.903056) | 2.804034 / 6.876477 (-4.072443) | 3.080192 / 2.142072 (0.938120) | 1.034577 / 4.805227 (-3.770650) | 0.218727 / 6.500664 (-6.281937) | 0.084548 / 0.075469 (0.009079) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.528974 / 1.841788 (-0.312814) | 21.754329 / 8.074308 (13.680021) | 20.359808 / 10.191392 (10.168416) | 0.234719 / 0.680424 (-0.445705) | 0.026182 / 0.534201 (-0.508019) | 0.448956 / 0.579283 (-0.130327) | 0.577015 / 
0.434364 (0.142651) | 0.513675 / 0.540337 (-0.026662) | 0.729780 / 1.386936 (-0.657156) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010427 / 0.011353 (-0.000926) | 0.005126 / 0.011008 (-0.005882) | 0.082759 / 0.038508 (0.044251) | 0.084892 / 0.023109 (0.061783) | 0.543826 / 0.275898 (0.267927) | 0.603050 / 0.323480 (0.279570) | 0.006667 / 0.007986 (-0.001319) | 0.004036 / 0.004328 (-0.000292) | 0.079534 / 0.004250 (0.075283) | 0.067523 / 0.037052 (0.030471) | 0.544845 / 0.258489 (0.286356) | 0.578823 / 0.293841 (0.284982) | 0.054786 / 0.128546 (-0.073760) | 0.014888 / 0.075646 (-0.060759) | 0.095696 / 0.419271 (-0.323576) | 0.064908 / 0.043533 (0.021375) | 0.558087 / 0.255139 (0.302948) | 0.593919 / 0.283200 (0.310719) | 0.039190 / 0.141683 (-0.102493) | 1.828680 / 1.452155 (0.376526) | 1.908891 / 1.492716 (0.416174) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.298926 / 0.018006 (0.280920) | 0.589467 / 0.000490 (0.588977) | 0.005276 / 0.000200 (0.005076) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034300 / 0.037411 (-0.003111) | 0.096990 / 0.014526 (0.082464) | 0.109347 / 0.176557 (-0.067209) | 0.171312 / 0.737135 (-0.565823) | 0.121736 / 0.296338 (-0.174603) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.641619 / 0.215209 (0.426410) | 6.365556 / 2.077655 (4.287901) | 2.947989 / 1.504120 (1.443869) | 
2.631680 / 1.541195 (1.090485) | 2.602762 / 1.468490 (1.134272) | 0.812767 / 4.584777 (-3.772010) | 5.185753 / 3.745712 (1.440041) | 4.589897 / 5.269862 (-0.679964) | 2.833020 / 4.565676 (-1.732656) | 0.097782 / 0.424275 (-0.326493) | 0.008625 / 0.007607 (0.001018) | 0.741613 / 0.226044 (0.515568) | 7.662905 / 2.268929 (5.393976) | 3.533753 / 55.444624 (-51.910871) | 2.898929 / 6.876477 (-3.977547) | 3.042616 / 2.142072 (0.900544) | 0.933932 / 4.805227 (-3.871296) | 0.195710 / 6.500664 (-6.304954) | 0.066954 / 0.075469 (-0.008515) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.745353 / 1.841788 (-0.096434) | 23.820840 / 8.074308 (15.746532) | 20.892645 / 10.191392 (10.701253) | 0.234853 / 0.680424 (-0.445571) | 0.029149 / 0.534201 (-0.505051) | 0.458953 / 0.579283 (-0.120330) | 0.594278 / 0.434364 (0.159914) | 0.522929 / 0.540337 (-0.017409) | 0.753731 / 1.386936 (-0.633205) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#de6391d732ea0471ee5bdfb91b8cecc4503da96b \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005976 / 0.011353 (-0.005377) | 0.003636 / 0.011008 (-0.007372) | 0.079946 / 0.038508 (0.041437) | 0.060143 / 0.023109 (0.037034) | 0.314752 / 0.275898 (0.038854) | 0.353714 / 0.323480 (0.030234) | 0.004706 / 0.007986 (-0.003280) | 0.002862 / 0.004328 (-0.001466) | 0.061988 / 0.004250 (0.057737) | 0.045907 / 0.037052 (0.008855) | 0.316118 / 0.258489 (0.057629) | 0.358488 / 0.293841 (0.064647) | 0.027377 / 0.128546 (-0.101170) | 0.007970 / 0.075646 (-0.067677) | 0.261677 / 0.419271 (-0.157594) | 0.045289 / 0.043533 (0.001757) | 0.307931 / 0.255139 (0.052792) | 0.341364 / 0.283200 (0.058165) | 0.021021 / 0.141683 (-0.120662) | 1.440002 / 1.452155 (-0.012153) | 1.502904 / 1.492716 (0.010187) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201746 / 0.018006 (0.183740) | 
0.451114 / 0.000490 (0.450624) | 0.003351 / 0.000200 (0.003151) | 0.000067 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024233 / 0.037411 (-0.013178) | 0.075042 / 0.014526 (0.060516) | 0.085636 / 0.176557 (-0.090920) | 0.144699 / 0.737135 (-0.592436) | 0.085222 / 0.296338 (-0.211117) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.389464 / 0.215209 (0.174255) | 3.889072 / 2.077655 (1.811417) | 1.908307 / 1.504120 (0.404187) | 1.738914 / 1.541195 (0.197719) | 1.866869 / 1.468490 (0.398379) | 0.500536 / 4.584777 (-4.084240) | 3.050155 / 3.745712 (-0.695557) | 2.832259 / 5.269862 (-2.437602) | 1.886657 / 4.565676 (-2.679020) | 0.059214 / 0.424275 (-0.365062) | 0.006711 / 0.007607 (-0.000896) | 0.467753 / 0.226044 (0.241709) | 4.666939 / 2.268929 (2.398011) | 2.471168 / 55.444624 (-52.973456) | 2.223508 / 6.876477 (-4.652968) | 2.176543 / 2.142072 (0.034470) | 0.593461 / 4.805227 (-4.211766) | 0.126216 / 6.500664 (-6.374448) | 0.061495 / 0.075469 (-0.013974) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.301279 / 1.841788 (-0.540509) | 18.317461 / 8.074308 (10.243153) | 13.877813 / 10.191392 (3.686421) | 0.143510 / 0.680424 (-0.536914) | 0.016826 / 0.534201 (-0.517375) | 0.328735 / 0.579283 (-0.250548) | 0.342272 / 0.434364 (-0.092092) | 0.375768 / 0.540337 (-0.164570) | 0.517600 / 1.386936 (-0.869336) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | 
write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006215 / 0.011353 (-0.005138) | 0.003587 / 0.011008 (-0.007422) | 0.062248 / 0.038508 (0.023740) | 0.059830 / 0.023109 (0.036721) | 0.443278 / 0.275898 (0.167380) | 0.481279 / 0.323480 (0.157799) | 0.004773 / 0.007986 (-0.003213) | 0.002870 / 0.004328 (-0.001459) | 0.062730 / 0.004250 (0.058480) | 0.049422 / 0.037052 (0.012369) | 0.444196 / 0.258489 (0.185707) | 0.498614 / 0.293841 (0.204773) | 0.028477 / 0.128546 (-0.100069) | 0.008009 / 0.075646 (-0.067638) | 0.067919 / 0.419271 (-0.351352) | 0.040416 / 0.043533 (-0.003117) | 0.439460 / 0.255139 (0.184321) | 0.470529 / 0.283200 (0.187329) | 0.020767 / 0.141683 (-0.120916) | 1.478223 / 1.452155 (0.026068) | 1.538580 / 1.492716 (0.045863) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271321 / 0.018006 (0.253315) | 0.456436 / 0.000490 (0.455946) | 0.011817 / 0.000200 (0.011617) | 0.000115 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026355 / 0.037411 (-0.011056) | 0.081681 / 0.014526 (0.067155) | 0.091699 / 0.176557 (-0.084858) | 0.146115 / 0.737135 (-0.591021) | 0.094376 / 0.296338 (-0.201963) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471677 / 0.215209 (0.256468) | 4.702909 / 2.077655 (2.625254) | 2.664882 / 1.504120 (1.160762) | 2.504106 / 1.541195 (0.962911) | 2.573226 / 1.468490 (1.104736) | 0.509679 / 4.584777 (-4.075097) | 3.034970 / 3.745712 (-0.710742) | 2.894704 / 5.269862 (-2.375157) | 1.915148 / 4.565676 (-2.650528) | 0.058312 / 0.424275 (-0.365963) | 0.006615 / 0.007607 (-0.000993) | 0.545339 / 0.226044 (0.319295) | 5.462261 / 2.268929 (3.193332) | 3.101482 / 55.444624 (-52.343143) | 2.755417 / 6.876477 (-4.121060) | 2.931440 / 2.142072 (0.789368) | 0.597521 / 4.805227 (-4.207707) | 0.125676 / 6.500664 (-6.374988) | 0.061798 / 0.075469 (-0.013671) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.356208 / 1.841788 (-0.485579) | 18.912492 / 8.074308 (10.838184) | 14.830128 / 10.191392 (4.638736) | 0.145992 / 0.680424 (-0.534432) | 0.019121 / 0.534201 (-0.515080) | 0.331534 / 0.579283 (-0.247749) | 0.361712 / 0.434364 (-0.072652) | 0.387532 / 0.540337 (-0.152805) | 0.536075 / 1.386936 
(-0.850861) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#de6391d732ea0471ee5bdfb91b8cecc4503da96b \"CML watermark\")\n" ]
Release: 2.14.5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6219/reactions" }
PR_kwDODunzps5ZsgPK
{ "diff_url": "https://github.com/huggingface/datasets/pull/6219.diff", "html_url": "https://github.com/huggingface/datasets/pull/6219", "merged_at": "2023-09-06T15:18:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/6219.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6219" }
2023-09-06T15:17:10Z
https://api.github.com/repos/huggingface/datasets/issues/6219/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6219/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6219/timeline
closed
false
6,219
null
2023-09-06T15:18:51Z
null
true
1,883,734,000
https://api.github.com/repos/huggingface/datasets/issues/6218
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6218/events
[]
null
2023-09-07T08:31:29Z
[]
https://github.com/huggingface/datasets/pull/6218
MEMBER
null
false
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006529 / 0.011353 (-0.004823) | 0.004010 / 0.011008 (-0.006998) | 0.086258 / 0.038508 (0.047750) | 0.073775 / 0.023109 (0.050666) | 0.307573 / 0.275898 (0.031675) | 0.337091 / 0.323480 (0.013611) | 0.004251 / 0.007986 (-0.003735) | 0.003886 / 0.004328 (-0.000443) | 0.068238 / 0.004250 (0.063987) | 0.057000 / 0.037052 (0.019948) | 0.321751 / 0.258489 (0.063262) | 0.359227 / 0.293841 (0.065386) | 0.030841 / 0.128546 (-0.097705) | 0.008569 / 0.075646 (-0.067078) | 0.299523 / 0.419271 (-0.119748) | 0.052563 / 0.043533 (0.009030) | 0.312806 / 0.255139 (0.057667) | 0.342273 / 0.283200 (0.059074) | 0.025725 / 0.141683 (-0.115958) | 1.479263 / 1.452155 (0.027108) | 1.554975 / 1.492716 (0.062259) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.316328 / 0.018006 (0.298322) | 0.598993 / 0.000490 (0.598503) | 0.004548 / 0.000200 (0.004348) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027399 / 0.037411 (-0.010013) | 0.081683 / 0.014526 (0.067157) | 0.096968 / 0.176557 (-0.079589) | 0.151559 / 0.737135 (-0.585576) | 0.096558 / 0.296338 (-0.199781) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383117 / 0.215209 (0.167908) | 3.818634 / 2.077655 (1.740979) | 1.878112 / 1.504120 (0.373992) | 1.729031 / 1.541195 (0.187836) | 1.770259 / 1.468490 
(0.301769) | 0.484061 / 4.584777 (-4.100716) | 3.596998 / 3.745712 (-0.148715) | 3.246846 / 5.269862 (-2.023016) | 2.019481 / 4.565676 (-2.546195) | 0.057279 / 0.424275 (-0.366996) | 0.007455 / 0.007607 (-0.000152) | 0.465002 / 0.226044 (0.238958) | 4.644669 / 2.268929 (2.375741) | 2.346415 / 55.444624 (-53.098209) | 2.039686 / 6.876477 (-4.836791) | 2.172822 / 2.142072 (0.030750) | 0.582925 / 4.805227 (-4.222302) | 0.134246 / 6.500664 (-6.366418) | 0.060093 / 0.075469 (-0.015376) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249033 / 1.841788 (-0.592755) | 19.585949 / 8.074308 (11.511641) | 14.100681 / 10.191392 (3.909289) | 0.147138 / 0.680424 (-0.533286) | 0.018307 / 0.534201 (-0.515894) | 0.397939 / 0.579283 (-0.181344) | 0.413916 / 0.434364 (-0.020448) | 0.465688 / 0.540337 (-0.074650) | 0.642140 / 1.386936 (-0.744797) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006627 / 0.011353 (-0.004726) | 0.004173 / 0.011008 (-0.006835) | 0.063850 / 0.038508 (0.025342) | 0.074733 / 0.023109 (0.051623) | 0.398111 / 0.275898 (0.122213) | 0.426344 / 0.323480 (0.102864) | 0.006261 / 0.007986 (-0.001725) | 0.003507 / 0.004328 (-0.000822) | 0.064511 / 0.004250 (0.060260) | 0.056508 / 0.037052 (0.019456) | 0.401750 / 0.258489 (0.143261) | 0.437081 / 0.293841 (0.143240) | 0.031815 / 0.128546 (-0.096732) | 0.008703 / 0.075646 (-0.066943) | 0.071411 / 0.419271 (-0.347861) | 0.048153 / 0.043533 (0.004620) | 0.399221 / 0.255139 (0.144082) | 0.429312 / 0.283200 (0.146112) | 0.022157 / 0.141683 (-0.119526) | 1.485656 / 1.452155 (0.033502) | 1.550967 / 1.492716 (0.058250) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.330575 / 0.018006 (0.312569) | 0.525553 / 0.000490 (0.525064) | 0.004574 / 0.000200 (0.004374) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031871 / 0.037411 (-0.005541) | 0.091819 / 0.014526 (0.077293) | 0.105542 / 0.176557 (-0.071015) | 0.158210 / 0.737135 (-0.578926) | 0.107167 / 0.296338 (-0.189172) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430226 / 0.215209 (0.215017) | 4.293456 / 2.077655 (2.215801) | 2.289538 / 1.504120 (0.785418) | 2.122255 / 1.541195 (0.581060) | 2.181840 / 1.468490 (0.713350) | 0.498529 / 4.584777 (-4.086248) | 3.686636 / 3.745712 (-0.059077) | 3.287279 / 5.269862 (-1.982582) | 2.068397 / 4.565676 (-2.497280) | 0.058775 / 0.424275 (-0.365500) | 0.007583 / 0.007607 (-0.000024) | 0.507165 / 0.226044 (0.281121) | 5.072330 / 2.268929 (2.803401) | 2.796396 / 55.444624 (-52.648228) | 2.409946 / 6.876477 (-4.466531) | 2.657322 / 2.142072 (0.515250) | 0.597744 / 4.805227 (-4.207483) | 0.133803 / 6.500664 (-6.366861) | 0.060231 / 0.075469 (-0.015238) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.333130 / 1.841788 (-0.508658) | 20.545936 / 8.074308 (12.471627) | 14.875020 / 10.191392 (4.683628) | 0.168873 / 0.680424 (-0.511551) | 0.020316 / 0.534201 (-0.513885) | 0.397203 / 0.579283 (-0.182080) | 0.412412 / 0.434364 (-0.021952) | 0.479952 / 0.540337 (-0.060385) | 0.657155 / 1.386936 (-0.729781) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#13fbee4ca8742460e9baab86a89d9100a294df3e \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007885 / 0.011353 (-0.003468) | 0.005221 / 0.011008 (-0.005787) | 0.099457 / 0.038508 (0.060949) | 0.085867 / 0.023109 (0.062758) | 0.359922 / 0.275898 (0.084024) | 0.406479 / 0.323480 (0.082999) | 0.005001 / 0.007986 (-0.002985) | 0.003678 / 0.004328 (-0.000650) | 0.075647 / 0.004250 (0.071396) | 0.064318 / 0.037052 (0.027265) | 0.372180 / 0.258489 (0.113691) | 0.419206 / 0.293841 (0.125365) | 0.040438 / 0.128546 (-0.088108) | 0.010008 / 0.075646 (-0.065638) | 0.340911 / 0.419271 (-0.078360) | 0.063326 / 0.043533 (0.019793) | 0.359015 / 0.255139 (0.103876) | 0.408601 / 0.283200 (0.125402) | 0.029828 / 0.141683 (-0.111855) | 1.767822 / 1.452155 (0.315667) | 1.829079 / 1.492716 (0.336363) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234455 / 0.018006 (0.216449) | 0.507786 / 0.000490 (0.507297) | 0.004009 / 0.000200 (0.003809) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033374 / 0.037411 (-0.004038) | 0.100817 / 0.014526 (0.086291) | 0.113415 / 0.176557 (-0.063141) | 0.180368 / 0.737135 (-0.556768) | 0.115446 / 0.296338 (-0.180893) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.488976 / 0.215209 (0.273767) | 4.911354 / 2.077655 (2.833699) | 2.623525 / 1.504120 (1.119405) | 2.424400 / 1.541195 (0.883206) | 2.497580 / 1.468490 (1.029089) | 0.561106 / 4.584777 (-4.023671) | 4.265649 / 3.745712 (0.519937) | 3.830267 / 5.269862 (-1.439595) | 2.404727 / 4.565676 (-2.160949) | 0.067303 / 0.424275 (-0.356972) | 0.009177 / 0.007607 (0.001570) | 0.588433 / 0.226044 (0.362388) | 5.871573 / 2.268929 (3.602645) | 3.087845 / 55.444624 (-52.356779) | 2.765381 / 6.876477 (-4.111096) | 3.007863 / 2.142072 (0.865791) | 0.687327 / 4.805227 (-4.117901) | 0.157687 / 6.500664 (-6.342977) | 0.071291 / 0.075469 (-0.004178) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.510931 / 1.841788 (-0.330857) | 22.129590 / 8.074308 (14.055282) | 16.780479 / 10.191392 (6.589087) | 0.168297 / 0.680424 (-0.512127) | 0.021294 / 0.534201 (-0.512907) | 0.464535 / 0.579283 (-0.114748) | 0.480041 / 0.434364 (0.045677) | 0.549185 / 0.540337 (0.008848) | 0.739438 / 
1.386936 (-0.647498) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007834 / 0.011353 (-0.003518) | 0.004576 / 0.011008 (-0.006432) | 0.073331 / 0.038508 (0.034823) | 0.084688 / 0.023109 (0.061579) | 0.486367 / 0.275898 (0.210469) | 0.523127 / 0.323480 (0.199647) | 0.006278 / 0.007986 (-0.001708) | 0.003792 / 0.004328 (-0.000537) | 0.075416 / 0.004250 (0.071166) | 0.064053 / 0.037052 (0.027001) | 0.491908 / 0.258489 (0.233419) | 0.529177 / 0.293841 (0.235336) | 0.038483 / 0.128546 (-0.090063) | 0.009560 / 0.075646 (-0.066087) | 0.083431 / 0.419271 (-0.335841) | 0.057114 / 0.043533 (0.013581) | 0.486316 / 0.255139 (0.231177) | 0.512384 / 0.283200 (0.229185) | 0.028452 / 0.141683 (-0.113231) | 1.788886 / 1.452155 (0.336731) | 1.893834 / 1.492716 (0.401118) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.343018 / 0.018006 (0.325011) | 0.513673 / 0.000490 (0.513183) | 0.056778 / 0.000200 (0.056578) | 0.001799 / 0.000054 (0.001745) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038530 / 0.037411 (0.001119) | 0.109286 / 0.014526 (0.094760) | 0.122812 / 0.176557 (-0.053745) | 0.187780 / 0.737135 (-0.549355) | 0.124083 / 0.296338 (-0.172255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.509839 / 0.215209 (0.294630) | 5.085840 / 2.077655 (3.008186) | 2.746695 / 1.504120 (1.242575) | 2.542283 / 1.541195 (1.001088) | 2.650243 / 1.468490 (1.181753) | 
0.592801 / 4.584777 (-3.991976) | 4.316721 / 3.745712 (0.571009) | 3.811672 / 5.269862 (-1.458189) | 2.433982 / 4.565676 (-2.131695) | 0.066861 / 0.424275 (-0.357414) | 0.008633 / 0.007607 (0.001026) | 0.590482 / 0.226044 (0.364437) | 5.923484 / 2.268929 (3.654556) | 3.282293 / 55.444624 (-52.162332) | 2.882716 / 6.876477 (-3.993761) | 3.139581 / 2.142072 (0.997509) | 0.690702 / 4.805227 (-4.114525) | 0.156781 / 6.500664 (-6.343883) | 0.071487 / 0.075469 (-0.003982) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.604557 / 1.841788 (-0.237231) | 24.000026 / 8.074308 (15.925718) | 17.548685 / 10.191392 (7.357293) | 0.174883 / 0.680424 (-0.505541) | 0.023812 / 0.534201 (-0.510389) | 0.473522 / 0.579283 (-0.105761) | 0.494683 / 0.434364 (0.060319) | 0.593352 / 0.540337 (0.053015) | 0.771852 / 1.386936 (-0.615084) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b61c96a806fa97800bc8a66607fb0c78a5d04146 \"CML watermark\")\n", "thanks! i wonder if we should also fix (change config name) all the old `dataset_infos.json` on the Hub?", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006388 / 0.011353 (-0.004965) | 0.003876 / 0.011008 (-0.007132) | 0.083960 / 0.038508 (0.045452) | 0.068328 / 0.023109 (0.045219) | 0.337958 / 0.275898 (0.062060) | 0.370783 / 0.323480 (0.047303) | 0.003925 / 0.007986 (-0.004060) | 0.004221 / 0.004328 (-0.000107) | 0.064198 / 0.004250 (0.059947) | 0.052681 / 0.037052 (0.015629) | 0.348890 / 0.258489 (0.090401) | 0.389038 / 0.293841 (0.095197) | 0.031133 / 0.128546 (-0.097413) | 0.008566 / 0.075646 (-0.067080) | 0.288169 / 0.419271 (-0.131102) | 0.053290 / 0.043533 (0.009757) | 0.344654 / 0.255139 (0.089515) | 0.381287 / 0.283200 (0.098087) | 0.022350 / 0.141683 (-0.119333) | 1.459933 / 1.452155 (0.007778) | 1.543097 / 1.492716 (0.050380) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 
0.212592 / 0.018006 (0.194586) | 0.461863 / 0.000490 (0.461373) | 0.003468 / 0.000200 (0.003268) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026849 / 0.037411 (-0.010563) | 0.081059 / 0.014526 (0.066533) | 0.093986 / 0.176557 (-0.082571) | 0.150328 / 0.737135 (-0.586807) | 0.094253 / 0.296338 (-0.202085) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382198 / 0.215209 (0.166989) | 3.813878 / 2.077655 (1.736224) | 1.855686 / 1.504120 (0.351566) | 1.672995 / 1.541195 (0.131800) | 1.697705 / 1.468490 (0.229215) | 0.479920 / 4.584777 (-4.104857) | 3.608305 / 3.745712 (-0.137407) | 3.216712 / 5.269862 (-2.053149) | 1.984781 / 4.565676 (-2.580896) | 0.056801 / 0.424275 (-0.367475) | 0.007499 / 0.007607 (-0.000108) | 0.454155 / 0.226044 (0.228110) | 4.531147 / 2.268929 (2.262218) | 2.296149 / 55.444624 (-53.148475) | 1.968701 / 6.876477 (-4.907775) | 2.144286 / 2.142072 (0.002213) | 0.599254 / 4.805227 (-4.205973) | 0.138150 / 6.500664 (-6.362514) | 0.060118 / 0.075469 (-0.015351) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.282486 / 1.841788 (-0.559301) | 19.127792 / 8.074308 (11.053483) | 14.116521 / 10.191392 (3.925129) | 0.163792 / 0.680424 (-0.516632) | 0.018116 / 0.534201 (-0.516085) | 0.390789 / 0.579283 (-0.188494) | 0.409241 / 0.434364 (-0.025123) | 0.457824 / 0.540337 (-0.082513) | 0.624390 / 1.386936 (-0.762546) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006637 / 0.011353 (-0.004716) | 0.003932 / 0.011008 (-0.007076) | 0.063456 / 0.038508 (0.024948) | 0.070062 / 0.023109 (0.046953) | 0.410570 / 0.275898 (0.134672) | 0.436700 / 0.323480 (0.113220) | 0.005324 / 0.007986 (-0.002662) | 0.003263 / 0.004328 (-0.001065) | 0.063590 / 0.004250 (0.059340) | 0.054823 / 0.037052 (0.017770) | 0.408720 / 0.258489 (0.150231) | 0.441493 / 0.293841 (0.147652) | 0.031655 / 0.128546 (-0.096891) | 0.008421 / 0.075646 (-0.067225) | 0.070657 / 0.419271 (-0.348614) | 0.047370 / 0.043533 (0.003837) | 0.408217 / 0.255139 (0.153078) | 0.422178 / 0.283200 (0.138978) | 0.022282 / 0.141683 (-0.119401) | 1.511417 / 1.452155 (0.059262) | 1.570337 / 1.492716 (0.077620) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224334 / 0.018006 (0.206327) | 0.447589 / 0.000490 (0.447099) | 0.004227 / 0.000200 (0.004027) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030797 / 0.037411 (-0.006615) | 0.091276 / 0.014526 (0.076750) | 0.102665 / 0.176557 (-0.073892) | 0.155423 / 0.737135 (-0.581712) | 0.103779 / 0.296338 (-0.192560) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434509 / 0.215209 (0.219300) | 4.328910 / 2.077655 (2.251255) | 2.311424 / 1.504120 (0.807304) | 2.138380 / 1.541195 (0.597185) | 2.196293 / 1.468490 (0.727803) | 0.482123 / 4.584777 (-4.102654) | 3.597870 / 3.745712 (-0.147842) | 3.222426 / 5.269862 (-2.047435) | 1.994467 / 4.565676 (-2.571210) | 0.057517 / 0.424275 (-0.366758) | 0.007336 / 0.007607 (-0.000271) | 0.504968 / 0.226044 (0.278923) | 5.047940 / 2.268929 (2.779012) | 2.824014 / 55.444624 (-52.620610) | 2.457762 / 6.876477 (-4.418714) | 2.606970 / 2.142072 (0.464897) | 0.580758 / 4.805227 (-4.224469) | 0.132584 / 6.500664 (-6.368080) | 0.059258 / 0.075469 (-0.016211) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.354386 / 1.841788 (-0.487402) | 19.738147 / 8.074308 (11.663839) | 14.858001 / 10.191392 (4.666609) | 0.166074 / 0.680424 (-0.514350) | 0.020181 / 0.534201 (-0.514020) | 0.398333 / 0.579283 (-0.180950) | 0.406969 / 0.434364 (-0.027395) | 0.474515 / 0.540337 (-0.065822) | 0.649571 / 
1.386936 (-0.737365) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b3ac3b3a9c5f40a29fae71504574cfdeebefe349 \"CML watermark\")\n", "I would say we should delete all `dataset_infos.json` on the Hub...", "@albertvillanova @lhoestq @mariosasko should we really stop supporting it and delete from everywhere?\r\n(bc if not, I've found a bug in updating `dataset_infos.json` with `.push_to_hub` and I'd open a PR to fix it)", "We can only delete them for the datasets without namespace and open PRs for the others, so we need to keep supporting them for now" ]
Rename old push_to_hub configs to "default" in dataset_infos
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6218/reactions" }
PR_kwDODunzps5Zqw3Y
{ "diff_url": "https://github.com/huggingface/datasets/pull/6218.diff", "html_url": "https://github.com/huggingface/datasets/pull/6218", "merged_at": "2023-09-06T11:23:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/6218.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6218" }
2023-09-06T10:40:05Z
https://api.github.com/repos/huggingface/datasets/issues/6218/comments
Fix ```python from datasets import load_dataset_builder b = load_dataset_builder("lambdalabs/pokemon-blip-captions", "default") print(b.info) ``` which should return ``` DatasetInfo( features={'image': Image(decode=True, id=None), 'text': Value(dtype='string', id=None)}, dataset_name='pokemon-blip-captions', config_name='default', version=0.0.0, splits={'train': SplitInfo(name='train', num_bytes=119417410.0, num_examples=833, shard_lengths=None, dataset_name='pokemon-blip-captions')}, download_checksums=None, download_size=99672355, dataset_size=119417410.0, size_in_bytes=219089765.0, ... ) ``` instead of an empty dataset info. The dataset has a dataset_infos.json file with a deprecated config name "lambdalabs--pokemon-blip-captions". We switched those config names to "default" in 2.14, so the builder.info should take this into account.
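A minimal sketch of the kind of normalization this PR describes — mapping a legacy `<namespace>--<dataset_name>` config key in a downloaded `dataset_infos.json` back to `"default"`. The helper name `normalize_config_name` and the exact matching rule are assumptions for illustration, not the actual `datasets` internals.

```python
import json

def normalize_config_name(config_name: str, repo_id: str) -> str:
    # Hypothetical helper: old push_to_hub runs stored the config under
    # "<namespace>--<dataset_name>"; since datasets 2.14 it is "default".
    legacy_name = repo_id.replace("/", "--")
    return "default" if config_name == legacy_name else config_name

# Usage sketch: rewrite the keys of a local dataset_infos.json copy.
with open("dataset_infos.json") as f:
    infos = json.load(f)
infos = {
    normalize_config_name(name, "lambdalabs/pokemon-blip-captions"): info
    for name, info in infos.items()
}
```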
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6218/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6218/timeline
closed
false
6,218
null
2023-09-06T11:23:56Z
null
true
1,883,614,607
https://api.github.com/repos/huggingface/datasets/issues/6217
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6217/events
[]
null
2023-09-08T17:08:52Z
[]
https://github.com/huggingface/datasets/issues/6217
MEMBER
null
null
null
[ "We need to implement the `Image` type as a PyArrow extension type (to allow us to override the Python conversion) for this to work as expected. For now, it's best to use your approach indeed." ]
`Dataset.to_dict()` ignores `decode=True` with Image feature
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6217/reactions" }
I_kwDODunzps5wRa2P
null
2023-09-06T09:26:16Z
https://api.github.com/repos/huggingface/datasets/issues/6217/comments
### Describe the bug `Dataset.to_dict` seems to ignore the decoding instruction passed in features. ### Steps to reproduce the bug ```python import datasets import numpy as np from PIL import Image img = np.random.randint(0, 256, (5, 5, 3), dtype=np.uint8) img = Image.fromarray(img) features = datasets.Features({"image": datasets.Image(decode=True)}) dataset = datasets.Dataset.from_dict({"image": [img]}, features=features) print({key: dataset[key] for key in dataset.column_names}) # {'image': [<PIL.PngImagePlugin.PngImageFile image mode=RGB size=5x5 at 0x7EFBC80E15B0>]} print(dataset.to_dict()) # {'image': [{'bytes': b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00\x05\x00\x00\x00\x05\x08\x02\x00\x00\x00\x02\r\xb1\xb2\x00\x00\x00[IDATx\x9c\x01P\x00\xaf\xff\x01\x13\x1b<7\xe7\xe0\xdc^6\xed\x04\xc7M\xd2\x9f\x00X\x1b\xb0?\x1ba\x15\xc5 o\xd0\x80\xbe\x19/\x01\xec\x95\x1f\x9f\xffj\xfa1\xa7\xc4X\xea\xbe\xa4g\x00\xc4\x15\xdeC\xc7 \xbbaqe\xc8\xb9\xa9q\xe7\x00,?M\xc0)\xdaD`}\xb1\xdci\x1e\xafC\xa9]%.@\xa6\xf0\xb3\x00\x00\x00\x00IEND\xaeB`\x82', 'path': None}]} ``` ### Expected behavior I would expect `{key: dataset[key] for key in dataset.column_names}` and `dataset.to_dict()` to be equivalent. If the previous behavior is expected, then it should be stated [in the doc](https://huggingface.co/docs/datasets/v2.14.4/en/package_reference/main_classes#datasets.Dataset.to_dict). ### Environment info - `datasets` version: 2.14.4 - Platform: Linux-6.2.0-31-generic-x86_64-with-glibc2.35 - Python version: 3.9.12 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.1 - Pandas version: 2.0.3 - Pillow 9.5.0 - numpy 1.25.2
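A possible workaround until the conversion is handled at the Arrow level, based on the reproduction above: build the dict via column access, which goes through the formatter and so honors `decode=True`. This is a sketch, not an officially recommended API.

```python
import datasets
import numpy as np
from PIL import Image

img = Image.fromarray(np.random.randint(0, 256, (5, 5, 3), dtype=np.uint8))
features = datasets.Features({"image": datasets.Image(decode=True)})
dataset = datasets.Dataset.from_dict({"image": [img]}, features=features)

# Workaround: `dataset[key]` applies the Image feature's decoding,
# while `dataset.to_dict()` bypasses it and returns raw {bytes, path}.
decoded = {key: dataset[key] for key in dataset.column_names}
print(decoded["image"][0])  # a PIL image, not a bytes dict
```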
{ "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/qgallouedec", "id": 45557362, "login": "qgallouedec", "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "repos_url": "https://api.github.com/users/qgallouedec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "type": "User", "url": "https://api.github.com/users/qgallouedec" }
https://api.github.com/repos/huggingface/datasets/issues/6217/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6217/timeline
open
false
6,217
null
null
null
false
1,883,492,703
https://api.github.com/repos/huggingface/datasets/issues/6216
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6216/events
[]
null
2023-09-06T08:52:18Z
[]
https://github.com/huggingface/datasets/pull/6216
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007801 / 0.011353 (-0.003552) | 0.004831 / 0.011008 (-0.006177) | 0.123101 / 0.038508 (0.084593) | 0.053246 / 0.023109 (0.030137) | 0.381787 / 0.275898 (0.105889) | 0.461822 / 0.323480 (0.138342) | 0.004655 / 0.007986 (-0.003331) | 0.004818 / 0.004328 (0.000490) | 0.090865 / 0.004250 (0.086614) | 0.070626 / 0.037052 (0.033574) | 0.409122 / 0.258489 (0.150633) | 0.449627 / 0.293841 (0.155787) | 0.037477 / 0.128546 (-0.091069) | 0.010677 / 0.075646 (-0.064970) | 0.419970 / 0.419271 (0.000699) | 0.064626 / 0.043533 (0.021093) | 0.379536 / 0.255139 (0.124397) | 0.405790 / 0.283200 (0.122590) | 0.027290 / 0.141683 (-0.114393) | 1.884973 / 1.452155 (0.432819) | 1.960547 / 1.492716 (0.467831) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.259393 / 0.018006 (0.241386) | 0.502130 / 0.000490 (0.501640) | 0.013053 / 0.000200 (0.012853) | 0.000336 / 0.000054 (0.000281) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033459 / 0.037411 (-0.003953) | 0.135888 / 0.014526 (0.121362) | 0.145354 / 0.176557 (-0.031203) | 0.213289 / 0.737135 (-0.523847) | 0.151239 / 0.296338 (-0.145100) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.510817 / 0.215209 (0.295608) | 5.077888 / 2.077655 (3.000234) | 2.502991 
/ 1.504120 (0.998871) | 2.275566 / 1.541195 (0.734371) | 2.353025 / 1.468490 (0.884535) | 0.659062 / 4.584777 (-3.925715) | 4.411399 / 3.745712 (0.665686) | 2.227395 / 5.269862 (-3.042467) | 1.306771 / 4.565676 (-3.258905) | 0.081121 / 0.424275 (-0.343154) | 0.014252 / 0.007607 (0.006645) | 0.635040 / 0.226044 (0.408996) | 6.357500 / 2.268929 (4.088572) | 3.056647 / 55.444624 (-52.387977) | 2.671997 / 6.876477 (-4.204480) | 2.847955 / 2.142072 (0.705883) | 0.808163 / 4.805227 (-3.997064) | 0.177176 / 6.500664 (-6.323488) | 0.079984 / 0.075469 (0.004515) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.490471 / 1.841788 (-0.351317) | 17.927433 / 8.074308 (9.853124) | 17.744967 / 10.191392 (7.553575) | 0.171034 / 0.680424 (-0.509390) | 0.021432 / 0.534201 (-0.512769) | 0.515745 / 0.579283 (-0.063538) | 0.504746 / 0.434364 (0.070382) | 0.630862 / 0.540337 (0.090524) | 0.755275 / 1.386936 (-0.631662) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008227 / 0.011353 (-0.003126) | 0.004864 / 0.011008 (-0.006144) | 0.092801 / 0.038508 (0.054293) | 0.054996 / 0.023109 (0.031887) | 0.500348 / 0.275898 (0.224450) | 0.565028 / 0.323480 (0.241548) | 0.004792 / 0.007986 (-0.003194) | 0.005052 / 0.004328 (0.000723) | 0.090640 / 0.004250 (0.086390) | 0.074427 / 0.037052 (0.037374) | 0.499908 / 0.258489 (0.241419) | 0.566260 / 0.293841 (0.272419) | 0.040011 / 0.128546 (-0.088536) | 0.010438 / 0.075646 (-0.065208) | 0.099385 / 0.419271 (-0.319887) | 0.060485 / 0.043533 (0.016952) | 0.480603 / 0.255139 (0.225464) | 0.508807 / 0.283200 (0.225607) | 0.025976 / 0.141683 (-0.115707) | 1.870860 / 1.452155 (0.418705) | 1.943460 / 1.492716 (0.450744) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227753 / 0.018006 (0.209747) | 0.501859 / 0.000490 (0.501369) | 0.008211 / 0.000200 (0.008011) | 0.000127 / 0.000054 (0.000073) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038329 / 0.037411 (0.000918) | 0.148214 / 0.014526 (0.133688) | 0.162704 / 0.176557 (-0.013852) | 0.218543 / 0.737135 (-0.518592) | 0.162992 / 0.296338 (-0.133347) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.553195 / 0.215209 (0.337986) | 5.568080 / 2.077655 (3.490425) | 2.936616 / 1.504120 (1.432496) | 2.712624 / 1.541195 (1.171429) | 2.713245 / 1.468490 (1.244755) | 0.648593 / 4.584777 (-3.936184) | 4.641361 / 3.745712 (0.895648) | 2.207064 / 5.269862 (-3.062798) | 1.315325 / 4.565676 (-3.250351) | 0.080285 / 0.424275 (-0.343990) | 0.014143 / 0.007607 (0.006536) | 0.672467 / 0.226044 (0.446423) | 6.730262 / 2.268929 (4.461333) | 3.344468 / 55.444624 (-52.100157) | 2.927837 / 6.876477 (-3.948640) | 3.124735 / 2.142072 (0.982662) | 0.795894 / 4.805227 (-4.009333) | 0.170985 / 6.500664 (-6.329679) | 0.077406 / 0.075469 (0.001937) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.598059 / 1.841788 (-0.243729) | 18.531854 / 8.074308 (10.457546) | 18.394895 / 10.191392 (8.203503) | 0.195702 / 0.680424 (-0.484722) | 0.023633 / 0.534201 (-0.510568) | 0.518110 / 0.579283 (-0.061173) | 0.517773 / 0.434364 (0.083409) | 0.617902 / 0.540337 (0.077565) | 0.736459 / 1.386936 (-0.650477) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5d4bb03237b74c0009043d50c5b4e4339cb98b2b \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006943 / 0.011353 (-0.004410) | 0.004524 / 0.011008 (-0.006485) | 0.121603 / 0.038508 (0.083095) | 0.047462 / 0.023109 (0.024353) | 0.362393 / 0.275898 (0.086495) | 0.440577 / 0.323480 (0.117098) | 0.004153 / 0.007986 (-0.003832) | 0.003778 / 0.004328 (-0.000550) | 0.090402 / 0.004250 (0.086152) | 0.066268 / 0.037052 (0.029216) | 0.380721 / 0.258489 (0.122232) | 0.442959 / 0.293841 (0.149118) | 0.035228 / 0.128546 (-0.093318) | 0.010217 / 0.075646 (-0.065429) | 0.408587 / 0.419271 (-0.010684) | 0.062609 / 0.043533 (0.019076) | 0.372682 / 0.255139 (0.117543) | 0.389270 / 0.283200 (0.106070) | 0.026699 / 0.141683 (-0.114984) | 1.760476 / 1.452155 (0.308321) | 1.795081 / 1.492716 (0.302365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229912 / 0.018006 (0.211906) | 0.476837 / 0.000490 (0.476348) | 0.008178 / 0.000200 (0.007978) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031116 / 0.037411 (-0.006296) | 0.126767 / 0.014526 (0.112241) | 0.134242 / 0.176557 (-0.042315) | 0.202120 / 0.737135 (-0.535016) | 0.142777 / 0.296338 (-0.153561) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.470690 / 0.215209 (0.255481) | 4.723198 / 2.077655 (2.645543) | 2.163870 / 1.504120 (0.659750) | 1.914177 / 1.541195 (0.372982) | 2.034529 / 1.468490 (0.566038) | 0.620472 / 4.584777 (-3.964305) | 4.391008 / 3.745712 (0.645296) | 2.100966 / 5.269862 (-3.168896) | 1.225945 / 4.565676 (-3.339732) | 0.076279 / 0.424275 (-0.347996) | 0.013551 / 0.007607 (0.005944) | 0.600989 / 0.226044 (0.374945) | 5.946715 / 2.268929 (3.677787) | 2.665117 / 55.444624 (-52.779508) | 2.320004 / 6.876477 (-4.556473) | 2.413131 / 2.142072 (0.271059) | 0.771908 / 4.805227 (-4.033320) | 0.165438 / 6.500664 (-6.335226) | 0.074512 / 0.075469 (-0.000957) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.432728 / 1.841788 (-0.409060) | 17.398133 / 8.074308 (9.323824) | 16.819152 / 10.191392 (6.627760) | 0.191849 / 0.680424 (-0.488575) | 0.021557 / 0.534201 (-0.512644) | 0.514380 / 0.579283 (-0.064903) | 0.501453 / 0.434364 (0.067089) | 0.634091 / 0.540337 
(0.093753) | 0.756786 / 1.386936 (-0.630150) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007946 / 0.011353 (-0.003407) | 0.004751 / 0.011008 (-0.006257) | 0.090190 / 0.038508 (0.051682) | 0.052841 / 0.023109 (0.029732) | 0.480150 / 0.275898 (0.204252) | 0.537509 / 0.323480 (0.214029) | 0.004833 / 0.007986 (-0.003153) | 0.004796 / 0.004328 (0.000467) | 0.090616 / 0.004250 (0.086366) | 0.074325 / 0.037052 (0.037273) | 0.483776 / 0.258489 (0.225287) | 0.552094 / 0.293841 (0.258254) | 0.039240 / 0.128546 (-0.089307) | 0.010416 / 0.075646 (-0.065230) | 0.100275 / 0.419271 (-0.318996) | 0.058086 / 0.043533 (0.014553) | 0.468989 / 0.255139 (0.213850) | 0.485502 / 0.283200 (0.202302) | 0.027514 / 0.141683 (-0.114169) | 1.849625 / 1.452155 (0.397470) | 1.919515 / 1.492716 (0.426798) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248061 / 0.018006 (0.230055) | 0.475630 / 0.000490 (0.475141) | 0.006248 / 0.000200 (0.006048) | 0.000105 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037746 / 0.037411 (0.000335) | 0.141638 / 0.014526 (0.127112) | 0.149530 / 0.176557 (-0.027026) | 0.209255 / 0.737135 (-0.527880) | 0.156447 / 0.296338 (-0.139892) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.544640 / 0.215209 (0.329431) | 5.493152 / 2.077655 (3.415497) | 2.869733 / 1.504120 (1.365613) | 2.624216 / 1.541195 (1.083022) | 2.710818 / 
1.468490 (1.242328) | 0.640626 / 4.584777 (-3.944151) | 4.516130 / 3.745712 (0.770418) | 2.128097 / 5.269862 (-3.141765) | 1.278990 / 4.565676 (-3.286686) | 0.077114 / 0.424275 (-0.347161) | 0.013280 / 0.007607 (0.005673) | 0.655552 / 0.226044 (0.429507) | 6.526875 / 2.268929 (4.257947) | 3.347072 / 55.444624 (-52.097553) | 2.992435 / 6.876477 (-3.884041) | 3.124351 / 2.142072 (0.982278) | 0.778523 / 4.805227 (-4.026704) | 0.161873 / 6.500664 (-6.338791) | 0.072897 / 0.075469 (-0.002572) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.587058 / 1.841788 (-0.254730) | 18.170612 / 8.074308 (10.096304) | 17.220483 / 10.191392 (7.029091) | 0.207863 / 0.680424 (-0.472561) | 0.023746 / 0.534201 (-0.510455) | 0.512607 / 0.579283 (-0.066676) | 0.513258 / 0.434364 (0.078894) | 0.597880 / 0.540337 (0.057543) | 0.714974 / 1.386936 (-0.671962) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#98b1bdd492df953ca7139bb8c9a1771d5c603797 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006224 / 0.011353 (-0.005128) | 0.003857 / 0.011008 (-0.007151) | 0.099786 / 0.038508 (0.061278) | 0.037919 / 0.023109 (0.014810) | 0.315294 / 0.275898 (0.039396) | 0.390178 / 0.323480 (0.066698) | 0.005358 / 0.007986 (-0.002628) | 0.002989 / 0.004328 (-0.001340) | 0.077834 / 0.004250 (0.073583) | 0.053315 / 0.037052 (0.016263) | 0.325155 / 0.258489 (0.066666) | 0.374712 / 0.293841 (0.080871) | 0.029176 / 0.128546 (-0.099370) | 0.008658 / 0.075646 (-0.066988) | 0.314245 / 0.419271 (-0.105027) | 0.046684 / 0.043533 (0.003151) | 0.316473 / 0.255139 (0.061334) | 0.346119 / 0.283200 (0.062919) | 0.022452 / 0.141683 (-0.119230) | 1.540497 / 1.452155 (0.088343) | 1.594888 / 1.492716 (0.102172) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204349 / 0.018006 (0.186343) | 0.426842 / 0.000490 (0.426353) | 0.003060 / 0.000200 
(0.002860) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023611 / 0.037411 (-0.013801) | 0.100247 / 0.014526 (0.085721) | 0.107824 / 0.176557 (-0.068733) | 0.166845 / 0.737135 (-0.570291) | 0.112782 / 0.296338 (-0.183556) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423053 / 0.215209 (0.207844) | 4.235553 / 2.077655 (2.157899) | 1.936589 / 1.504120 (0.432469) | 1.738519 / 1.541195 (0.197325) | 1.787905 / 1.468490 (0.319415) | 0.573362 / 4.584777 (-4.011414) | 3.395272 / 3.745712 (-0.350440) | 1.765977 / 5.269862 (-3.503884) | 1.049596 / 4.565676 (-3.516081) | 0.068868 / 0.424275 (-0.355407) | 0.011028 / 0.007607 (0.003421) | 0.532835 / 0.226044 (0.306791) | 5.314890 / 2.268929 (3.045962) | 2.368733 / 55.444624 (-53.075891) | 2.033959 / 6.876477 (-4.842518) | 2.130481 / 2.142072 (-0.011591) | 0.689360 / 4.805227 (-4.115867) | 0.140271 / 6.500664 (-6.360393) | 0.068198 / 0.075469 (-0.007271) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.237212 / 1.841788 (-0.604576) | 14.182215 / 8.074308 (6.107907) | 14.972608 / 10.191392 (4.781216) | 0.133977 / 0.680424 (-0.546447) | 0.016759 / 0.534201 (-0.517442) | 0.361552 / 0.579283 (-0.217731) | 0.394932 / 0.434364 (-0.039432) | 0.442601 / 0.540337 (-0.097736) | 0.535709 / 1.386936 (-0.851227) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006327 / 0.011353 (-0.005026) | 0.003780 / 0.011008 (-0.007228) | 0.078358 / 0.038508 (0.039850) | 0.037271 / 0.023109 (0.014162) | 0.456766 / 0.275898 (0.180868) | 0.515721 / 0.323480 (0.192241) | 0.004770 / 0.007986 (-0.003216) | 0.002942 / 0.004328 (-0.001387) | 0.077383 / 0.004250 (0.073132) | 0.051773 / 0.037052 (0.014721) | 0.460722 / 0.258489 (0.202233) | 0.519997 / 0.293841 (0.226157) | 0.030461 / 0.128546 (-0.098085) | 0.008622 / 0.075646 (-0.067024) | 0.083271 / 0.419271 (-0.336000) | 0.042242 / 0.043533 (-0.001291) | 0.447691 / 0.255139 (0.192552) | 0.481965 / 0.283200 (0.198765) | 0.019510 / 0.141683 (-0.122173) | 1.536718 / 1.452155 (0.084563) | 1.588433 / 1.492716 (0.095717) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215880 / 0.018006 (0.197874) | 0.426102 / 0.000490 (0.425612) | 0.003976 / 0.000200 (0.003776) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026168 / 0.037411 (-0.011243) | 0.105786 / 0.014526 (0.091260) | 0.113772 / 0.176557 (-0.062785) | 0.166576 / 0.737135 (-0.570559) | 0.117560 / 0.296338 (-0.178779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.490485 / 0.215209 (0.275276) | 4.890105 / 2.077655 (2.812450) | 2.515099 / 1.504120 (1.010979) | 2.306591 / 1.541195 (0.765396) | 2.383634 / 1.468490 (0.915144) | 0.573780 / 4.584777 (-4.010997) | 3.474394 / 3.745712 (-0.271318) | 1.746795 / 5.269862 (-3.523067) | 1.044678 / 4.565676 (-3.520998) | 0.069176 / 0.424275 (-0.355099) | 0.011045 / 0.007607 (0.003438) | 0.597234 / 0.226044 (0.371189) | 5.979614 / 2.268929 (3.710685) | 3.024203 / 55.444624 (-52.420422) | 2.687502 / 6.876477 (-4.188975) | 2.781637 / 2.142072 (0.639565) | 0.690482 / 4.805227 (-4.114745) | 0.150138 / 6.500664 (-6.350526) | 0.077076 / 0.075469 (0.001607) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.307501 / 1.841788 (-0.534287) | 14.366780 / 8.074308 (6.292471) | 14.966981 / 10.191392 (4.775589) | 0.153829 / 0.680424 (-0.526594) | 0.018047 / 0.534201 (-0.516154) | 0.361391 / 0.579283 (-0.217892) | 0.398345 / 0.434364 (-0.036019) | 0.424574 / 0.540337 (-0.115764) | 0.517165 / 1.386936 (-0.869771) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#98b1bdd492df953ca7139bb8c9a1771d5c603797 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006944 / 0.011353 (-0.004409) | 0.004504 / 0.011008 (-0.006504) | 0.105224 / 0.038508 (0.066716) | 0.047830 / 0.023109 (0.024721) | 0.339723 / 0.275898 (0.063825) | 0.419249 / 0.323480 (0.095769) | 0.005510 / 0.007986 (-0.002476) | 0.003574 / 0.004328 (-0.000754) | 0.079879 / 0.004250 (0.075628) | 0.066610 / 0.037052 (0.029557) | 0.353818 / 0.258489 (0.095329) | 0.397992 / 0.293841 (0.104151) | 0.031551 / 0.128546 (-0.096995) | 0.009037 / 0.075646 (-0.066610) | 0.355310 / 0.419271 (-0.063961) | 0.054931 / 0.043533 (0.011398) | 0.335153 / 0.255139 (0.080014) | 0.357460 / 0.283200 (0.074260) | 0.026031 / 0.141683 (-0.115652) | 1.546705 / 1.452155 (0.094550) | 1.627324 / 1.492716 (0.134608) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.276708 / 0.018006 (0.258701) | 0.589402 / 0.000490 (0.588912) | 0.009560 / 0.000200 (0.009360) | 0.000095 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031041 / 0.037411 (-0.006370) | 0.117219 / 0.014526 (0.102693) | 0.125200 / 0.176557 (-0.051356) | 0.181528 / 0.737135 (-0.555607) | 0.131898 / 0.296338 (-0.164440) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409965 / 0.215209 
(0.194756) | 4.102700 / 2.077655 (2.025045) | 1.887578 / 1.504120 (0.383458) | 1.696490 / 1.541195 (0.155295) | 1.821352 / 1.468490 (0.352862) | 0.545422 / 4.584777 (-4.039355) | 3.933784 / 3.745712 (0.188071) | 1.934254 / 5.269862 (-3.335607) | 1.114935 / 4.565676 (-3.450742) | 0.067615 / 0.424275 (-0.356660) | 0.012004 / 0.007607 (0.004397) | 0.522048 / 0.226044 (0.296004) | 5.209224 / 2.268929 (2.940296) | 2.369911 / 55.444624 (-53.074714) | 2.032960 / 6.876477 (-4.843517) | 2.228874 / 2.142072 (0.086802) | 0.673172 / 4.805227 (-4.132055) | 0.147017 / 6.500664 (-6.353647) | 0.067020 / 0.075469 (-0.008449) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281490 / 1.841788 (-0.560298) | 16.129701 / 8.074308 (8.055393) | 15.474730 / 10.191392 (5.283338) | 0.143934 / 0.680424 (-0.536490) | 0.018311 / 0.534201 (-0.515890) | 0.435940 / 0.579283 (-0.143343) | 0.446846 / 0.434364 (0.012482) | 0.543943 / 0.540337 (0.003605) | 0.648041 / 1.386936 (-0.738895) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007380 / 0.011353 (-0.003973) | 0.004510 / 0.011008 (-0.006499) | 0.080741 / 0.038508 (0.042233) | 0.050907 / 0.023109 (0.027797) | 0.425548 / 0.275898 (0.149650) | 0.487959 / 0.323480 (0.164479) | 0.005887 / 0.007986 (-0.002099) | 0.003689 / 0.004328 (-0.000639) | 0.079588 / 0.004250 (0.075338) | 0.071841 / 0.037052 (0.034788) | 0.425172 / 0.258489 (0.166683) | 0.471185 / 0.293841 (0.177344) | 0.035768 / 0.128546 (-0.092779) | 0.009229 / 0.075646 (-0.066418) | 0.086021 / 0.419271 (-0.333250) | 0.052424 / 0.043533 (0.008891) | 0.413634 / 0.255139 (0.158495) | 0.422310 / 0.283200 (0.139111) | 0.026019 / 0.141683 (-0.115664) | 1.616861 / 1.452155 (0.164707) | 1.653660 / 1.492716 (0.160943) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280096 / 0.018006 (0.262090) | 0.587853 / 0.000490 (0.587363) | 0.006560 / 0.000200 (0.006360) | 0.000181 / 0.000054 
(0.000127) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033747 / 0.037411 (-0.003665) | 0.125089 / 0.014526 (0.110564) | 0.137995 / 0.176557 (-0.038561) | 0.188192 / 0.737135 (-0.548943) | 0.141438 / 0.296338 (-0.154900) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471524 / 0.215209 (0.256315) | 4.713988 / 2.077655 (2.636334) | 2.414785 / 1.504120 (0.910665) | 2.226815 / 1.541195 (0.685620) | 2.259222 / 1.468490 (0.790732) | 0.551663 / 4.584777 (-4.033114) | 4.031399 / 3.745712 (0.285686) | 1.966917 / 5.269862 (-3.302945) | 1.154487 / 4.565676 (-3.411190) | 0.068500 / 0.424275 (-0.355775) | 0.012127 / 0.007607 (0.004520) | 0.579342 / 0.226044 (0.353298) | 5.757415 / 2.268929 (3.488486) | 2.820012 / 55.444624 (-52.624613) | 2.521783 / 6.876477 (-4.354694) | 2.699994 / 2.142072 (0.557921) | 0.686152 / 4.805227 (-4.119075) | 0.148521 / 6.500664 (-6.352143) | 0.068478 / 0.075469 (-0.006991) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336260 / 1.841788 (-0.505528) | 17.016935 / 8.074308 (8.942627) | 16.406951 / 10.191392 (6.215559) | 0.166907 / 0.680424 (-0.513517) | 0.020166 / 0.534201 (-0.514035) | 0.437690 / 0.579283 (-0.141593) | 0.480337 / 0.434364 (0.045973) | 0.518065 / 0.540337 (-0.022272) | 0.625904 / 1.386936 (-0.761032) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#98b1bdd492df953ca7139bb8c9a1771d5c603797 \"CML watermark\")\n" ]
Release: 2.13.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6216/reactions" }
PR_kwDODunzps5Zp8al
{ "diff_url": "https://github.com/huggingface/datasets/pull/6216.diff", "html_url": "https://github.com/huggingface/datasets/pull/6216", "merged_at": "2023-09-06T08:22:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/6216.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6216" }
2023-09-06T08:15:32Z
https://api.github.com/repos/huggingface/datasets/issues/6216/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6216/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6216/timeline
closed
false
6,216
null
2023-09-06T08:22:43Z
null
true
1,882,176,970
https://api.github.com/repos/huggingface/datasets/issues/6215
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6215/events
[]
null
2023-09-06T10:34:00Z
[]
https://github.com/huggingface/datasets/pull/6215
CONTRIBUTOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "oh wow good catch", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006681 / 0.011353 (-0.004672) | 0.003967 / 0.011008 (-0.007041) | 0.085590 / 0.038508 (0.047082) | 0.079285 / 0.023109 (0.056176) | 0.311583 / 0.275898 (0.035685) | 0.345578 / 0.323480 (0.022098) | 0.004115 / 0.007986 (-0.003871) | 0.004286 / 0.004328 (-0.000043) | 0.064405 / 0.004250 (0.060155) | 0.055084 / 0.037052 (0.018032) | 0.316117 / 0.258489 (0.057628) | 0.354737 / 0.293841 (0.060896) | 0.031280 / 0.128546 (-0.097266) | 0.008395 / 0.075646 (-0.067251) | 0.288910 / 0.419271 (-0.130362) | 0.051291 / 0.043533 (0.007759) | 0.309125 / 0.255139 (0.053986) | 0.349673 / 0.283200 (0.066473) | 0.025016 / 0.141683 (-0.116667) | 1.475577 / 1.452155 (0.023422) | 1.558967 / 1.492716 (0.066251) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208504 / 0.018006 (0.190498) | 0.462270 / 0.000490 (0.461780) | 0.003476 / 0.000200 (0.003276) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030371 / 0.037411 (-0.007041) | 0.086157 / 0.014526 (0.071631) | 0.098162 / 0.176557 (-0.078395) | 0.154649 / 0.737135 (-0.582486) | 0.098697 / 0.296338 (-0.197642) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405883 / 0.215209 (0.190674) | 4.049614 / 2.077655 
(1.971959) | 2.075047 / 1.504120 (0.570927) | 1.917782 / 1.541195 (0.376587) | 2.030268 / 1.468490 (0.561778) | 0.483974 / 4.584777 (-4.100803) | 3.542147 / 3.745712 (-0.203566) | 3.305999 / 5.269862 (-1.963863) | 2.052287 / 4.565676 (-2.513390) | 0.057246 / 0.424275 (-0.367029) | 0.007631 / 0.007607 (0.000024) | 0.488189 / 0.226044 (0.262144) | 4.884784 / 2.268929 (2.615856) | 2.576304 / 55.444624 (-52.868320) | 2.241249 / 6.876477 (-4.635228) | 2.490512 / 2.142072 (0.348440) | 0.584495 / 4.805227 (-4.220733) | 0.134741 / 6.500664 (-6.365923) | 0.061639 / 0.075469 (-0.013830) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.317717 / 1.841788 (-0.524071) | 20.098594 / 8.074308 (12.024286) | 14.641051 / 10.191392 (4.449659) | 0.165291 / 0.680424 (-0.515133) | 0.019179 / 0.534201 (-0.515022) | 0.399506 / 0.579283 (-0.179777) | 0.407662 / 0.434364 (-0.026701) | 0.457965 / 0.540337 (-0.082372) | 0.626401 / 1.386936 (-0.760536) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007076 / 0.011353 (-0.004277) | 0.004125 / 0.011008 (-0.006884) | 0.064861 / 0.038508 (0.026353) | 0.082390 / 0.023109 (0.059281) | 0.423227 / 0.275898 (0.147329) | 0.452229 / 0.323480 (0.128750) | 0.005594 / 0.007986 (-0.002392) | 0.003465 / 0.004328 (-0.000863) | 0.064661 / 0.004250 (0.060411) | 0.057945 / 0.037052 (0.020892) | 0.424572 / 0.258489 (0.166083) | 0.465349 / 0.293841 (0.171509) | 0.032687 / 0.128546 (-0.095859) | 0.008573 / 0.075646 (-0.067074) | 0.073020 / 0.419271 (-0.346251) | 0.048423 / 0.043533 (0.004891) | 0.413425 / 0.255139 (0.158286) | 0.433778 / 0.283200 (0.150578) | 0.023942 / 0.141683 (-0.117741) | 1.495190 / 1.452155 (0.043036) | 1.586526 / 1.492716 (0.093810) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271805 / 0.018006 (0.253799) | 0.454922 / 0.000490 (0.454432) | 0.015386 / 0.000200 (0.015186) | 0.000129 / 0.000054 (0.000074) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033804 / 0.037411 (-0.003607) | 0.099317 / 0.014526 (0.084791) | 0.107207 / 0.176557 (-0.069349) | 0.160926 / 0.737135 (-0.576210) | 0.108669 / 0.296338 (-0.187670) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430776 / 0.215209 (0.215567) | 4.297622 / 2.077655 (2.219967) | 2.285918 / 1.504120 (0.781798) | 2.109608 / 1.541195 (0.568413) | 2.208326 / 1.468490 (0.739836) | 0.490016 / 4.584777 (-4.094761) | 3.570609 / 3.745712 (-0.175103) | 3.406335 / 5.269862 (-1.863526) | 2.070664 / 4.565676 (-2.495012) | 0.058089 / 0.424275 (-0.366186) | 0.007425 / 0.007607 (-0.000182) | 0.506972 / 0.226044 (0.280927) | 5.078643 / 2.268929 (2.809714) | 2.858973 / 55.444624 (-52.585651) | 2.457344 / 6.876477 (-4.419132) | 2.687727 / 2.142072 (0.545654) | 0.592134 / 4.805227 (-4.213093) | 0.133966 / 6.500664 (-6.366698) | 0.061800 / 0.075469 (-0.013669) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.337167 / 1.841788 (-0.504620) | 20.743951 / 8.074308 (12.669643) | 15.402686 / 10.191392 (5.211294) | 0.164548 / 0.680424 (-0.515876) | 0.020244 / 0.534201 (-0.513957) | 0.399044 / 0.579283 (-0.180239) | 0.414036 / 0.434364 (-0.020328) | 0.474141 / 0.540337 (-0.066197) | 0.654455 / 1.386936 (-0.732482) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4de930c45a81a6dff1805bf45f59170e9f953eeb \"CML watermark\")\n" ]
Fix checking patterns to infer packaged builder
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6215/reactions" }
PR_kwDODunzps5ZlcqC
{ "diff_url": "https://github.com/huggingface/datasets/pull/6215.diff", "html_url": "https://github.com/huggingface/datasets/pull/6215", "merged_at": "2023-09-06T10:25:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/6215.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6215" }
2023-09-05T15:10:47Z
https://api.github.com/repos/huggingface/datasets/issues/6215/comments
Don't ignore the results of pattern resolving if `self.data_files` is not None; otherwise, lines 854 and 1037 make no sense.
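A minimal, hypothetical sketch of the intended control flow (the helper name and structure are illustrative assumptions, not the actual `datasets` internals):

```python
# Hypothetical sketch: patterns resolved from user-provided data_files should
# be kept and used downstream, not silently discarded in favor of defaults.
def get_data_patterns(data_files, default_patterns):
    if data_files is not None:
        # keep the result of pattern resolving for the user's data_files
        return data_files if isinstance(data_files, dict) else {"train": list(data_files)}
    return default_patterns

print(get_data_patterns(["data.parquet"], {"train": ["**"]}))  # {'train': ['data.parquet']}
print(get_data_patterns(None, {"train": ["**"]}))              # {'train': ['**']}
```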
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
https://api.github.com/repos/huggingface/datasets/issues/6215/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6215/timeline
closed
false
6,215
null
2023-09-06T10:25:00Z
null
true
1,881,736,469
https://api.github.com/repos/huggingface/datasets/issues/6214
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6214/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2023-09-26T15:32:52Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
https://github.com/huggingface/datasets/issues/6214
MEMBER
completed
null
null
[]
Unpin fsspec < 2023.9.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6214/reactions" }
I_kwDODunzps5wKQUV
null
2023-09-05T11:02:58Z
https://api.github.com/repos/huggingface/datasets/issues/6214/comments
Once the root issue is fixed, remove the temporary pin of fsspec < 2023.9.0 introduced by: - #6210 Related to issue: - #6209 After investigation, I think the root issue is related to the new glob behavior with the double asterisk `**` introduced in: - https://github.com/fsspec/filesystem_spec/pull/1329
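For context, a small runnable sketch of the recursive `**` matching at stake (this does not reproduce the exact before/after semantics of fsspec 2023.9.0; it only illustrates that `**` is expected to match entries across directory levels):

```python
import fsspec

# Build a tiny in-memory filesystem with a nested layout.
fs = fsspec.filesystem("memory")
fs.pipe("/data/a.txt", b"x")
fs.pipe("/data/sub/b.txt", b"y")

# "**" should match entries at any depth under /data; the fsspec 2023.9.0
# release changed how such patterns resolve, which is what broke `datasets`.
print(fs.glob("/data/**"))
```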
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6214/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6214/timeline
closed
false
6,214
null
2023-09-26T15:32:52Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
false
1,880,592,987
https://api.github.com/repos/huggingface/datasets/issues/6213
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6213/events
[]
null
2024-01-11T06:32:20Z
[]
https://github.com/huggingface/datasets/pull/6213
COLLABORATOR
null
true
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008451 / 0.011353 (-0.002902) | 0.005056 / 0.011008 (-0.005952) | 0.086367 / 0.038508 (0.047859) | 0.068030 / 0.023109 (0.044920) | 0.358812 / 0.275898 (0.082914) | 0.385790 / 0.323480 (0.062310) | 0.005608 / 0.007986 (-0.002378) | 0.004262 / 0.004328 (-0.000067) | 0.066618 / 0.004250 (0.062368) | 0.053901 / 0.037052 (0.016849) | 0.398456 / 0.258489 (0.139967) | 0.391681 / 0.293841 (0.097840) | 0.046743 / 0.128546 (-0.081804) | 0.014118 / 0.075646 (-0.061528) | 0.308479 / 0.419271 (-0.110793) | 0.064214 / 0.043533 (0.020681) | 0.367940 / 0.255139 (0.112801) | 0.387204 / 0.283200 (0.104004) | 0.036093 / 0.141683 (-0.105590) | 1.534182 / 1.452155 (0.082027) | 1.598357 / 1.492716 (0.105641) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265910 / 0.018006 (0.247904) | 0.589453 / 0.000490 (0.588963) | 0.004881 / 0.000200 (0.004681) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032540 / 0.037411 (-0.004872) | 0.083153 / 0.014526 (0.068627) | 0.098960 / 0.176557 (-0.077597) | 0.162044 / 0.737135 (-0.575091) | 0.093602 / 0.296338 (-0.202736) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.517056 / 0.215209 (0.301847) | 5.167908 / 2.077655 (3.090253) | 2.359856 / 1.504120 (0.855736) | 2.092448 / 1.541195 (0.551253) | 2.100270 / 1.468490 
(0.631780) | 0.742321 / 4.584777 (-3.842456) | 4.845010 / 3.745712 (1.099298) | 4.361808 / 5.269862 (-0.908054) | 2.621941 / 4.565676 (-1.943736) | 0.094907 / 0.424275 (-0.329369) | 0.009357 / 0.007607 (0.001750) | 0.719859 / 0.226044 (0.493814) | 6.929731 / 2.268929 (4.660802) | 3.240862 / 55.444624 (-52.203763) | 2.700817 / 6.876477 (-4.175659) | 2.904600 / 2.142072 (0.762527) | 0.924930 / 4.805227 (-3.880298) | 0.194390 / 6.500664 (-6.306274) | 0.078331 / 0.075469 (0.002862) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.539347 / 1.841788 (-0.302441) | 22.696358 / 8.074308 (14.622050) | 18.791692 / 10.191392 (8.600300) | 0.221376 / 0.680424 (-0.459048) | 0.029824 / 0.534201 (-0.504377) | 0.455604 / 0.579283 (-0.123679) | 0.573169 / 0.434364 (0.138805) | 0.507109 / 0.540337 (-0.033228) | 0.730986 / 1.386936 (-0.655950) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009308 / 0.011353 (-0.002045) | 0.005027 / 0.011008 (-0.005982) | 0.074094 / 0.038508 (0.035586) | 0.068277 / 0.023109 (0.045168) | 0.412716 / 0.275898 (0.136818) | 0.446883 / 0.323480 (0.123403) | 0.005864 / 0.007986 (-0.002122) | 0.003753 / 0.004328 (-0.000575) | 0.072575 / 0.004250 (0.068325) | 0.064434 / 0.037052 (0.027382) | 0.445395 / 0.258489 (0.186906) | 0.464520 / 0.293841 (0.170679) | 0.045303 / 0.128546 (-0.083243) | 0.013120 / 0.075646 (-0.062527) | 0.077830 / 0.419271 (-0.341441) | 0.057303 / 0.043533 (0.013770) | 0.420845 / 0.255139 (0.165706) | 0.431308 / 0.283200 (0.148109) | 0.033908 / 0.141683 (-0.107775) | 1.577667 / 1.452155 (0.125512) | 1.677321 / 1.492716 (0.184604) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.305855 / 0.018006 (0.287849) | 0.601442 / 0.000490 (0.600953) | 0.010722 / 0.000200 (0.010522) | 0.000158 / 0.000054 (0.000104) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029202 / 0.037411 (-0.008209) | 0.094576 / 0.014526 (0.080050) | 0.106734 / 0.176557 (-0.069822) | 0.168114 / 0.737135 (-0.569021) | 0.107241 / 0.296338 (-0.189098) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.643634 / 0.215209 (0.428425) | 6.391757 / 2.077655 (4.314103) | 3.011679 / 1.504120 (1.507559) | 2.379711 / 1.541195 (0.838517) | 2.387444 / 1.468490 (0.918954) | 0.823460 / 4.584777 (-3.761317) | 4.882240 / 3.745712 (1.136528) | 4.091170 / 5.269862 (-1.178691) | 2.688761 / 4.565676 (-1.876915) | 0.094555 / 0.424275 (-0.329720) | 0.008464 / 0.007607 (0.000857) | 0.665949 / 0.226044 (0.439905) | 6.948237 / 2.268929 (4.679309) | 3.384894 / 55.444624 (-52.059730) | 2.675570 / 6.876477 (-4.200907) | 3.073045 / 2.142072 (0.930973) | 0.969780 / 4.805227 (-3.835447) | 0.205859 / 6.500664 (-6.294805) | 0.072548 / 0.075469 (-0.002922) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.563869 / 1.841788 (-0.277919) | 22.431392 / 8.074308 (14.357084) | 19.434811 / 10.191392 (9.243419) | 0.255135 / 0.680424 (-0.425289) | 0.027799 / 0.534201 (-0.506402) | 0.427713 / 0.579283 (-0.151570) | 0.527030 / 0.434364 (0.092666) | 0.503660 / 0.540337 (-0.036678) | 0.730996 / 1.386936 (-0.655940) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#06c1940953807dbde4bc18af64bd3d87234edf00 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007597 / 0.011353 (-0.003756) | 0.004492 / 0.011008 (-0.006516) | 0.103779 / 0.038508 (0.065271) | 0.079287 / 0.023109 (0.056178) | 0.389651 / 0.275898 (0.113753) | 0.421955 / 0.323480 (0.098475) | 0.006023 / 0.007986 (-0.001963) | 0.003727 / 0.004328 (-0.000602) | 0.078604 / 0.004250 (0.074354) | 0.060810 / 0.037052 (0.023758) | 0.412170 / 0.258489 (0.153681) | 0.436218 / 0.293841 (0.142377) | 0.037282 / 0.128546 (-0.091264) | 0.010341 / 0.075646 (-0.065305) | 0.357652 / 0.419271 (-0.061620) | 0.063320 / 0.043533 (0.019788) | 0.389454 / 0.255139 (0.134315) | 0.433073 / 0.283200 (0.149874) | 0.028449 / 0.141683 (-0.113234) | 1.894107 / 1.452155 (0.441952) | 1.954190 / 1.492716 (0.461474) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224477 / 0.018006 (0.206471) | 0.510878 / 0.000490 (0.510388) | 0.005013 / 0.000200 (0.004813) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032976 / 0.037411 (-0.004436) | 0.101073 / 0.014526 (0.086547) | 0.113990 / 0.176557 (-0.062566) | 0.183499 / 0.737135 (-0.553636) | 0.114283 / 0.296338 (-0.182056) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.473242 / 0.215209 (0.258033) | 4.719800 / 2.077655 (2.642146) | 2.318732 / 1.504120 (0.814612) | 2.102336 / 1.541195 (0.561141) | 2.143618 / 1.468490 (0.675128) | 0.594122 / 4.584777 (-3.990654) | 4.265961 / 3.745712 (0.520249) | 3.794635 / 5.269862 (-1.475226) | 2.394506 / 4.565676 (-2.171170) | 0.070091 / 0.424275 (-0.354184) | 0.009222 / 0.007607 (0.001614) | 0.564496 / 0.226044 (0.338452) | 5.644348 / 2.268929 (3.375419) | 2.934395 / 55.444624 (-52.510229) | 2.429076 / 6.876477 (-4.447401) | 2.592010 / 2.142072 (0.449937) | 0.713371 / 4.805227 (-4.091856) | 0.165019 / 6.500664 (-6.335646) | 0.075913 / 0.075469 (0.000444) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.570836 / 1.841788 (-0.270951) | 22.569763 / 8.074308 (14.495455) | 17.159658 / 10.191392 (6.968266) | 0.185716 / 0.680424 (-0.494708) | 0.021938 / 0.534201 (-0.512263) | 0.487204 / 0.579283 (-0.092079) | 0.472776 / 0.434364 (0.038412) | 0.565052 / 0.540337 (0.024714) | 0.763322 / 1.386936 (-0.623614) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007810 / 0.011353 (-0.003543) | 0.005140 / 0.011008 (-0.005869) | 0.079018 / 0.038508 (0.040510) | 0.080899 / 0.023109 (0.057790) | 0.489213 / 0.275898 (0.213315) | 0.525334 / 0.323480 (0.201854) | 0.006992 / 0.007986 (-0.000994) | 0.003729 / 0.004328 (-0.000599) | 0.079277 / 0.004250 (0.075026) | 0.064883 / 0.037052 (0.027831) | 0.496718 / 0.258489 (0.238229) | 0.534976 / 0.293841 (0.241135) | 0.038790 / 0.128546 (-0.089756) | 0.010122 / 0.075646 (-0.065524) | 0.087669 / 0.419271 (-0.331603) | 0.057959 / 0.043533 (0.014426) | 0.490611 / 0.255139 (0.235472) | 0.518376 / 0.283200 (0.235176) | 0.026561 / 0.141683 (-0.115122) | 1.843241 / 1.452155 (0.391086) | 1.952367 / 1.492716 (0.459651) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.289799 / 0.018006 (0.271792) | 0.486999 / 0.000490 (0.486509) | 0.017481 / 0.000200 (0.017281) | 0.000122 / 0.000054 (0.000068) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037662 / 0.037411 (0.000250) | 0.113238 / 0.014526 (0.098712) | 0.123918 / 0.176557 (-0.052638) | 0.190484 / 0.737135 (-0.546652) | 0.126473 / 0.296338 (-0.169865) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.530622 / 0.215209 (0.315413) | 5.292093 / 2.077655 (3.214438) | 2.819354 / 1.504120 (1.315234) | 2.609821 / 1.541195 (1.068626) | 2.680090 / 1.468490 (1.211600) | 0.603490 / 4.584777 
(-3.981287) | 4.344541 / 3.745712 (0.598828) | 3.874001 / 5.269862 (-1.395861) | 2.445302 / 4.565676 (-2.120375) | 0.071173 / 0.424275 (-0.353102) | 0.009131 / 0.007607 (0.001524) | 0.627273 / 0.226044 (0.401229) | 6.278637 / 2.268929 (4.009709) | 3.433762 / 55.444624 (-52.010862) | 2.973400 / 6.876477 (-3.903077) | 3.188165 / 2.142072 (1.046093) | 0.722824 / 4.805227 (-4.082404) | 0.165154 / 6.500664 (-6.335510) | 0.075268 / 0.075469 (-0.000202) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.652994 / 1.841788 (-0.188794) | 23.309030 / 8.074308 (15.234722) | 18.135649 / 10.191392 (7.944257) | 0.177543 / 0.680424 (-0.502881) | 0.024784 / 0.534201 (-0.509417) | 0.489952 / 0.579283 (-0.089331) | 0.485368 / 0.434364 (0.051004) | 0.580583 / 0.540337 (0.040246) | 0.787843 / 1.386936 (-0.599093) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5982039f7814a204fe532240ca6aabe72430d834 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "A bug in `FixedSizeArray.flatten` in `PyArrow<10.0.0` makes CI fail. Colab installs 9.0.0 by default, so we should be able to set the minimal version to `10.0.0` soon. Keeping this PR as a draft in the meantime.", "Closing this PR in favor of https://github.com/huggingface/datasets/pull/6283" ]
Better list array values handling in cast/embed storage
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6213/reactions" }
PR_kwDODunzps5ZgHLO
{ "diff_url": "https://github.com/huggingface/datasets/pull/6213.diff", "html_url": "https://github.com/huggingface/datasets/pull/6213", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6213.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6213" }
2023-09-04T16:21:23Z
https://api.github.com/repos/huggingface/datasets/issues/6213/comments
Use [`array.flatten`](https://arrow.apache.org/docs/python/generated/pyarrow.ListArray.html#pyarrow.ListArray.flatten), which takes `.offset` into account, instead of `array.values` when casting/embedding array storage.
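For context, a minimal sketch (not taken from the PR itself) of the offset pitfall this change addresses: on a sliced `pyarrow.ListArray`, `.values` returns the child array of the whole underlying buffer, while `.flatten()` respects the slice's offset and length.

```python
import pyarrow as pa

arr = pa.array([[1, 2], [3, 4], [5, 6]])
sliced = arr.slice(1, 2)  # logically [[3, 4], [5, 6]], but offset is now 1

print(sliced.values.to_pylist())     # [1, 2, 3, 4, 5, 6] - ignores the offset
print(sliced.flatten().to_pylist())  # [3, 4, 5, 6]       - accounts for it
```

Using `.values` on such a sliced array during cast/embed would therefore process rows that are no longer part of the array, which is exactly why `.flatten()` is the safer choice here.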
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/6213/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6213/timeline
closed
false
6,213
null
2023-10-05T15:24:34Z
null
true
1,880,399,516
https://api.github.com/repos/huggingface/datasets/issues/6212
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6212/events
[]
null
2023-09-05T08:28:39Z
[]
https://github.com/huggingface/datasets/issues/6212
NONE
null
null
null
[ "Hi @exs-avianello, is it really needed? Note you can alternatively use `pathlib.Path` among others as it follows:\r\n\r\n```python\r\nimport datasets\r\nfrom pathlib import Path\r\n\r\n# save a parquet file at ~/path/to/data.parquet\r\n\r\ndata_files = Path.home() / \"path/to/data.parquet\"\r\ndataset = datasets.load_dataset(\"parquet\", data_files=data_files)\r\n```", "Hi @alvarobartt ! \r\n\r\nThis is definitely just a \"nice to have\" and I am personally more than happy to just use absolute paths client-side. I just wanted to flag it up in case it can help improve the package even more 🙌 It might not be immediately obvious from the stack trace that the error is triggered by the `~` in the path" ]
Tilde (~) is not supported for data_files
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6212/reactions" }
I_kwDODunzps5wFJ6c
null
2023-09-04T14:23:49Z
https://api.github.com/repos/huggingface/datasets/issues/6212/comments
### Describe the bug\r\n\r\nAttempting to `load_dataset` from a path starting with `~` (as a shorthand for the user's home directory) seems not to be fully working - at least as far as the `parquet` dataset builder is concerned.\r\n\r\n(the same file can be loaded correctly if providing its absolute path instead)\r\n\r\nI think that this is very similar to https://github.com/huggingface/datasets/issues/5757, but for `data_files` rather than `data_dir`\r\n\r\n### Steps to reproduce the bug\r\n\r\n```python\r\nimport datasets\r\n\r\n# save a parquet file at ~/path/to/data.parquet\r\n\r\ndata_files = "~/path/to/data.parquet"\r\ndataset = datasets.load_dataset("parquet", data_files=data_files)\r\n```\r\n\r\n```\r\nDownloading data files: 100%|██████████| 1/1 [00:00<00:00, 12671.61it/s]\r\nExtracting data files: 100%|██████████| 1/1 [00:00<00:00, 22671.91it/s]\r\nGenerating train split: 0 examples [00:00, ? examples/s]\r\nTraceback (most recent call last):\r\n  File ".venv/lib/python3.11/site-packages/datasets/builder.py", line 1949, in _prepare_split_single\r\n    num_examples, num_bytes = writer.finalize()\r\n                              ^^^^^^^^^^^^^^^^^\r\n  File ".venv/lib/python3.11/site-packages/datasets/arrow_writer.py", line 598, in finalize\r\n    raise SchemaInferenceError("Please pass `features` or at least one example when writing data")\r\ndatasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n  File ".venv/lib/python3.11/site-packages/datasets/load.py", line 2133, in load_dataset\r\n    builder_instance.download_and_prepare(\r\n  File ".venv/lib/python3.11/site-packages/datasets/builder.py", line 954, in download_and_prepare\r\n    self._download_and_prepare(\r\n  File ".venv/lib/python3.11/site-packages/datasets/builder.py", line 1049, in _download_and_prepare\r\n    self._prepare_split(split_generator, **prepare_split_kwargs)\r\n  File ".venv/lib/python3.11/site-packages/datasets/builder.py", line 1813, in _prepare_split\r\n    for job_id, done, content in self._prepare_split_single(\r\n  File ".venv/lib/python3.11/site-packages/datasets/builder.py", line 1958, in _prepare_split_single\r\n    raise DatasetGenerationError("An error occurred while generating the dataset") from e\r\ndatasets.builder.DatasetGenerationError: An error occurred while generating the dataset\r\n```\r\n\r\n### Expected behavior\r\n\r\nCan use `~` shorthand in paths when loading local (parquet) datasets.\r\n\r\n### Environment info\r\n\r\n`datasets 2.14.3`
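A minimal client-side workaround sketch until `~` is expanded by `datasets` itself (the path below is the hypothetical one from the report, not a real file):

```python
import os

import datasets

# expand "~" to the user's home directory before handing the path over
data_files = os.path.expanduser("~/path/to/data.parquet")
dataset = datasets.load_dataset("parquet", data_files=data_files)
```

This mirrors the `pathlib.Path` suggestion in the comments; both sidestep the issue by passing an already-absolute path to `load_dataset`.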
{ "avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4", "events_url": "https://api.github.com/users/exs-avianello/events{/privacy}", "followers_url": "https://api.github.com/users/exs-avianello/followers", "following_url": "https://api.github.com/users/exs-avianello/following{/other_user}", "gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/exs-avianello", "id": 128361578, "login": "exs-avianello", "node_id": "U_kgDOB6akag", "organizations_url": "https://api.github.com/users/exs-avianello/orgs", "received_events_url": "https://api.github.com/users/exs-avianello/received_events", "repos_url": "https://api.github.com/users/exs-avianello/repos", "site_admin": false, "starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions", "type": "User", "url": "https://api.github.com/users/exs-avianello" }
https://api.github.com/repos/huggingface/datasets/issues/6212/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6212/timeline
open
false
6,212
null
null
null
false
1,880,265,906
https://api.github.com/repos/huggingface/datasets/issues/6211
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6211/events
[]
null
2023-09-04T14:58:34Z
[]
https://github.com/huggingface/datasets/pull/6211
MEMBER
null
false
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007756 / 0.011353 (-0.003597) | 0.004733 / 0.011008 (-0.006275) | 0.095874 / 0.038508 (0.057366) | 0.081957 / 0.023109 (0.058848) | 0.426430 / 0.275898 (0.150532) | 0.457670 / 0.323480 (0.134190) | 0.004448 / 0.007986 (-0.003537) | 0.004956 / 0.004328 (0.000627) | 0.074195 / 0.004250 (0.069945) | 0.061101 / 0.037052 (0.024048) | 0.435134 / 0.258489 (0.176645) | 0.457245 / 0.293841 (0.163404) | 0.034945 / 0.128546 (-0.093601) | 0.010028 / 0.075646 (-0.065618) | 0.350724 / 0.419271 (-0.068548) | 0.064433 / 0.043533 (0.020901) | 0.417882 / 0.255139 (0.162743) | 0.445087 / 0.283200 (0.161887) | 0.027576 / 0.141683 (-0.114107) | 1.824066 / 1.452155 (0.371912) | 1.957568 / 1.492716 (0.464852) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238568 / 0.018006 (0.220562) | 0.505289 / 0.000490 (0.504799) | 0.003527 / 0.000200 (0.003327) | 0.000120 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032839 / 0.037411 (-0.004572) | 0.096708 / 0.014526 (0.082182) | 0.112100 / 0.176557 (-0.064456) | 0.177215 / 0.737135 (-0.559920) | 0.111273 / 0.296338 (-0.185066) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.475200 / 0.215209 (0.259991) | 4.725737 / 2.077655 (2.648082) | 2.414672 / 1.504120 (0.910552) | 2.196357 / 1.541195 (0.655162) | 2.329298 / 1.468490 
(0.860808) | 0.575258 / 4.584777 (-4.009519) | 4.343630 / 3.745712 (0.597918) | 3.837665 / 5.269862 (-1.432196) | 2.497970 / 4.565676 (-2.067706) | 0.066467 / 0.424275 (-0.357808) | 0.008680 / 0.007607 (0.001073) | 0.569923 / 0.226044 (0.343878) | 5.634230 / 2.268929 (3.365302) | 2.959222 / 55.444624 (-52.485402) | 2.535954 / 6.876477 (-4.340523) | 2.804844 / 2.142072 (0.662771) | 0.682000 / 4.805227 (-4.123227) | 0.158193 / 6.500664 (-6.342471) | 0.072315 / 0.075469 (-0.003154) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.578148 / 1.841788 (-0.263639) | 22.993419 / 8.074308 (14.919110) | 16.524477 / 10.191392 (6.333085) | 0.169415 / 0.680424 (-0.511009) | 0.021520 / 0.534201 (-0.512681) | 0.455970 / 0.579283 (-0.123313) | 0.489022 / 0.434364 (0.054658) | 0.535656 / 0.540337 (-0.004682) | 0.802341 / 1.386936 (-0.584595) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008002 / 0.011353 (-0.003351) | 0.005577 / 0.011008 (-0.005431) | 0.087803 / 0.038508 (0.049295) | 0.091285 / 0.023109 (0.068176) | 0.500514 / 0.275898 (0.224616) | 0.549770 / 0.323480 (0.226290) | 0.006125 / 0.007986 (-0.001861) | 0.004031 / 0.004328 (-0.000297) | 0.077941 / 0.004250 (0.073691) | 0.071419 / 0.037052 (0.034367) | 0.497570 / 0.258489 (0.239081) | 0.542454 / 0.293841 (0.248613) | 0.040827 / 0.128546 (-0.087719) | 0.011029 / 0.075646 (-0.064617) | 0.088788 / 0.419271 (-0.330484) | 0.056970 / 0.043533 (0.013438) | 0.523934 / 0.255139 (0.268795) | 0.552507 / 0.283200 (0.269308) | 0.029794 / 0.141683 (-0.111889) | 1.817778 / 1.452155 (0.365623) | 1.955843 / 1.492716 (0.463126) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246992 / 0.018006 (0.228986) | 0.467879 / 0.000490 (0.467390) | 0.005439 / 0.000200 (0.005239) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037774 / 0.037411 (0.000363) | 0.109332 / 0.014526 (0.094806) | 0.120103 / 0.176557 (-0.056454) | 0.185259 / 0.737135 (-0.551876) | 0.126189 / 0.296338 (-0.170149) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492856 / 0.215209 (0.277646) | 5.033209 / 2.077655 (2.955554) | 2.885551 / 1.504120 (1.381431) | 2.480304 / 1.541195 (0.939109) | 2.579092 / 1.468490 (1.110602) | 0.557671 / 4.584777 (-4.027106) | 4.352765 / 3.745712 (0.607053) | 4.039124 / 5.269862 (-1.230738) | 2.534342 / 4.565676 (-2.031335) | 0.067267 / 0.424275 (-0.357008) | 0.008891 / 0.007607 (0.001284) | 0.591592 / 0.226044 (0.365547) | 5.939982 / 2.268929 (3.671053) | 3.258389 / 55.444624 (-52.186235) | 2.843899 / 6.876477 (-4.032578) | 3.074217 / 2.142072 (0.932144) | 0.695065 / 4.805227 (-4.110162) | 0.156917 / 6.500664 (-6.343747) | 0.070185 / 0.075469 (-0.005284) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.586716 / 1.841788 (-0.255072) | 23.405837 / 8.074308 (15.331529) | 17.200851 / 10.191392 (7.009459) | 0.170073 / 0.680424 (-0.510351) | 0.023345 / 0.534201 (-0.510856) | 0.459192 / 0.579283 (-0.120091) | 0.477419 / 0.434364 (0.043055) | 0.558581 / 0.540337 (0.018244) | 0.814373 / 1.386936 (-0.572563) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#28bbe5667e6eaa1bb21685791fcf1a4ed1ef1777 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006050 / 0.011353 (-0.005303) | 0.003661 / 0.011008 (-0.007348) | 0.081753 / 0.038508 (0.043245) | 0.061275 / 0.023109 (0.038166) | 0.316278 / 0.275898 (0.040380) | 0.350783 / 0.323480 (0.027303) | 0.004694 / 0.007986 (-0.003291) | 0.003003 / 0.004328 (-0.001326) | 0.062877 / 0.004250 (0.058627) | 0.046985 / 0.037052 (0.009933) | 0.315698 / 0.258489 (0.057208) | 0.364607 / 0.293841 (0.070766) | 0.027365 / 0.128546 (-0.101181) | 0.008016 / 0.075646 (-0.067631) | 0.261379 / 0.419271 (-0.157893) | 0.045173 / 0.043533 (0.001640) | 0.313499 / 0.255139 (0.058360) | 0.339383 / 0.283200 (0.056184) | 0.020855 / 0.141683 (-0.120828) | 1.429851 / 1.452155 (-0.022303) | 1.506112 / 1.492716 (0.013396) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194872 / 0.018006 (0.176866) | 0.451951 / 0.000490 (0.451462) | 0.002790 / 0.000200 (0.002590) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024331 / 0.037411 (-0.013081) | 0.073156 / 0.014526 (0.058630) | 0.084054 / 0.176557 (-0.092502) | 0.145656 / 0.737135 (-0.591480) | 0.084998 / 0.296338 (-0.211340) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.391324 / 0.215209 (0.176115) | 3.898406 / 2.077655 (1.820751) | 1.891175 / 1.504120 (0.387055) | 1.698738 / 1.541195 (0.157543) | 1.774324 / 1.468490 (0.305834) | 0.495129 / 4.584777 (-4.089648) | 3.027027 / 3.745712 (-0.718685) | 2.821423 / 5.269862 (-2.448439) | 1.870761 / 4.565676 (-2.694915) | 0.057029 / 0.424275 (-0.367246) | 0.006715 / 0.007607 (-0.000892) | 0.465801 / 0.226044 (0.239757) | 4.650891 / 2.268929 (2.381962) | 2.425097 / 55.444624 (-53.019527) | 2.134731 / 6.876477 (-4.741745) | 2.312854 / 2.142072 (0.170781) | 0.589668 / 4.805227 (-4.215559) | 0.124673 / 6.500664 (-6.375991) | 0.060887 / 0.075469 (-0.014582) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.243622 / 1.841788 (-0.598166) | 18.501640 / 8.074308 (10.427332) | 13.853099 / 10.191392 (3.661707) | 0.130255 / 0.680424 (-0.550168) | 0.016824 / 0.534201 (-0.517377) | 0.332297 / 0.579283 (-0.246986) | 0.360346 / 0.434364 (-0.074018) | 0.388598 / 0.540337 (-0.151739) | 0.527551 / 
1.386936 (-0.859385) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006181 / 0.011353 (-0.005172) | 0.003688 / 0.011008 (-0.007320) | 0.063395 / 0.038508 (0.024887) | 0.062531 / 0.023109 (0.039422) | 0.446565 / 0.275898 (0.170667) | 0.485224 / 0.323480 (0.161744) | 0.004982 / 0.007986 (-0.003004) | 0.002961 / 0.004328 (-0.001367) | 0.063124 / 0.004250 (0.058874) | 0.050234 / 0.037052 (0.013182) | 0.449731 / 0.258489 (0.191242) | 0.487293 / 0.293841 (0.193452) | 0.028528 / 0.128546 (-0.100018) | 0.008210 / 0.075646 (-0.067436) | 0.069520 / 0.419271 (-0.349751) | 0.041026 / 0.043533 (-0.002507) | 0.451370 / 0.255139 (0.196231) | 0.469151 / 0.283200 (0.185951) | 0.021076 / 0.141683 (-0.120607) | 1.439185 / 1.452155 (-0.012970) | 1.492634 / 1.492716 (-0.000082) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235932 / 0.018006 (0.217926) | 0.430070 / 0.000490 (0.429581) | 0.007347 / 0.000200 (0.007147) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026102 / 0.037411 (-0.011309) | 0.081333 / 0.014526 (0.066807) | 0.090111 / 0.176557 (-0.086446) | 0.144578 / 0.737135 (-0.592557) | 0.091961 / 0.296338 (-0.204378) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455761 / 0.215209 (0.240552) | 4.536345 / 2.077655 (2.458690) | 2.496833 / 1.504120 (0.992713) | 2.323325 / 1.541195 (0.782130) | 2.388364 / 1.468490 (0.919873) 
| 0.512010 / 4.584777 (-4.072767) | 3.106268 / 3.745712 (-0.639444) | 2.879224 / 5.269862 (-2.390637) | 1.893859 / 4.565676 (-2.671818) | 0.059131 / 0.424275 (-0.365144) | 0.006763 / 0.007607 (-0.000844) | 0.528205 / 0.226044 (0.302161) | 5.296649 / 2.268929 (3.027720) | 2.933787 / 55.444624 (-52.510838) | 2.598258 / 6.876477 (-4.278218) | 2.768195 / 2.142072 (0.626123) | 0.597430 / 4.805227 (-4.207797) | 0.125865 / 6.500664 (-6.374799) | 0.061684 / 0.075469 (-0.013785) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.341194 / 1.841788 (-0.500594) | 18.948225 / 8.074308 (10.873917) | 14.912340 / 10.191392 (4.720948) | 0.146905 / 0.680424 (-0.533519) | 0.017952 / 0.534201 (-0.516249) | 0.332299 / 0.579283 (-0.246984) | 0.362733 / 0.434364 (-0.071631) | 0.388278 / 0.540337 (-0.152060) | 0.546436 / 1.386936 (-0.840500) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cb4f8357de001df656f2ea7af27625e189c3995b \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008314 / 0.011353 (-0.003038) | 0.004904 / 0.011008 (-0.006105) | 0.097486 / 0.038508 (0.058978) | 0.074627 / 0.023109 (0.051518) | 0.396395 / 0.275898 (0.120497) | 0.440519 / 0.323480 (0.117039) | 0.005964 / 0.007986 (-0.002022) | 0.004203 / 0.004328 (-0.000126) | 0.079998 / 0.004250 (0.075747) | 0.055158 / 0.037052 (0.018106) | 0.415439 / 0.258489 (0.156950) | 0.476101 / 0.293841 (0.182260) | 0.044761 / 0.128546 (-0.083785) | 0.013966 / 0.075646 (-0.061680) | 0.351279 / 0.419271 (-0.067993) | 0.067250 / 0.043533 (0.023717) | 0.414310 / 0.255139 (0.159171) | 0.458104 / 0.283200 (0.174904) | 0.033678 / 0.141683 (-0.108005) | 1.730539 / 1.452155 (0.278385) | 1.840013 / 1.492716 (0.347297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.272708 / 0.018006 (0.254702) | 0.593563 / 0.000490 (0.593074) | 0.005153 / 0.000200 (0.004953) | 
0.000179 / 0.000054 (0.000125) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029595 / 0.037411 (-0.007816) | 0.087994 / 0.014526 (0.073469) | 0.106066 / 0.176557 (-0.070491) | 0.180491 / 0.737135 (-0.556644) | 0.103707 / 0.296338 (-0.192631) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.566711 / 0.215209 (0.351502) | 5.589034 / 2.077655 (3.511380) | 2.364034 / 1.504120 (0.859914) | 2.119050 / 1.541195 (0.577855) | 2.103823 / 1.468490 (0.635333) | 0.819906 / 4.584777 (-3.764871) | 5.178464 / 3.745712 (1.432752) | 4.433986 / 5.269862 (-0.835875) | 2.825470 / 4.565676 (-1.740207) | 0.096907 / 0.424275 (-0.327368) | 0.008573 / 0.007607 (0.000966) | 0.677607 / 0.226044 (0.451563) | 6.811090 / 2.268929 (4.542162) | 3.140923 / 55.444624 (-52.303701) | 2.492251 / 6.876477 (-4.384225) | 2.660231 / 2.142072 (0.518158) | 0.980573 / 4.805227 (-3.824655) | 0.209028 / 6.500664 (-6.291636) | 0.079413 / 0.075469 (0.003944) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.578861 / 1.841788 (-0.262926) | 22.518269 / 8.074308 (14.443961) | 21.335916 / 10.191392 (11.144524) | 0.211311 / 0.680424 (-0.469113) | 0.033216 / 0.534201 (-0.500985) | 0.473266 / 0.579283 (-0.106017) | 0.581650 / 0.434364 (0.147286) | 0.522442 / 0.540337 (-0.017895) | 0.729039 / 1.386936 (-0.657897) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008349 / 0.011353 (-0.003003) | 0.005856 / 0.011008 (-0.005152) | 0.077855 / 0.038508 (0.039347) | 0.080608 / 0.023109 (0.057499) | 0.512533 / 0.275898 (0.236635) | 0.551862 / 0.323480 (0.228382) | 0.007004 / 0.007986 (-0.000982) | 0.004147 / 0.004328 (-0.000181) | 0.086625 / 0.004250 (0.082374) | 0.065962 / 0.037052 (0.028910) | 0.545590 / 0.258489 (0.287101) | 0.586313 / 0.293841 (0.292472) | 0.048719 / 0.128546 (-0.079827) | 0.014997 / 0.075646 (-0.060649) | 0.089510 / 0.419271 (-0.329761) | 0.060936 / 0.043533 (0.017404) | 0.498455 / 0.255139 (0.243316) | 0.535460 / 0.283200 (0.252260) | 0.034624 / 0.141683 (-0.107059) | 1.717401 / 1.452155 (0.265246) | 1.808772 / 1.492716 (0.316056) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.379504 / 0.018006 (0.361497) | 0.601756 / 0.000490 (0.601266) | 0.061740 / 0.000200 (0.061540) | 0.000497 / 0.000054 (0.000442) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031215 / 0.037411 (-0.006196) | 0.097501 / 0.014526 (0.082975) | 0.117434 / 0.176557 (-0.059122) | 0.166014 / 0.737135 (-0.571121) | 0.116466 / 0.296338 (-0.179873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.699444 / 0.215209 (0.484235) | 6.329332 / 2.077655 (4.251678) | 3.072812 / 1.504120 (1.568693) | 2.729878 / 1.541195 (1.188683) | 2.933785 / 1.468490 (1.465295) | 0.935858 / 4.584777 (-3.648919) | 5.532532 / 3.745712 (1.786820) | 4.677139 / 5.269862 (-0.592722) | 2.963527 / 4.565676 (-1.602149) | 0.099661 / 0.424275 (-0.324614) | 0.009095 / 0.007607 (0.001488) | 0.751158 / 0.226044 (0.525114) | 7.652588 / 2.268929 (5.383660) | 3.802005 / 55.444624 (-51.642619) | 3.163126 / 6.876477 (-3.713351) | 3.401125 / 2.142072 (1.259052) | 0.998627 / 4.805227 (-3.806600) | 0.203310 / 6.500664 (-6.297354) | 0.073827 / 0.075469 (-0.001642) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.662989 / 1.841788 (-0.178799) | 23.777818 / 8.074308 (15.703510) | 20.855378 / 10.191392 (10.663986) | 0.279892 / 0.680424 (-0.400532) | 0.029303 / 0.534201 (-0.504898) | 0.473681 / 0.579283 (-0.105602) | 0.579148 / 0.434364 (0.144784) | 0.546931 / 0.540337 (0.006593) | 0.769740 / 1.386936 (-0.617196) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#63114e9cb78fe02dc145f923dec13d545a8d0327 \"CML watermark\")\n" ]
Fix empty splitinfo json
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6211/reactions" }
PR_kwDODunzps5Ze-pv
{ "diff_url": "https://github.com/huggingface/datasets/pull/6211.diff", "html_url": "https://github.com/huggingface/datasets/pull/6211", "merged_at": "2023-09-04T14:47:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/6211.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6211" }
2023-09-04T13:13:53Z
https://api.github.com/repos/huggingface/datasets/issues/6211/comments
If a split is empty, then the JSON split info should mention num_bytes = 0 and num_examples = 0. Until now they were omitted because the JSON dumps ignore the fields that are equal to the default values. This is needed in datasets-server since we parse this information for the viewer
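To illustrate the failure mode, here is a simplified sketch (not the actual `SplitInfo` serializer in `datasets`): a dump that skips default-valued dataclass fields silently drops `num_bytes = 0` and `num_examples = 0` for an empty split, so a consumer such as datasets-server cannot tell "empty" from "unknown".

```python
from dataclasses import dataclass, fields


@dataclass
class SplitInfo:  # simplified stand-in for datasets' SplitInfo
    name: str = ""
    num_bytes: int = 0
    num_examples: int = 0


def to_json_dict(info: SplitInfo, skip_defaults: bool = True) -> dict:
    # a serializer that omits fields equal to their dataclass defaults
    return {
        f.name: getattr(info, f.name)
        for f in fields(info)
        if not (skip_defaults and getattr(info, f.name) == f.default)
    }


empty = SplitInfo(name="train")
print(to_json_dict(empty))                       # {'name': 'train'} - sizes lost
print(to_json_dict(empty, skip_defaults=False))  # keeps num_bytes=0, num_examples=0
```

The fix amounts to always emitting these two fields, i.e. the `skip_defaults=False` behavior for them.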
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6211/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6211/timeline
closed
false
6,211
null
2023-09-04T14:47:17Z
null
true
1,879,649,731
https://api.github.com/repos/huggingface/datasets/issues/6210
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6210/events
[]
null
2023-09-04T07:40:23Z
[]
https://github.com/huggingface/datasets/pull/6210
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006494 / 0.011353 (-0.004859) | 0.003896 / 0.011008 (-0.007112) | 0.083940 / 0.038508 (0.045432) | 0.068335 / 0.023109 (0.045225) | 0.365770 / 0.275898 (0.089872) | 0.403702 / 0.323480 (0.080222) | 0.004005 / 0.007986 (-0.003981) | 0.003276 / 0.004328 (-0.001052) | 0.064877 / 0.004250 (0.060626) | 0.053524 / 0.037052 (0.016472) | 0.372951 / 0.258489 (0.114462) | 0.420935 / 0.293841 (0.127094) | 0.030656 / 0.128546 (-0.097890) | 0.009048 / 0.075646 (-0.066599) | 0.287607 / 0.419271 (-0.131665) | 0.052042 / 0.043533 (0.008509) | 0.371446 / 0.255139 (0.116307) | 0.408781 / 0.283200 (0.125581) | 0.024228 / 0.141683 (-0.117455) | 1.483325 / 1.452155 (0.031170) | 1.544321 / 1.492716 (0.051605) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212355 / 0.018006 (0.194349) | 0.463298 / 0.000490 (0.462808) | 0.005170 / 0.000200 (0.004970) | 0.000087 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027824 / 0.037411 (-0.009587) | 0.081880 / 0.014526 (0.067354) | 0.094886 / 0.176557 (-0.081670) | 0.150024 / 0.737135 (-0.587111) | 0.096643 / 0.296338 (-0.199696) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.388521 / 0.215209 (0.173312) | 3.877251 / 2.077655 (1.799596) | 
1.931085 / 1.504120 (0.426965) | 1.766525 / 1.541195 (0.225330) | 1.814802 / 1.468490 (0.346312) | 0.489478 / 4.584777 (-4.095299) | 3.570973 / 3.745712 (-0.174739) | 3.190211 / 5.269862 (-2.079651) | 2.015670 / 4.565676 (-2.550006) | 0.057773 / 0.424275 (-0.366503) | 0.007611 / 0.007607 (0.000004) | 0.462162 / 0.226044 (0.236117) | 4.616173 / 2.268929 (2.347244) | 2.360531 / 55.444624 (-53.084094) | 2.053680 / 6.876477 (-4.822797) | 2.228057 / 2.142072 (0.085985) | 0.584921 / 4.805227 (-4.220306) | 0.132470 / 6.500664 (-6.368194) | 0.060482 / 0.075469 (-0.014987) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263393 / 1.841788 (-0.578394) | 19.416841 / 8.074308 (11.342532) | 14.049032 / 10.191392 (3.857640) | 0.162822 / 0.680424 (-0.517602) | 0.018189 / 0.534201 (-0.516012) | 0.391142 / 0.579283 (-0.188141) | 0.409367 / 0.434364 (-0.024997) | 0.454589 / 0.540337 (-0.085748) | 0.632946 / 1.386936 (-0.753990) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006568 / 0.011353 (-0.004785) | 0.004026 / 0.011008 (-0.006982) | 0.064522 / 0.038508 (0.026014) | 0.071738 / 0.023109 (0.048629) | 0.395771 / 0.275898 (0.119873) | 0.421553 / 0.323480 (0.098073) | 0.005291 / 0.007986 (-0.002694) | 0.003266 / 0.004328 (-0.001063) | 0.064464 / 0.004250 (0.060214) | 0.054622 / 0.037052 (0.017569) | 0.395010 / 0.258489 (0.136521) | 0.433895 / 0.293841 (0.140054) | 0.031670 / 0.128546 (-0.096876) | 0.008536 / 0.075646 (-0.067111) | 0.071059 / 0.419271 (-0.348212) | 0.047117 / 0.043533 (0.003584) | 0.391210 / 0.255139 (0.136071) | 0.411685 / 0.283200 (0.128486) | 0.022779 / 0.141683 (-0.118904) | 1.479900 / 1.452155 (0.027746) | 1.551853 / 1.492716 (0.059137) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.332814 / 0.018006 (0.314807) | 0.460654 / 0.000490 (0.460164) | 0.062257 / 0.000200 (0.062057) | 0.000374 / 0.000054 (0.000319) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031801 / 0.037411 (-0.005610) | 0.090730 / 0.014526 (0.076204) | 0.102955 / 0.176557 (-0.073602) | 0.155928 / 0.737135 (-0.581207) | 0.103028 / 0.296338 (-0.193310) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434460 / 0.215209 (0.219251) | 4.331550 / 2.077655 (2.253895) | 2.335990 / 1.504120 (0.831870) | 2.183985 / 1.541195 (0.642790) | 2.233086 / 1.468490 (0.764595) | 0.488484 / 4.584777 (-4.096293) | 3.603856 / 3.745712 (-0.141856) | 3.229833 / 5.269862 (-2.040029) | 2.007366 / 4.565676 (-2.558311) | 0.057658 / 0.424275 (-0.366617) | 0.007339 / 0.007607 (-0.000268) | 0.512812 / 0.226044 (0.286768) | 5.141497 / 2.268929 (2.872569) | 2.847383 / 55.444624 (-52.597241) | 2.467010 / 6.876477 (-4.409467) | 2.644995 / 2.142072 (0.502923) | 0.581385 / 4.805227 (-4.223842) | 0.130755 / 6.500664 (-6.369909) | 0.058834 / 0.075469 (-0.016635) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.350162 / 1.841788 (-0.491626) | 19.768412 / 8.074308 (11.694104) | 15.079196 / 10.191392 (4.887804) | 0.167083 / 0.680424 (-0.513341) | 0.020372 / 0.534201 (-0.513829) | 0.402685 / 0.579283 (-0.176598) | 0.408338 / 0.434364 (-0.026026) | 0.476788 / 0.540337 (-0.063550) | 0.654765 / 1.386936 (-0.732171) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ff803c7e9f256c5a137c25c090e18d844f9fc6e4 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008047 / 0.011353 (-0.003305) | 0.004662 / 0.011008 (-0.006346) | 0.102487 / 0.038508 (0.063978) | 0.096832 / 0.023109 (0.073723) | 0.375298 / 0.275898 (0.099400) | 0.420604 / 0.323480 (0.097124) | 0.004655 / 0.007986 (-0.003330) | 0.005699 / 0.004328 (0.001370) | 0.077681 / 0.004250 (0.073430) | 0.065987 / 0.037052 (0.028935) | 0.393146 / 0.258489 (0.134657) | 0.436324 / 0.293841 (0.142483) | 0.036168 / 0.128546 (-0.092378) | 0.010398 / 0.075646 (-0.065248) | 0.347579 / 0.419271 (-0.071693) | 0.061723 / 0.043533 (0.018190) | 0.377439 / 0.255139 (0.122300) | 0.416666 / 0.283200 (0.133467) | 0.031874 / 0.141683 (-0.109809) | 1.818885 / 1.452155 (0.366730) | 1.904749 / 1.492716 (0.412032) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.240497 / 0.018006 (0.222491) | 0.507907 / 0.000490 (0.507417) | 0.004574 / 0.000200 (0.004374) | 0.000098 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033504 / 0.037411 (-0.003907) | 0.102919 / 0.014526 (0.088393) | 0.113014 / 0.176557 (-0.063543) | 0.181111 / 0.737135 (-0.556024) | 0.115047 / 0.296338 (-0.181291) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.453640 / 0.215209 (0.238431) | 4.514604 / 2.077655 (2.436949) | 2.219758 / 1.504120 (0.715638) | 2.004735 / 1.541195 (0.463541) | 2.112817 / 1.468490 (0.644327) | 0.579534 / 4.584777 (-4.005243) | 4.095994 / 3.745712 (0.350282) | 3.887204 / 5.269862 (-1.382658) | 2.461755 / 4.565676 (-2.103921) | 0.068930 / 0.424275 (-0.355345) | 0.009102 / 0.007607 (0.001495) | 0.540031 / 0.226044 (0.313987) | 5.394324 / 2.268929 (3.125396) | 2.738906 / 55.444624 (-52.705719) | 2.332041 / 6.876477 (-4.544436) | 2.600764 / 2.142072 (0.458692) | 0.697859 / 4.805227 (-4.107368) | 0.159247 / 6.500664 (-6.341417) | 0.073339 / 0.075469 (-0.002130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.561082 / 1.841788 (-0.280706) | 23.581031 / 8.074308 (15.506723) | 17.011085 / 10.191392 (6.819693) | 0.196115 / 0.680424 (-0.484308) | 0.022050 / 0.534201 (-0.512151) | 0.470865 / 0.579283 (-0.108418) | 0.480539 / 0.434364 (0.046175) | 0.546458 / 0.540337 
(0.006120) | 0.744353 / 1.386936 (-0.642583) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007884 / 0.011353 (-0.003468) | 0.004723 / 0.011008 (-0.006286) | 0.076431 / 0.038508 (0.037923) | 0.087016 / 0.023109 (0.063907) | 0.501880 / 0.275898 (0.225982) | 0.546286 / 0.323480 (0.222806) | 0.006224 / 0.007986 (-0.001762) | 0.003858 / 0.004328 (-0.000471) | 0.076485 / 0.004250 (0.072234) | 0.066758 / 0.037052 (0.029706) | 0.510090 / 0.258489 (0.251601) | 0.553935 / 0.293841 (0.260094) | 0.037785 / 0.128546 (-0.090761) | 0.009946 / 0.075646 (-0.065700) | 0.084001 / 0.419271 (-0.335270) | 0.056732 / 0.043533 (0.013199) | 0.490724 / 0.255139 (0.235585) | 0.528367 / 0.283200 (0.245168) | 0.026082 / 0.141683 (-0.115601) | 1.769200 / 1.452155 (0.317045) | 1.847559 / 1.492716 (0.354843) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.306752 / 0.018006 (0.288745) | 0.481215 / 0.000490 (0.480725) | 0.048231 / 0.000200 (0.048031) | 0.000249 / 0.000054 (0.000194) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039517 / 0.037411 (0.002106) | 0.112884 / 0.014526 (0.098359) | 0.123858 / 0.176557 (-0.052698) | 0.188260 / 0.737135 (-0.548875) | 0.125819 / 0.296338 (-0.170520) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.515260 / 0.215209 (0.300051) | 5.125038 / 2.077655 (3.047383) | 2.785122 / 1.504120 (1.281003) | 2.590753 / 1.541195 (1.049558) | 2.682084 / 
1.468490 (1.213594) | 0.581162 / 4.584777 (-4.003615) | 4.241776 / 3.745712 (0.496063) | 3.860979 / 5.269862 (-1.408883) | 2.434203 / 4.565676 (-2.131473) | 0.068580 / 0.424275 (-0.355695) | 0.008700 / 0.007607 (0.001093) | 0.604712 / 0.226044 (0.378667) | 6.044240 / 2.268929 (3.775311) | 3.379734 / 55.444624 (-52.064890) | 2.968906 / 6.876477 (-3.907571) | 3.195775 / 2.142072 (1.053703) | 0.702431 / 4.805227 (-4.102796) | 0.158752 / 6.500664 (-6.341912) | 0.072795 / 0.075469 (-0.002674) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.616354 / 1.841788 (-0.225434) | 24.258731 / 8.074308 (16.184423) | 17.505483 / 10.191392 (7.314091) | 0.173445 / 0.680424 (-0.506979) | 0.023215 / 0.534201 (-0.510986) | 0.472975 / 0.579283 (-0.106308) | 0.478425 / 0.434364 (0.044061) | 0.566950 / 0.540337 (0.026612) | 0.767648 / 1.386936 (-0.619288) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a1d520a5226268f2c6f0303de3e8bfd72198b082 \"CML watermark\")\n" ]
Temporarily pin fsspec < 2023.9.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6210/reactions" }
PR_kwDODunzps5Zc4JF
{ "diff_url": "https://github.com/huggingface/datasets/pull/6210.diff", "html_url": "https://github.com/huggingface/datasets/pull/6210", "merged_at": "2023-09-04T07:30:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/6210.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6210" }
2023-09-04T07:07:07Z
https://api.github.com/repos/huggingface/datasets/issues/6210/comments
Temporarily pin fsspec < 2023.9.0 until a permanent solution is found. Hot fix #6209.
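For context, a temporary upper-bound pin like this is typically expressed in `setup.py`'s `install_requires`. The excerpt below is a minimal sketch of such a pin, not the exact diff of this PR; the surrounding requirement strings are illustrative assumptions.

```python
# setup.py (excerpt) -- minimal sketch of a temporary upper-bound pin.
# Only the fsspec line reflects the pin described above; drop the upper
# bound once a permanent fix for fsspec 2023.9.0 is in place.
from setuptools import setup

setup(
    name="datasets",
    install_requires=[
        # fsspec 2023.9.0 broke data file resolution (see issue #6209),
        # so exclude it until the incompatibility is resolved.
        "fsspec[http]>=2021.11.1,<2023.9.0",
    ],
)
```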
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6210/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6210/timeline
closed
false
6,210
null
2023-09-04T07:30:00Z
null
true
1,879,622,000
https://api.github.com/repos/huggingface/datasets/issues/6209
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6209/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2023-09-04T07:30:01Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6209
MEMBER
completed
null
null
[]
CI is broken with AssertionError: 3 failed, 12 errors
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6209/reactions" }
I_kwDODunzps5wCMFw
null
2023-09-04T06:47:05Z
https://api.github.com/repos/huggingface/datasets/issues/6209/comments
Our CI is broken: 3 failed, 12 errors See: https://github.com/huggingface/datasets/actions/runs/6069947111/job/16465138041 ``` =========================== short test summary info ============================ FAILED tests/test_load.py::ModuleFactoryTest::test_LocalDatasetModuleFactoryWithoutScript_with_data_dir - AssertionError: assert ({NamedSplit('train'): ['/tmp/pytest-of-runner/pytest-0/popen-gw1/test_LocalDatasetModuleFactory2/data_dir2/subdir1/train.txt', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_LocalDatasetModuleFactory2/data_dir2/subdir1/train.txt'], NamedSplit('test'): ['/tmp/pytest-of-runner/pytest-0/popen-gw1/test_LocalDatasetModuleFactory2/data_dir2/subdir1/test.txt', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_LocalDatasetModuleFactory2/data_dir2/subdir1/test.txt']} is not None and 2 == 1) + where 2 = len(['/tmp/pytest-of-runner/pytest-0/popen-gw1/test_LocalDatasetModuleFactory2/data_dir2/subdir1/train.txt', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_LocalDatasetModuleFactory2/data_dir2/subdir1/train.txt']) FAILED tests/test_load.py::test_load_dataset_arrow[False] - AssertionError: assert 20 == 10 + where 20 = Dataset({\n features: ['col_1'],\n num_rows: 20\n}).num_rows FAILED tests/test_load.py::test_load_dataset_arrow[True] - assert 20 == 10 ERROR tests/packaged_modules/test_audiofolder.py::test_data_files_with_metadata_and_multiple_splits[jsonl-False] - AssertionError: assert 6 == 3 + where 6 = len(['/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_2/audiofolder_data_dir_with_metadata/train/audio_file.wav', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_2/audiofolder_data_dir_with_metadata/train/audio_file2.wav', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_2/audiofolder_data_dir_with_metadata/train/metadata.jsonl', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_2/audiofolder_data_dir_with_metadata/train/audio_file.wav', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_2/audiofolder_data_dir_with_metadata/train/audio_file2.wav', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_2/audiofolder_data_dir_with_metadata/train/metadata.jsonl']) ERROR tests/packaged_modules/test_audiofolder.py::test_data_files_with_metadata_and_multiple_splits[jsonl-True] - AssertionError: assert 6 == 3 + where 6 = len(['/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_3/audiofolder_data_dir_with_metadata/train/audio_file.wav', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_3/audiofolder_data_dir_with_metadata/train/audio_file2.wav', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_3/audiofolder_data_dir_with_metadata/train/metadata.jsonl', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_3/audiofolder_data_dir_with_metadata/train/audio_file.wav', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_3/audiofolder_data_dir_with_metadata/train/audio_file2.wav', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_3/audiofolder_data_dir_with_metadata/train/metadata.jsonl']) ERROR tests/packaged_modules/test_audiofolder.py::test_data_files_with_metadata_and_multiple_splits[csv-False] - AssertionError: assert 6 == 3 + where 6 = len(['/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_4/audiofolder_data_dir_with_metadata/train/audio_file.wav', 
'/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_4/audiofolder_data_dir_with_metadata/train/audio_file2.wav', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_4/audiofolder_data_dir_with_metadata/train/metadata.csv', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_4/audiofolder_data_dir_with_metadata/train/audio_file.wav', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_4/audiofolder_data_dir_with_metadata/train/audio_file2.wav', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_4/audiofolder_data_dir_with_metadata/train/metadata.csv']) ERROR tests/packaged_modules/test_audiofolder.py::test_data_files_with_metadata_and_multiple_splits[csv-True] - AssertionError: assert 6 == 3 + where 6 = len(['/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_5/audiofolder_data_dir_with_metadata/train/audio_file.wav', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_5/audiofolder_data_dir_with_metadata/train/audio_file2.wav', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_5/audiofolder_data_dir_with_metadata/train/metadata.csv', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_5/audiofolder_data_dir_with_metadata/train/audio_file.wav', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_5/audiofolder_data_dir_with_metadata/train/audio_file2.wav', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_5/audiofolder_data_dir_with_metadata/train/metadata.csv']) ERROR tests/packaged_modules/test_folder_based_builder.py::test_data_files_with_metadata_and_splits[1-False] - AssertionError: assert 6 == 3 + where 6 = len(['/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_3/autofolder_data_dir_with_metadata_two_splits/train/file.txt', '/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_3/autofolder_data_dir_with_metadata_two_splits/train/file2.txt', '/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_3/autofolder_data_dir_with_metadata_two_splits/train/metadata.jsonl', '/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_3/autofolder_data_dir_with_metadata_two_splits/train/file.txt', '/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_3/autofolder_data_dir_with_metadata_two_splits/train/file2.txt', '/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_3/autofolder_data_dir_with_metadata_two_splits/train/metadata.jsonl']) ERROR tests/packaged_modules/test_folder_based_builder.py::test_data_files_with_metadata_and_splits[1-True] - AssertionError: assert 6 == 3 + where 6 = len(['/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_4/autofolder_data_dir_with_metadata_two_splits/train/file.txt', '/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_4/autofolder_data_dir_with_metadata_two_splits/train/file2.txt', '/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_4/autofolder_data_dir_with_metadata_two_splits/train/metadata.jsonl', '/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_4/autofolder_data_dir_with_metadata_two_splits/train/file.txt', '/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_4/autofolder_data_dir_with_metadata_two_splits/train/file2.txt', 
'/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_4/autofolder_data_dir_with_metadata_two_splits/train/metadata.jsonl']) ERROR tests/packaged_modules/test_folder_based_builder.py::test_data_files_with_metadata_and_splits[2-False] - AssertionError: assert 6 == 3 + where 6 = len(['/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_5/autofolder_data_dir_with_metadata_two_splits/train/file.txt', '/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_5/autofolder_data_dir_with_metadata_two_splits/train/file2.txt', '/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_5/autofolder_data_dir_with_metadata_two_splits/train/metadata.jsonl', '/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_5/autofolder_data_dir_with_metadata_two_splits/train/file.txt', '/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_5/autofolder_data_dir_with_metadata_two_splits/train/file2.txt', '/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_5/autofolder_data_dir_with_metadata_two_splits/train/metadata.jsonl']) ERROR tests/packaged_modules/test_imagefolder.py::test_data_files_with_metadata_and_multiple_splits[jsonl-False] - AssertionError: assert 6 == 3 + where 6 = len(['/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_12/imagefolder_data_dir_with_metadata_two_splits/train/image_rgb.jpg', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_12/imagefolder_data_dir_with_metadata_two_splits/train/image_rgb2.jpg', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_12/imagefolder_data_dir_with_metadata_two_splits/train/metadata.jsonl', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_12/imagefolder_data_dir_with_metadata_two_splits/train/image_rgb.jpg', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_12/imagefolder_data_dir_with_metadata_two_splits/train/image_rgb2.jpg', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_12/imagefolder_data_dir_with_metadata_two_splits/train/metadata.jsonl']) ERROR tests/packaged_modules/test_imagefolder.py::test_data_files_with_metadata_and_multiple_splits[jsonl-True] - AssertionError: assert 6 == 3 + where 6 = len(['/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_13/imagefolder_data_dir_with_metadata_two_splits/train/image_rgb.jpg', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_13/imagefolder_data_dir_with_metadata_two_splits/train/image_rgb2.jpg', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_13/imagefolder_data_dir_with_metadata_two_splits/train/metadata.jsonl', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_13/imagefolder_data_dir_with_metadata_two_splits/train/image_rgb.jpg', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_13/imagefolder_data_dir_with_metadata_two_splits/train/image_rgb2.jpg', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_13/imagefolder_data_dir_with_metadata_two_splits/train/metadata.jsonl']) ERROR tests/packaged_modules/test_folder_based_builder.py::test_data_files_with_metadata_and_splits[2-True] - AssertionError: assert 6 == 3 + where 6 = len(['/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_6/autofolder_data_dir_with_metadata_two_splits/train/file.txt', 
'/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_6/autofolder_data_dir_with_metadata_two_splits/train/file2.txt', '/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_6/autofolder_data_dir_with_metadata_two_splits/train/metadata.jsonl', '/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_6/autofolder_data_dir_with_metadata_two_splits/train/file.txt', '/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_6/autofolder_data_dir_with_metadata_two_splits/train/file2.txt', '/tmp/pytest-of-runner/pytest-0/popen-gw0/test_data_files_with_metadata_6/autofolder_data_dir_with_metadata_two_splits/train/metadata.jsonl']) ERROR tests/packaged_modules/test_imagefolder.py::test_data_files_with_metadata_and_multiple_splits[csv-False] - AssertionError: assert 6 == 3 + where 6 = len(['/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_14/imagefolder_data_dir_with_metadata_two_splits/train/image_rgb.jpg', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_14/imagefolder_data_dir_with_metadata_two_splits/train/image_rgb2.jpg', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_14/imagefolder_data_dir_with_metadata_two_splits/train/metadata.csv', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_14/imagefolder_data_dir_with_metadata_two_splits/train/image_rgb.jpg', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_14/imagefolder_data_dir_with_metadata_two_splits/train/image_rgb2.jpg', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_14/imagefolder_data_dir_with_metadata_two_splits/train/metadata.csv']) ERROR tests/packaged_modules/test_imagefolder.py::test_data_files_with_metadata_and_multiple_splits[csv-True] - AssertionError: assert 6 == 3 + where 6 = len(['/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_15/imagefolder_data_dir_with_metadata_two_splits/train/image_rgb.jpg', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_15/imagefolder_data_dir_with_metadata_two_splits/train/image_rgb2.jpg', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_15/imagefolder_data_dir_with_metadata_two_splits/train/metadata.csv', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_15/imagefolder_data_dir_with_metadata_two_splits/train/image_rgb.jpg', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_15/imagefolder_data_dir_with_metadata_two_splits/train/image_rgb2.jpg', '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_data_files_with_metadata_15/imagefolder_data_dir_with_metadata_two_splits/train/metadata.csv']) = 3 failed, 2383 passed, 26 skipped, 9 warnings, 12 errors in 280.79s (0:04:40) = ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6209/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6209/timeline
closed
false
6,209
null
2023-09-04T07:30:01Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
1,879,572,646
https://api.github.com/repos/huggingface/datasets/issues/6208
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6208/events
[]
null
2023-09-04T09:22:19Z
[]
https://github.com/huggingface/datasets/pull/6208
MEMBER
null
false
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006797 / 0.011353 (-0.004556) | 0.003966 / 0.011008 (-0.007042) | 0.085296 / 0.038508 (0.046788) | 0.076873 / 0.023109 (0.053764) | 0.355795 / 0.275898 (0.079897) | 0.397132 / 0.323480 (0.073652) | 0.005325 / 0.007986 (-0.002660) | 0.003343 / 0.004328 (-0.000986) | 0.064966 / 0.004250 (0.060716) | 0.054519 / 0.037052 (0.017467) | 0.357864 / 0.258489 (0.099374) | 0.409238 / 0.293841 (0.115397) | 0.031620 / 0.128546 (-0.096926) | 0.008529 / 0.075646 (-0.067117) | 0.288502 / 0.419271 (-0.130769) | 0.053260 / 0.043533 (0.009728) | 0.355245 / 0.255139 (0.100106) | 0.384139 / 0.283200 (0.100939) | 0.024507 / 0.141683 (-0.117176) | 1.494696 / 1.452155 (0.042541) | 1.579847 / 1.492716 (0.087130) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204011 / 0.018006 (0.186005) | 0.451729 / 0.000490 (0.451239) | 0.004628 / 0.000200 (0.004428) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028342 / 0.037411 (-0.009069) | 0.084647 / 0.014526 (0.070121) | 0.096174 / 0.176557 (-0.080383) | 0.151753 / 0.737135 (-0.585382) | 0.096347 / 0.296338 (-0.199991) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.387179 / 0.215209 (0.171970) | 3.861552 / 2.077655 (1.783898) | 1.844033 / 1.504120 (0.339913) | 1.678811 / 1.541195 (0.137616) | 1.793207 / 1.468490 
(0.324717) | 0.485836 / 4.584777 (-4.098941) | 3.566274 / 3.745712 (-0.179438) | 3.269888 / 5.269862 (-1.999974) | 2.042850 / 4.565676 (-2.522827) | 0.057088 / 0.424275 (-0.367187) | 0.007627 / 0.007607 (0.000019) | 0.460510 / 0.226044 (0.234465) | 4.602019 / 2.268929 (2.333090) | 2.390984 / 55.444624 (-53.053641) | 1.976150 / 6.876477 (-4.900327) | 2.193394 / 2.142072 (0.051322) | 0.582775 / 4.805227 (-4.222453) | 0.133408 / 6.500664 (-6.367256) | 0.060577 / 0.075469 (-0.014893) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.248505 / 1.841788 (-0.593283) | 19.771301 / 8.074308 (11.696993) | 14.327871 / 10.191392 (4.136479) | 0.155288 / 0.680424 (-0.525136) | 0.018310 / 0.534201 (-0.515891) | 0.393664 / 0.579283 (-0.185619) | 0.410578 / 0.434364 (-0.023786) | 0.459301 / 0.540337 (-0.081037) | 0.631921 / 1.386936 (-0.755015) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006827 / 0.011353 (-0.004526) | 0.004094 / 0.011008 (-0.006915) | 0.065299 / 0.038508 (0.026791) | 0.079496 / 0.023109 (0.056387) | 0.403661 / 0.275898 (0.127763) | 0.434449 / 0.323480 (0.110969) | 0.005398 / 0.007986 (-0.002588) | 0.003410 / 0.004328 (-0.000919) | 0.064832 / 0.004250 (0.060582) | 0.056303 / 0.037052 (0.019250) | 0.397848 / 0.258489 (0.139359) | 0.438244 / 0.293841 (0.144403) | 0.032637 / 0.128546 (-0.095909) | 0.008584 / 0.075646 (-0.067063) | 0.071406 / 0.419271 (-0.347866) | 0.048265 / 0.043533 (0.004732) | 0.397814 / 0.255139 (0.142675) | 0.421601 / 0.283200 (0.138402) | 0.023815 / 0.141683 (-0.117868) | 1.504814 / 1.452155 (0.052659) | 1.577185 / 1.492716 (0.084469) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231775 / 0.018006 (0.213769) | 0.445437 / 0.000490 (0.444948) | 0.005252 / 0.000200 (0.005052) | 0.000093 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032777 / 0.037411 (-0.004634) | 0.095054 / 0.014526 (0.080528) | 0.106429 / 0.176557 (-0.070127) | 0.160111 / 0.737135 (-0.577024) | 0.108075 / 0.296338 (-0.188263) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426034 / 0.215209 (0.210825) | 4.244668 / 2.077655 (2.167013) | 2.257938 / 1.504120 (0.753818) | 2.087993 / 1.541195 (0.546798) | 2.170878 / 1.468490 (0.702387) | 0.485228 / 4.584777 (-4.099549) | 3.725912 / 3.745712 (-0.019800) | 3.286925 / 5.269862 (-1.982937) | 2.059929 / 4.565676 (-2.505748) | 0.057813 / 0.424275 (-0.366462) | 0.007518 / 0.007607 (-0.000089) | 0.506632 / 0.226044 (0.280588) | 5.048340 / 2.268929 (2.779411) | 2.744756 / 55.444624 (-52.699869) | 2.406636 / 6.876477 (-4.469841) | 2.617552 / 2.142072 (0.475480) | 0.588476 / 4.805227 (-4.216751) | 0.133518 / 6.500664 (-6.367146) | 0.060778 / 0.075469 (-0.014691) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.356416 / 1.841788 (-0.485372) | 20.467516 / 8.074308 (12.393208) | 15.265443 / 10.191392 (5.074051) | 0.169201 / 0.680424 (-0.511223) | 0.020087 / 0.534201 (-0.514114) | 0.402332 / 0.579283 (-0.176951) | 0.414848 / 0.434364 (-0.019516) | 0.470422 / 0.540337 (-0.069916) | 0.647266 / 1.386936 (-0.739670) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#eb001b4cee7f1d71e393c3ad489a8a5cd8119df5 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005804 / 0.011353 (-0.005549) | 0.003519 / 0.011008 (-0.007489) | 0.080003 / 0.038508 (0.041495) | 0.055419 / 0.023109 (0.032309) | 0.395254 / 0.275898 (0.119356) | 0.432714 / 0.323480 (0.109234) | 0.004438 / 0.007986 (-0.003548) | 0.002832 / 0.004328 (-0.001496) | 0.062026 / 0.004250 (0.057775) | 0.044334 / 0.037052 (0.007282) | 0.401278 / 0.258489 (0.142789) | 0.451516 / 0.293841 (0.157675) | 0.026791 / 0.128546 (-0.101755) | 0.007946 / 0.075646 (-0.067700) | 0.265166 / 0.419271 (-0.154106) | 0.044119 / 0.043533 (0.000586) | 0.399621 / 0.255139 (0.144482) | 0.422808 / 0.283200 (0.139609) | 0.019998 / 0.141683 (-0.121685) | 1.433559 / 1.452155 (-0.018596) | 1.596902 / 1.492716 (0.104186) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195662 / 0.018006 (0.177656) | 0.423167 / 0.000490 (0.422677) | 0.003426 / 0.000200 (0.003227) | 0.000066 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023318 / 0.037411 (-0.014094) | 0.072532 / 0.014526 (0.058006) | 0.082181 / 0.176557 (-0.094375) | 0.142214 / 0.737135 (-0.594921) | 0.083423 / 0.296338 (-0.212915) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402270 / 0.215209 (0.187061) | 4.027607 / 2.077655 (1.949953) | 2.059803 / 1.504120 (0.555684) | 1.865115 / 1.541195 (0.323920) | 1.934976 / 1.468490 (0.466485) | 0.502145 / 4.584777 (-4.082632) | 2.970865 / 3.745712 (-0.774847) | 2.784155 / 5.269862 (-2.485707) | 1.822003 / 4.565676 (-2.743673) | 0.057699 / 0.424275 (-0.366576) | 0.006668 / 0.007607 (-0.000939) | 0.471164 / 0.226044 (0.245120) | 4.733079 / 2.268929 (2.464150) | 2.445119 / 55.444624 (-52.999505) | 2.132956 / 6.876477 (-4.743521) | 2.335998 / 2.142072 (0.193926) | 0.594881 / 4.805227 (-4.210347) | 0.125801 / 6.500664 (-6.374863) | 0.060780 / 0.075469 (-0.014689) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.233170 / 1.841788 (-0.608618) | 17.942205 / 8.074308 (9.867897) | 13.587020 / 10.191392 (3.395628) | 0.142110 / 0.680424 (-0.538314) | 0.016600 / 0.534201 (-0.517601) | 0.328659 / 0.579283 (-0.250624) | 0.347759 / 0.434364 (-0.086605) | 0.378651 / 0.540337 (-0.161687) | 0.523474 / 
1.386936 (-0.863462) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006028 / 0.011353 (-0.005325) | 0.003552 / 0.011008 (-0.007456) | 0.062175 / 0.038508 (0.023667) | 0.057602 / 0.023109 (0.034493) | 0.444585 / 0.275898 (0.168687) | 0.471238 / 0.323480 (0.147758) | 0.004562 / 0.007986 (-0.003423) | 0.002871 / 0.004328 (-0.001457) | 0.063101 / 0.004250 (0.058851) | 0.046072 / 0.037052 (0.009020) | 0.448253 / 0.258489 (0.189764) | 0.478734 / 0.293841 (0.184893) | 0.028463 / 0.128546 (-0.100084) | 0.008090 / 0.075646 (-0.067557) | 0.068142 / 0.419271 (-0.351130) | 0.040517 / 0.043533 (-0.003016) | 0.447145 / 0.255139 (0.192006) | 0.469472 / 0.283200 (0.186273) | 0.019391 / 0.141683 (-0.122291) | 1.471195 / 1.452155 (0.019040) | 1.532966 / 1.492716 (0.040249) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.259894 / 0.018006 (0.241888) | 0.412987 / 0.000490 (0.412497) | 0.020780 / 0.000200 (0.020580) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026352 / 0.037411 (-0.011060) | 0.080024 / 0.014526 (0.065498) | 0.088041 / 0.176557 (-0.088516) | 0.142987 / 0.737135 (-0.594148) | 0.090108 / 0.296338 (-0.206231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.458874 / 0.215209 (0.243665) | 4.573005 / 2.077655 (2.495351) | 2.507885 / 1.504120 (1.003765) | 2.335432 / 1.541195 (0.794238) | 2.379617 / 1.468490 (0.911126) | 
0.503331 / 4.584777 (-4.081446) | 3.078284 / 3.745712 (-0.667428) | 2.750580 / 5.269862 (-2.519282) | 1.828100 / 4.565676 (-2.737577) | 0.057572 / 0.424275 (-0.366703) | 0.006553 / 0.007607 (-0.001054) | 0.532283 / 0.226044 (0.306239) | 5.310584 / 2.268929 (3.041656) | 2.943559 / 55.444624 (-52.501065) | 2.587544 / 6.876477 (-4.288932) | 2.718261 / 2.142072 (0.576188) | 0.590267 / 4.805227 (-4.214961) | 0.123229 / 6.500664 (-6.377435) | 0.060219 / 0.075469 (-0.015250) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.340773 / 1.841788 (-0.501014) | 18.420766 / 8.074308 (10.346458) | 14.630550 / 10.191392 (4.439158) | 0.146666 / 0.680424 (-0.533758) | 0.017905 / 0.534201 (-0.516296) | 0.332483 / 0.579283 (-0.246801) | 0.355490 / 0.434364 (-0.078874) | 0.382618 / 0.540337 (-0.157720) | 0.531336 / 1.386936 (-0.855600) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d438617fc577bc0222527714edafea0c52ebf239 \"CML watermark\")\n", "There were CI errors unrelated to this PR.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008702 / 0.011353 (-0.002651) | 0.005060 / 0.011008 (-0.005948) | 0.097017 / 0.038508 (0.058509) | 0.073740 / 0.023109 (0.050631) | 0.435138 / 0.275898 (0.159240) | 0.512776 / 0.323480 (0.189296) | 0.006186 / 0.007986 (-0.001800) | 0.003970 / 0.004328 (-0.000358) | 0.089523 / 0.004250 (0.085273) | 0.054441 / 0.037052 (0.017389) | 0.447415 / 0.258489 (0.188926) | 0.464851 / 0.293841 (0.171010) | 0.050264 / 0.128546 (-0.078283) | 0.016643 / 0.075646 (-0.059004) | 0.350565 / 0.419271 (-0.068707) | 0.071220 / 0.043533 (0.027687) | 0.432531 / 0.255139 (0.177392) | 0.472994 / 0.283200 (0.189795) | 0.040229 / 0.141683 (-0.101454) | 1.743431 / 1.452155 (0.291276) | 1.778653 / 1.492716 (0.285936) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261992 / 0.018006 (0.243986) | 0.571979 / 0.000490 
(0.571489) | 0.006270 / 0.000200 (0.006071) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027821 / 0.037411 (-0.009590) | 0.081874 / 0.014526 (0.067348) | 0.103725 / 0.176557 (-0.072831) | 0.170593 / 0.737135 (-0.566542) | 0.108749 / 0.296338 (-0.187590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.690774 / 0.215209 (0.475565) | 6.770902 / 2.077655 (4.693247) | 2.887218 / 1.504120 (1.383098) | 2.456226 / 1.541195 (0.915032) | 2.509422 / 1.468490 (1.040932) | 0.768451 / 4.584777 (-3.816326) | 4.988933 / 3.745712 (1.243221) | 4.151460 / 5.269862 (-1.118402) | 2.640472 / 4.565676 (-1.925205) | 0.093522 / 0.424275 (-0.330753) | 0.008614 / 0.007607 (0.001007) | 0.696281 / 0.226044 (0.470237) | 6.721077 / 2.268929 (4.452149) | 3.229760 / 55.444624 (-52.214864) | 2.668521 / 6.876477 (-4.207956) | 2.866420 / 2.142072 (0.724347) | 0.945328 / 4.805227 (-3.859899) | 0.197645 / 6.500664 (-6.303019) | 0.074442 / 0.075469 (-0.001027) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.630468 / 1.841788 (-0.211320) | 22.991661 / 8.074308 (14.917353) | 19.816919 / 10.191392 (9.625527) | 0.257410 / 0.680424 (-0.423014) | 0.027228 / 0.534201 (-0.506973) | 0.444515 / 0.579283 (-0.134768) | 0.597067 / 0.434364 (0.162703) | 0.528151 / 0.540337 (-0.012186) | 0.771276 / 1.386936 (-0.615660) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | 
write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009154 / 0.011353 (-0.002199) | 0.004648 / 0.011008 (-0.006360) | 0.073054 / 0.038508 (0.034546) | 0.077146 / 0.023109 (0.054037) | 0.481659 / 0.275898 (0.205761) | 0.516985 / 0.323480 (0.193505) | 0.007447 / 0.007986 (-0.000538) | 0.003890 / 0.004328 (-0.000438) | 0.078701 / 0.004250 (0.074450) | 0.059183 / 0.037052 (0.022131) | 0.475350 / 0.258489 (0.216861) | 0.547834 / 0.293841 (0.253993) | 0.058440 / 0.128546 (-0.070106) | 0.013563 / 0.075646 (-0.062083) | 0.084320 / 0.419271 (-0.334951) | 0.065965 / 0.043533 (0.022433) | 0.483541 / 0.255139 (0.228402) | 0.513940 / 0.283200 (0.230740) | 0.042889 / 0.141683 (-0.098794) | 1.676050 / 1.452155 (0.223895) | 1.759206 / 1.492716 (0.266489) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274848 / 0.018006 (0.256841) | 0.588965 / 0.000490 (0.588475) | 0.006312 / 0.000200 (0.006112) | 0.000120 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033871 / 0.037411 (-0.003540) | 0.104013 / 0.014526 (0.089487) | 0.118457 / 0.176557 (-0.058099) | 0.178268 / 0.737135 (-0.558868) | 0.116972 / 0.296338 (-0.179366) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.609952 / 0.215209 (0.394743) | 5.788754 / 2.077655 (3.711100) | 2.812166 / 1.504120 (1.308046) | 2.362861 / 1.541195 (0.821666) | 2.641295 / 1.468490 (1.172804) | 0.767601 / 4.584777 (-3.817176) | 5.027439 / 3.745712 (1.281727) | 4.612511 / 5.269862 (-0.657351) | 2.654364 / 4.565676 (-1.911312) | 0.103100 / 0.424275 (-0.321175) | 0.012233 / 0.007607 (0.004626) | 0.749283 / 0.226044 (0.523238) | 7.511093 / 2.268929 (5.242165) | 3.585867 / 55.444624 (-51.858757) | 3.255110 / 6.876477 (-3.621366) | 3.260174 / 2.142072 (1.118102) | 0.958422 / 4.805227 (-3.846806) | 0.209096 / 6.500664 (-6.291568) | 0.075014 / 0.075469 (-0.000455) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.728283 / 1.841788 (-0.113504) | 25.411147 / 8.074308 (17.336839) | 21.335202 / 10.191392 (11.143810) | 0.199090 / 0.680424 (-0.481334) | 0.031288 / 0.534201 (-0.502913) | 0.449226 / 0.579283 (-0.130057) | 0.555570 / 0.434364 (0.121206) | 0.570297 / 0.540337 (0.029960) | 0.758673 / 1.386936 (-0.628263) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fa696b4b4f0d11c5b8592eb31cb1d54a707e3d33 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006862 / 0.011353 (-0.004491) | 0.003959 / 0.011008 (-0.007049) | 0.087219 / 0.038508 (0.048711) | 0.078335 / 0.023109 (0.055226) | 0.319019 / 0.275898 (0.043121) | 0.342871 / 0.323480 (0.019391) | 0.004065 / 0.007986 (-0.003921) | 0.004346 / 0.004328 (0.000017) | 0.065243 / 0.004250 (0.060993) | 0.056698 / 0.037052 (0.019646) | 0.326906 / 0.258489 (0.068417) | 0.354323 / 0.293841 (0.060482) | 0.031252 / 0.128546 (-0.097295) | 0.008587 / 0.075646 (-0.067060) | 0.300323 / 0.419271 (-0.118948) | 0.052810 / 0.043533 (0.009277) | 0.323866 / 0.255139 (0.068727) | 0.346011 / 0.283200 (0.062811) | 0.025584 / 0.141683 (-0.116099) | 1.464475 / 1.452155 (0.012320) | 1.530868 / 1.492716 (0.038152) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208927 / 0.018006 (0.190921) | 0.454147 / 0.000490 (0.453657) | 0.003945 / 0.000200 (0.003746) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029901 / 0.037411 (-0.007511) | 0.088889 / 0.014526 (0.074363) | 0.098181 / 0.176557 (-0.078375) | 0.156787 / 0.737135 (-0.580349) | 0.099015 / 0.296338 (-0.197324) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384981 / 0.215209 
(0.169772) | 3.831040 / 2.077655 (1.753386) | 1.858312 / 1.504120 (0.354192) | 1.686846 / 1.541195 (0.145651) | 1.771509 / 1.468490 (0.303019) | 0.485618 / 4.584777 (-4.099159) | 3.430961 / 3.745712 (-0.314751) | 3.264489 / 5.269862 (-2.005372) | 2.040125 / 4.565676 (-2.525551) | 0.057218 / 0.424275 (-0.367057) | 0.007640 / 0.007607 (0.000033) | 0.468072 / 0.226044 (0.242027) | 4.677214 / 2.268929 (2.408286) | 2.348425 / 55.444624 (-53.096199) | 1.994352 / 6.876477 (-4.882125) | 2.217020 / 2.142072 (0.074948) | 0.587467 / 4.805227 (-4.217760) | 0.133550 / 6.500664 (-6.367114) | 0.060571 / 0.075469 (-0.014898) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.271003 / 1.841788 (-0.570785) | 19.986365 / 8.074308 (11.912057) | 14.574046 / 10.191392 (4.382654) | 0.146212 / 0.680424 (-0.534212) | 0.018320 / 0.534201 (-0.515881) | 0.394524 / 0.579283 (-0.184759) | 0.399707 / 0.434364 (-0.034657) | 0.458965 / 0.540337 (-0.081372) | 0.619940 / 1.386936 (-0.766996) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006982 / 0.011353 (-0.004371) | 0.004061 / 0.011008 (-0.006947) | 0.064520 / 0.038508 (0.026012) | 0.076828 / 0.023109 (0.053719) | 0.402989 / 0.275898 (0.127090) | 0.439697 / 0.323480 (0.116217) | 0.005511 / 0.007986 (-0.002475) | 0.003378 / 0.004328 (-0.000950) | 0.064727 / 0.004250 (0.060477) | 0.058114 / 0.037052 (0.021062) | 0.402054 / 0.258489 (0.143565) | 0.442377 / 0.293841 (0.148536) | 0.032808 / 0.128546 (-0.095738) | 0.008604 / 0.075646 (-0.067043) | 0.070994 / 0.419271 (-0.348278) | 0.048738 / 0.043533 (0.005205) | 0.399786 / 0.255139 (0.144647) | 0.423537 / 0.283200 (0.140338) | 0.022397 / 0.141683 (-0.119286) | 1.504613 / 1.452155 (0.052458) | 1.571064 / 1.492716 (0.078348) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226876 / 0.018006 (0.208870) | 0.451477 / 0.000490 (0.450987) | 0.004511 / 0.000200 (0.004311) | 0.000095 / 0.000054 
(0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032998 / 0.037411 (-0.004413) | 0.095843 / 0.014526 (0.081317) | 0.105684 / 0.176557 (-0.070873) | 0.158175 / 0.737135 (-0.578960) | 0.107297 / 0.296338 (-0.189041) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434912 / 0.215209 (0.219703) | 4.326394 / 2.077655 (2.248740) | 2.287310 / 1.504120 (0.783190) | 2.127987 / 1.541195 (0.586793) | 2.202485 / 1.468490 (0.733995) | 0.494305 / 4.584777 (-4.090472) | 3.575176 / 3.745712 (-0.170536) | 3.354358 / 5.269862 (-1.915504) | 2.074293 / 4.565676 (-2.491383) | 0.058967 / 0.424275 (-0.365308) | 0.007712 / 0.007607 (0.000105) | 0.513734 / 0.226044 (0.287690) | 5.107538 / 2.268929 (2.838610) | 2.776190 / 55.444624 (-52.668434) | 2.425051 / 6.876477 (-4.451426) | 2.666715 / 2.142072 (0.524643) | 0.598844 / 4.805227 (-4.206383) | 0.134186 / 6.500664 (-6.366478) | 0.062403 / 0.075469 (-0.013066) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.346730 / 1.841788 (-0.495058) | 20.533190 / 8.074308 (12.458882) | 15.174443 / 10.191392 (4.983051) | 0.167204 / 0.680424 (-0.513219) | 0.020619 / 0.534201 (-0.513582) | 0.399033 / 0.579283 (-0.180250) | 0.394428 / 0.434364 (-0.039936) | 0.468792 / 0.540337 (-0.071545) | 0.640122 / 1.386936 (-0.746814) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2c4c2b529e2a262a5006e4caa55fbc003378006a \"CML watermark\")\n" ]
Do not filter out .zip extensions from no-script datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6208/reactions" }
PR_kwDODunzps5ZcnpJ
{ "diff_url": "https://github.com/huggingface/datasets/pull/6208.diff", "html_url": "https://github.com/huggingface/datasets/pull/6208", "merged_at": "2023-09-04T09:13:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/6208.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6208" }
2023-09-04T06:07:12Z
https://api.github.com/repos/huggingface/datasets/issues/6208/comments
This PR is a hotfix of: - #6207 That issue was caused by a PR that introduced the filtering out of `.zip` extensions. This PR reverts that filtering. Fix #6207. Maybe we should do patch releases: the bug was introduced in 2.13.1. CC: @lhoestq
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6208/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6208/timeline
closed
false
6,208
null
2023-09-04T09:13:32Z
null
true
1,879,555,234
https://api.github.com/repos/huggingface/datasets/issues/6207
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6207/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2023-09-04T09:13:33Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6207
MEMBER
completed
null
null
[]
No-script datasets with ZIP files do not load
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6207/reactions" }
I_kwDODunzps5wB7yi
null
2023-09-04T05:50:27Z
https://api.github.com/repos/huggingface/datasets/issues/6207/comments
While investigating an issue on a Hub dataset, I have discovered that no-script datasets containing ZIP files do not load. For example, a no-script dataset containing ZIP files raises NonMatchingSplitsSizesError: ```python In [2]: ds = load_dataset("sidovic/LearningQ-qg") NonMatchingSplitsSizesError: [ { 'expected': SplitInfo(name='train', num_bytes=0, num_examples=188660, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, shard_lengths=None, dataset_name='learning_q-qg') }, { 'expected': SplitInfo(name='validation', num_bytes=0, num_examples=20630, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='validation', num_bytes=0, num_examples=0, shard_lengths=None, dataset_name='learning_q-qg') }, { 'expected': SplitInfo(name='test', num_bytes=0, num_examples=18227, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, shard_lengths=None, dataset_name='learning_q-qg') } ] ``` As another example, a no-script dataset containing just a (CSV)-ZIP file raises a DatasetGenerationError: ``` > num_examples, num_bytes = writer.finalize() src/datasets/builder.py:1949: > raise SchemaInferenceError("Please pass `features` or at least one example when writing data") E datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data src/datasets/arrow_writer.py:598: SchemaInferenceError The above exception was the direct cause of the following exception: src/datasets/load.py:2143: in load_dataset builder_instance.download_and_prepare( src/datasets/builder.py:954: in download_and_prepare self._download_and_prepare( src/datasets/builder.py:1049: in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) src/datasets/builder.py:1813: in _prepare_split for job_id, done, content in self._prepare_split_single( > raise DatasetGenerationError("An error occurred while generating the dataset") from e E datasets.builder.DatasetGenerationError: An error occurred while generating the dataset src/datasets/builder.py:1958: DatasetGenerationError ``` After investigating, I think this bug was introduced in this PR: - #5972 Related to: - https://huggingface.co/datasets/sidovic/LearningQ-qg/discussions/1 CC: @lhoestq
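For anyone debugging this, split-size verification can be skipped so the (empty) dataset still loads for inspection. This is a workaround sketch only, not a fix for the underlying filtering bug; `verification_mode` replaces the deprecated `ignore_verifications` flag:

```python
from datasets import load_dataset

# Skipping verification only hides the symptom: the splits still come out
# empty because the ZIP archives were filtered out during file resolution.
ds = load_dataset("sidovic/LearningQ-qg", verification_mode="no_checks")
print(ds)  # expect num_rows == 0 for every split until the bug is fixed
```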
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6207/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6207/timeline
closed
false
6,207
null
2023-09-04T09:13:33Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
1,879,473,745
https://api.github.com/repos/huggingface/datasets/issues/6206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6206/events
[]
null
2024-04-17T15:53:29Z
[]
https://github.com/huggingface/datasets/issues/6206
NONE
completed
null
null
[ "I solved the problem by modifying the \"self DEFAULT_WRITER_BATCH_SIZE\" in \"class MyDataset (datasets. GeneratorBasedBuilder) : __init__\"", "same problem, and this solution worked me also - you can set this var by setting the keyword argument `writer_batch_size=...` in `load_dataset(...,writer_batch_size=...)`" ]
When calling load_dataset, raise error: pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6206/reactions" }
I_kwDODunzps5wBn5R
null
2023-09-04T04:14:00Z
https://api.github.com/repos/huggingface/datasets/issues/6206/comments
### Describe the bug When calling load_dataset, the following error is raised: ``` Traceback (most recent call last): File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py", line 1694, in _prepare_split_single writer.write(example, key) File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/arrow_writer.py", line 490, in write self.write_examples_on_file() File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/arrow_writer.py", line 448, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/arrow_writer.py", line 559, in write_batch self.write_table(pa_table, writer_batch_size) File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/arrow_writer.py", line 571, in write_table pa_table = pa_table.combine_chunks() ^^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow/table.pxi", line 3439, in pyarrow.lib.Table.combine_chunks File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays The above exception was the direct cause of the following exception: Traceback (most recent call last): dataset = load_dataset( ^^^^^^^^^^^^^ File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/load.py", line 2133, in load_dataset builder_instance.download_and_prepare( File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py", line 954, in download_and_prepare self._download_and_prepare( File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py", line 1717, in _download_and_prepare super()._download_and_prepare( File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py", line 1049, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py", line 1555, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py", line 1712, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset Setting num_proc from 8 back to 1 for the train split to disable multiprocessing as it only contains one shard. 09/04/2023 12:02:04 - WARNING - datasets.builder - Setting num_proc from 8 back to 1 for the train split to disable multiprocessing as it only contains one shard. ``` ### Steps to reproduce the bug Call load_dataset with large images as a feature. ### Expected behavior No error is raised. ### Environment info - `datasets` version: 2.14.3 - Platform: Linux-6.2.0-31-generic-x86_64-with-glibc2.35 - Python version: 3.11.4 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
{ "avatar_url": "https://avatars.githubusercontent.com/u/51043929?v=4", "events_url": "https://api.github.com/users/aihao2000/events{/privacy}", "followers_url": "https://api.github.com/users/aihao2000/followers", "following_url": "https://api.github.com/users/aihao2000/following{/other_user}", "gists_url": "https://api.github.com/users/aihao2000/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aihao2000", "id": 51043929, "login": "aihao2000", "node_id": "MDQ6VXNlcjUxMDQzOTI5", "organizations_url": "https://api.github.com/users/aihao2000/orgs", "received_events_url": "https://api.github.com/users/aihao2000/received_events", "repos_url": "https://api.github.com/users/aihao2000/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aihao2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aihao2000/subscriptions", "type": "User", "url": "https://api.github.com/users/aihao2000" }
https://api.github.com/repos/huggingface/datasets/issues/6206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6206/timeline
closed
false
6,206
null
2023-09-04T06:05:49Z
null
false
1,877,491,602
https://api.github.com/repos/huggingface/datasets/issues/6203
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6203/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2023-09-15T15:11:27Z
[]
https://github.com/huggingface/datasets/issues/6203
NONE
completed
null
null
[ "(cross-posting from the linked DVC issue)\r\n\r\nI think this should already work out of the box with the current `datasets` and `dvc.api` releases by passing the correct `storage_options` into the datasets calls. `storage_options` is essentially just the kwargs dict that gets passed to the fsspec fs constructor.\r\n\r\nThe main thing to note here is that the fsspec DVCFileSystem URL should be `dvc://folder/file.json` (i.e. this should be the DVCFileSystem path that is relative to the DVC repo root). You cannot use a URL like `https://gitlab.com/user/repo/folder/file.json`.\r\n\r\nI think something like this should work for you (in a venv where both DVC and datasets are installed):\r\n```python\r\nimport datasets\r\n\r\n# load a dataset from Git/DVC repository where Git repo is located at https://gitlab.com/user/repo.git\r\n# and path to dataset (relative to git/dvc repo root) is 'folder/file.json'\r\ndatasets.load_from_disk(\r\n \"dvc://folder/file.json\",\r\n storage_options={\"url\": \"https://gitlab.com/user/repo.git\"},\r\n)\r\n```\r\n\r\nbasically the `dvc://` is what tells fsspec to create a `DVCFileSystem` and it will construct it like\r\n```python\r\nfs = DVCFileSystem(**storage_options)\r\n```\r\n\r\nThen the subsequent calls use the rest of the `dvc://...` URL like \r\n```python\r\nfs.exists(\"folder/file.json\")\r\n```", "Hi @pmrowla Thank you for your help, that's very helpful, I was indeed using `fsspec` incorrectly here. There is still an issue with `datasets`:\r\n\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset(\"json\", data_files=\"dvc://folder/file.jsonl\", storage_options={\"url\": \"https://gitlab.com/repo/folder/\"})\r\n```\r\n\r\nresults in the following exception:\r\n\r\n```\r\nTraceback (most recent call last): \r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/scmrepo/fs.py\", line 217, in info\r\n ret = self.trie.info(key)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/scmrepo/git/objects.py\", line 141, in info\r\n obj = self.trie[key]\r\n ~~~~~~~~~^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/pygtrie.py\", line 937, in __getitem__\r\n node, _ = self._get_node(key_or_slice)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/pygtrie.py\", line 630, in _get_node\r\n raise KeyError(key)\r\nKeyError: ('dvc:', 'datasets', 'spider', 'train.jsonl')\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/load.py\", line 2129, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/load.py\", line 1815, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/load.py\", line 1430, in dataset_module_factory\r\n ).get_module()\r\n ^^^^^^^^^^^^\r\n File 
\"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/load.py\", line 958, in get_module\r\n data_files = DataFilesDict.from_patterns(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/data_files.py\", line 674, in from_patterns\r\n DataFilesList.from_patterns(\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/data_files.py\", line 589, in from_patterns\r\n origin_metadata = _get_origin_metadata(data_files, download_config=download_config)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/data_files.py\", line 504, in _get_origin_metadata\r\n return thread_map(\r\n ^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/tqdm/contrib/concurrent.py\", line 69, in thread_map\r\n return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/tqdm/contrib/concurrent.py\", line 51, in _executor_map\r\n return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/.pyenv/versions/3.11.4/lib/python3.11/concurrent/futures/_base.py\", line 619, in result_iterator\r\n yield _result_or_cancel(fs.pop())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/.pyenv/versions/3.11.4/lib/python3.11/concurrent/futures/_base.py\", line 317, in _result_or_cancel\r\n return fut.result(timeout)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/.pyenv/versions/3.11.4/lib/python3.11/concurrent/futures/_base.py\", line 456, in result\r\n return self.__get_result()\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/.pyenv/versions/3.11.4/lib/python3.11/concurrent/futures/_base.py\", line 401, in __get_result\r\n raise self._exception\r\n File \"/Users/bilelomrani/.pyenv/versions/3.11.4/lib/python3.11/concurrent/futures/thread.py\", line 58, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/data_files.py\", line 491, in _get_single_origin_metadata\r\n info = fs.info(data_file)\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/dvc/fs/dvc.py\", line 357, in info\r\n return self._info(key, path, ignore_subrepos=ignore_subrepos)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/dvc/fs/dvc.py\", line 377, in _info\r\n fs_info = fs.info(fs_path)\r\n ^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/dvc_objects/fs/base.py\", line 501, in info\r\n return self.fs.info(path, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File 
\"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/scmrepo/fs.py\", line 221, in info\r\n raise FileNotFoundError(errno.ENOENT, os.strerror(errno.ENOENT), path)\r\nFileNotFoundError: [Errno 2] No such file or directory: '/dvc:/folder/file.jsonl'\r\n```\r\n\r\nSomehow the URL gets turned into `/dvc:/folder/file.jsonl` inside `datasets`. Otherwise I can confirm that using `fsspec` properly with DVC works as expected.\r\n", "For the record, there was a `dvc.api.DVCFileSystem` bug which is fixed in DVC `main` and will be available in the next DVC release.\r\n\r\nTo use DVC with `datasets` you just need to pass the Git/DVC repo `url` in `storage_options` as discussed above.\r\n\r\n(note that this requires having both `datasets` and `dvc` installed in your python environment)\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> load_dataset(\r\n... \"json\",\r\n... data_files=\"dvc://eval/metrics.json\",\r\n... storage_options={\"url\": \"https://github.com/iterative/example-get-started.git\"},\r\n... )\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['avg_prec', 'roc_auc'],\r\n num_rows: 1\r\n })\r\n})\r\n```\r\n\r\nAny additional `DVCFileSystem` args can be passed in the same way, so to get a specific branch/tag/commit from the DVC repo you just need to specify the `rev` in `storage_options` like\r\n```\r\nstorage_options={\"url\": \"https://github.com/iterative/example-get-started.git\", \"rev\": \"main\"}\r\n```\r\n\r\nI think this issue can probably be closed now.", "Thank you for your help, closing." ]
Support loading from a DVC remote repository
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6203/reactions" }
I_kwDODunzps5v6D-S
null
2023-09-01T14:04:52Z
https://api.github.com/repos/huggingface/datasets/issues/6203/comments
### Feature request Adding support for loading a file from a DVC repository, tracked remotely on an SCM. ### Motivation DVC is a popular version control system used to version and manage datasets. The files are stored on a remote object storage platform, but they are tracked using Git. Integration with DVC is possible through the `DVCFileSystem`. I have a Gitlab repository where multiple files are tracked using DVC and stored in a GCP bucket. I would like to be able to load these files using `datasets` directly with a URL. My goal is to write generic code that abstracts the storage layer, such that my users only have to pass in an `fsspec`-compliant URL and the corresponding files will be loaded. ### Your contribution I managed to instantiate a `DVCFileSystem` pointing to a Gitlab repo from a `fsspec` chained URL in [this pull request](https://github.com/iterative/dvc/pull/9903) to DVC. ```python from fsspec.core import url_to_fs fs, _ = url_to_fs("dvc::https://gitlab.com/repository/group/my-repo") ``` From here I'm not sure how to continue: it seems that `datasets` expects the URL to be fully qualified, like `dvc::https://gitlab.com/repository/group/my-repo/my-folder/my-file.json`, but this fails because `DVCFileSystem` expects the URL to point to the root of an SCM repo. Is there a way to make this work with `datasets`?
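Following the resolution in the comments above, the storage-agnostic entry point described here could look like this sketch (the function name and keyword handling are hypothetical):

```python
from datasets import load_dataset

def load_json_dataset(data_file_url: str, **storage_options):
    """Load a JSON dataset from any fsspec-compliant location.

    The same call works for local paths, s3://... URLs, or dvc://... paths;
    protocol-specific settings (e.g. the Git repo `url` for DVC) are passed
    through `storage_options` to the fsspec filesystem constructor.
    """
    return load_dataset(
        "json",
        data_files=data_file_url,
        storage_options=storage_options or None,
    )

# Hypothetical usage with a DVC-tracked file (repo from the thread's resolution):
ds = load_json_dataset(
    "dvc://eval/metrics.json",
    url="https://github.com/iterative/example-get-started.git",
)
```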
{ "avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4", "events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}", "followers_url": "https://api.github.com/users/bilelomrani1/followers", "following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}", "gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bilelomrani1", "id": 16692099, "login": "bilelomrani1", "node_id": "MDQ6VXNlcjE2NjkyMDk5", "organizations_url": "https://api.github.com/users/bilelomrani1/orgs", "received_events_url": "https://api.github.com/users/bilelomrani1/received_events", "repos_url": "https://api.github.com/users/bilelomrani1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions", "type": "User", "url": "https://api.github.com/users/bilelomrani1" }
https://api.github.com/repos/huggingface/datasets/issues/6203/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6203/timeline
closed
false
6,203
null
2023-09-15T15:11:27Z
null
false
1,876,630,351
https://api.github.com/repos/huggingface/datasets/issues/6202
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6202/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2023-10-12T16:28:59Z
[]
https://github.com/huggingface/datasets/issues/6202
NONE
completed
null
null
[ "https://github.com/huggingface/datasets/blob/main/setup.py#L236\r\nCurrently has the highest version at 0.3.25; Not sure if there is any reason for this, other than that was the tested version?" ]
avoid downgrading jax version
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6202/reactions" }
I_kwDODunzps5v2xtP
null
2023-09-01T02:57:57Z
https://api.github.com/repos/huggingface/datasets/issues/6202/comments
### Feature request Whenever I `pip install datasets[jax]`, it downgrades jax to version 0.3.25. I seem to be able to install this library first and then upgrade jax back to version 0.4.13. ### Motivation It would be nice not to overwrite the currently installed version of jax if possible. ### Your contribution I would be willing to beta test. Or maybe write some code if I could get pointed in the right direction; I'm not super familiar with this codebase.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1332458?v=4", "events_url": "https://api.github.com/users/chrisflesher/events{/privacy}", "followers_url": "https://api.github.com/users/chrisflesher/followers", "following_url": "https://api.github.com/users/chrisflesher/following{/other_user}", "gists_url": "https://api.github.com/users/chrisflesher/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/chrisflesher", "id": 1332458, "login": "chrisflesher", "node_id": "MDQ6VXNlcjEzMzI0NTg=", "organizations_url": "https://api.github.com/users/chrisflesher/orgs", "received_events_url": "https://api.github.com/users/chrisflesher/received_events", "repos_url": "https://api.github.com/users/chrisflesher/repos", "site_admin": false, "starred_url": "https://api.github.com/users/chrisflesher/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chrisflesher/subscriptions", "type": "User", "url": "https://api.github.com/users/chrisflesher" }
https://api.github.com/repos/huggingface/datasets/issues/6202/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6202/timeline
closed
false
6,202
null
2023-10-12T16:28:59Z
null
false
1,875,256,775
https://api.github.com/repos/huggingface/datasets/issues/6201
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6201/events
[]
null
2023-09-05T11:07:07Z
[]
https://github.com/huggingface/datasets/pull/6201
MEMBER
null
false
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006852 / 0.011353 (-0.004501) | 0.004195 / 0.011008 (-0.006813) | 0.095008 / 0.038508 (0.056500) | 0.073469 / 0.023109 (0.050360) | 0.350170 / 0.275898 (0.074272) | 0.394309 / 0.323480 (0.070829) | 0.004391 / 0.007986 (-0.003595) | 0.003432 / 0.004328 (-0.000896) | 0.072849 / 0.004250 (0.068599) | 0.058595 / 0.037052 (0.021543) | 0.372335 / 0.258489 (0.113846) | 0.410616 / 0.293841 (0.116775) | 0.034477 / 0.128546 (-0.094069) | 0.009426 / 0.075646 (-0.066220) | 0.329262 / 0.419271 (-0.090009) | 0.057941 / 0.043533 (0.014408) | 0.358624 / 0.255139 (0.103485) | 0.413803 / 0.283200 (0.130604) | 0.025845 / 0.141683 (-0.115837) | 1.684289 / 1.452155 (0.232134) | 1.791567 / 1.492716 (0.298850) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222731 / 0.018006 (0.204724) | 0.511615 / 0.000490 (0.511126) | 0.004163 / 0.000200 (0.003963) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033260 / 0.037411 (-0.004152) | 0.091685 / 0.014526 (0.077159) | 0.105655 / 0.176557 (-0.070901) | 0.167973 / 0.737135 (-0.569163) | 0.105458 / 0.296338 (-0.190880) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441789 / 0.215209 (0.226580) | 4.404803 / 2.077655 (2.327148) | 2.163739 / 1.504120 (0.659620) | 1.956828 / 1.541195 (0.415633) | 2.042183 / 1.468490 
(0.573693) | 0.552221 / 4.584777 (-4.032556) | 3.951769 / 3.745712 (0.206057) | 3.591983 / 5.269862 (-1.677878) | 2.225058 / 4.565676 (-2.340619) | 0.064528 / 0.424275 (-0.359747) | 0.008403 / 0.007607 (0.000796) | 0.528830 / 0.226044 (0.302786) | 5.233686 / 2.268929 (2.964757) | 2.681156 / 55.444624 (-52.763468) | 2.261188 / 6.876477 (-4.615289) | 2.470037 / 2.142072 (0.327964) | 0.661793 / 4.805227 (-4.143434) | 0.150138 / 6.500664 (-6.350527) | 0.068663 / 0.075469 (-0.006807) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.463086 / 1.841788 (-0.378701) | 21.408232 / 8.074308 (13.333924) | 15.521718 / 10.191392 (5.330326) | 0.164587 / 0.680424 (-0.515837) | 0.021035 / 0.534201 (-0.513166) | 0.445466 / 0.579283 (-0.133817) | 0.462489 / 0.434364 (0.028125) | 0.517733 / 0.540337 (-0.022604) | 0.724242 / 1.386936 (-0.662694) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007117 / 0.011353 (-0.004236) | 0.004230 / 0.011008 (-0.006778) | 0.072186 / 0.038508 (0.033678) | 0.076758 / 0.023109 (0.053648) | 0.452606 / 0.275898 (0.176708) | 0.491872 / 0.323480 (0.168392) | 0.005989 / 0.007986 (-0.001996) | 0.003611 / 0.004328 (-0.000717) | 0.072642 / 0.004250 (0.068392) | 0.058985 / 0.037052 (0.021933) | 0.463414 / 0.258489 (0.204925) | 0.497538 / 0.293841 (0.203697) | 0.036325 / 0.128546 (-0.092221) | 0.009814 / 0.075646 (-0.065832) | 0.078745 / 0.419271 (-0.340527) | 0.054308 / 0.043533 (0.010775) | 0.468210 / 0.255139 (0.213071) | 0.476434 / 0.283200 (0.193234) | 0.023683 / 0.141683 (-0.118000) | 1.706457 / 1.452155 (0.254302) | 1.775855 / 1.492716 (0.283139) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241599 / 0.018006 (0.223592) | 0.483859 / 0.000490 (0.483370) | 0.006432 / 0.000200 (0.006233) | 0.000177 / 0.000054 (0.000123) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034723 / 0.037411 (-0.002688) | 0.104420 / 0.014526 (0.089894) | 0.121071 / 0.176557 (-0.055486) | 0.174899 / 0.737135 (-0.562237) | 0.119587 / 0.296338 (-0.176751) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492731 / 0.215209 (0.277522) | 4.898621 / 2.077655 (2.820967) | 2.710931 / 1.504120 (1.206811) | 2.513889 / 1.541195 (0.972694) | 2.578073 / 1.468490 (1.109583) | 0.548318 / 4.584777 (-4.036459) | 4.048603 / 3.745712 (0.302891) | 3.637654 / 5.269862 (-1.632208) | 2.263682 / 4.565676 (-2.301994) | 0.065786 / 0.424275 (-0.358489) | 0.008119 / 0.007607 (0.000512) | 0.578693 / 0.226044 (0.352649) | 5.780619 / 2.268929 (3.511691) | 3.224625 / 55.444624 (-52.220000) | 2.838750 / 6.876477 (-4.037726) | 2.970276 / 2.142072 (0.828204) | 0.654423 / 4.805227 (-4.150805) | 0.148696 / 6.500664 (-6.351969) | 0.066469 / 0.075469 (-0.009000) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.574772 / 1.841788 (-0.267015) | 21.822356 / 8.074308 (13.748048) | 16.504127 / 10.191392 (6.312735) | 0.183357 / 0.680424 (-0.497067) | 0.022759 / 0.534201 (-0.511442) | 0.453746 / 0.579283 (-0.125537) | 0.447037 / 0.434364 (0.012673) | 0.536562 / 0.540337 (-0.003775) | 0.731063 / 1.386936 (-0.655873) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a9027eb4d9c5b3fa60a18daa7aef121428964d90 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008542 / 0.011353 (-0.002811) | 0.005481 / 0.011008 (-0.005527) | 0.100122 / 0.038508 (0.061614) | 0.078968 / 0.023109 (0.055858) | 0.403751 / 0.275898 (0.127853) | 0.457559 / 0.323480 (0.134079) | 0.006152 / 0.007986 (-0.001834) | 0.003805 / 0.004328 (-0.000523) | 0.072787 / 0.004250 (0.068536) | 0.054794 / 0.037052 (0.017741) | 0.419815 / 0.258489 (0.161326) | 0.437453 / 0.293841 (0.143612) | 0.044641 / 0.128546 (-0.083905) | 0.013755 / 0.075646 (-0.061892) | 0.374683 / 0.419271 (-0.044589) | 0.071442 / 0.043533 (0.027909) | 0.395814 / 0.255139 (0.140675) | 0.439042 / 0.283200 (0.155842) | 0.034596 / 0.141683 (-0.107087) | 1.655056 / 1.452155 (0.202902) | 1.826410 / 1.492716 (0.333694) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278667 / 0.018006 (0.260661) | 0.617354 / 0.000490 (0.616864) | 0.004111 / 0.000200 (0.003911) | 0.000138 / 0.000054 (0.000083) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025905 / 0.037411 (-0.011506) | 0.084721 / 0.014526 (0.070195) | 0.099737 / 0.176557 (-0.076819) | 0.163016 / 0.737135 (-0.574119) | 0.095104 / 0.296338 (-0.201234) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.531589 / 0.215209 (0.316380) | 5.455303 / 2.077655 (3.377649) | 2.495112 / 1.504120 (0.990992) | 2.234139 / 1.541195 (0.692944) | 2.295090 / 1.468490 (0.826599) | 0.777627 / 4.584777 (-3.807150) | 5.053069 / 3.745712 (1.307357) | 4.488715 / 5.269862 (-0.781147) | 2.775991 / 4.565676 (-1.789686) | 0.094175 / 0.424275 (-0.330100) | 0.008681 / 0.007607 (0.001074) | 0.668174 / 0.226044 (0.442130) | 6.631876 / 2.268929 (4.362948) | 3.118055 / 55.444624 (-52.326569) | 2.480355 / 6.876477 (-4.396122) | 2.706643 / 2.142072 (0.564571) | 0.927173 / 4.805227 (-3.878054) | 0.217385 / 6.500664 (-6.283279) | 0.067110 / 0.075469 (-0.008359) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.517926 / 1.841788 (-0.323861) | 21.420546 / 8.074308 (13.346238) | 21.108266 / 10.191392 (10.916874) | 0.222449 / 0.680424 (-0.457975) | 0.027969 / 0.534201 (-0.506232) | 0.459484 / 0.579283 (-0.119799) | 0.582629 / 0.434364 (0.148265) | 0.520971 / 0.540337 (-0.019366) | 0.694270 / 
1.386936 (-0.692666) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008257 / 0.011353 (-0.003096) | 0.004511 / 0.011008 (-0.006497) | 0.075031 / 0.038508 (0.036523) | 0.070526 / 0.023109 (0.047416) | 0.445595 / 0.275898 (0.169697) | 0.512312 / 0.323480 (0.188832) | 0.005933 / 0.007986 (-0.002052) | 0.003814 / 0.004328 (-0.000515) | 0.073553 / 0.004250 (0.069302) | 0.058174 / 0.037052 (0.021121) | 0.472307 / 0.258489 (0.213818) | 0.519679 / 0.293841 (0.225838) | 0.046027 / 0.128546 (-0.082520) | 0.011757 / 0.075646 (-0.063889) | 0.084883 / 0.419271 (-0.334388) | 0.056476 / 0.043533 (0.012943) | 0.475608 / 0.255139 (0.220469) | 0.507588 / 0.283200 (0.224388) | 0.031661 / 0.141683 (-0.110022) | 1.673183 / 1.452155 (0.221028) | 1.736836 / 1.492716 (0.244120) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.350887 / 0.018006 (0.332881) | 0.589796 / 0.000490 (0.589306) | 0.023066 / 0.000200 (0.022867) | 0.000106 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030764 / 0.037411 (-0.006647) | 0.116967 / 0.014526 (0.102441) | 0.102760 / 0.176557 (-0.073796) | 0.167690 / 0.737135 (-0.569445) | 0.111350 / 0.296338 (-0.184988) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.584565 / 0.215209 (0.369356) | 5.898081 / 2.077655 (3.820426) | 2.770374 / 1.504120 (1.266254) | 2.467519 / 1.541195 (0.926324) | 2.463319 / 1.468490 (0.994829) | 
0.794294 / 4.584777 (-3.790483) | 5.272285 / 3.745712 (1.526573) | 4.514830 / 5.269862 (-0.755032) | 2.937259 / 4.565676 (-1.628417) | 0.093702 / 0.424275 (-0.330574) | 0.008012 / 0.007607 (0.000405) | 0.772371 / 0.226044 (0.546327) | 7.574941 / 2.268929 (5.306013) | 3.710965 / 55.444624 (-51.733659) | 2.927964 / 6.876477 (-3.948513) | 3.256036 / 2.142072 (1.113964) | 1.051649 / 4.805227 (-3.753578) | 0.203055 / 6.500664 (-6.297609) | 0.081072 / 0.075469 (0.005603) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.574251 / 1.841788 (-0.267537) | 22.340801 / 8.074308 (14.266493) | 20.497769 / 10.191392 (10.306377) | 0.228725 / 0.680424 (-0.451699) | 0.029095 / 0.534201 (-0.505106) | 0.452460 / 0.579283 (-0.126823) | 0.586419 / 0.434364 (0.152055) | 0.571237 / 0.540337 (0.030900) | 0.745069 / 1.386936 (-0.641867) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#61b23b028dfc72c297391c5f670342732b9bd9fe \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006529 / 0.011353 (-0.004824) | 0.004062 / 0.011008 (-0.006946) | 0.083712 / 0.038508 (0.045204) | 0.072378 / 0.023109 (0.049269) | 0.358779 / 0.275898 (0.082881) | 0.387216 / 0.323480 (0.063736) | 0.004038 / 0.007986 (-0.003948) | 0.003316 / 0.004328 (-0.001013) | 0.065207 / 0.004250 (0.060956) | 0.054439 / 0.037052 (0.017386) | 0.370689 / 0.258489 (0.112200) | 0.411008 / 0.293841 (0.117167) | 0.031133 / 0.128546 (-0.097413) | 0.008600 / 0.075646 (-0.067047) | 0.287753 / 0.419271 (-0.131518) | 0.051845 / 0.043533 (0.008312) | 0.360327 / 0.255139 (0.105188) | 0.394791 / 0.283200 (0.111591) | 0.025139 / 0.141683 (-0.116544) | 1.488151 / 1.452155 (0.035996) | 1.556776 / 1.492716 (0.064059) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209462 / 0.018006 (0.191456) | 0.459168 / 0.000490 (0.458678) | 0.006037 / 0.000200 (0.005837) | 0.000079 / 
0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028444 / 0.037411 (-0.008967) | 0.082974 / 0.014526 (0.068448) | 0.094919 / 0.176557 (-0.081638) | 0.151875 / 0.737135 (-0.585260) | 0.096143 / 0.296338 (-0.200195) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402675 / 0.215209 (0.187466) | 4.014787 / 2.077655 (1.937133) | 2.015793 / 1.504120 (0.511673) | 1.838976 / 1.541195 (0.297782) | 1.931733 / 1.468490 (0.463243) | 0.489435 / 4.584777 (-4.095342) | 3.581662 / 3.745712 (-0.164050) | 3.315392 / 5.269862 (-1.954469) | 2.053369 / 4.565676 (-2.512307) | 0.057749 / 0.424275 (-0.366526) | 0.007720 / 0.007607 (0.000113) | 0.483388 / 0.226044 (0.257343) | 4.820798 / 2.268929 (2.551870) | 2.544264 / 55.444624 (-52.900361) | 2.170513 / 6.876477 (-4.705963) | 2.416976 / 2.142072 (0.274903) | 0.588351 / 4.805227 (-4.216876) | 0.136988 / 6.500664 (-6.363676) | 0.062294 / 0.075469 (-0.013175) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263807 / 1.841788 (-0.577980) | 19.888202 / 8.074308 (11.813894) | 14.352977 / 10.191392 (4.161585) | 0.167200 / 0.680424 (-0.513224) | 0.018449 / 0.534201 (-0.515752) | 0.393262 / 0.579283 (-0.186021) | 0.407854 / 0.434364 (-0.026510) | 0.455852 / 0.540337 (-0.084485) | 0.629024 / 1.386936 (-0.757912) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006642 / 0.011353 (-0.004710) | 0.004041 / 0.011008 (-0.006967) | 0.065823 / 0.038508 (0.027315) | 0.076810 / 0.023109 (0.053701) | 0.397680 / 0.275898 (0.121782) | 0.430104 / 0.323480 (0.106624) | 0.006035 / 0.007986 (-0.001951) | 0.003389 / 0.004328 (-0.000939) | 0.066056 / 0.004250 (0.061806) | 0.054222 / 0.037052 (0.017170) | 0.397964 / 0.258489 (0.139475) | 0.439277 / 0.293841 (0.145436) | 0.032394 / 0.128546 (-0.096152) | 0.008586 / 0.075646 (-0.067060) | 0.072538 / 0.419271 (-0.346734) | 0.048346 / 0.043533 (0.004813) | 0.399631 / 0.255139 (0.144492) | 0.418684 / 0.283200 (0.135484) | 0.022570 / 0.141683 (-0.119113) | 1.519788 / 1.452155 (0.067633) | 1.581457 / 1.492716 (0.088740) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243443 / 0.018006 (0.225436) | 0.453095 / 0.000490 (0.452606) | 0.009940 / 0.000200 (0.009740) | 0.000121 / 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032293 / 0.037411 (-0.005118) | 0.091681 / 0.014526 (0.077155) | 0.103729 / 0.176557 (-0.072827) | 0.156361 / 0.737135 (-0.580775) | 0.105034 / 0.296338 (-0.191305) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427761 / 0.215209 (0.212551) | 4.266044 / 2.077655 (2.188390) | 2.285161 / 1.504120 (0.781041) | 2.118652 / 1.541195 (0.577457) | 2.203469 / 1.468490 (0.734979) | 0.494587 / 4.584777 (-4.090190) | 3.676706 / 3.745712 (-0.069006) | 3.252478 / 5.269862 (-2.017383) | 2.027432 / 4.565676 (-2.538245) | 0.057856 / 0.424275 (-0.366419) | 0.007279 / 0.007607 (-0.000328) | 0.502767 / 0.226044 (0.276723) | 5.031409 / 2.268929 (2.762480) | 2.741767 / 55.444624 (-52.702858) | 2.408480 / 6.876477 (-4.467997) | 2.607193 / 2.142072 (0.465121) | 0.590787 / 4.805227 (-4.214440) | 0.133633 / 6.500664 (-6.367031) | 0.061195 / 0.075469 (-0.014274) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.342824 / 1.841788 (-0.498964) | 20.137195 / 8.074308 (12.062887) | 14.986743 / 10.191392 (4.795351) | 0.168218 / 0.680424 (-0.512206) | 0.020209 / 0.534201 (-0.513992) | 0.397446 / 0.579283 (-0.181837) | 0.427496 / 0.434364 (-0.006868) | 0.475058 / 0.540337 (-0.065279) | 0.648439 / 1.386936 (-0.738497) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0b54cbd6c01f52139dedbcf209ff41f0c88b9aa5 \"CML watermark\")\n" ]
Fix to_json ValueError and remove pandas pin
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6201/reactions" }
PR_kwDODunzps5ZOVbV
{ "diff_url": "https://github.com/huggingface/datasets/pull/6201.diff", "html_url": "https://github.com/huggingface/datasets/pull/6201", "merged_at": "2023-09-05T10:58:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/6201.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6201" }
2023-08-31T10:38:08Z
https://api.github.com/repos/huggingface/datasets/issues/6201/comments
This PR fixes the root cause of the issue:
- #6197

This PR also removes the temporary pin of `pandas` introduced by:
- #6200

Note that for orient in ['records', 'values'], the index value is ignored, but:
- in `pandas` < 2.1.0, a ValueError is raised if not index and orient not in ['split', 'table']
  - for orient = 'records', we need index = True
  - the default index value is True
- in `pandas` = 2.1.0, a ValueError is raised if index is True and orient in ['records', 'values']
  - for orient = 'records', we need index = False or None
  - the default index value is None

This PR fixes the issue by not passing index, thus using the default index value (valid for all pandas versions), unless orient is 'split' or 'table' (where we pass index = False, as was done before this fix).
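A minimal sketch of the version-compatible behavior described above; the variable names (`df`, `orient`, `to_json_kwargs`) are illustrative and not taken from the actual `datasets` source:

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})

orient = "records"
to_json_kwargs = {}
if orient in ["split", "table"]:
    # Only pass an explicit index for orients that accept it in every
    # pandas version; this matches the pre-fix behavior for these orients.
    to_json_kwargs["index"] = False
# For other orients (e.g. 'records'), omit index entirely and rely on the
# pandas default, which is valid both before and after pandas 2.1.0.

json_str = df.to_json(orient=orient, lines=(orient == "records"), **to_json_kwargs)
print(json_str)
```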
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6201/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6201/timeline
closed
false
6,201
null
2023-09-05T10:58:21Z
null
true
1,875,169,551
https://api.github.com/repos/huggingface/datasets/issues/6200
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6200/events
[]
null
2023-08-31T10:33:24Z
[]
https://github.com/huggingface/datasets/pull/6200
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008978 / 0.011353 (-0.002375) | 0.005143 / 0.011008 (-0.005865) | 0.104787 / 0.038508 (0.066279) | 0.077069 / 0.023109 (0.053960) | 0.427703 / 0.275898 (0.151805) | 0.469865 / 0.323480 (0.146386) | 0.004618 / 0.007986 (-0.003368) | 0.004074 / 0.004328 (-0.000255) | 0.088656 / 0.004250 (0.084405) | 0.059798 / 0.037052 (0.022746) | 0.465906 / 0.258489 (0.207417) | 0.510281 / 0.293841 (0.216440) | 0.051192 / 0.128546 (-0.077354) | 0.013623 / 0.075646 (-0.062024) | 0.379339 / 0.419271 (-0.039932) | 0.077393 / 0.043533 (0.033860) | 0.445165 / 0.255139 (0.190026) | 0.473577 / 0.283200 (0.190378) | 0.038125 / 0.141683 (-0.103558) | 1.858635 / 1.452155 (0.406480) | 1.869033 / 1.492716 (0.376316) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209011 / 0.018006 (0.191004) | 0.550978 / 0.000490 (0.550488) | 0.004904 / 0.000200 (0.004704) | 0.000106 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031418 / 0.037411 (-0.005993) | 0.089623 / 0.014526 (0.075098) | 0.103491 / 0.176557 (-0.073066) | 0.178158 / 0.737135 (-0.558978) | 0.108515 / 0.296338 (-0.187824) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.648293 / 0.215209 (0.433084) | 6.332361 / 2.077655 (4.254707) | 
2.469076 / 1.504120 (0.964956) | 2.286228 / 1.541195 (0.745033) | 2.257408 / 1.468490 (0.788918) | 0.918027 / 4.584777 (-3.666750) | 5.229539 / 3.745712 (1.483827) | 4.676150 / 5.269862 (-0.593712) | 3.220411 / 4.565676 (-1.345266) | 0.095863 / 0.424275 (-0.328413) | 0.008696 / 0.007607 (0.001089) | 0.722356 / 0.226044 (0.496312) | 7.796690 / 2.268929 (5.527762) | 3.715044 / 55.444624 (-51.729581) | 2.852696 / 6.876477 (-4.023780) | 2.891838 / 2.142072 (0.749766) | 1.195536 / 4.805227 (-3.609691) | 0.246908 / 6.500664 (-6.253756) | 0.079454 / 0.075469 (0.003984) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.652740 / 1.841788 (-0.189047) | 23.791791 / 8.074308 (15.717482) | 22.778999 / 10.191392 (12.587607) | 0.253878 / 0.680424 (-0.426546) | 0.031367 / 0.534201 (-0.502834) | 0.509460 / 0.579283 (-0.069823) | 0.603085 / 0.434364 (0.168721) | 0.603890 / 0.540337 (0.063553) | 0.826606 / 1.386936 (-0.560330) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010407 / 0.011353 (-0.000946) | 0.004751 / 0.011008 (-0.006257) | 0.086761 / 0.038508 (0.048253) | 0.087281 / 0.023109 (0.064172) | 0.498409 / 0.275898 (0.222511) | 0.560727 / 0.323480 (0.237247) | 0.006563 / 0.007986 (-0.001423) | 0.004078 / 0.004328 (-0.000251) | 0.086383 / 0.004250 (0.082133) | 0.065915 / 0.037052 (0.028862) | 0.521871 / 0.258489 (0.263382) | 0.582281 / 0.293841 (0.288440) | 0.057189 / 0.128546 (-0.071357) | 0.015514 / 0.075646 (-0.060133) | 0.102574 / 0.419271 (-0.316697) | 0.069155 / 0.043533 (0.025622) | 0.525000 / 0.255139 (0.269861) | 0.557968 / 0.283200 (0.274769) | 0.036934 / 0.141683 (-0.104749) | 1.919335 / 1.452155 (0.467181) | 1.870948 / 1.492716 (0.378231) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241932 / 0.018006 (0.223926) | 0.560136 / 0.000490 (0.559646) | 0.006438 / 0.000200 (0.006238) | 0.000108 / 0.000054 (0.000053) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036192 / 0.037411 (-0.001220) | 0.106829 / 0.014526 (0.092303) | 0.128667 / 0.176557 (-0.047890) | 0.200514 / 0.737135 (-0.536621) | 0.127542 / 0.296338 (-0.168797) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.754556 / 0.215209 (0.539347) | 7.237324 / 2.077655 (5.159670) | 3.267424 / 1.504120 (1.763304) | 2.789601 / 1.541195 (1.248407) | 2.875728 / 1.468490 (1.407238) | 0.894274 / 4.584777 (-3.690503) | 5.394556 / 3.745712 (1.648844) | 4.818523 / 5.269862 (-0.451338) | 2.965827 / 4.565676 (-1.599850) | 0.101967 / 0.424275 (-0.322308) | 0.008506 / 0.007607 (0.000899) | 0.803476 / 0.226044 (0.577432) | 8.614426 / 2.268929 (6.345497) | 4.169113 / 55.444624 (-51.275511) | 3.346346 / 6.876477 (-3.530130) | 3.418206 / 2.142072 (1.276134) | 1.111718 / 4.805227 (-3.693509) | 0.211302 / 6.500664 (-6.289362) | 0.072524 / 0.075469 (-0.002945) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.792705 / 1.841788 (-0.049083) | 24.442484 / 8.074308 (16.368176) | 23.375008 / 10.191392 (13.183616) | 0.227946 / 0.680424 (-0.452478) | 0.034376 / 0.534201 (-0.499825) | 0.489260 / 0.579283 (-0.090023) | 0.563220 / 0.434364 (0.128856) | 0.617405 / 0.540337 (0.077068) | 0.850577 / 1.386936 (-0.536359) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b4413ea1eaca5023ace1e62ddf1070de2d41b4f6 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006594 / 0.011353 (-0.004759) | 0.004366 / 0.011008 (-0.006642) | 0.084241 / 0.038508 (0.045733) | 0.071876 / 0.023109 (0.048767) | 0.321604 / 0.275898 (0.045706) | 0.343501 / 0.323480 (0.020021) | 0.004069 / 0.007986 (-0.003917) | 0.003311 / 0.004328 (-0.001017) | 0.065079 / 0.004250 (0.060829) | 0.053754 / 0.037052 (0.016702) | 0.326199 / 0.258489 (0.067710) | 0.356552 / 0.293841 (0.062711) | 0.031568 / 0.128546 (-0.096979) | 0.008581 / 0.075646 (-0.067065) | 0.289170 / 0.419271 (-0.130101) | 0.053097 / 0.043533 (0.009564) | 0.309678 / 0.255139 (0.054539) | 0.345717 / 0.283200 (0.062517) | 0.024144 / 0.141683 (-0.117539) | 1.497351 / 1.452155 (0.045196) | 1.584691 / 1.492716 (0.091975) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206357 / 0.018006 (0.188351) | 0.459611 / 0.000490 (0.459121) | 0.002586 / 0.000200 (0.002386) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027459 / 0.037411 (-0.009952) | 0.082197 / 0.014526 (0.067671) | 0.095004 / 0.176557 (-0.081553) | 0.151063 / 0.737135 (-0.586072) | 0.095107 / 0.296338 (-0.201231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384363 / 0.215209 (0.169154) | 3.836187 / 2.077655 (1.758533) | 1.898312 / 1.504120 (0.394192) | 1.727310 / 1.541195 (0.186115) | 1.803579 / 1.468490 (0.335089) | 0.485946 / 4.584777 (-4.098831) | 3.619134 / 3.745712 (-0.126578) | 3.255274 / 5.269862 (-2.014588) | 2.004603 / 4.565676 (-2.561074) | 0.057107 / 0.424275 (-0.367168) | 0.007601 / 0.007607 (-0.000006) | 0.456545 / 0.226044 (0.230500) | 4.556857 / 2.268929 (2.287929) | 2.379954 / 55.444624 (-53.064671) | 2.045874 / 6.876477 (-4.830603) | 2.203090 / 2.142072 (0.061018) | 0.585400 / 4.805227 (-4.219827) | 0.133018 / 6.500664 (-6.367646) | 0.059457 / 0.075469 (-0.016012) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.292581 / 1.841788 (-0.549207) | 19.360057 / 8.074308 (11.285749) | 14.105359 / 10.191392 (3.913967) | 0.166028 / 0.680424 (-0.514396) | 0.018243 / 0.534201 (-0.515958) | 0.392026 / 0.579283 (-0.187257) | 0.412735 / 0.434364 (-0.021629) | 0.459791 / 0.540337 
(-0.080547) | 0.624539 / 1.386936 (-0.762397) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006677 / 0.011353 (-0.004676) | 0.003897 / 0.011008 (-0.007111) | 0.064139 / 0.038508 (0.025631) | 0.071346 / 0.023109 (0.048237) | 0.431180 / 0.275898 (0.155282) | 0.470870 / 0.323480 (0.147390) | 0.005562 / 0.007986 (-0.002423) | 0.003405 / 0.004328 (-0.000924) | 0.064532 / 0.004250 (0.060282) | 0.055317 / 0.037052 (0.018265) | 0.434667 / 0.258489 (0.176178) | 0.475765 / 0.293841 (0.181924) | 0.032392 / 0.128546 (-0.096154) | 0.008418 / 0.075646 (-0.067228) | 0.071069 / 0.419271 (-0.348203) | 0.047963 / 0.043533 (0.004430) | 0.440225 / 0.255139 (0.185086) | 0.454860 / 0.283200 (0.171661) | 0.022653 / 0.141683 (-0.119029) | 1.489444 / 1.452155 (0.037289) | 1.556913 / 1.492716 (0.064196) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226733 / 0.018006 (0.208727) | 0.452005 / 0.000490 (0.451516) | 0.004715 / 0.000200 (0.004515) | 0.000099 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032042 / 0.037411 (-0.005369) | 0.091226 / 0.014526 (0.076700) | 0.103639 / 0.176557 (-0.072917) | 0.157772 / 0.737135 (-0.579363) | 0.105466 / 0.296338 (-0.190872) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439751 / 0.215209 (0.224542) | 4.357102 / 2.077655 (2.279448) | 2.362857 / 1.504120 (0.858737) | 2.180559 / 1.541195 (0.639364) | 2.279601 
/ 1.468490 (0.811111) | 0.495161 / 4.584777 (-4.089616) | 3.729199 / 3.745712 (-0.016513) | 3.334839 / 5.269862 (-1.935023) | 2.099315 / 4.565676 (-2.466362) | 0.058178 / 0.424275 (-0.366097) | 0.007303 / 0.007607 (-0.000304) | 0.506968 / 0.226044 (0.280924) | 5.078600 / 2.268929 (2.809671) | 2.846420 / 55.444624 (-52.598204) | 2.480644 / 6.876477 (-4.395833) | 2.693204 / 2.142072 (0.551132) | 0.590118 / 4.805227 (-4.215109) | 0.132900 / 6.500664 (-6.367764) | 0.060053 / 0.075469 (-0.015416) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.356713 / 1.841788 (-0.485075) | 20.380573 / 8.074308 (12.306265) | 15.066507 / 10.191392 (4.875115) | 0.180655 / 0.680424 (-0.499769) | 0.020954 / 0.534201 (-0.513247) | 0.399638 / 0.579283 (-0.179645) | 0.420694 / 0.434364 (-0.013670) | 0.476124 / 0.540337 (-0.064213) | 0.647192 / 1.386936 (-0.739744) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0f8c58002481568eb1aa4f6f86c4509cf476800a \"CML watermark\")\n" ]
Temporarily pin pandas < 2.1.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6200/reactions" }
PR_kwDODunzps5ZOCee
{ "diff_url": "https://github.com/huggingface/datasets/pull/6200.diff", "html_url": "https://github.com/huggingface/datasets/pull/6200", "merged_at": "2023-08-31T10:24:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/6200.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6200" }
2023-08-31T09:45:17Z
https://api.github.com/repos/huggingface/datasets/issues/6200/comments
Temporarily pin `pandas` < 2.1.0 until a permanent solution is found. Hotfix for #6197.
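For reference, a hypothetical sketch of what such a temporary pin looks like in a `setup.py` dependency list; the surrounding context is assumed, not copied from the real `datasets` setup:

```python
# Hypothetical excerpt of a setup.py dependency list.
install_requires = [
    "pandas<2.1.0",  # temporary upper pin until the to_json fix lands (see #6197)
]
```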
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6200/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6200/timeline
closed
false
6,200
null
2023-08-31T10:24:38Z
null
true
1,875,165,185
https://api.github.com/repos/huggingface/datasets/issues/6199
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6199/events
[]
null
2023-08-31T19:05:07Z
[]
https://github.com/huggingface/datasets/issues/6199
NONE
null
null
null
[ "Hugging Face's datasets library may prioritize remote configurations. Make sure there are no conflicting configurations causing the library to prefer downloading data\r\nMay be try debugging\r\nraw_datasets = load_dataset('json', data_files=data_files)\r\nprint(raw_datasets)\r\n", "It doesn't download them but writes them to the local HF cache. The logging could indeed be better. Does loading the dataset succeed? If it doesn't, can you share the error stack trace?" ]
Use load_dataset for local json files, but it does not work
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6199/reactions" }
I_kwDODunzps5vxMAB
null
2023-08-31T09:42:34Z
https://api.github.com/repos/huggingface/datasets/issues/6199/comments
### Describe the bug

When I use `load_dataset` to load my local datasets, it always goes to Hugging Face to download the data instead of loading the local dataset.

### Steps to reproduce the bug

`raw_datasets = load_dataset('json', data_files=data_files)`

### Expected behavior

![image](https://github.com/huggingface/datasets/assets/50519434/add3747f-6481-4da7-b374-8f81c5a6472c)

### Environment info

python version 3.8.5
datasets version 2.12
os version Ubuntu 18.04
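For context, a hedged sketch of how loading local JSON files with the `json` builder is expected to work; the file paths below are placeholders:

```python
from datasets import load_dataset

# Placeholder paths to local JSON/JSON Lines files.
data_files = {"train": "data/train.json", "validation": "data/dev.json"}

# With the 'json' builder and local paths, nothing is fetched from the Hub;
# the files are read locally and the processed Arrow data is written to the
# local HF cache, which can look like a download in the progress bars.
raw_datasets = load_dataset("json", data_files=data_files)
print(raw_datasets)
```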
{ "avatar_url": "https://avatars.githubusercontent.com/u/50519434?v=4", "events_url": "https://api.github.com/users/Garen-in-bush/events{/privacy}", "followers_url": "https://api.github.com/users/Garen-in-bush/followers", "following_url": "https://api.github.com/users/Garen-in-bush/following{/other_user}", "gists_url": "https://api.github.com/users/Garen-in-bush/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Garen-in-bush", "id": 50519434, "login": "Garen-in-bush", "node_id": "MDQ6VXNlcjUwNTE5NDM0", "organizations_url": "https://api.github.com/users/Garen-in-bush/orgs", "received_events_url": "https://api.github.com/users/Garen-in-bush/received_events", "repos_url": "https://api.github.com/users/Garen-in-bush/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Garen-in-bush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Garen-in-bush/subscriptions", "type": "User", "url": "https://api.github.com/users/Garen-in-bush" }
https://api.github.com/repos/huggingface/datasets/issues/6199/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6199/timeline
open
false
6,199
null
null
null
false
1,875,092,027
https://api.github.com/repos/huggingface/datasets/issues/6198
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6198/events
[]
null
2023-08-31T13:57:31Z
[]
https://github.com/huggingface/datasets/pull/6198
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007621 / 0.011353 (-0.003732) | 0.004534 / 0.011008 (-0.006475) | 0.099834 / 0.038508 (0.061326) | 0.083029 / 0.023109 (0.059920) | 0.387559 / 0.275898 (0.111661) | 0.422453 / 0.323480 (0.098973) | 0.006070 / 0.007986 (-0.001916) | 0.003725 / 0.004328 (-0.000604) | 0.075923 / 0.004250 (0.071672) | 0.060578 / 0.037052 (0.023525) | 0.403569 / 0.258489 (0.145079) | 0.444991 / 0.293841 (0.151150) | 0.035847 / 0.128546 (-0.092699) | 0.009872 / 0.075646 (-0.065774) | 0.335506 / 0.419271 (-0.083766) | 0.060509 / 0.043533 (0.016976) | 0.381034 / 0.255139 (0.125895) | 0.426938 / 0.283200 (0.143738) | 0.027662 / 0.141683 (-0.114021) | 1.729565 / 1.452155 (0.277410) | 1.842082 / 1.492716 (0.349366) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230371 / 0.018006 (0.212365) | 0.518216 / 0.000490 (0.517726) | 0.003897 / 0.000200 (0.003697) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031942 / 0.037411 (-0.005470) | 0.096609 / 0.014526 (0.082083) | 0.112707 / 0.176557 (-0.063850) | 0.178849 / 0.737135 (-0.558286) | 0.112793 / 0.296338 (-0.183546) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445896 / 0.215209 (0.230687) | 4.451173 / 2.077655 (2.373519) | 
2.183380 / 1.504120 (0.679260) | 1.991583 / 1.541195 (0.450388) | 2.096219 / 1.468490 (0.627729) | 0.566692 / 4.584777 (-4.018085) | 4.078278 / 3.745712 (0.332566) | 3.787950 / 5.269862 (-1.481911) | 2.372651 / 4.565676 (-2.193025) | 0.065500 / 0.424275 (-0.358775) | 0.008918 / 0.007607 (0.001311) | 0.535589 / 0.226044 (0.309545) | 5.364130 / 2.268929 (3.095201) | 2.805381 / 55.444624 (-52.639244) | 2.350769 / 6.876477 (-4.525708) | 2.592887 / 2.142072 (0.450814) | 0.675475 / 4.805227 (-4.129752) | 0.153907 / 6.500664 (-6.346757) | 0.071138 / 0.075469 (-0.004331) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.498236 / 1.841788 (-0.343552) | 22.810460 / 8.074308 (14.736152) | 16.275035 / 10.191392 (6.083643) | 0.200242 / 0.680424 (-0.480182) | 0.021553 / 0.534201 (-0.512648) | 0.469437 / 0.579283 (-0.109846) | 0.477752 / 0.434364 (0.043388) | 0.537411 / 0.540337 (-0.002927) | 0.741730 / 1.386936 (-0.645206) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008009 / 0.011353 (-0.003344) | 0.004626 / 0.011008 (-0.006382) | 0.074871 / 0.038508 (0.036363) | 0.085214 / 0.023109 (0.062105) | 0.478057 / 0.275898 (0.202159) | 0.522038 / 0.323480 (0.198558) | 0.007055 / 0.007986 (-0.000931) | 0.003813 / 0.004328 (-0.000515) | 0.076238 / 0.004250 (0.071988) | 0.065738 / 0.037052 (0.028686) | 0.484391 / 0.258489 (0.225902) | 0.524425 / 0.293841 (0.230584) | 0.038375 / 0.128546 (-0.090171) | 0.009964 / 0.075646 (-0.065682) | 0.084027 / 0.419271 (-0.335245) | 0.056979 / 0.043533 (0.013447) | 0.486910 / 0.255139 (0.231771) | 0.501185 / 0.283200 (0.217985) | 0.027000 / 0.141683 (-0.114683) | 1.767378 / 1.452155 (0.315224) | 1.870511 / 1.492716 (0.377795) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267067 / 0.018006 (0.249061) | 0.501714 / 0.000490 (0.501224) | 0.012379 / 0.000200 (0.012179) | 0.000129 / 0.000054 (0.000075) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036706 / 0.037411 (-0.000706) | 0.110064 / 0.014526 (0.095538) | 0.124896 / 0.176557 (-0.051660) | 0.186730 / 0.737135 (-0.550405) | 0.123501 / 0.296338 (-0.172837) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.510793 / 0.215209 (0.295583) | 5.133056 / 2.077655 (3.055401) | 2.776456 / 1.504120 (1.272336) | 2.595557 / 1.541195 (1.054362) | 2.717922 / 1.468490 (1.249432) | 0.578333 / 4.584777 (-4.006444) | 4.169935 / 3.745712 (0.424223) | 3.800078 / 5.269862 (-1.469784) | 2.385866 / 4.565676 (-2.179810) | 0.068114 / 0.424275 (-0.356161) | 0.008771 / 0.007607 (0.001164) | 0.597894 / 0.226044 (0.371850) | 5.970293 / 2.268929 (3.701364) | 3.352715 / 55.444624 (-52.091909) | 2.972062 / 6.876477 (-3.904415) | 3.179232 / 2.142072 (1.037160) | 0.689838 / 4.805227 (-4.115389) | 0.154890 / 6.500664 (-6.345774) | 0.072321 / 0.075469 (-0.003148) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.613666 / 1.841788 (-0.228121) | 23.441538 / 8.074308 (15.367230) | 17.105417 / 10.191392 (6.914025) | 0.171449 / 0.680424 (-0.508975) | 0.023257 / 0.534201 (-0.510944) | 0.466724 / 0.579283 (-0.112559) | 0.470835 / 0.434364 (0.036471) | 0.561860 / 0.540337 (0.021523) | 0.759048 / 1.386936 (-0.627888) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0f3b6eaf69d3352394d3bf3c4d6ed01dd2af5860 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007557 / 0.011353 (-0.003796) | 0.004211 / 0.011008 (-0.006797) | 0.096243 / 0.038508 (0.057735) | 0.083603 / 0.023109 (0.060493) | 0.367114 / 0.275898 (0.091216) | 0.415182 / 0.323480 (0.091702) | 0.005796 / 0.007986 (-0.002189) | 0.003791 / 0.004328 (-0.000537) | 0.073505 / 0.004250 (0.069254) | 0.060335 / 0.037052 (0.023283) | 0.392182 / 0.258489 (0.133693) | 0.421315 / 0.293841 (0.127474) | 0.036128 / 0.128546 (-0.092419) | 0.009953 / 0.075646 (-0.065693) | 0.338965 / 0.419271 (-0.080307) | 0.061006 / 0.043533 (0.017473) | 0.372317 / 0.255139 (0.117178) | 0.414367 / 0.283200 (0.131167) | 0.026970 / 0.141683 (-0.114713) | 1.730381 / 1.452155 (0.278227) | 1.808340 / 1.492716 (0.315624) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222622 / 0.018006 (0.204615) | 0.474064 / 0.000490 (0.473574) | 0.004817 / 0.000200 (0.004617) | 0.000089 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032528 / 0.037411 (-0.004883) | 0.097457 / 0.014526 (0.082931) | 0.112273 / 0.176557 (-0.064283) | 0.177953 / 0.737135 (-0.559182) | 0.112358 / 0.296338 (-0.183981) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442601 / 0.215209 (0.227392) | 4.442065 / 2.077655 (2.364410) | 2.156813 / 1.504120 (0.652694) | 1.970289 / 1.541195 (0.429094) | 2.052878 / 1.468490 (0.584388) | 0.562661 / 4.584777 (-4.022116) | 4.255529 / 3.745712 (0.509817) | 3.767650 / 5.269862 (-1.502212) | 2.431078 / 4.565676 (-2.134598) | 0.065624 / 0.424275 (-0.358651) | 0.008738 / 0.007607 (0.001131) | 0.546839 / 0.226044 (0.320795) | 5.362863 / 2.268929 (3.093934) | 2.695924 / 55.444624 (-52.748701) | 2.334589 / 6.876477 (-4.541888) | 2.530757 / 2.142072 (0.388685) | 0.675991 / 4.805227 (-4.129236) | 0.153852 / 6.500664 (-6.346813) | 0.069189 / 0.075469 (-0.006280) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.522916 / 1.841788 (-0.318872) | 21.515907 / 8.074308 (13.441599) | 16.411708 / 10.191392 (6.220316) | 0.168245 / 0.680424 (-0.512179) | 0.021165 / 0.534201 (-0.513036) | 0.461838 / 0.579283 (-0.117446) | 0.488867 / 0.434364 (0.054503) | 0.536278 / 0.540337 
(-0.004059) | 0.766690 / 1.386936 (-0.620246) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007683 / 0.011353 (-0.003670) | 0.004401 / 0.011008 (-0.006608) | 0.075463 / 0.038508 (0.036955) | 0.081737 / 0.023109 (0.058628) | 0.466469 / 0.275898 (0.190571) | 0.514909 / 0.323480 (0.191429) | 0.006106 / 0.007986 (-0.001880) | 0.003936 / 0.004328 (-0.000393) | 0.076773 / 0.004250 (0.072523) | 0.061025 / 0.037052 (0.023973) | 0.473348 / 0.258489 (0.214858) | 0.525326 / 0.293841 (0.231485) | 0.038224 / 0.128546 (-0.090322) | 0.009559 / 0.075646 (-0.066087) | 0.080847 / 0.419271 (-0.338424) | 0.056738 / 0.043533 (0.013205) | 0.475116 / 0.255139 (0.219977) | 0.494689 / 0.283200 (0.211490) | 0.029364 / 0.141683 (-0.112319) | 1.796681 / 1.452155 (0.344527) | 1.850600 / 1.492716 (0.357884) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.327126 / 0.018006 (0.309119) | 0.469186 / 0.000490 (0.468696) | 0.050600 / 0.000200 (0.050400) | 0.000439 / 0.000054 (0.000385) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036710 / 0.037411 (-0.000701) | 0.108669 / 0.014526 (0.094143) | 0.119808 / 0.176557 (-0.056748) | 0.181501 / 0.737135 (-0.555634) | 0.121487 / 0.296338 (-0.174852) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.509076 / 0.215209 (0.293867) | 5.056970 / 2.077655 (2.979316) | 2.775958 / 1.504120 (1.271838) | 2.592548 / 1.541195 (1.051353) | 2.654381 
/ 1.468490 (1.185890) | 0.557407 / 4.584777 (-4.027370) | 4.418232 / 3.745712 (0.672519) | 3.698072 / 5.269862 (-1.571790) | 2.380607 / 4.565676 (-2.185069) | 0.066242 / 0.424275 (-0.358034) | 0.008350 / 0.007607 (0.000743) | 0.572354 / 0.226044 (0.346309) | 5.857637 / 2.268929 (3.588709) | 3.242512 / 55.444624 (-52.202112) | 2.891144 / 6.876477 (-3.985332) | 3.217987 / 2.142072 (1.075915) | 0.676049 / 4.805227 (-4.129178) | 0.155515 / 6.500664 (-6.345149) | 0.068616 / 0.075469 (-0.006853) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.670048 / 1.841788 (-0.171740) | 22.629573 / 8.074308 (14.555265) | 16.887676 / 10.191392 (6.696284) | 0.168571 / 0.680424 (-0.511853) | 0.023361 / 0.534201 (-0.510840) | 0.463358 / 0.579283 (-0.115925) | 0.463278 / 0.434364 (0.028914) | 0.602397 / 0.540337 (0.062060) | 0.793249 / 1.386936 (-0.593687) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#eee318573aba6574a43d457aa0347348c1f3e4aa \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006693 / 0.011353 (-0.004660) | 0.004100 / 0.011008 (-0.006908) | 0.084166 / 0.038508 (0.045658) | 0.074469 / 0.023109 (0.051360) | 0.356092 / 0.275898 (0.080194) | 0.392389 / 0.323480 (0.068909) | 0.003996 / 0.007986 (-0.003990) | 0.004020 / 0.004328 (-0.000308) | 0.064997 / 0.004250 (0.060747) | 0.053897 / 0.037052 (0.016845) | 0.362942 / 0.258489 (0.104453) | 0.408694 / 0.293841 (0.114854) | 0.031656 / 0.128546 (-0.096890) | 0.008713 / 0.075646 (-0.066933) | 0.289306 / 0.419271 (-0.129966) | 0.053067 / 0.043533 (0.009534) | 0.358740 / 0.255139 (0.103601) | 0.393347 / 0.283200 (0.110147) | 0.025430 / 0.141683 (-0.116253) | 1.486114 / 1.452155 (0.033959) | 1.572698 / 1.492716 (0.079981) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215423 / 0.018006 (0.197417) | 0.467694 / 0.000490 (0.467204) | 0.003965 / 0.000200 
(0.003765) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027936 / 0.037411 (-0.009475) | 0.084235 / 0.014526 (0.069709) | 0.136275 / 0.176557 (-0.040282) | 0.151154 / 0.737135 (-0.585982) | 0.185592 / 0.296338 (-0.110747) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.393784 / 0.215209 (0.178575) | 3.927878 / 2.077655 (1.850223) | 1.961216 / 1.504120 (0.457096) | 1.802264 / 1.541195 (0.261069) | 1.971186 / 1.468490 (0.502696) | 0.487981 / 4.584777 (-4.096796) | 3.649046 / 3.745712 (-0.096666) | 3.302471 / 5.269862 (-1.967391) | 2.058075 / 4.565676 (-2.507602) | 0.057072 / 0.424275 (-0.367203) | 0.007624 / 0.007607 (0.000017) | 0.470139 / 0.226044 (0.244095) | 4.697711 / 2.268929 (2.428783) | 2.494813 / 55.444624 (-52.949811) | 2.133084 / 6.876477 (-4.743393) | 2.329740 / 2.142072 (0.187667) | 0.585857 / 4.805227 (-4.219371) | 0.134442 / 6.500664 (-6.366223) | 0.060860 / 0.075469 (-0.014609) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.248504 / 1.841788 (-0.593283) | 19.448427 / 8.074308 (11.374119) | 14.446139 / 10.191392 (4.254747) | 0.168081 / 0.680424 (-0.512342) | 0.018028 / 0.534201 (-0.516173) | 0.395061 / 0.579283 (-0.184222) | 0.418777 / 0.434364 (-0.015587) | 0.454509 / 0.540337 (-0.085828) | 0.628488 / 1.386936 (-0.758448) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006946 / 0.011353 (-0.004406) | 0.004096 / 0.011008 (-0.006912) | 0.065322 / 0.038508 (0.026813) | 0.074336 / 0.023109 (0.051227) | 0.405327 / 0.275898 (0.129429) | 0.436878 / 0.323480 (0.113398) | 0.006083 / 0.007986 (-0.001902) | 0.003345 / 0.004328 (-0.000984) | 0.065725 / 0.004250 (0.061474) | 0.056398 / 0.037052 (0.019345) | 0.406906 / 0.258489 (0.148417) | 0.443330 / 0.293841 (0.149489) | 0.033036 / 0.128546 (-0.095510) | 0.008503 / 0.075646 (-0.067144) | 0.071865 / 0.419271 (-0.347406) | 0.048956 / 0.043533 (0.005423) | 0.404579 / 0.255139 (0.149440) | 0.424904 / 0.283200 (0.141704) | 0.021786 / 0.141683 (-0.119897) | 1.491868 / 1.452155 (0.039713) | 1.565252 / 1.492716 (0.072536) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231363 / 0.018006 (0.213357) | 0.454962 / 0.000490 (0.454472) | 0.004680 / 0.000200 (0.004480) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032569 / 0.037411 (-0.004843) | 0.094928 / 0.014526 (0.080402) | 0.108096 / 0.176557 (-0.068461) | 0.158727 / 0.737135 (-0.578409) | 0.106951 / 0.296338 (-0.189387) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431469 / 0.215209 (0.216260) | 4.283929 / 2.077655 (2.206274) | 2.283891 / 1.504120 (0.779771) | 2.118172 / 1.541195 (0.576977) | 2.192628 / 1.468490 (0.724138) | 0.492026 / 4.584777 (-4.092751) | 3.692126 / 3.745712 (-0.053587) | 3.269827 / 5.269862 (-2.000035) | 2.028948 / 4.565676 (-2.536728) | 0.057932 / 0.424275 (-0.366344) | 0.007301 / 0.007607 (-0.000306) | 0.508411 / 0.226044 (0.282367) | 5.072803 / 2.268929 (2.803875) | 2.756532 / 55.444624 (-52.688092) | 2.432192 / 6.876477 (-4.444285) | 2.654864 / 2.142072 (0.512791) | 0.589458 / 4.805227 (-4.215769) | 0.133924 / 6.500664 (-6.366740) | 0.060764 / 0.075469 (-0.014705) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.350737 / 1.841788 (-0.491051) | 20.265217 / 8.074308 (12.190909) | 14.969039 / 10.191392 (4.777647) | 0.164226 / 0.680424 (-0.516198) | 0.020090 / 0.534201 (-0.514111) | 0.397010 / 0.579283 (-0.182273) | 0.412927 / 0.434364 (-0.021437) | 0.473931 / 0.540337 (-0.066406) | 0.653462 / 1.386936 (-0.733474) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#00cb5cc57337cdff338d7a54396bf25c5c5abd67 \"CML watermark\")\n" ]
Preserve split order in DataFilesDict
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6198/reactions" }
PR_kwDODunzps5ZNyBq
{ "diff_url": "https://github.com/huggingface/datasets/pull/6198.diff", "html_url": "https://github.com/huggingface/datasets/pull/6198", "merged_at": "2023-08-31T13:48:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/6198.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6198" }
2023-08-31T09:00:26Z
https://api.github.com/repos/huggingface/datasets/issues/6198/comments
After investigation, I have found that this copy forces the splits to be sorted alphabetically: https://github.com/huggingface/datasets/blob/029227a116c14720afca71b9b22e78eb2a1c09a6/src/datasets/builder.py#L556

This PR removes the alphabetical sort of `DataFilesDict` keys.

- Note that for a `dict`, the order of keys is relevant when hashing:
```python
hash1 = Hasher.hash({'train': 'train.csv', 'test': 'test.csv'})
hash2 = Hasher.hash({'test': 'test.csv', 'train': 'train.csv'})
assert hash1 != hash2
```
- The `DataFilesDict` is a subclass of `dict`, thus the order should be relevant as well:
```python
hash1 = Hasher.hash(DataFilesDict({'train': 'train.csv', 'test': 'test.csv'}))
hash2 = Hasher.hash(DataFilesDict({'test': 'test.csv', 'train': 'train.csv'}))
assert hash1 != hash2
```

Fix #6196.
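A small usage sketch of what the fix implies for end users, assuming placeholder local CSV files:

```python
from datasets import load_dataset

# Placeholder local CSV paths; the point is the key order of the dict.
data_files = {"train": "train.csv", "test": "test.csv"}

ds = load_dataset("csv", data_files=data_files)
# With the alphabetical sort removed, the splits should come back in
# insertion order ('train' before 'test') rather than alphabetical order.
print(list(ds.keys()))  # expected: ['train', 'test']
```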
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6198/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6198/timeline
closed
false
6,198
null
2023-08-31T13:48:42Z
null
true
1,875,078,155
https://api.github.com/repos/huggingface/datasets/issues/6197
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6197/events
[]
null
2023-09-01T10:35:10Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6197
NONE
completed
null
null
[ "Thanks for reporting. We are investigating it.", "This issue is caused by latest `pandas` release 2.1.0 (released yesterday Aug 30).\r\n\r\nSee: https://github.com/huggingface/datasets/actions/runs/6035484010/job/16375932085?pr=6198\r\n", "People using previous releases of `datasets` should pin `pandas` in their local environment:\r\n```\r\npython -m pip install 'pandas<2.1.0'\r\n```" ]
ValueError: 'index=True' is only valid when 'orient' is 'split', 'table', 'index', or 'columns'
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6197/reactions" }
I_kwDODunzps5vw2wL
null
2023-08-31T08:51:50Z
https://api.github.com/repos/huggingface/datasets/issues/6197/comments
### Describe the bug Saving a dataset with `.to_json()` fails with a `ValueError` since the latest `pandas` [release](https://pandas.pydata.org/docs/dev/whatsnew/v2.1.0.html) (`2.1.0`). In their latest release we have: > Improved error handling when using [DataFrame.to_json()](https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.to_json.html#pandas.DataFrame.to_json) with incompatible index and orient arguments ([GH 52143](https://github.com/pandas-dev/pandas/issues/52143)) i.e. an error is now raised for invalid combinations of `index` and `orient`. This means that, unfortunately, the custom logic at this line can sometimes lead to contradictions: https://github.com/huggingface/datasets/blob/029227a116c14720afca71b9b22e78eb2a1c09a6/src/datasets/io/json.py#L96 e.g. the default case `orient=records` leads to `index=True`, which now raises a `ValueError`. ### Steps to reproduce the bug ```python import datasets if __name__ == '__main__': dataset = datasets.Dataset.from_dict({"A": [1, 2, 3], "B": [4, 5, 6]}) dataset.to_json("dataset.json") ``` ```shell >>> ValueError: 'index=True' is only valid when 'orient' is 'split', 'table', 'index', or 'columns'. ``` ### Expected behavior The dataset is successfully saved as `.json`. ### Environment info `python >= 3.9` `pandas >= 2.1.0`
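A hedged workaround sketch until a fix is released: only forward `index` to pandas when the `orient` actually supports it. The allow-list below simply mirrors the error message; the eventual `datasets` patch may differ.

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})

to_json_kwargs = {"orient": "records", "index": True}  # the combination that now raises
if to_json_kwargs["orient"] not in ("split", "table", "index", "columns"):
    to_json_kwargs.pop("index", None)  # drop the argument pandas 2.1.0 rejects
df.to_json("dataset.json", **to_json_kwargs)  # succeeds once the bad arg is removed
```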
{ "avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4", "events_url": "https://api.github.com/users/exs-avianello/events{/privacy}", "followers_url": "https://api.github.com/users/exs-avianello/followers", "following_url": "https://api.github.com/users/exs-avianello/following{/other_user}", "gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/exs-avianello", "id": 128361578, "login": "exs-avianello", "node_id": "U_kgDOB6akag", "organizations_url": "https://api.github.com/users/exs-avianello/orgs", "received_events_url": "https://api.github.com/users/exs-avianello/received_events", "repos_url": "https://api.github.com/users/exs-avianello/repos", "site_admin": false, "starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions", "type": "User", "url": "https://api.github.com/users/exs-avianello" }
https://api.github.com/repos/huggingface/datasets/issues/6197/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6197/timeline
closed
false
6,197
null
2023-08-31T10:24:40Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
1,875,070,972
https://api.github.com/repos/huggingface/datasets/issues/6196
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6196/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2023-08-31T13:48:43Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6196
MEMBER
completed
null
null
[]
Split order is not preserved
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6196/reactions" }
I_kwDODunzps5vw0_8
null
2023-08-31T08:47:16Z
https://api.github.com/repos/huggingface/datasets/issues/6196/comments
I have noticed that in some cases the split order is not preserved. For example, consider a no-script dataset with configs: ```yaml configs: - config_name: default data_files: - split: train path: train.csv - split: test path: test.csv ``` - Note the defined split order is [train, test] Once the dataset is loaded, the split order is not preserved: ```python In [16]: ds Out[16]: DatasetDict({ test: Dataset({ features: ['text', 'label'], num_rows: 1 }) train: Dataset({ features: ['text', 'label'], num_rows: 2 }) }) ``` - Note the obtained split order is [test, train]
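Until this is fixed, a small workaround sketch: rebuild the `DatasetDict` in the declared order. `desired_order` is an assumption standing in for the split order declared in the YAML config above.

```python
from datasets import DatasetDict

def reorder_splits(ds: DatasetDict, desired_order) -> DatasetDict:
    # dicts preserve insertion order, so rebuilding the mapping restores the order
    return DatasetDict({name: ds[name] for name in desired_order if name in ds})

# ds = load_dataset("user/my-dataset")        # comes back as {test, train}
# ds = reorder_splits(ds, ["train", "test"])  # back to the declared [train, test]
```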
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6196/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6196/timeline
closed
false
6,196
null
2023-08-31T13:48:43Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
1,874,195,585
https://api.github.com/repos/huggingface/datasets/issues/6195
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6195/events
[]
null
2023-11-03T10:14:21Z
[]
https://github.com/huggingface/datasets/issues/6195
NONE
completed
null
null
[ "realized that need to pass the path at `cache_file_name` like\r\n\r\n```python\r\ntokenized_datasets = raw_datasets[\"train\"].map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=data_args.preprocessing_num_workers,\r\n remove_columns=[text_column_name],\r\n load_from_cache_file=True,\r\n desc=\"Running tokenizer on dataset line_by_line\",\r\n # cache_file_names= {\"train\": \"cache-1982fea76aa54a13.arrow\"}\r\n cache_file_name=\"/project/huggingface_cache/datasets/..../cache-1982fea76aa54a13.arrow\",\r\n new_fingerprint=\"1982fea76aa54a13\"\r\n )\r\n```", "Thank you so much! I went through a lot of issues before finding similar experiences here. I have to say that the [docs](https://huggingface.co/docs/datasets/v2.11.0/en/package_reference/main_classes#datasets.Dataset.map) of `.map()` is really misleading, probably making people think that just assigning the file name to cache_file_name is enough." ]
Force to reuse cache at given path
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6195/reactions" }
I_kwDODunzps5vtfSB
null
2023-08-30T18:44:54Z
https://api.github.com/repos/huggingface/datasets/issues/6195/comments
### Describe the bug I have run the official MLM example like: ```bash python run_mlm.py \ --model_name_or_path roberta-base \ --dataset_name togethercomputer/RedPajama-Data-1T \ --dataset_config_name arxiv \ --per_device_train_batch_size 10 \ --preprocessing_num_workers 20 \ --validation_split_percentage 0 \ --cache_dir /project/huggingface_cache/datasets \ --line_by_line \ --do_train \ --pad_to_max_length \ --output_dir /project/huggingface_cache/test-mlm ``` It runs successfully, and my cache folder contains `cache-1982fea76aa54a13_00001_of_00020.arrow` ... `cache-1982fea76aa54a13_00020_of_00020.arrow` as the tokenization cache of the `map` method. The cache works fine every time I run the command above. However, when I switched to a Jupyter notebook (since I do not want to reload the dataset every time I change parameters unrelated to data loading), it does not recognize the cache files and starts to re-run the entire tokenization process. I changed my code to ```python tokenized_datasets = raw_datasets["train"].map( tokenize_function, batched=True, num_proc=data_args.preprocessing_num_workers, remove_columns=[text_column_name], load_from_cache_file=True, desc="Running tokenizer on dataset line_by_line", # cache_file_names= {"train": "cache-1982fea76aa54a13.arrow"} cache_file_name="cache-1982fea76aa54a13.arrow", new_fingerprint="1982fea76aa54a13" ) ``` but it still does not recognize the previously cached files and tries to re-run the tokenization process. ### Steps to reproduce the bug Use the dataset `map` function from a Jupyter notebook. ### Expected behavior The `map` function accepts the given `cache_file_name` and `new_fingerprint` and then loads the previously cached files. ### Environment info - `datasets` version: 2.14.4.dev0 - Platform: Linux-3.10.0-1160.59.1.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.8 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
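A hedged, self-contained sketch of the resolution posted in the comments: `cache_file_name` must be a full path to the Arrow file, not a bare file name, for `map` to find the cache again. The toy dataset and the `/tmp` path are stand-ins, not the reporter's setup.

```python
import os
from datasets import Dataset

ds = Dataset.from_dict({"text": ["foo", "bar", "baz"]})

cache_path = "/tmp/hf_cache/cache-demo.arrow"  # full path, not just "cache-demo.arrow"
os.makedirs(os.path.dirname(cache_path), exist_ok=True)

tokenized = ds.map(
    lambda batch: {"n_chars": [len(t) for t in batch["text"]]},
    batched=True,
    load_from_cache_file=True,  # on later runs, reuse the file at cache_path
    cache_file_name=cache_path,
)
```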
{ "avatar_url": "https://avatars.githubusercontent.com/u/43507393?v=4", "events_url": "https://api.github.com/users/Luosuu/events{/privacy}", "followers_url": "https://api.github.com/users/Luosuu/followers", "following_url": "https://api.github.com/users/Luosuu/following{/other_user}", "gists_url": "https://api.github.com/users/Luosuu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Luosuu", "id": 43507393, "login": "Luosuu", "node_id": "MDQ6VXNlcjQzNTA3Mzkz", "organizations_url": "https://api.github.com/users/Luosuu/orgs", "received_events_url": "https://api.github.com/users/Luosuu/received_events", "repos_url": "https://api.github.com/users/Luosuu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Luosuu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Luosuu/subscriptions", "type": "User", "url": "https://api.github.com/users/Luosuu" }
https://api.github.com/repos/huggingface/datasets/issues/6195/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6195/timeline
closed
false
6,195
null
2023-08-30T19:00:45Z
null
false
1,872,598,223
https://api.github.com/repos/huggingface/datasets/issues/6194
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6194/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-02-29T03:46:54Z
[]
https://github.com/huggingface/datasets/issues/6194
NONE
null
null
null
[ "The `fingerprint` parameter serves a slightly different purpose - we use it to inject a new fingerprint after transforming a `Dataset` (computed from the previous fingerprint + transform + transform args), e.g., to be able to compute the cache file for a transform. There is no concept of `fingerprint` before a `Dataset` is fully initialized, but we still need to hash the args (e.g., generator func) of the \"dataset creation methods\" (`from_generator`, `from_csv`, etc.) to compute the cache directory (to store the initial version and transformed dataset versions)\r\n\r\nI agree it should be easier to bypass the hashing mechanism in this instance, too. However, we should probably first address https://github.com/huggingface/datasets/issues/5080 before solving this (e.g., maybe exposing `hash` in `load_dataset`/`load_dataset_builder`.", "Adding +1 here:\r\n\r\nIf the generator needs to access some external resources or state, then it's not always straightforward to make it pickle-able. So I'd like to be able to override how the default cache key derivation needs to pickle the generator (and of course, I'd accept responsibility for that part of cache consistency).\r\n\r\nAppears to be a recurrent roadbump: #6118 #5963 #5819 #5750 #4983 ", "Silly hack incoming:\r\n\r\n```python\r\nimport uuid\r\n\r\nclass _DatasetGeneratorPickleHack:\r\n def __init__(self, generator, generator_id=None):\r\n self.generator = generator\r\n self.generator_id = (\r\n generator_id if generator_id is not None else str(uuid.uuid4())\r\n )\r\n\r\n def __call__(self, *args, **kwargs):\r\n return self.generator(*kwargs, **kwargs)\r\n\r\n def __reduce__(self):\r\n return (_DatasetGeneratorPickleHack_raise, (self.generator_id,))\r\n\r\n\r\ndef _DatasetGeneratorPickleHack_raise(*args, **kwargs):\r\n raise AssertionError(\"cannot actually unpickle _DatasetGeneratorPickleHack!\")\r\n```\r\n\r\nNow `Dataset.from_generator(_DatasetGeneratorPickleHack(gen))` works even if `gen` is unpicklable, because Dataset just pickles the shim object that avoids actually traversing `gen`. Then, one can work out how to set `generator_id` meaningfully to allow cache reuse.", "I'd like some way to do this too. I find that sometimes the hash doesn't cover enough, and that the dataset is not regenerated even when underlying data has changed, and by supplying a custom fingerprint I could do a better job of controlling when my dataset is regenerated.", "This is what I did and it works: \r\n\r\nhttps://github.com/stevemadere/s3-datasets/blob/e475a566a16d3051656a66f8ff4d3baa4c55a66c/src/tokengenerators/text_ds_2_tokens_generator.py#L200\r\n" ]
Support custom fingerprinting with `Dataset.from_generator`
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6194/reactions" }
I_kwDODunzps5vnZTP
null
2023-08-29T22:43:13Z
https://api.github.com/repos/huggingface/datasets/issues/6194/comments
### Feature request When using `Dataset.from_generator`, the generator is hashed when building the fingerprint. Similar to `.map`, it would be interesting to let the user bypass this hashing by accepting a `fingerprint` argument to `.from_generator`. ### Motivation Using the `.from_generator` constructor with a non-picklable generator fails. By accepting a `fingerprint` argument to `.from_generator`, the user would have the opportunity to manually fingerprint the dataset and thus bypass the crash. ### Your contribution If validated, I can try to submit a PR for this.
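For illustration only, a sketch of the proposed call site — the `fingerprint` argument is hypothetical and not part of the `datasets` API at the time of writing:

```python
from datasets import Dataset

def gen():
    # imagine this closes over a non-picklable resource (socket, DB handle, ...)
    for i in range(3):
        yield {"value": i}

# today: `gen` is pickled/hashed to build the fingerprint, which is what fails
ds = Dataset.from_generator(gen)

# proposed (hypothetical, not implemented): bypass hashing with a user fingerprint
# ds = Dataset.from_generator(gen, fingerprint="my-dataset-v1")
```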
{ "avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4", "events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}", "followers_url": "https://api.github.com/users/bilelomrani1/followers", "following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}", "gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bilelomrani1", "id": 16692099, "login": "bilelomrani1", "node_id": "MDQ6VXNlcjE2NjkyMDk5", "organizations_url": "https://api.github.com/users/bilelomrani1/orgs", "received_events_url": "https://api.github.com/users/bilelomrani1/received_events", "repos_url": "https://api.github.com/users/bilelomrani1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions", "type": "User", "url": "https://api.github.com/users/bilelomrani1" }
https://api.github.com/repos/huggingface/datasets/issues/6194/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6194/timeline
open
false
6,194
null
null
null
false
1,872,285,153
https://api.github.com/repos/huggingface/datasets/issues/6193
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6193/events
[]
null
2023-08-31T19:47:29Z
[]
https://github.com/huggingface/datasets/issues/6193
NONE
null
null
null
[ "Before dynamically loading `.py` scripts with `importlib.import_module`, we also parse their contents to check imports, which is tricky to implement for binary `.pyc` files (requires parsing bytecode), so I don't think this is something we want to support (unless more users request it ofc) as this use case is a bit too specific.\r\n\r\n@lhoestq What's your opinion on this?", "> Before dynamically loading .py scripts with importlib.import_module, we also parse their contents to check imports, which is tricky to implement for binary .pyc files (requires parsing bytecode), so I don't think this is something we want to support (unless more users request it ofc) as this use case is a bit too specific.\r\n\r\nYes indeed. Though you can use a .py that imports a package that contains your .pyc code and that you previously installed", "Hi @lhoestq ,\r\nCould you share some example code related to the approach that you are suggesting? " ]
Dataset loading script method does not work with .pyc file
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6193/reactions" }
I_kwDODunzps5vmM3h
null
2023-08-29T19:35:06Z
https://api.github.com/repos/huggingface/datasets/issues/6193/comments
### Describe the bug The huggingface `datasets` library specifically looks for a `.py` file when loading a dataset via the loading-script approach, and it does not work with a `.pyc` file. This becomes an issue when deploying in production, where we are restricted to using only `.pyc` files. Is there any workaround for this? ### Steps to reproduce the bug 1. Create a dataset loading script to read the custom data. 2. Compile the code to make sure that a `.pyc` file is created. 3. Delete the loading script and re-run the code. Usually, Python should make use of compiled `.pyc` files. However, in this case, the library errors out with the message that it is unable to find the dataset loading script. ### Expected behavior The code should make use of the `.pyc` file and run without any error. ### Environment info NA
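A hedged sketch of the workaround suggested in the comments: keep a tiny plain-text `.py` loading script whose only job is to import the real builder from an installed package shipped as compiled bytecode. `my_private_pkg` and `MyDataset` are assumed names, not a confirmed layout.

```python
# my_dataset/my_dataset.py -- the only plain-text file the datasets library parses
from my_private_pkg.builder import MyDataset  # real logic lives in the compiled package

__all__ = ["MyDataset"]
```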
{ "avatar_url": "https://avatars.githubusercontent.com/u/43389071?v=4", "events_url": "https://api.github.com/users/riteshkumarumassedu/events{/privacy}", "followers_url": "https://api.github.com/users/riteshkumarumassedu/followers", "following_url": "https://api.github.com/users/riteshkumarumassedu/following{/other_user}", "gists_url": "https://api.github.com/users/riteshkumarumassedu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/riteshkumarumassedu", "id": 43389071, "login": "riteshkumarumassedu", "node_id": "MDQ6VXNlcjQzMzg5MDcx", "organizations_url": "https://api.github.com/users/riteshkumarumassedu/orgs", "received_events_url": "https://api.github.com/users/riteshkumarumassedu/received_events", "repos_url": "https://api.github.com/users/riteshkumarumassedu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/riteshkumarumassedu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riteshkumarumassedu/subscriptions", "type": "User", "url": "https://api.github.com/users/riteshkumarumassedu" }
https://api.github.com/repos/huggingface/datasets/issues/6193/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6193/timeline
open
false
6,193
null
null
null
false
1,871,911,640
https://api.github.com/repos/huggingface/datasets/issues/6192
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6192/events
[]
null
2023-08-30T14:01:56Z
[]
https://github.com/huggingface/datasets/pull/6192
COLLABORATOR
null
false
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005972 / 0.011353 (-0.005381) | 0.003636 / 0.011008 (-0.007372) | 0.080254 / 0.038508 (0.041746) | 0.059564 / 0.023109 (0.036455) | 0.310615 / 0.275898 (0.034717) | 0.359307 / 0.323480 (0.035827) | 0.003408 / 0.007986 (-0.004578) | 0.002941 / 0.004328 (-0.001388) | 0.063699 / 0.004250 (0.059449) | 0.046072 / 0.037052 (0.009020) | 0.318670 / 0.258489 (0.060181) | 0.369677 / 0.293841 (0.075836) | 0.026995 / 0.128546 (-0.101552) | 0.007954 / 0.075646 (-0.067693) | 0.261667 / 0.419271 (-0.157604) | 0.045167 / 0.043533 (0.001634) | 0.314276 / 0.255139 (0.059137) | 0.348871 / 0.283200 (0.065672) | 0.021748 / 0.141683 (-0.119935) | 1.438598 / 1.452155 (-0.013557) | 1.530119 / 1.492716 (0.037403) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196894 / 0.018006 (0.178888) | 0.445757 / 0.000490 (0.445267) | 0.002842 / 0.000200 (0.002642) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024923 / 0.037411 (-0.012488) | 0.075186 / 0.014526 (0.060661) | 0.087193 / 0.176557 (-0.089364) | 0.147496 / 0.737135 (-0.589639) | 0.087083 / 0.296338 (-0.209255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423545 / 0.215209 (0.208336) | 4.187927 / 2.077655 (2.110273) | 2.008656 / 1.504120 (0.504536) | 1.791313 / 1.541195 (0.250119) | 1.849836 / 1.468490 
(0.381346) | 0.499458 / 4.584777 (-4.085318) | 2.983206 / 3.745712 (-0.762506) | 2.801005 / 5.269862 (-2.468856) | 1.886207 / 4.565676 (-2.679469) | 0.057343 / 0.424275 (-0.366932) | 0.006666 / 0.007607 (-0.000941) | 0.483948 / 0.226044 (0.257904) | 4.874818 / 2.268929 (2.605890) | 2.439393 / 55.444624 (-53.005231) | 2.049861 / 6.876477 (-4.826616) | 2.217050 / 2.142072 (0.074977) | 0.589760 / 4.805227 (-4.215467) | 0.125298 / 6.500664 (-6.375366) | 0.061123 / 0.075469 (-0.014347) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.234721 / 1.841788 (-0.607067) | 18.193756 / 8.074308 (10.119448) | 13.682835 / 10.191392 (3.491443) | 0.129345 / 0.680424 (-0.551078) | 0.016589 / 0.534201 (-0.517612) | 0.332355 / 0.579283 (-0.246928) | 0.358408 / 0.434364 (-0.075955) | 0.382044 / 0.540337 (-0.158293) | 0.535403 / 1.386936 (-0.851533) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006193 / 0.011353 (-0.005160) | 0.003674 / 0.011008 (-0.007335) | 0.062481 / 0.038508 (0.023973) | 0.062096 / 0.023109 (0.038987) | 0.449592 / 0.275898 (0.173694) | 0.479245 / 0.323480 (0.155765) | 0.004793 / 0.007986 (-0.003193) | 0.002896 / 0.004328 (-0.001433) | 0.062887 / 0.004250 (0.058636) | 0.050049 / 0.037052 (0.012997) | 0.454940 / 0.258489 (0.196451) | 0.486115 / 0.293841 (0.192274) | 0.028585 / 0.128546 (-0.099961) | 0.007954 / 0.075646 (-0.067692) | 0.067744 / 0.419271 (-0.351528) | 0.040473 / 0.043533 (-0.003060) | 0.448408 / 0.255139 (0.193269) | 0.472423 / 0.283200 (0.189223) | 0.020549 / 0.141683 (-0.121133) | 1.563618 / 1.452155 (0.111463) | 1.520149 / 1.492716 (0.027432) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226604 / 0.018006 (0.208598) | 0.417615 / 0.000490 (0.417126) | 0.003386 / 0.000200 (0.003186) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027264 / 0.037411 (-0.010147) | 0.081709 / 0.014526 (0.067184) | 0.091793 / 0.176557 (-0.084763) | 0.145559 / 0.737135 (-0.591576) | 0.091869 / 0.296338 (-0.204469) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.462917 / 0.215209 (0.247708) | 4.629512 / 2.077655 (2.551857) | 2.555715 / 1.504120 (1.051595) | 2.388064 / 1.541195 (0.846870) | 2.458320 / 1.468490 (0.989830) | 0.511615 / 4.584777 (-4.073162) | 3.124566 / 3.745712 (-0.621146) | 2.839190 / 5.269862 (-2.430672) | 1.894551 / 4.565676 (-2.671126) | 0.059565 / 0.424275 (-0.364710) | 0.006481 / 0.007607 (-0.001126) | 0.532023 / 0.226044 (0.305979) | 5.361507 / 2.268929 (3.092579) | 2.982594 / 55.444624 (-52.462031) | 2.644870 / 6.876477 (-4.231606) | 2.831476 / 2.142072 (0.689404) | 0.607381 / 4.805227 (-4.197846) | 0.126067 / 6.500664 (-6.374597) | 0.062130 / 0.075469 (-0.013339) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.350442 / 1.841788 (-0.491345) | 18.829553 / 8.074308 (10.755245) | 14.796701 / 10.191392 (4.605309) | 0.145393 / 0.680424 (-0.535031) | 0.018218 / 0.534201 (-0.515983) | 0.335500 / 0.579283 (-0.243783) | 0.359190 / 0.434364 (-0.075174) | 0.388377 / 0.540337 (-0.151960) | 0.534994 / 1.386936 (-0.851942) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ff7629eb72f499d841d64aa03f97e0b1707d1cc7 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006741 / 0.011353 (-0.004612) | 0.004097 / 0.011008 (-0.006911) | 0.084513 / 0.038508 (0.046005) | 0.074216 / 0.023109 (0.051107) | 0.352481 / 0.275898 (0.076583) | 0.394806 / 0.323480 (0.071326) | 0.005603 / 0.007986 (-0.002383) | 0.003482 / 0.004328 (-0.000847) | 0.065165 / 0.004250 (0.060914) | 0.054065 / 0.037052 (0.017013) | 0.359399 / 0.258489 (0.100910) | 0.409776 / 0.293841 (0.115935) | 0.030997 / 0.128546 (-0.097550) | 0.008717 / 0.075646 (-0.066929) | 0.288692 / 0.419271 (-0.130579) | 0.052372 / 0.043533 (0.008840) | 0.353867 / 0.255139 (0.098728) | 0.391212 / 0.283200 (0.108012) | 0.024033 / 0.141683 (-0.117650) | 1.496552 / 1.452155 (0.044398) | 1.567267 / 1.492716 (0.074550) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.294074 / 0.018006 (0.276067) | 0.595421 / 0.000490 (0.594931) | 0.003826 / 0.000200 (0.003626) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028676 / 0.037411 (-0.008736) | 0.082064 / 0.014526 (0.067538) | 0.542399 / 0.176557 (0.365842) | 0.217188 / 0.737135 (-0.519947) | 0.099364 / 0.296338 (-0.196975) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384282 / 0.215209 (0.169073) | 3.832204 / 2.077655 (1.754550) | 1.842500 / 1.504120 (0.338380) | 1.668192 / 1.541195 (0.126997) | 1.745207 / 1.468490 (0.276717) | 0.481881 / 4.584777 (-4.102896) | 3.677819 / 3.745712 (-0.067893) | 3.329062 / 5.269862 (-1.940799) | 2.056882 / 4.565676 (-2.508795) | 0.056898 / 0.424275 (-0.367377) | 0.007624 / 0.007607 (0.000016) | 0.459712 / 0.226044 (0.233667) | 4.611100 / 2.268929 (2.342171) | 2.370244 / 55.444624 (-53.074381) | 2.032756 / 6.876477 (-4.843721) | 2.336056 / 2.142072 (0.193984) | 0.583503 / 4.805227 (-4.221725) | 0.135041 / 6.500664 (-6.365623) | 0.062245 / 0.075469 (-0.013224) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303894 / 1.841788 (-0.537894) | 20.315185 / 8.074308 (12.240876) | 14.388779 / 10.191392 (4.197387) | 0.169060 / 0.680424 (-0.511364) | 0.018609 / 0.534201 (-0.515592) | 0.395140 / 0.579283 (-0.184143) | 0.418231 / 0.434364 (-0.016133) | 0.461496 / 0.540337 (-0.078842) | 0.630298 / 
1.386936 (-0.756638) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006999 / 0.011353 (-0.004354) | 0.004197 / 0.011008 (-0.006812) | 0.064524 / 0.038508 (0.026016) | 0.078791 / 0.023109 (0.055682) | 0.397563 / 0.275898 (0.121665) | 0.423056 / 0.323480 (0.099576) | 0.005697 / 0.007986 (-0.002288) | 0.003592 / 0.004328 (-0.000736) | 0.066178 / 0.004250 (0.061928) | 0.058114 / 0.037052 (0.021062) | 0.398619 / 0.258489 (0.140130) | 0.435496 / 0.293841 (0.141655) | 0.032758 / 0.128546 (-0.095788) | 0.008677 / 0.075646 (-0.066970) | 0.071359 / 0.419271 (-0.347913) | 0.048636 / 0.043533 (0.005103) | 0.389762 / 0.255139 (0.134623) | 0.412109 / 0.283200 (0.128910) | 0.023511 / 0.141683 (-0.118172) | 1.514768 / 1.452155 (0.062613) | 1.580163 / 1.492716 (0.087446) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.370491 / 0.018006 (0.352485) | 0.529751 / 0.000490 (0.529261) | 0.016959 / 0.000200 (0.016759) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033361 / 0.037411 (-0.004051) | 0.091610 / 0.014526 (0.077084) | 0.106642 / 0.176557 (-0.069915) | 0.160906 / 0.737135 (-0.576229) | 0.106894 / 0.296338 (-0.189444) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429932 / 0.215209 (0.214723) | 4.276459 / 2.077655 (2.198804) | 2.268518 / 1.504120 (0.764398) | 2.092512 / 1.541195 (0.551317) | 2.182218 / 1.468490 (0.713728) | 
0.494464 / 4.584777 (-4.090313) | 3.750731 / 3.745712 (0.005019) | 3.352370 / 5.269862 (-1.917492) | 2.105630 / 4.565676 (-2.460046) | 0.058465 / 0.424275 (-0.365810) | 0.007449 / 0.007607 (-0.000158) | 0.506896 / 0.226044 (0.280851) | 5.070201 / 2.268929 (2.801272) | 2.758128 / 55.444624 (-52.686496) | 2.408378 / 6.876477 (-4.468099) | 2.690633 / 2.142072 (0.548561) | 0.595662 / 4.805227 (-4.209565) | 0.134355 / 6.500664 (-6.366309) | 0.060113 / 0.075469 (-0.015356) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.380413 / 1.841788 (-0.461375) | 20.691210 / 8.074308 (12.616901) | 15.682282 / 10.191392 (5.490890) | 0.165887 / 0.680424 (-0.514536) | 0.020541 / 0.534201 (-0.513660) | 0.397846 / 0.579283 (-0.181437) | 0.425374 / 0.434364 (-0.008990) | 0.476261 / 0.540337 (-0.064076) | 0.648617 / 1.386936 (-0.738319) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#88797b8827334674d7f78c39171c00f0a28ceed6 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008787 / 0.011353 (-0.002566) | 0.007569 / 0.011008 (-0.003439) | 0.103918 / 0.038508 (0.065410) | 0.083347 / 0.023109 (0.060238) | 0.441838 / 0.275898 (0.165940) | 0.420202 / 0.323480 (0.096722) | 0.007295 / 0.007986 (-0.000690) | 0.005366 / 0.004328 (0.001037) | 0.082659 / 0.004250 (0.078409) | 0.059711 / 0.037052 (0.022658) | 0.401821 / 0.258489 (0.143332) | 0.432906 / 0.293841 (0.139065) | 0.048662 / 0.128546 (-0.079885) | 0.014091 / 0.075646 (-0.061555) | 0.352583 / 0.419271 (-0.066689) | 0.064739 / 0.043533 (0.021206) | 0.410890 / 0.255139 (0.155751) | 0.443450 / 0.283200 (0.160251) | 0.035817 / 0.141683 (-0.105866) | 1.754687 / 1.452155 (0.302532) | 1.887338 / 1.492716 (0.394622) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209440 / 0.018006 (0.191434) | 0.519641 / 0.000490 (0.519152) | 0.005726 / 0.000200 (0.005526) | 0.000107 
/ 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031027 / 0.037411 (-0.006384) | 0.097503 / 0.014526 (0.082977) | 0.106985 / 0.176557 (-0.069572) | 0.178235 / 0.737135 (-0.558900) | 0.108110 / 0.296338 (-0.188228) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.594325 / 0.215209 (0.379116) | 6.159414 / 2.077655 (4.081759) | 2.664892 / 1.504120 (1.160772) | 2.363355 / 1.541195 (0.822160) | 2.410754 / 1.468490 (0.942264) | 0.842557 / 4.584777 (-3.742220) | 5.112059 / 3.745712 (1.366347) | 4.633152 / 5.269862 (-0.636709) | 2.965891 / 4.565676 (-1.599785) | 0.097922 / 0.424275 (-0.326353) | 0.008602 / 0.007607 (0.000995) | 0.773029 / 0.226044 (0.546985) | 7.462314 / 2.268929 (5.193386) | 3.584776 / 55.444624 (-51.859848) | 2.752375 / 6.876477 (-4.124102) | 2.976345 / 2.142072 (0.834272) | 1.049423 / 4.805227 (-3.755804) | 0.212001 / 6.500664 (-6.288663) | 0.074095 / 0.075469 (-0.001374) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.577905 / 1.841788 (-0.263883) | 23.280931 / 8.074308 (15.206623) | 21.017946 / 10.191392 (10.826554) | 0.228746 / 0.680424 (-0.451678) | 0.027877 / 0.534201 (-0.506324) | 0.469173 / 0.579283 (-0.110110) | 0.567614 / 0.434364 (0.133250) | 0.545041 / 0.540337 (0.004704) | 0.754743 / 1.386936 (-0.632194) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008958 / 0.011353 (-0.002395) | 0.005077 / 0.011008 (-0.005931) | 0.083990 / 0.038508 (0.045482) | 0.078586 / 0.023109 (0.055476) | 0.482164 / 0.275898 (0.206266) | 0.525575 / 0.323480 (0.202095) | 0.006031 / 0.007986 (-0.001955) | 0.003922 / 0.004328 (-0.000407) | 0.084547 / 0.004250 (0.080296) | 0.064539 / 0.037052 (0.027487) | 0.501256 / 0.258489 (0.242767) | 0.531985 / 0.293841 (0.238144) | 0.050438 / 0.128546 (-0.078109) | 0.014004 / 0.075646 (-0.061642) | 0.091269 / 0.419271 (-0.328003) | 0.060825 / 0.043533 (0.017292) | 0.492573 / 0.255139 (0.237434) | 0.517060 / 0.283200 (0.233861) | 0.033576 / 0.141683 (-0.108107) | 1.775719 / 1.452155 (0.323564) | 1.866865 / 1.492716 (0.374149) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225026 / 0.018006 (0.207020) | 0.510715 / 0.000490 (0.510225) | 0.005791 / 0.000200 (0.005591) | 0.000116 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032795 / 0.037411 (-0.004616) | 0.109206 / 0.014526 (0.094680) | 0.121441 / 0.176557 (-0.055115) | 0.179735 / 0.737135 (-0.557401) | 0.115825 / 0.296338 (-0.180514) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.633259 / 0.215209 (0.418050) | 6.298084 / 2.077655 (4.220430) | 2.892604 / 1.504120 (1.388484) | 2.570858 / 1.541195 (1.029663) | 2.611441 / 1.468490 (1.142951) | 0.897801 / 4.584777 (-3.686976) | 5.185863 / 3.745712 (1.440151) | 4.656897 / 5.269862 (-0.612965) | 3.078575 / 4.565676 (-1.487101) | 0.100563 / 0.424275 (-0.323712) | 0.008368 / 0.007607 (0.000761) | 0.749152 / 0.226044 (0.523108) | 7.687484 / 2.268929 (5.418556) | 3.689238 / 55.444624 (-51.755387) | 2.896779 / 6.876477 (-3.979698) | 3.158688 / 2.142072 (1.016615) | 1.083490 / 4.805227 (-3.721737) | 0.216994 / 6.500664 (-6.283670) | 0.074053 / 0.075469 (-0.001416) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.732812 / 1.841788 (-0.108976) | 23.952127 / 8.074308 (15.877819) | 22.078140 / 10.191392 (11.886748) | 0.229491 / 0.680424 (-0.450933) | 0.032070 / 0.534201 (-0.502131) | 0.503344 / 0.579283 (-0.075939) | 0.588489 / 0.434364 (0.154125) | 0.550199 / 0.540337 (0.009861) | 0.778203 / 1.386936 (-0.608733) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7e95a508b8d1747b5331bdbbd3e1021e97602c49 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007569 / 0.011353 (-0.003784) | 0.004447 / 0.011008 (-0.006561) | 0.098573 / 0.038508 (0.060064) | 0.081743 / 0.023109 (0.058634) | 0.379912 / 0.275898 (0.104013) | 0.411203 / 0.323480 (0.087723) | 0.004492 / 0.007986 (-0.003494) | 0.005627 / 0.004328 (0.001298) | 0.075974 / 0.004250 (0.071724) | 0.062512 / 0.037052 (0.025459) | 0.386971 / 0.258489 (0.128482) | 0.433299 / 0.293841 (0.139458) | 0.035935 / 0.128546 (-0.092611) | 0.009845 / 0.075646 (-0.065801) | 0.342940 / 0.419271 (-0.076331) | 0.061343 / 0.043533 (0.017810) | 0.381984 / 0.255139 (0.126845) | 0.417921 / 0.283200 (0.134721) | 0.028469 / 0.141683 (-0.113214) | 1.758472 / 1.452155 (0.306317) | 1.847768 / 1.492716 (0.355051) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234297 / 0.018006 (0.216291) | 0.520020 / 0.000490 (0.519531) | 0.007375 / 0.000200 (0.007175) | 0.000767 / 0.000054 (0.000713) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032738 / 0.037411 (-0.004673) | 0.097656 / 0.014526 (0.083130) | 0.112476 / 0.176557 (-0.064080) | 0.179222 / 0.737135 (-0.557913) | 0.113638 / 0.296338 (-0.182700) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.453677 / 0.215209 
(0.238467) | 4.528143 / 2.077655 (2.450489) | 2.243874 / 1.504120 (0.739754) | 2.051546 / 1.541195 (0.510351) | 2.196050 / 1.468490 (0.727560) | 0.567345 / 4.584777 (-4.017432) | 4.133591 / 3.745712 (0.387879) | 3.855286 / 5.269862 (-1.414576) | 2.393496 / 4.565676 (-2.172180) | 0.066567 / 0.424275 (-0.357708) | 0.009038 / 0.007607 (0.001431) | 0.549166 / 0.226044 (0.323122) | 5.472767 / 2.268929 (3.203839) | 2.788012 / 55.444624 (-52.656612) | 2.426132 / 6.876477 (-4.450345) | 2.684856 / 2.142072 (0.542784) | 0.680198 / 4.805227 (-4.125029) | 0.157782 / 6.500664 (-6.342882) | 0.073000 / 0.075469 (-0.002469) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.622435 / 1.841788 (-0.219352) | 22.965715 / 8.074308 (14.891407) | 16.626903 / 10.191392 (6.435511) | 0.197156 / 0.680424 (-0.483268) | 0.025599 / 0.534201 (-0.508602) | 0.495550 / 0.579283 (-0.083733) | 0.466575 / 0.434364 (0.032211) | 0.565862 / 0.540337 (0.025525) | 0.793835 / 1.386936 (-0.593102) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007721 / 0.011353 (-0.003632) | 0.004652 / 0.011008 (-0.006356) | 0.076636 / 0.038508 (0.038127) | 0.082183 / 0.023109 (0.059074) | 0.474665 / 0.275898 (0.198767) | 0.511593 / 0.323480 (0.188113) | 0.006240 / 0.007986 (-0.001746) | 0.003750 / 0.004328 (-0.000578) | 0.076939 / 0.004250 (0.072689) | 0.063333 / 0.037052 (0.026281) | 0.476469 / 0.258489 (0.217980) | 0.512514 / 0.293841 (0.218674) | 0.037802 / 0.128546 (-0.090744) | 0.009975 / 0.075646 (-0.065671) | 0.084190 / 0.419271 (-0.335081) | 0.056705 / 0.043533 (0.013172) | 0.475429 / 0.255139 (0.220290) | 0.496414 / 0.283200 (0.213215) | 0.026039 / 0.141683 (-0.115644) | 1.796059 / 1.452155 (0.343905) | 1.867461 / 1.492716 (0.374745) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.285219 / 0.018006 (0.267213) | 0.506311 / 0.000490 (0.505821) | 0.018545 / 0.000200 (0.018345) | 0.000142 / 0.000054 
(0.000088) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037832 / 0.037411 (0.000420) | 0.110437 / 0.014526 (0.095911) | 0.122953 / 0.176557 (-0.053604) | 0.187049 / 0.737135 (-0.550087) | 0.123539 / 0.296338 (-0.172800) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.508120 / 0.215209 (0.292911) | 5.082836 / 2.077655 (3.005182) | 2.800411 / 1.504120 (1.296291) | 2.579457 / 1.541195 (1.038262) | 2.645945 / 1.468490 (1.177455) | 0.578574 / 4.584777 (-4.006203) | 4.163401 / 3.745712 (0.417689) | 3.858575 / 5.269862 (-1.411286) | 2.389892 / 4.565676 (-2.175785) | 0.068639 / 0.424275 (-0.355636) | 0.008779 / 0.007607 (0.001172) | 0.598925 / 0.226044 (0.372880) | 5.987147 / 2.268929 (3.718219) | 3.361791 / 55.444624 (-52.082833) | 2.910425 / 6.876477 (-3.966051) | 3.156849 / 2.142072 (1.014776) | 0.690945 / 4.805227 (-4.114283) | 0.157441 / 6.500664 (-6.343223) | 0.071596 / 0.075469 (-0.003873) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.672763 / 1.841788 (-0.169025) | 23.599525 / 8.074308 (15.525217) | 17.520087 / 10.191392 (7.328695) | 0.169174 / 0.680424 (-0.511250) | 0.023470 / 0.534201 (-0.510731) | 0.469234 / 0.579283 (-0.110050) | 0.470020 / 0.434364 (0.035656) | 0.579949 / 0.540337 (0.039611) | 0.771353 / 1.386936 (-0.615583) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#029227a116c14720afca71b9b22e78eb2a1c09a6 \"CML watermark\")\n" ]
Set minimal fsspec version requirement to 2023.1.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6192/reactions" }
PR_kwDODunzps5ZDGnI
{ "diff_url": "https://github.com/huggingface/datasets/pull/6192.diff", "html_url": "https://github.com/huggingface/datasets/pull/6192", "merged_at": "2023-08-30T13:51:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/6192.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6192" }
2023-08-29T15:23:41Z
https://api.github.com/repos/huggingface/datasets/issues/6192/comments
Fix https://github.com/huggingface/datasets/issues/6141 Colab installs 2023.6.0, so we should be good 🙂
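For readers unfamiliar with how such a version floor is enforced, here is a minimal sketch of the kind of `setup.py` pin this PR describes; the extras tag and the comment are illustrative assumptions, not the PR's actual diff:

```python
# Hypothetical setup.py excerpt: raising the minimal fsspec requirement.
# The extras tag ("http") and the inline comment are assumptions for illustration.
from setuptools import setup

setup(
    name="datasets",
    install_requires=[
        "fsspec[http]>=2023.1.0",  # floor raised so code relying on newer fsspec APIs works
        # ... other requirements elided ...
    ],
)
```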
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/6192/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6192/timeline
closed
false
6,192
null
2023-08-30T13:51:32Z
null
true
1,871,634,840
https://api.github.com/repos/huggingface/datasets/issues/6191
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6191/events
[]
null
2023-09-04T06:38:17Z
[]
https://github.com/huggingface/datasets/pull/6191
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I have found the same issue. Good fix. Should be merged as soon as possible.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006258 / 0.011353 (-0.005095) | 0.003717 / 0.011008 (-0.007291) | 0.079444 / 0.038508 (0.040936) | 0.066318 / 0.023109 (0.043209) | 0.310129 / 0.275898 (0.034231) | 0.346948 / 0.323480 (0.023469) | 0.003505 / 0.007986 (-0.004480) | 0.002855 / 0.004328 (-0.001474) | 0.062447 / 0.004250 (0.058197) | 0.050191 / 0.037052 (0.013139) | 0.314550 / 0.258489 (0.056061) | 0.357883 / 0.293841 (0.064042) | 0.027754 / 0.128546 (-0.100792) | 0.008068 / 0.075646 (-0.067578) | 0.262170 / 0.419271 (-0.157102) | 0.045834 / 0.043533 (0.002301) | 0.306938 / 0.255139 (0.051799) | 0.339229 / 0.283200 (0.056030) | 0.021188 / 0.141683 (-0.120495) | 1.430904 / 1.452155 (-0.021251) | 1.542038 / 1.492716 (0.049321) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201232 / 0.018006 (0.183226) | 0.432848 / 0.000490 (0.432358) | 0.002403 / 0.000200 (0.002203) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024068 / 0.037411 (-0.013344) | 0.074077 / 0.014526 (0.059551) | 0.083578 / 0.176557 (-0.092978) | 0.144497 / 0.737135 (-0.592638) | 0.085386 / 0.296338 (-0.210952) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old 
(diff) | 0.397912 / 0.215209 (0.182702) | 3.940953 / 2.077655 (1.863299) | 1.935914 / 1.504120 (0.431794) | 1.753688 / 1.541195 (0.212493) | 1.832916 / 1.468490 (0.364426) | 0.503320 / 4.584777 (-4.081457) | 3.068693 / 3.745712 (-0.677019) | 2.867543 / 5.269862 (-2.402318) | 1.876265 / 4.565676 (-2.689412) | 0.057234 / 0.424275 (-0.367041) | 0.006753 / 0.007607 (-0.000854) | 0.468456 / 0.226044 (0.242411) | 4.681671 / 2.268929 (2.412742) | 2.445141 / 55.444624 (-52.999483) | 2.182366 / 6.876477 (-4.694110) | 2.399365 / 2.142072 (0.257293) | 0.591880 / 4.805227 (-4.213347) | 0.126176 / 6.500664 (-6.374488) | 0.061488 / 0.075469 (-0.013982) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244013 / 1.841788 (-0.597775) | 18.534720 / 8.074308 (10.460412) | 13.853267 / 10.191392 (3.661875) | 0.154167 / 0.680424 (-0.526257) | 0.016685 / 0.534201 (-0.517515) | 0.331044 / 0.579283 (-0.248239) | 0.341399 / 0.434364 (-0.092965) | 0.378878 / 0.540337 (-0.161459) | 0.535707 / 1.386936 (-0.851230) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006284 / 0.011353 (-0.005069) | 0.003707 / 0.011008 (-0.007301) | 0.062481 / 0.038508 (0.023973) | 0.063342 / 0.023109 (0.040233) | 0.445465 / 0.275898 (0.169567) | 0.482021 / 0.323480 (0.158541) | 0.004909 / 0.007986 (-0.003076) | 0.002908 / 0.004328 (-0.001420) | 0.063111 / 0.004250 (0.058860) | 0.050197 / 0.037052 (0.013145) | 0.453367 / 0.258489 (0.194878) | 0.485249 / 0.293841 (0.191408) | 0.028532 / 0.128546 (-0.100014) | 0.008157 / 0.075646 (-0.067490) | 0.068033 / 0.419271 (-0.351238) | 0.041093 / 0.043533 (-0.002440) | 0.446555 / 0.255139 (0.191416) | 0.469103 / 0.283200 (0.185904) | 0.019529 / 0.141683 (-0.122154) | 1.503135 / 1.452155 (0.050980) | 1.545819 / 1.492716 (0.053103) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257274 / 0.018006 (0.239268) | 0.418643 / 0.000490 (0.418153) | 0.011604 / 0.000200 
(0.011405) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026286 / 0.037411 (-0.011125) | 0.082459 / 0.014526 (0.067933) | 0.090007 / 0.176557 (-0.086550) | 0.144963 / 0.737135 (-0.592173) | 0.093236 / 0.296338 (-0.203102) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456331 / 0.215209 (0.241122) | 4.559469 / 2.077655 (2.481814) | 2.503452 / 1.504120 (0.999333) | 2.326579 / 1.541195 (0.785384) | 2.387551 / 1.468490 (0.919061) | 0.508683 / 4.584777 (-4.076094) | 3.071293 / 3.745712 (-0.674419) | 2.872820 / 5.269862 (-2.397041) | 1.891674 / 4.565676 (-2.674003) | 0.058951 / 0.424275 (-0.365324) | 0.006493 / 0.007607 (-0.001114) | 0.526747 / 0.226044 (0.300703) | 5.279985 / 2.268929 (3.011057) | 2.986146 / 55.444624 (-52.458478) | 2.603462 / 6.876477 (-4.273015) | 2.766776 / 2.142072 (0.624704) | 0.594685 / 4.805227 (-4.210542) | 0.125174 / 6.500664 (-6.375490) | 0.061430 / 0.075469 (-0.014039) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.350012 / 1.841788 (-0.491776) | 18.991941 / 8.074308 (10.917633) | 14.903483 / 10.191392 (4.712091) | 0.145918 / 0.680424 (-0.534506) | 0.017766 / 0.534201 (-0.516435) | 0.335350 / 0.579283 (-0.243933) | 0.357936 / 0.434364 (-0.076428) | 0.392355 / 0.540337 (-0.147983) | 0.545787 / 1.386936 (-0.841149) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#439e115d34a2d8737af719660c1b586ac32279dc \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated 
after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005927 / 0.011353 (-0.005426) | 0.003497 / 0.011008 (-0.007512) | 0.079802 / 0.038508 (0.041294) | 0.058994 / 0.023109 (0.035885) | 0.309349 / 0.275898 (0.033451) | 0.344876 / 0.323480 (0.021396) | 0.004631 / 0.007986 (-0.003354) | 0.002814 / 0.004328 (-0.001515) | 0.062228 / 0.004250 (0.057978) | 0.046001 / 0.037052 (0.008949) | 0.312196 / 0.258489 (0.053707) | 0.356283 / 0.293841 (0.062442) | 0.027264 / 0.128546 (-0.101282) | 0.007992 / 0.075646 (-0.067654) | 0.260746 / 0.419271 (-0.158526) | 0.045112 / 0.043533 (0.001579) | 0.310463 / 0.255139 (0.055324) | 0.336456 / 0.283200 (0.053256) | 0.020364 / 0.141683 (-0.121319) | 1.482159 / 1.452155 (0.030005) | 1.541586 / 1.492716 (0.048870) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185035 / 0.018006 (0.167028) | 0.432104 / 0.000490 (0.431615) | 0.002911 / 0.000200 (0.002711) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023674 / 0.037411 (-0.013737) | 0.072462 / 0.014526 (0.057936) | 0.080154 / 0.176557 (-0.096402) | 0.143022 / 0.737135 (-0.594114) | 0.082909 / 0.296338 (-0.213430) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436977 / 0.215209 (0.221768) | 4.359633 / 2.077655 (2.281979) | 2.321479 / 1.504120 (0.817359) | 2.115277 / 1.541195 (0.574082) | 2.172303 / 1.468490 (0.703813) | 0.495735 / 4.584777 (-4.089042) | 3.006773 / 3.745712 (-0.738939) | 2.866560 / 5.269862 (-2.403302) | 1.839339 / 4.565676 (-2.726337) | 0.056925 / 0.424275 (-0.367350) | 0.006777 / 0.007607 (-0.000830) | 0.507217 / 0.226044 (0.281172) | 5.064933 / 2.268929 (2.796004) | 2.737542 / 55.444624 (-52.707082) | 2.386227 / 6.876477 (-4.490250) | 2.566375 / 2.142072 (0.424302) | 0.582965 / 4.805227 (-4.222262) | 0.124715 / 6.500664 (-6.375949) | 0.061560 / 0.075469 (-0.013909) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.295684 / 1.841788 (-0.546103) | 18.178345 / 8.074308 (10.104037) | 13.795886 / 10.191392 (3.604494) | 0.131464 / 0.680424 (-0.548960) | 0.016808 / 0.534201 (-0.517393) | 0.334190 / 0.579283 (-0.245093) | 
0.347358 / 0.434364 (-0.087006) | 0.386198 / 0.540337 (-0.154139) | 0.527807 / 1.386936 (-0.859129) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006118 / 0.011353 (-0.005235) | 0.003634 / 0.011008 (-0.007374) | 0.062117 / 0.038508 (0.023609) | 0.061407 / 0.023109 (0.038298) | 0.448047 / 0.275898 (0.172149) | 0.483382 / 0.323480 (0.159902) | 0.004849 / 0.007986 (-0.003137) | 0.002859 / 0.004328 (-0.001469) | 0.061714 / 0.004250 (0.057463) | 0.047959 / 0.037052 (0.010907) | 0.452038 / 0.258489 (0.193549) | 0.485206 / 0.293841 (0.191365) | 0.028254 / 0.128546 (-0.100292) | 0.008055 / 0.075646 (-0.067591) | 0.067752 / 0.419271 (-0.351519) | 0.040355 / 0.043533 (-0.003178) | 0.446986 / 0.255139 (0.191847) | 0.472554 / 0.283200 (0.189354) | 0.019461 / 0.141683 (-0.122222) | 1.459048 / 1.452155 (0.006893) | 1.497283 / 1.492716 (0.004566) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241788 / 0.018006 (0.223782) | 0.457352 / 0.000490 (0.456862) | 0.003841 / 0.000200 (0.003641) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026429 / 0.037411 (-0.010982) | 0.081604 / 0.014526 (0.067078) | 0.092881 / 0.176557 (-0.083675) | 0.146057 / 0.737135 (-0.591078) | 0.092987 / 0.296338 (-0.203352) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456641 / 0.215209 (0.241432) | 4.567853 / 2.077655 (2.490198) | 2.491684 / 1.504120 
(0.987564) | 2.323647 / 1.541195 (0.782452) | 2.387689 / 1.468490 (0.919198) | 0.505114 / 4.584777 (-4.079663) | 3.071615 / 3.745712 (-0.674098) | 2.912391 / 5.269862 (-2.357471) | 1.922350 / 4.565676 (-2.643326) | 0.057785 / 0.424275 (-0.366490) | 0.006642 / 0.007607 (-0.000965) | 0.532463 / 0.226044 (0.306418) | 5.344084 / 2.268929 (3.075155) | 2.970182 / 55.444624 (-52.474442) | 2.601733 / 6.876477 (-4.274744) | 2.763803 / 2.142072 (0.621731) | 0.596333 / 4.805227 (-4.208894) | 0.127047 / 6.500664 (-6.373617) | 0.062516 / 0.075469 (-0.012953) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.343206 / 1.841788 (-0.498581) | 19.405215 / 8.074308 (11.330907) | 15.406568 / 10.191392 (5.215176) | 0.132328 / 0.680424 (-0.548096) | 0.017882 / 0.534201 (-0.516318) | 0.336393 / 0.579283 (-0.242890) | 0.361989 / 0.434364 (-0.072375) | 0.394336 / 0.540337 (-0.146001) | 0.545166 / 1.386936 (-0.841770) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#439e115d34a2d8737af719660c1b586ac32279dc \"CML watermark\")\n" ]
Add missing `revision` argument
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6191/reactions" }
PR_kwDODunzps5ZCKmv
{ "diff_url": "https://github.com/huggingface/datasets/pull/6191.diff", "html_url": "https://github.com/huggingface/datasets/pull/6191", "merged_at": "2023-08-31T13:50:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/6191.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6191" }
2023-08-29T13:05:04Z
https://api.github.com/repos/huggingface/datasets/issues/6191/comments
I've noticed that when you're not working on the main branch, the files returned sometimes contain errors. After some investigation, I realized that the revision was not properly passed everywhere. This PR proposes a fix.
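As a rough illustration of the bug surface, a minimal sketch of the call whose revision must be forwarded to every Hub file request (the repo id and ref below are placeholders, not taken from the PR):

```python
# Minimal sketch: loading a dataset from a non-main branch or PR ref.
# The repo id and revision are hypothetical placeholders.
from datasets import load_dataset

ds = load_dataset(
    "username/my-dataset",  # hypothetical dataset repo
    revision="refs/pr/1",   # this ref must reach every Hub file-listing and download call
)
```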
{ "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/qgallouedec", "id": 45557362, "login": "qgallouedec", "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "repos_url": "https://api.github.com/users/qgallouedec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "type": "User", "url": "https://api.github.com/users/qgallouedec" }
https://api.github.com/repos/huggingface/datasets/issues/6191/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6191/timeline
closed
false
6,191
null
2023-08-31T13:50:00Z
null
true
1,871,582,175
https://api.github.com/repos/huggingface/datasets/issues/6190
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6190/events
[]
null
2023-08-29T13:01:10Z
[]
https://github.com/huggingface/datasets/issues/6190
MEMBER
completed
null
null
[ "This is because `download_config.use_auth_token` is deprecated - you should use `download_config.token` instead", "Works! Thanks for the quick fix! <3" ]
`Invalid user token` even when correct user token is passed!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6190/reactions" }
I_kwDODunzps5vjhPf
null
2023-08-29T12:37:03Z
https://api.github.com/repos/huggingface/datasets/issues/6190/comments
### Describe the bug I'm working on a dataset which comprises other datasets on the Hub. URL: https://huggingface.co/datasets/open-asr-leaderboard/datasets-test-only Note: Some of the sub-datasets in this meta-dataset require explicit access. All the other datasets work fine, except `common_voice`. ### Steps to reproduce the bug https://github.com/Vaibhavs10/scratchpad/blob/main/cv_datasets_bug_repro.ipynb ### Expected behavior It should work if the provided access token is valid (as it does for all the other datasets). ### Environment info datasets version -> 2.14.4
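A minimal sketch of the fix pointed out in the comment above, with a gated dataset id used only as a placeholder:

```python
# Pass the token through DownloadConfig.token instead of the deprecated
# use_auth_token field; load_dataset forwards this config to all downloads.
from datasets import DownloadConfig, load_dataset

download_config = DownloadConfig(token="hf_xxx")  # placeholder token
ds = load_dataset(
    "mozilla-foundation/common_voice_11_0",  # hypothetical gated dataset id
    "en",
    download_config=download_config,
)
```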
{ "avatar_url": "https://avatars.githubusercontent.com/u/18682411?v=4", "events_url": "https://api.github.com/users/Vaibhavs10/events{/privacy}", "followers_url": "https://api.github.com/users/Vaibhavs10/followers", "following_url": "https://api.github.com/users/Vaibhavs10/following{/other_user}", "gists_url": "https://api.github.com/users/Vaibhavs10/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Vaibhavs10", "id": 18682411, "login": "Vaibhavs10", "node_id": "MDQ6VXNlcjE4NjgyNDEx", "organizations_url": "https://api.github.com/users/Vaibhavs10/orgs", "received_events_url": "https://api.github.com/users/Vaibhavs10/received_events", "repos_url": "https://api.github.com/users/Vaibhavs10/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Vaibhavs10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Vaibhavs10/subscriptions", "type": "User", "url": "https://api.github.com/users/Vaibhavs10" }
https://api.github.com/repos/huggingface/datasets/issues/6190/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6190/timeline
closed
false
6,190
null
2023-08-29T13:01:09Z
null
false
1,871,569,855
https://api.github.com/repos/huggingface/datasets/issues/6189
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6189/events
[]
null
2023-08-29T13:04:59Z
[]
https://github.com/huggingface/datasets/pull/6189
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006166 / 0.011353 (-0.005187) | 0.003643 / 0.011008 (-0.007365) | 0.080966 / 0.038508 (0.042458) | 0.060538 / 0.023109 (0.037429) | 0.309205 / 0.275898 (0.033307) | 0.351007 / 0.323480 (0.027527) | 0.003592 / 0.007986 (-0.004393) | 0.002880 / 0.004328 (-0.001448) | 0.062957 / 0.004250 (0.058707) | 0.049015 / 0.037052 (0.011963) | 0.309436 / 0.258489 (0.050947) | 0.362695 / 0.293841 (0.068854) | 0.027818 / 0.128546 (-0.100728) | 0.008030 / 0.075646 (-0.067616) | 0.262678 / 0.419271 (-0.156594) | 0.046024 / 0.043533 (0.002491) | 0.316246 / 0.255139 (0.061107) | 0.337454 / 0.283200 (0.054254) | 0.022529 / 0.141683 (-0.119154) | 1.432492 / 1.452155 (-0.019662) | 1.499646 / 1.492716 (0.006929) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.190931 / 0.018006 (0.172925) | 0.428053 / 0.000490 (0.427564) | 0.002839 / 0.000200 (0.002639) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024042 / 0.037411 (-0.013370) | 0.073952 / 0.014526 (0.059426) | 0.905973 / 0.176557 (0.729417) | 0.177767 / 0.737135 (-0.559368) | 0.125779 / 0.296338 (-0.170559) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398997 / 0.215209 (0.183788) | 3.959575 / 2.077655 (1.881920) | 
1.907038 / 1.504120 (0.402918) | 1.732908 / 1.541195 (0.191713) | 1.757038 / 1.468490 (0.288548) | 0.495917 / 4.584777 (-4.088860) | 3.021437 / 3.745712 (-0.724275) | 2.793960 / 5.269862 (-2.475901) | 1.827753 / 4.565676 (-2.737923) | 0.057143 / 0.424275 (-0.367132) | 0.006583 / 0.007607 (-0.001024) | 0.469402 / 0.226044 (0.243357) | 4.685623 / 2.268929 (2.416695) | 2.325200 / 55.444624 (-53.119424) | 1.985559 / 6.876477 (-4.890918) | 2.151208 / 2.142072 (0.009136) | 0.589498 / 4.805227 (-4.215730) | 0.125433 / 6.500664 (-6.375231) | 0.060834 / 0.075469 (-0.014636) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.228217 / 1.841788 (-0.613571) | 18.076089 / 8.074308 (10.001780) | 13.814460 / 10.191392 (3.623068) | 0.144674 / 0.680424 (-0.535750) | 0.016749 / 0.534201 (-0.517452) | 0.332839 / 0.579283 (-0.246444) | 0.357211 / 0.434364 (-0.077153) | 0.380367 / 0.540337 (-0.159971) | 0.531177 / 1.386936 (-0.855759) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006006 / 0.011353 (-0.005347) | 0.003552 / 0.011008 (-0.007456) | 0.061822 / 0.038508 (0.023313) | 0.057724 / 0.023109 (0.034615) | 0.462326 / 0.275898 (0.186428) | 0.492842 / 0.323480 (0.169362) | 0.004833 / 0.007986 (-0.003152) | 0.002847 / 0.004328 (-0.001481) | 0.062278 / 0.004250 (0.058028) | 0.046754 / 0.037052 (0.009702) | 0.464185 / 0.258489 (0.205696) | 0.496416 / 0.293841 (0.202576) | 0.028949 / 0.128546 (-0.099597) | 0.008038 / 0.075646 (-0.067608) | 0.067572 / 0.419271 (-0.351700) | 0.041176 / 0.043533 (-0.002356) | 0.460047 / 0.255139 (0.204908) | 0.482728 / 0.283200 (0.199528) | 0.020047 / 0.141683 (-0.121635) | 1.455958 / 1.452155 (0.003804) | 1.525730 / 1.492716 (0.033014) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.283643 / 0.018006 (0.265637) | 0.443046 / 0.000490 (0.442556) | 0.041019 / 0.000200 (0.040819) | 0.000340 / 0.000054 (0.000286) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026229 / 0.037411 (-0.011182) | 0.081498 / 0.014526 (0.066972) | 0.091412 / 0.176557 (-0.085145) | 0.146621 / 0.737135 (-0.590514) | 0.092113 / 0.296338 (-0.204225) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463525 / 0.215209 (0.248315) | 4.629852 / 2.077655 (2.552198) | 2.564831 / 1.504120 (1.060711) | 2.386976 / 1.541195 (0.845781) | 2.457757 / 1.468490 (0.989266) | 0.507317 / 4.584777 (-4.077460) | 3.142418 / 3.745712 (-0.603294) | 2.851642 / 5.269862 (-2.418219) | 1.894444 / 4.565676 (-2.671233) | 0.058495 / 0.424275 (-0.365780) | 0.006453 / 0.007607 (-0.001154) | 0.545363 / 0.226044 (0.319319) | 5.448092 / 2.268929 (3.179164) | 2.996328 / 55.444624 (-52.448296) | 2.664666 / 6.876477 (-4.211811) | 2.832247 / 2.142072 (0.690174) | 0.597631 / 4.805227 (-4.207596) | 0.126101 / 6.500664 (-6.374563) | 0.062573 / 0.075469 (-0.012896) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.366502 / 1.841788 (-0.475286) | 18.872990 / 8.074308 (10.798682) | 14.892114 / 10.191392 (4.700722) | 0.146668 / 0.680424 (-0.533756) | 0.017876 / 0.534201 (-0.516325) | 0.338490 / 0.579283 (-0.240793) | 0.357471 / 0.434364 (-0.076893) | 0.398730 / 0.540337 (-0.141608) | 0.542464 / 1.386936 (-0.844472) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a6ff3e846d86814fa6962326e9346a4f1f1e8a80 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009132 / 0.011353 (-0.002221) | 0.005796 / 0.011008 (-0.005212) | 0.119495 / 0.038508 (0.080987) | 0.081708 / 0.023109 (0.058599) | 0.432940 / 0.275898 (0.157042) | 0.466793 / 0.323480 (0.143313) | 0.006464 / 0.007986 (-0.001521) | 0.004308 / 0.004328 (-0.000021) | 0.086344 / 0.004250 (0.082093) | 0.065987 / 0.037052 (0.028935) | 0.445213 / 0.258489 (0.186724) | 0.482405 / 0.293841 (0.188564) | 0.053553 / 0.128546 (-0.074993) | 0.015320 / 0.075646 (-0.060326) | 0.455669 / 0.419271 (0.036397) | 0.071619 / 0.043533 (0.028086) | 0.434843 / 0.255139 (0.179704) | 0.503224 / 0.283200 (0.220025) | 0.038280 / 0.141683 (-0.103403) | 1.901877 / 1.452155 (0.449722) | 2.040406 / 1.492716 (0.547690) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268275 / 0.018006 (0.250269) | 0.622795 / 0.000490 (0.622305) | 0.004572 / 0.000200 (0.004372) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032514 / 0.037411 (-0.004898) | 0.100619 / 0.014526 (0.086093) | 0.118407 / 0.176557 (-0.058149) | 0.190311 / 0.737135 (-0.546824) | 0.117160 / 0.296338 (-0.179178) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.629836 / 0.215209 (0.414627) | 6.236124 / 2.077655 (4.158470) | 2.750775 / 1.504120 (1.246655) | 2.380111 / 1.541195 (0.838916) | 2.487279 / 1.468490 (1.018789) | 0.849568 / 4.584777 (-3.735209) | 5.571308 / 3.745712 (1.825596) | 4.934114 / 5.269862 (-0.335747) | 3.205478 / 4.565676 (-1.360198) | 0.104804 / 0.424275 (-0.319471) | 0.009856 / 0.007607 (0.002248) | 0.753352 / 0.226044 (0.527308) | 7.523482 / 2.268929 (5.254554) | 3.660088 / 55.444624 (-51.784537) | 2.726493 / 6.876477 (-4.149984) | 3.011344 / 2.142072 (0.869271) | 1.093410 / 4.805227 (-3.711817) | 0.229758 / 6.500664 (-6.270906) | 0.081516 / 0.075469 (0.006047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.700199 / 1.841788 (-0.141588) | 25.238736 / 8.074308 (17.164428) | 23.188131 / 10.191392 (12.996739) | 0.257862 / 0.680424 (-0.422562) | 0.028885 / 0.534201 (-0.505316) | 0.510693 / 0.579283 (-0.068590) | 0.648474 / 0.434364 (0.214110) | 0.576314 / 0.540337 
(0.035976) | 0.800606 / 1.386936 (-0.586330) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009426 / 0.011353 (-0.001927) | 0.006205 / 0.011008 (-0.004803) | 0.083947 / 0.038508 (0.045438) | 0.089164 / 0.023109 (0.066055) | 0.540500 / 0.275898 (0.264602) | 0.578825 / 0.323480 (0.255345) | 0.006792 / 0.007986 (-0.001194) | 0.005125 / 0.004328 (0.000797) | 0.083284 / 0.004250 (0.079034) | 0.067539 / 0.037052 (0.030487) | 0.544330 / 0.258489 (0.285841) | 0.593836 / 0.293841 (0.299995) | 0.050647 / 0.128546 (-0.077899) | 0.014688 / 0.075646 (-0.060959) | 0.095977 / 0.419271 (-0.323295) | 0.062326 / 0.043533 (0.018793) | 0.536096 / 0.255139 (0.280957) | 0.578691 / 0.283200 (0.295492) | 0.035488 / 0.141683 (-0.106194) | 1.911145 / 1.452155 (0.458990) | 1.977647 / 1.492716 (0.484931) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.368365 / 0.018006 (0.350359) | 0.609836 / 0.000490 (0.609346) | 0.054720 / 0.000200 (0.054520) | 0.000465 / 0.000054 (0.000411) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036057 / 0.037411 (-0.001355) | 0.126434 / 0.014526 (0.111908) | 0.124740 / 0.176557 (-0.051817) | 0.198907 / 0.737135 (-0.538228) | 0.138201 / 0.296338 (-0.158137) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.684814 / 0.215209 (0.469605) | 6.738182 / 2.077655 (4.660527) | 3.231054 / 1.504120 (1.726934) | 2.889550 / 1.541195 (1.348355) | 2.933985 / 
1.468490 (1.465495) | 0.867176 / 4.584777 (-3.717601) | 5.465475 / 3.745712 (1.719763) | 4.928370 / 5.269862 (-0.341492) | 3.126382 / 4.565676 (-1.439294) | 0.129673 / 0.424275 (-0.294603) | 0.009755 / 0.007607 (0.002148) | 0.797860 / 0.226044 (0.571816) | 8.003178 / 2.268929 (5.734250) | 4.081658 / 55.444624 (-51.362966) | 3.303837 / 6.876477 (-3.572640) | 3.574577 / 2.142072 (1.432505) | 1.064674 / 4.805227 (-3.740554) | 0.232894 / 6.500664 (-6.267770) | 0.082298 / 0.075469 (0.006829) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.858701 / 1.841788 (0.016913) | 25.839794 / 8.074308 (17.765485) | 24.291425 / 10.191392 (14.100033) | 0.250181 / 0.680424 (-0.430243) | 0.034479 / 0.534201 (-0.499722) | 0.540754 / 0.579283 (-0.038529) | 0.615996 / 0.434364 (0.181632) | 0.631499 / 0.540337 (0.091161) | 0.838719 / 1.386936 (-0.548217) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0b6bb2f0e7a460d4ed04855eafe1184a7ce7c09c \"CML watermark\")\n" ]
Don't alter input in Features.from_dict
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6189/reactions" }
PR_kwDODunzps5ZB8Z9
{ "diff_url": "https://github.com/huggingface/datasets/pull/6189.diff", "html_url": "https://github.com/huggingface/datasets/pull/6189", "merged_at": "2023-08-29T12:52:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/6189.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6189" }
2023-08-29T12:29:47Z
https://api.github.com/repos/huggingface/datasets/issues/6189/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6189/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6189/timeline
closed
false
6,189
null
2023-08-29T12:52:48Z
null
true
1,870,987,640
https://api.github.com/repos/huggingface/datasets/issues/6188
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6188/events
[]
null
2023-09-19T21:55:38Z
[]
https://github.com/huggingface/datasets/issues/6188
NONE
not_planned
null
null
[ "I think this error means you filter all examples within an (input) batch by deleting its columns. In that case, to avoid the error, you can set the column value to an empty list (`input_batch[\"col\"] = []`) instead." ]
[Feature Request] Check the length of a batch before writing so that empty batches are allowed
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6188/reactions" }
I_kwDODunzps5vhQF4
null
2023-08-29T06:37:34Z
https://api.github.com/repos/huggingface/datasets/issues/6188/comments
### Use Case I use `dataset.map(process_fn, batched=True)` to process the dataset with data **augmentations or filtering**. However, when all examples within a batch are filtered out, i.e. **an empty batch is returned**, the following error is thrown: ``` ValueError: Schema and number of arrays unequal ``` This is because the empty batch does not comply with the schema of the other batches. I think an empty batch should be allowed to facilitate coding (one would not need to assign an empty list manually for all keys). A simple fix is to check the length of `batch` before writing: ``` if len(batch): writer.write_batch(batch) ``` instead of https://github.com/huggingface/datasets/blob/74d60213dcbd7c99484c62ce1d3dfd90a1df0770/src/datasets/arrow_dataset.py#L3493
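Until such a change lands, a minimal sketch of the workaround from the comment above — keep every output key present, even with an empty list, so the schema stays consistent across batches:

```python
# Toy example: a batched map that may filter out every row of a batch.
# Returning {"text": []} (key present, empty list) avoids the
# "Schema and number of arrays unequal" error that dropping the key triggers.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["keep me", "drop me"]})

def process_fn(batch):
    kept = [t for t in batch["text"] if "keep" in t]
    return {"text": kept}  # legitimately {"text": []} for all-filtered batches

out = ds.map(process_fn, batched=True, batch_size=1)
print(out["text"])  # ["keep me"]
```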
{ "avatar_url": "https://avatars.githubusercontent.com/u/61188463?v=4", "events_url": "https://api.github.com/users/namespace-Pt/events{/privacy}", "followers_url": "https://api.github.com/users/namespace-Pt/followers", "following_url": "https://api.github.com/users/namespace-Pt/following{/other_user}", "gists_url": "https://api.github.com/users/namespace-Pt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/namespace-Pt", "id": 61188463, "login": "namespace-Pt", "node_id": "MDQ6VXNlcjYxMTg4NDYz", "organizations_url": "https://api.github.com/users/namespace-Pt/orgs", "received_events_url": "https://api.github.com/users/namespace-Pt/received_events", "repos_url": "https://api.github.com/users/namespace-Pt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/namespace-Pt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/namespace-Pt/subscriptions", "type": "User", "url": "https://api.github.com/users/namespace-Pt" }
https://api.github.com/repos/huggingface/datasets/issues/6188/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6188/timeline
closed
false
6,188
null
2023-09-19T21:55:37Z
null
false
1,870,936,143
https://api.github.com/repos/huggingface/datasets/issues/6187
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6187/events
[]
null
2023-08-29T16:21:45Z
[]
https://github.com/huggingface/datasets/issues/6187
NONE
null
null
null
[ "Hi! You can load this dataset with:\r\n```python\r\ndata_files = {\r\n \"train\": \"/content/PUBHEALTH/train.tsv\",\r\n \"validation\": \"/content/PUBHEALTH/dev.tsv\",\r\n \"test\": \"/content/PUBHEALTH/test.tsv\",\r\n}\r\n\r\ntsv_datasets_reloaded = load_dataset(\"csv\", data_files=data_files, sep=\"\\t\")\r\n```\r\n\r\nTo support your `load_dataset` call, defining aliases for the packaged builders, as suggested in https://github.com/huggingface/datasets/issues/5625, must be implemented. We can consider adding this feature if more people request it.\r\n \r\n(Also answered on the Discord [here](https://discord.com/channels/879548962464493619/1145956791134470224/1146071491260186744))" ]
Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6187/reactions" }
I_kwDODunzps5vhDhP
null
2023-08-29T05:49:56Z
https://api.github.com/repos/huggingface/datasets/issues/6187/comments
### Describe the bug ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) [<ipython-input-48-6a7b3e847019>](https://localhost:8080/#) in <cell line: 7>() 5 } 6 ----> 7 csv_datasets_reloaded = load_dataset("tsv", data_files=data_files) 8 csv_datasets_reloaded 2 frames [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1489 raise e1 from None 1490 if isinstance(e1, FileNotFoundError): -> 1491 raise FileNotFoundError( 1492 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. " 1493 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" FileNotFoundError: Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Dataset 'tsv' doesn't exist on the Hub ``` ### Steps to reproduce the bug ``` data_files = { "train": "/content/PUBHEALTH/train.tsv", "validation": "/content/PUBHEALTH/dev.tsv", "test": "/content/PUBHEALTH/test.tsv", } tsv_datasets_reloaded = load_dataset("tsv", data_files=data_files) tsv_datasets_reloaded ``` ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) <ipython-input-48-6a7b3e847019> in <cell line: 7>() 5 } 6 ----> 7 csv_datasets_reloaded = load_dataset("tsv", data_files=data_files) 8 csv_datasets_reloaded 2 frames /usr/local/lib/python3.10/dist-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1489 raise e1 from None 1490 if isinstance(e1, FileNotFoundError): -> 1491 raise FileNotFoundError( 1492 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. " 1493 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" FileNotFoundError: Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Dataset 'tsv' doesn't exist on the Hub ``` ### Expected behavior load the data, push to hub ### Environment info jupyter notebook RTX 3090
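For reference, a hedged sketch that combines the fix from the comment above with the reporter's stated goal of pushing the data to the Hub (the repo id is hypothetical and requires being logged in via `huggingface-cli login` first):

```python
from datasets import load_dataset

data_files = {
    "train": "/content/PUBHEALTH/train.tsv",
    "validation": "/content/PUBHEALTH/dev.tsv",
    "test": "/content/PUBHEALTH/test.tsv",
}

# There is no "tsv" builder; the packaged "csv" builder handles
# tab-separated files when given sep="\t".
dataset_dict = load_dataset("csv", data_files=data_files, sep="\t")

# Hypothetical repo id.
dataset_dict.push_to_hub("your-username/pubhealth")
```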
{ "avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4", "events_url": "https://api.github.com/users/andysingal/events{/privacy}", "followers_url": "https://api.github.com/users/andysingal/followers", "following_url": "https://api.github.com/users/andysingal/following{/other_user}", "gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/andysingal", "id": 20493493, "login": "andysingal", "node_id": "MDQ6VXNlcjIwNDkzNDkz", "organizations_url": "https://api.github.com/users/andysingal/orgs", "received_events_url": "https://api.github.com/users/andysingal/received_events", "repos_url": "https://api.github.com/users/andysingal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andysingal/subscriptions", "type": "User", "url": "https://api.github.com/users/andysingal" }
https://api.github.com/repos/huggingface/datasets/issues/6187/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6187/timeline
open
false
6,187
null
null
null
false
1,869,431,457
https://api.github.com/repos/huggingface/datasets/issues/6186
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6186/events
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" }, { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-03-28T21:40:35Z
[]
https://github.com/huggingface/datasets/issues/6186
CONTRIBUTOR
completed
null
null
[ "That'd be a great idea! @mariosasko or @lhoestq, would it be possible to fix the code snippet or do you have another suggested way for doing this?", "Indeed `if __name__ == \"__main__\"` is important in this case.\r\n\r\nNot sure about the imbalanced GPU usage though, but maybe you can try using the `torch.cuda.device` context manager ?\r\n\r\n> also, should I do it like this or use nn.DataParallel?\r\n\r\nIn this case you wouldn't need a multiprocessed map no ? Since nn.DataParallel would take care of parallelism", "Adding this Tweet for reference: https://twitter.com/jxmnop/status/1716834517909119019.", "I think the issue is that we set `CUDA_VISIBLE_DEVICES` after pytorch is imported ?\r\n\r\nWe should use `torch.cuda.set_device(...)` instead", "@lhoestq \r\n> In this case you wouldn't need a multiprocessed map no ?\r\n\r\nYes. But how to load a model to 2 GPU simultaneously without something like accelerate?", "> @lhoestq\r\n> \r\n> > In this case you wouldn't need a multiprocessed map no ?\r\n> \r\n> Yes. But how to load a model to 2 GPU simultaneously without something like accelerate?\r\n\r\nTake a look at this fix #6550 . Basically, you move the model to each GPU inside of the function to be mapped. \r\n\r\n", "In case someone also runs into this issue, I wrote a [blog post](https://forrestbao.github.io/2024/01/30/datasets_map_with_rank_multiple_GPUs.html) with a complete working example by compiling information from several PRs and issues here. Hope it can help. This issue cost me a few hours. I hope my blog post can save you time before the official document gets fixed. ", "Thanks ! I updated the docs in https://github.com/huggingface/datasets/pull/6550", "hey @forrestbao , i was too struggling with the same issue for weeks hence i checked out your blog. great work on the blog. \r\nhowever i wanted to ask you could we scale up the process by reinitializing the same model on the same GPU multiple times for even more speedups ? \r\n\r\ni mean to say given that on a multi GPU setup where GPU vram is above 40GB each, after intializing the translation model which is barely 1-2GB in VRAM size, the rest of VRAM sits idle, how could i keep creating multiple instances of the same model on the same GPU for all GPUs to maxmize flops ? ", "You can use one single instance on your GPU and increase the batch size until you fill the VRAM", "@lhoestq i tried that, but i noticed that after a certain number of batch_size, using a larger batch_size makes the overall process really slow than using a lower batch_size.", "Hi @lhoestq , could you help with my two questions: \r\n1. You mentioned `if __name__ == \"__main__\"`, why is that? I tried with a toy dataset and didn't put this line, my two GPU usage looks balanced. \r\n2. Is there any difference between \r\n`from multiprocess import set_start_method` and `from multiprocessing import set_start_method`? The latter is Python's built-in library. 
In [the official doc](https://huggingface.co/docs/datasets/en/process), it uses `from multiprocess import set_start_method`, but it gives me error like \r\n```\r\n[jobuser@f6e2419a0a63d45638da-n0-0 ~]$ python test.py\r\nTraceback (most recent call last):\r\n File \"/home/jobuser/test.py\", line 33, in <module>\r\n updated_dataset = dataset.map(\r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 593, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 558, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3189, in map\r\n with Pool(len(kwargs_per_job)) as pool:\r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/multiprocess/context.py\", line 119, in Pool\r\n return Pool(processes, initializer, initargs, maxtasksperchild,\r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/multiprocess/pool.py\", line 191, in __init__\r\n self._setup_queues()\r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/multiprocess/pool.py\", line 343, in _setup_queues\r\n self._inqueue = self._ctx.SimpleQueue()\r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/multiprocess/context.py\", line 113, in SimpleQueue\r\n return SimpleQueue(ctx=self.get_context())\r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/multiprocess/queues.py\", line 339, in __init__\r\n self._rlock = ctx.Lock()\r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/multiprocess/context.py\", line 68, in Lock\r\n return Lock(ctx=self.get_context())\r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/multiprocess/synchronize.py\", line 168, in __init__\r\n SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)\r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/multiprocess/synchronize.py\", line 86, in __init__\r\n register(self._semlock.name, \"semaphore\")\r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/multiprocess/resource_tracker.py\", line 150, in register\r\n self._send('REGISTER', name, rtype)\r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/multiprocess/resource_tracker.py\", line 157, in _send\r\n self.ensure_running()\r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/multiprocess/resource_tracker.py\", line 124, in ensure_running\r\n pid = util.spawnv_passfds(exe, args, fds_to_pass)\r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/multiprocess/util.py\", line 452, in spawnv_passfds\r\n return _posixsubprocess.fork_exec(\r\nTypeError: fork_exec() takes exactly 21 arguments (17 given)\r\n```\r\nwhich seems caused by python version. I am using Python 3.10.2. ", "Hi ! \r\n\r\n> You mentioned if __name__ == \"__main__\", why is that? I tried with a toy dataset and didn't put this line, my two GPU usage looks balanced.\r\n\r\nIt's a good practice when doing multiprocessing in python. Depending on the multiprocessing method and your python version, python could re-run the code in your main.py in subprocesses that you don't want to re-run (e.g. recursively spawning processes and failing). Though some multiprocessing methods don't re-run main.py and it appears to be your case ;)\r\n\r\n> Is there any difference between\r\nfrom multiprocess import set_start_method and from multiprocessing import set_start_method? 
The latter is Python's built-in library. In [the official doc](https://huggingface.co/docs/datasets/en/process), it uses from multiprocess import set_start_method, but it gives me error like\r\n\r\nYes, `datasets` uses `multiprocess` which is a separate library from the built-in `multiprocessing`.\r\n\r\n`multiprocess` is an extended version of `multiprocessing` which allows e.g. to pass `lambda` functions to subprocesses", "Thanks @lhoestq for explanation. Is it okay we use `multiprocessing` for set_start_method given the above-mentioned issue for multiprocess? From my run with toy example, it's fine. Just want to check if you foresee any problems. ", "Not sure whether `multiprocessing.set_start_method` has any effect actually since we use `dill` for multiprocessed `map()`", "I'm running the [code example of multi-GPU processing](https://huggingface.co/docs/datasets/en/process#multiprocessing) on a Linux 8x A100 instance. The entire python code run time is 30 seconds faster if I add one line to set torch number of threads immediately after the `import torch` statement. It loads faster to the eight GPUs (however the map() progress bars take similar amount of time without/with this additional line).\r\n```\r\nimport torch\r\ntorch.set_num_threads(1) # I added this line.\r\n\r\nfrom multiprocess import set_start_method\r\n```\r\nFWIW: my instance has these versions.\r\n```\r\nCUDA 12.2 driver 535.161.08\r\nPython 3.10.12\r\ntorch '2.2.2'\r\nmultiprocess '0.70.16'\r\ntransformers '4.39.2'\r\ndatasets '2.18.0'\r\n```" ]
Feature request: add code example of multi-GPU processing
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6186/reactions" }
I_kwDODunzps5vbUKh
null
2023-08-28T10:00:59Z
https://api.github.com/repos/huggingface/datasets/issues/6186/comments
### Feature request Would be great to add a code example of how to do multi-GPU processing with 🤗 Datasets in the documentation. cc @stevhliu Currently the docs have a small [section](https://huggingface.co/docs/datasets/v2.3.2/en/process#map) on this saying "your big GPU call goes here"; however, it didn't work for me out-of-the-box. Let's say you have a PyTorch model that can do translation, and you have multiple GPUs. In that case, you'd like to duplicate the model on each GPU, each processing (translating) a chunk of the data in parallel. Here's how I tried to do that: ``` from datasets import load_dataset from transformers import AutoModelForSeq2SeqLM, AutoTokenizer from multiprocess import set_start_method import torch import os dataset = load_dataset("mlfoundations/datacomp_small") tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M") model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M") # put model on each available GPU # also, should I do it like this or use nn.DataParallel? model.to("cuda:0") model.to("cuda:1") set_start_method("spawn") def translate_captions(batch, rank): os.environ["CUDA_VISIBLE_DEVICES"] = str(rank % torch.cuda.device_count()) texts = batch["text"] inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(model.device) translated_tokens = model.generate( **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"], max_length=30 ) translated_texts = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True) batch["translated_text"] = translated_texts return batch updated_dataset = dataset.map(translate_captions, with_rank=True, num_proc=2, batched=True, batch_size=256) ``` I've personally tried running this script on a machine with 2 A100 GPUs. ## Error 1 Running the code snippet above from the terminal (python script.py) resulted in the following error: ``` Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 116, in spawn_main exitcode = _main(fd, parent_sentinel) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 125, in _main prepare(preparation_data) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 236, in prepare _fixup_main_from_path(data['init_main_from_path']) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 287, in _fixup_main_from_path main_content = runpy.run_path(main_path, File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 289, in run_path return _run_module_code(code, init_globals, run_name, File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 96, in _run_module_code _run_code(code, mod_globals, init_globals, File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/home/niels/python_projects/datacomp/datasets_multi_gpu.py", line 16, in <module> set_start_method("spawn") File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 247, in set_start_method raise RuntimeError('context has already been set') RuntimeError: context has already been set ``` ## Error 2 Then, based on [this Stackoverflow answer](https://stackoverflow.com/a/71616344/7762882), I put the `set_start_method("spawn")` section in a try/except block. 
This resulted in the following error: ``` File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/datasets/dataset_dict.py", line 817, in <dictcomp> k: dataset.map( File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2926, in map with Pool(nb_of_missing_shards, initargs=initargs, initializer=initializer) as pool: File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 119, in Pool return Pool(processes, initializer, initargs, maxtasksperchild, File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 215, in __init__ self._repopulate_pool() File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 306, in _repopulate_pool return self._repopulate_pool_static(self._ctx, self.Process, File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 329, in _repopulate_pool_static w.start() File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/process.py", line 121, in start self._popen = self._Popen(self) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 288, in _Popen return Popen(process_obj) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/popen_spawn_posix.py", line 42, in _launch prep_data = spawn.get_preparation_data(process_obj._name) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 154, in get_preparation_data _check_not_importing_main() File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 134, in _check_not_importing_main raise RuntimeError(''' RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. ``` So then I put the last line under a `if __name__ == '__main__':` block. Then the code snippet seemed to work, but it seemed that it's only leveraging a single GPU (based on monitoring `nvidia-smi`): ``` Mon Aug 28 12:19:24 2023 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 515.65.01 Driver Version: 515.65.01 CUDA Version: 11.7 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 NVIDIA A100-SXM... On | 00000000:01:00.0 Off | 0 | | N/A 55C P0 76W / 275W | 8747MiB / 81920MiB | 0% Default | | | | Disabled | +-------------------------------+----------------------+----------------------+ | 1 NVIDIA A100-SXM... 
On | 00000000:47:00.0 Off | 0 | | N/A 67C P0 274W / 275W | 59835MiB / 81920MiB | 100% Default | | | | Disabled | ``` Both GPUs should have equal GPU usage, but I've always noticed that the last GPU has way more usage than the other ones. This made me think that `os.environ["CUDA_VISIBLE_DEVICES"] = str(rank % torch.cuda.device_count())` might not work inside a Python script, especially if done after importing PyTorch? ### Motivation Would be great to clarify how to do multi-GPU data processing. ### Your contribution If my code snippet can be fixed, I can contribute it to the docs :)
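For context, a hedged sketch of the approach the thread above converged on (and that the linked fix later added to the docs): load the model once, move it to the worker's GPU inside the mapped function using the provided rank, and guard the entry point with `if __name__ == "__main__"`. The model name and batch sizes mirror the snippet in the report and are illustrative.

```python
from multiprocess import set_start_method
import torch
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def main():
    dataset = load_dataset("mlfoundations/datacomp_small", split="train")
    tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
    model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")

    def translate_captions(batch, rank):
        # Move the model to this worker's GPU based on the rank, instead of
        # setting CUDA_VISIBLE_DEVICES after torch has already been imported.
        device = f"cuda:{rank % torch.cuda.device_count()}"
        model.to(device)
        inputs = tokenizer(
            batch["text"], padding=True, truncation=True, return_tensors="pt"
        ).to(device)
        translated_tokens = model.generate(
            **inputs,
            forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"],
            max_length=30,
        )
        batch["translated_text"] = tokenizer.batch_decode(
            translated_tokens, skip_special_tokens=True
        )
        return batch

    updated_dataset = dataset.map(
        translate_captions,
        with_rank=True,
        num_proc=torch.cuda.device_count(),  # one worker per GPU
        batched=True,
        batch_size=256,
    )

if __name__ == "__main__":
    set_start_method("spawn")
    main()
```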
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge" }
https://api.github.com/repos/huggingface/datasets/issues/6186/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6186/timeline
closed
false
6,186
null
2023-11-22T15:42:20Z
null
false
1,868,077,748
https://api.github.com/repos/huggingface/datasets/issues/6185
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6185/events
[]
null
2023-08-29T14:49:58Z
[]
https://github.com/huggingface/datasets/issues/6185
NONE
null
null
null
[ "You can cast the `input_image` column to the `Image` type to fix the issue:\r\n```python\r\nds.cast_column(\"input_image\", datasets.Image())\r\n```" ]
Error in saving the PIL image into *.arrow files using datasets.arrow_writer
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6185/reactions" }
I_kwDODunzps5vWJq0
null
2023-08-26T12:15:57Z
https://api.github.com/repos/huggingface/datasets/issues/6185/comments
### Describe the bug I am using the ArrowWriter from datasets.arrow_writer to save a json-style file as arrow files. The dictionary contains a feature called "image", which is a list of PIL.Image objects. I save the JSON using the following script: ``` def save_to_arrow(path,temp): with ArrowWriter(path=path,writer_batch_size=20) as writer: writer.write_batch(temp) writer.finalize() ``` However, when I attempt to restore the dataset and use the ```Dataset.from_file(path)``` function to load the arrow file, there seems to be an issue with the PIL.Image objects in the dataset. The list of PIL.Images appears as follows rather than as normal PIL.Image objects: ![1693051705440](https://github.com/huggingface/datasets/assets/14247682/03b204c2-d0fa-4d19-beff-6f4d7b83c848) ### Steps to reproduce the bug 1. Store the data JSON as arrow files: ``` def save_to_arrow(path,temp): with ArrowWriter(path=path,writer_batch_size=20) as writer: writer.write_batch(temp) writer.finalize() save_to_arrow( path, json_file ) ``` 2. Try to load the arrow file into a Dataset object using ```Dataset.from_file(path)``` ### Expected behavior I expect the contained "image" feature to be saved to the arrow file as a list of PIL.Image objects, and to be able to restore the dataset from the file. ### Environment info - `datasets` version: 2.12.0 - Platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.17 - Python version: 3.8.17 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 1.4.4
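A hedged sketch of how to make the images round-trip, based on the suggestion in the comment above. Passing explicit `features` to `ArrowWriter` is an assumption about its API (it accepts a `features` argument in recent versions); the sample data is illustrative.

```python
import datasets
from datasets import Dataset, Features, Image
from datasets.arrow_writer import ArrowWriter
from PIL import Image as PILImage

batch = {"image": [PILImage.new("RGB", (8, 8))], "text": ["hello"]}

# Declare the feature types up front so the writer stores the PIL images
# with the Image extension type instead of an inferred representation.
features = Features({"image": Image(), "text": datasets.Value("string")})
with ArrowWriter(path="data.arrow", features=features, writer_batch_size=20) as writer:
    writer.write_batch(batch)
    writer.finalize()

ds = Dataset.from_file("data.arrow")
# If the file was written without explicit features, casting the column
# after loading (as in the comment above) should recover PIL images:
ds = ds.cast_column("image", Image())
print(ds[0]["image"])  # a PIL.Image object
```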
{ "avatar_url": "https://avatars.githubusercontent.com/u/14247682?v=4", "events_url": "https://api.github.com/users/HaozheZhao/events{/privacy}", "followers_url": "https://api.github.com/users/HaozheZhao/followers", "following_url": "https://api.github.com/users/HaozheZhao/following{/other_user}", "gists_url": "https://api.github.com/users/HaozheZhao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HaozheZhao", "id": 14247682, "login": "HaozheZhao", "node_id": "MDQ6VXNlcjE0MjQ3Njgy", "organizations_url": "https://api.github.com/users/HaozheZhao/orgs", "received_events_url": "https://api.github.com/users/HaozheZhao/received_events", "repos_url": "https://api.github.com/users/HaozheZhao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HaozheZhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HaozheZhao/subscriptions", "type": "User", "url": "https://api.github.com/users/HaozheZhao" }
https://api.github.com/repos/huggingface/datasets/issues/6185/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6185/timeline
open
false
6,185
null
null
null
false
1,867,766,143
https://api.github.com/repos/huggingface/datasets/issues/6184
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6184/events
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
null
2023-08-29T20:57:07Z
[]
https://github.com/huggingface/datasets/issues/6184
NONE
completed
null
null
[ "This issue is a duplicate of https://github.com/huggingface/datasets/issues/3297. This is a limitation of `dill`, a package we use for caching (non-`__main__` module objects are serialized by reference). You can find more info about it here: https://github.com/uqfoundation/dill/issues/424.\r\n\r\nIn your case, moving \r\n```\r\ndata = datasets.load_dataset('json', data_files=['/tmp/test.json'], split='train')\r\ndata = data.map(transform)\r\n``` \r\nto `test.py` and setting `transform.__module__ = None` at the end of `dataset.py` should fix the issue.", "I understand this may be a limitation of an upstream tool, but for a user for datasets this is very annoying, as when you have dozens of different datasets with different preprocessing functions you can't really move them all into the same file. It may be worth seeing if there is a way to specialize the dependency (eg. subclass it) and enforce behaviors that makes sense for your product.\r\n\r\nI was able to work around this for now by setting `__module__ = None`. If such workarounds are required for now it may be better to document it somewhere than a single obscure issue from a long time ago.\r\n\r\nAs this is a duplicate issue I'm closing it.\r\n\r\nI have another issue with the cache https://github.com/huggingface/datasets/issues/6179 can you take a look?" ]
Map cache does not detect function changes in another module
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6184/reactions" }
I_kwDODunzps5vU9l_
null
2023-08-25T22:59:14Z
https://api.github.com/repos/huggingface/datasets/issues/6184/comments
```python # dataset.py import os import datasets if not os.path.exists('/tmp/test.json'): with open('/tmp/test.json', 'w') as file: file.write('[{"text": "hello"}]') def transform(example): text = example['text'] # text += ' world' return {'text': text} data = datasets.load_dataset('json', data_files=['/tmp/test.json'], split='train') data = data.map(transform) ``` ```python # test.py import dataset print(next(iter(dataset.data))) ``` Initialize cache ``` python3 test.py # {'text': 'hello'} ``` Edit dataset.py and uncomment the commented line, run again ``` python3 test.py # {'text': 'hello'} # expected: {'text': 'hello world'} ``` Clear cache and run again ``` rm -rf ~/.cache/huggingface/datasets/* python3 test.py # {'text': 'hello world'} ``` If instead the two files are combined, then changes to the function are detected correctly. But it's expected when working on any realistic codebase that things will be modularized into separate files.
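A hedged sketch of the workaround from the maintainer's comment above, keeping the two-file layout from the report; clearing `__module__` makes `dill` serialize the function by value, so the cache fingerprint changes when the function changes (this is a workaround, not an official API):

```python
# dataset.py
def transform(example):
    text = example['text']
    text += ' world'
    return {'text': text}

# Serialize by value rather than by reference, so edits to this function
# invalidate the map cache even though it lives outside __main__.
transform.__module__ = None
```

```python
# test.py
import datasets
from dataset import transform

# The load + map calls live here now, per the comment above.
data = datasets.load_dataset('json', data_files=['/tmp/test.json'], split='train')
data = data.map(transform)
print(next(iter(data)))
```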
{ "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanasdf", "id": 511073, "login": "jonathanasdf", "node_id": "MDQ6VXNlcjUxMTA3Mw==", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanasdf" }
https://api.github.com/repos/huggingface/datasets/issues/6184/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6184/timeline
closed
false
6,184
null
2023-08-29T20:56:49Z
null
false
1,867,743,276
https://api.github.com/repos/huggingface/datasets/issues/6183
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6183/events
[]
null
2023-08-29T13:26:22Z
[]
https://github.com/huggingface/datasets/issues/6183
NONE
completed
null
null
[ "Same problem", "This was fixed in https://github.com/huggingface/datasets/pull/6155, which will be included in the next release (or you can install `datasets` from source to use it immediately)." ]
Load dataset with non-existent file
{ "+1": 0, "-1": 0, "confused": 1, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6183/reactions" }
I_kwDODunzps5vU4As
null
2023-08-25T22:21:22Z
https://api.github.com/repos/huggingface/datasets/issues/6183/comments
### Describe the bug When loading a dataset from datasets and passing a wrong path to the JSON data file, the error message does not contain anything about a "wrong path" or "file does not exist" - ```SchemaInferenceError: Please pass `features` or at least one example when writing data``` ### Steps to reproduce the bug ```python from datasets import load_dataset load_dataset('json', data_files='/home/alexey/unreal_file.json') ``` ### Expected behavior Raise an OS FileNotFoundError or a custom error with an informative message ### Environment info ``` # packages in environment at /home/alexey/.conda/envs/alex_LoRA: # # Name Version Build Channel _libgcc_mutex 0.1 main _openmp_mutex 5.1 1_gnu accelerate 0.21.0 pypi_0 pypi aiohttp 3.8.5 pypi_0 pypi aiosignal 1.3.1 pypi_0 pypi antlr4-python3-runtime 4.9.3 pypi_0 pypi appdirs 1.4.4 pypi_0 pypi asttokens 2.0.5 pyhd3eb1b0_0 async-timeout 4.0.3 pypi_0 pypi attrs 23.1.0 pypi_0 pypi backcall 0.2.0 pyhd3eb1b0_0 bitsandbytes 0.41.1 pypi_0 pypi bzip2 1.0.8 h7b6447c_0 ca-certificates 2023.05.30 h06a4308_0 certifi 2023.7.22 pypi_0 pypi charset-normalizer 3.2.0 pypi_0 pypi click 8.1.6 pypi_0 pypi cmake 3.27.2 pypi_0 pypi comm 0.1.2 py310h06a4308_0 contourpy 1.1.0 pypi_0 pypi cycler 0.11.0 pypi_0 pypi datasets 2.14.4 pypi_0 pypi debugpy 1.6.7 py310h6a678d5_0 decorator 5.1.1 pyhd3eb1b0_0 dill 0.3.7 pypi_0 pypi docker-pycreds 0.4.0 pypi_0 pypi executing 0.8.3 pyhd3eb1b0_0 filelock 3.12.2 pypi_0 pypi fire 0.5.0 pypi_0 pypi fonttools 4.42.0 pypi_0 pypi frozenlist 1.4.0 pypi_0 pypi fsspec 2023.6.0 pypi_0 pypi gitdb 4.0.10 pypi_0 pypi gitpython 3.1.32 pypi_0 pypi huggingface-hub 0.16.4 pypi_0 pypi idna 3.4 pypi_0 pypi ipykernel 6.25.0 py310h2f386ee_0 ipython 8.12.2 py310h06a4308_0 ipython-genutils 0.2.0 pypi_0 pypi ipywidgets 8.0.4 py310h06a4308_0 jedi 0.18.1 py310h06a4308_1 jinja2 3.1.2 pypi_0 pypi jsonschema 4.19.0 pypi_0 pypi jsonschema-specifications 2023.7.1 pypi_0 pypi jupyter_client 8.1.0 py310h06a4308_0 jupyter_core 5.3.0 py310h06a4308_0 jupyterlab_widgets 3.0.5 py310h06a4308_0 kiwisolver 1.4.4 pypi_0 pypi ld_impl_linux-64 2.38 h1181459_1 libffi 3.3 he6710b0_2 libgcc-ng 11.2.0 h1234567_1 libgomp 11.2.0 h1234567_1 libsodium 1.0.18 h7b6447c_0 libstdcxx-ng 11.2.0 h1234567_1 libuuid 1.41.5 h5eee18b_0 lightning-utilities 0.9.0 pypi_0 pypi lit 16.0.6 pypi_0 pypi markupsafe 2.1.3 pypi_0 pypi matplotlib 3.7.2 pypi_0 pypi matplotlib-inline 0.1.6 py310h06a4308_0 mpmath 1.3.0 pypi_0 pypi multidict 6.0.4 pypi_0 pypi multiprocess 0.70.15 pypi_0 pypi nbformat 4.2.0 pypi_0 pypi ncurses 6.4 h6a678d5_0 nest-asyncio 1.5.6 py310h06a4308_0 networkx 3.1 pypi_0 pypi numpy 1.25.2 pypi_0 pypi nvidia-cublas-cu11 11.10.3.66 pypi_0 pypi nvidia-cuda-cupti-cu11 11.7.101 pypi_0 pypi nvidia-cuda-nvrtc-cu11 11.7.99 pypi_0 pypi nvidia-cuda-runtime-cu11 11.7.99 pypi_0 pypi nvidia-cudnn-cu11 8.5.0.96 pypi_0 pypi nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi nvidia-curand-cu11 10.2.10.91 pypi_0 pypi nvidia-cusolver-cu11 11.4.0.1 pypi_0 pypi nvidia-cusparse-cu11 11.7.4.91 pypi_0 pypi nvidia-nccl-cu11 2.14.3 pypi_0 pypi nvidia-nvtx-cu11 11.7.91 pypi_0 pypi omegaconf 2.3.0 pypi_0 pypi openssl 1.1.1v h7f8727e_0 packaging 23.0 py310h06a4308_0 pandas 2.0.3 pypi_0 pypi parso 0.8.3 pyhd3eb1b0_0 pathtools 0.1.2 pypi_0 pypi peft 0.4.0 pypi_0 pypi pexpect 4.8.0 pyhd3eb1b0_3 pickleshare 0.7.5 pyhd3eb1b0_1003 pillow 10.0.0 pypi_0 pypi pip 23.2.1 py310h06a4308_0 platformdirs 2.5.2 py310h06a4308_0 plotly 5.16.1 pypi_0 pypi prompt-toolkit 3.0.36 py310h06a4308_0 protobuf 4.24.0 pypi_0 pypi psutil 5.9.0 py310h5eee18b_0 ptyprocess
0.7.0 pyhd3eb1b0_2 pure_eval 0.2.2 pyhd3eb1b0_0 pyarrow 12.0.1 pypi_0 pypi pygments 2.15.1 py310h06a4308_1 pyparsing 3.0.9 pypi_0 pypi python 3.10.0 h12debd9_5 python-dateutil 2.8.2 pyhd3eb1b0_0 pytorch-lightning 2.0.6 pypi_0 pypi pytz 2023.3 pypi_0 pypi pyyaml 6.0.1 pypi_0 pypi pyzmq 25.1.0 py310h6a678d5_0 readline 8.2 h5eee18b_0 referencing 0.30.2 pypi_0 pypi regex 2023.8.8 pypi_0 pypi requests 2.31.0 pypi_0 pypi rpds-py 0.9.2 pypi_0 pypi safetensors 0.3.2 pypi_0 pypi scipy 1.11.1 pypi_0 pypi sentencepiece 0.1.99 pypi_0 pypi sentry-sdk 1.29.2 pypi_0 pypi setproctitle 1.3.2 pypi_0 pypi setuptools 68.0.0 py310h06a4308_0 six 1.16.0 pyhd3eb1b0_1 smmap 5.0.0 pypi_0 pypi sqlite 3.41.2 h5eee18b_0 stack_data 0.2.0 pyhd3eb1b0_0 sympy 1.12 pypi_0 pypi tenacity 8.2.3 pypi_0 pypi termcolor 2.3.0 pypi_0 pypi tk 8.6.12 h1ccaba5_0 tokenizers 0.13.3 pypi_0 pypi torch 2.0.1 pypi_0 pypi torchmetrics 1.0.3 pypi_0 pypi tornado 6.3.2 py310h5eee18b_0 tqdm 4.66.1 pypi_0 pypi traitlets 5.7.1 py310h06a4308_0 transformers 4.31.0 pypi_0 pypi triton 2.0.0 pypi_0 pypi typing-extensions 4.7.1 pypi_0 pypi tzdata 2023.3 pypi_0 pypi urllib3 2.0.4 pypi_0 pypi wandb 0.15.8 pypi_0 pypi wcwidth 0.2.5 pyhd3eb1b0_0 wheel 0.38.4 py310h06a4308_0 widgetsnbextension 4.0.5 py310h06a4308_0 xxhash 3.3.0 pypi_0 pypi xz 5.4.2 h5eee18b_0 yarl 1.9.2 pypi_0 pypi zeromq 4.3.4 h2531618_0 zlib 1.2.13 h5eee18b_0 active environment : None user config file : /home/alexey/.condarc populated config files : conda version : 23.1.0 conda-build version : 3.22.0 python version : 3.9.13.final.0 virtual packages : __archspec=1=x86_64 __cuda=12.0=0 __glibc=2.35=0 __linux=5.19.0=0 __unix=0=0 base environment : /opt/anaconda/anaconda3 (read only) conda av data dir : /opt/anaconda/anaconda3/etc/conda conda av metadata url : None channel URLs : https://repo.anaconda.com/pkgs/main/linux-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/r/linux-64 https://repo.anaconda.com/pkgs/r/noarch package cache : /opt/anaconda/anaconda3/pkgs /home/alexey/.conda/pkgs envs directories : /home/alexey/.conda/envs /opt/anaconda/anaconda3/envs platform : linux-64 user-agent : conda/23.1.0 requests/2.31.0 CPython/3.9.13 Linux/5.19.0-46-generic ubuntu/22.04.2 glibc/2.35 UID:GID : 1009:1009 netrc file : /home/alexey/.netrc offline mode : False ```
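Until a release includes the fix referenced in the comments, a minimal client-side guard is one way to surface a clearer error (the path is the one from the report):

```python
import os
from datasets import load_dataset

data_files = ['/home/alexey/unreal_file.json']

# Fail early with an informative message instead of the opaque
# SchemaInferenceError raised by older datasets versions.
missing = [f for f in data_files if not os.path.isfile(f)]
if missing:
    raise FileNotFoundError(f"Data files not found: {missing}")

data = load_dataset('json', data_files=data_files)
```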
{ "avatar_url": "https://avatars.githubusercontent.com/u/64750224?v=4", "events_url": "https://api.github.com/users/freQuensy23-coder/events{/privacy}", "followers_url": "https://api.github.com/users/freQuensy23-coder/followers", "following_url": "https://api.github.com/users/freQuensy23-coder/following{/other_user}", "gists_url": "https://api.github.com/users/freQuensy23-coder/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/freQuensy23-coder", "id": 64750224, "login": "freQuensy23-coder", "node_id": "MDQ6VXNlcjY0NzUwMjI0", "organizations_url": "https://api.github.com/users/freQuensy23-coder/orgs", "received_events_url": "https://api.github.com/users/freQuensy23-coder/received_events", "repos_url": "https://api.github.com/users/freQuensy23-coder/repos", "site_admin": false, "starred_url": "https://api.github.com/users/freQuensy23-coder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/freQuensy23-coder/subscriptions", "type": "User", "url": "https://api.github.com/users/freQuensy23-coder" }
https://api.github.com/repos/huggingface/datasets/issues/6183/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6183/timeline
closed
false
6,183
null
2023-08-29T13:26:22Z
null
false