Dataset columns:

| column | type | stats |
|--------|------|-------|
| url | string | lengths 61–61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 75–75 |
| comments_url | string | lengths 70–70 |
| events_url | string | lengths 68–68 |
| html_url | string | lengths 49–51 |
| id | int64 | 904M–1.99B |
| node_id | string | lengths 18–32 |
| number | int64 | 2.42k–6.41k |
| title | string | lengths 1–290 |
| user | dict | |
| labels | list | lengths 0–4 |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0–4 |
| milestone | dict | |
| comments | sequence | lengths 0–30 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | string | 3 values |
| active_lock_reason | float64 | |
| body | string | lengths 0–36.2k |
| reactions | dict | |
| timeline_url | string | lengths 70–70 |
| performed_via_github_app | float64 | |
| state_reason | string | 3 values |
| draft | float64 | 0–1 |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
url: https://api.github.com/repos/huggingface/datasets/issues/6406
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/6406/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/6406/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/6406/events
html_url: https://github.com/huggingface/datasets/issues/6406
id: 1990469045
node_id: I_kwDODunzps52pCW1
number: 6406
title: CI Build PR Documentation is broken: ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'
user: { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: "2023-11-13T11:36:10"
updated_at: "2023-11-13T11:36:10"
closed_at: null
author_association: MEMBER
active_lock_reason: null
body:
Our CI Build PR Documentation is broken. See: https://github.com/huggingface/datasets/actions/runs/6799554060/job/18486828777?pr=6390
```
ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'
```
reactions: { "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6406/reactions" }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6406/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
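The body above shows only the ImportError itself, so here is a minimal reproduction sketch. The version bound is an assumption (to the best of my knowledge, `TypeAliasType` was backported in `typing_extensions` 4.6.0), and the issue does not state how it was resolved:

```python
# Minimal reproduction sketch for the ImportError above.
# Assumption: TypeAliasType exists only in typing_extensions >= 4.6.0, so any
# environment pinned below that fails exactly as the CI log shows.
import importlib.metadata

print(importlib.metadata.version("typing_extensions"))
try:
    from typing_extensions import TypeAliasType  # noqa: F401
except ImportError as err:
    print(err)  # cannot import name 'TypeAliasType' from 'typing_extensions'
```

The usual remedy for this class of failure is upgrading the pinned package in the docs-build environment, e.g. `pip install -U "typing_extensions>=4.6.0"`, though that exact pin is an assumption rather than something stated in the issue.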
url: https://api.github.com/repos/huggingface/datasets/issues/6405
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/6405/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/6405/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/6405/events
html_url: https://github.com/huggingface/datasets/issues/6405
id: 1990358743
node_id: I_kwDODunzps52onbX
number: 6405
title: ConfigNamesError on a simple CSV file
user: { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
labels: [ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments:
1. "The viewer is working now. Based on the repo commit history, the bug was due to the incorrect format of the `features` field in the README YAML (`Value` requires `dtype`, e.g., `Value("string")`, but it was not specified)"
2. "Feel free to close the issue"
3. "Oh, OK! Thanks. So, there was no reason to open an issue"
created_at: "2023-11-13T10:28:29"
updated_at: "2023-11-13T10:28:33"
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
body:
See https://huggingface.co/datasets/Nguyendo1999/mmath/discussions/1
```
Error code: ConfigNamesError
Exception: TypeError
Message: __init__() missing 1 required positional argument: 'dtype'
Traceback: Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 65, in compute_config_names_response
    for config in sorted(get_dataset_config_names(path=dataset, token=hf_token))
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names
    dataset_module = dataset_module_factory(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1512, in dataset_module_factory
    raise e1 from None
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1489, in dataset_module_factory
    return HubDatasetModuleFactoryWithoutScript(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1039, in get_module
    dataset_infos = DatasetInfosDict.from_dataset_card_data(dataset_card_data)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 468, in from_dataset_card_data
    dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"])
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 399, in _from_yaml_dict
    yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1838, in _from_yaml_list
    return cls.from_dict(from_yaml_inner(yaml_data))
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1690, in from_dict
    obj = generate_from_dict(dic)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1345, in generate_from_dict
    return {key: generate_from_dict(value) for key, value in obj.items()}
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1345, in <dictcomp>
    return {key: generate_from_dict(value) for key, value in obj.items()}
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1353, in generate_from_dict
    return class_type(**{k: v for k, v in obj.items() if k in field_names})
TypeError: __init__() missing 1 required positional argument: 'dtype'
```
This is the CSV file: https://huggingface.co/datasets/Nguyendo1999/mmath/blob/dbcdd7c2c6fc447f852ec136a7532292802bb46f/math_train.csv
reactions: { "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6405/reactions" }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6405/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
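The comments above identify the root cause: a `Value` feature declared without a `dtype` in the README YAML. A minimal sketch of the distinction, using hypothetical column names rather than the actual columns of `math_train.csv`:

```python
# Sketch of the root cause behind the ConfigNamesError traceback above.
from datasets import Features, Value

# Correct: dtype is Value's required first positional argument.
features = Features({"question": Value("string"), "answer": Value("string")})
print(features)

# What the malformed README YAML effectively declared: a Value with no dtype.
try:
    Value()
except TypeError as err:
    print(err)  # ... missing 1 required positional argument: 'dtype'
```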
url: https://api.github.com/repos/huggingface/datasets/issues/6404
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/6404/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/6404/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/6404/events
html_url: https://github.com/huggingface/datasets/pull/6404
id: 1990211901
node_id: PR_kwDODunzps5fRrJ-
number: 6404
title: Support pyarrow 14.0.1
user: { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005974 / 0.011353 (-0.005378) | 0.003707 / 0.011008 (-0.007301) | 0.079908 / 0.038508 (0.041399) | 0.036891 / 0.023109 (0.013781) | 0.390355 / 0.275898 (0.114457) | 0.424439 / 0.323480 (0.100960) | 0.004936 / 0.007986 (-0.003050) | 0.002886 / 0.004328 (-0.001442) | 0.062793 / 0.004250 (0.058542) | 0.054192 / 0.037052 (0.017139) | 0.394697 / 0.258489 (0.136208) | 0.437775 / 0.293841 (0.143934) | 0.027596 / 0.128546 (-0.100950) | 0.008006 / 0.075646 (-0.067640) | 0.262515 / 0.419271 (-0.156757) | 0.071014 / 0.043533 (0.027481) | 0.392964 / 0.255139 (0.137825) | 0.417449 / 0.283200 (0.134249) | 0.021819 / 0.141683 (-0.119864) | 1.458083 / 1.452155 (0.005929) | 1.489042 / 1.492716 (-0.003674) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230303 / 0.018006 (0.212297) | 0.439361 / 0.000490 (0.438871) | 0.010615 / 0.000200 (0.010415) | 0.000303 / 0.000054 (0.000249) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026600 / 0.037411 (-0.010811) | 0.078605 / 0.014526 (0.064079) | 0.088552 / 0.176557 (-0.088005) | 0.149429 / 0.737135 (-0.587706) | 0.087921 / 0.296338 (-0.208417) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422063 / 0.215209 (0.206854) | 4.201333 / 2.077655 (2.123678) | 1.982284 / 1.504120 (0.478164) | 1.779625 / 1.541195 (0.238431) | 1.872454 / 1.468490 
(0.403964) | 0.502713 / 4.584777 (-4.082063) | 3.103372 / 3.745712 (-0.642340) | 3.030516 / 5.269862 (-2.239346) | 1.909123 / 4.565676 (-2.656554) | 0.057134 / 0.424275 (-0.367141) | 0.006405 / 0.007607 (-0.001202) | 0.494452 / 0.226044 (0.268408) | 4.839345 / 2.268929 (2.570417) | 2.424721 / 55.444624 (-53.019904) | 2.028618 / 6.876477 (-4.847859) | 2.082528 / 2.142072 (-0.059545) | 0.587396 / 4.805227 (-4.217831) | 0.125013 / 6.500664 (-6.375651) | 0.061369 / 0.075469 (-0.014100) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.235799 / 1.841788 (-0.605989) | 17.919977 / 8.074308 (9.845669) | 13.868524 / 10.191392 (3.677132) | 0.146058 / 0.680424 (-0.534366) | 0.016826 / 0.534201 (-0.517375) | 0.337512 / 0.579283 (-0.241771) | 0.390263 / 0.434364 (-0.044101) | 0.385336 / 0.540337 (-0.155001) | 0.566004 / 1.386936 (-0.820932) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006537 / 0.011353 (-0.004816) | 0.003787 / 0.011008 (-0.007221) | 0.062568 / 0.038508 (0.024060) | 0.066672 / 0.023109 (0.043563) | 0.420447 / 0.275898 (0.144549) | 0.457260 / 0.323480 (0.133780) | 0.005005 / 0.007986 (-0.002981) | 0.003037 / 0.004328 (-0.001291) | 0.062095 / 0.004250 (0.057844) | 0.049619 / 0.037052 (0.012567) | 0.429935 / 0.258489 (0.171446) | 0.471566 / 0.293841 (0.177725) | 0.029688 / 0.128546 (-0.098859) | 0.008028 / 0.075646 (-0.067619) | 0.067915 / 0.419271 (-0.351356) | 0.042066 / 0.043533 (-0.001467) | 0.419275 / 0.255139 (0.164136) | 0.444819 / 0.283200 (0.161619) | 0.020100 / 0.141683 (-0.121583) | 1.439057 / 1.452155 (-0.013098) | 1.495657 / 1.492716 (0.002940) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211148 / 0.018006 (0.193142) | 0.423777 / 0.000490 (0.423288) | 0.005892 / 0.000200 (0.005693) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026469 / 0.037411 (-0.010942) | 0.081438 / 0.014526 (0.066912) | 0.092007 / 0.176557 (-0.084550) | 0.143433 / 0.737135 (-0.593703) | 0.093039 / 0.296338 (-0.203300) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.410468 / 0.215209 (0.195259) | 4.083783 / 2.077655 (2.006128) | 2.234501 / 1.504120 (0.730381) | 2.122323 / 1.541195 (0.581128) | 2.255036 / 1.468490 (0.786546) | 0.497712 / 4.584777 (-4.087065) | 3.231187 / 3.745712 (-0.514525) | 3.005399 / 5.269862 (-2.264463) | 1.909516 / 4.565676 (-2.656161) | 0.057529 / 0.424275 (-0.366746) | 0.006475 / 0.007607 (-0.001132) | 0.477282 / 0.226044 (0.251238) | 4.799566 / 2.268929 (2.530637) | 2.497070 / 55.444624 (-52.947554) | 2.206359 / 6.876477 (-4.670118) | 2.281614 / 2.142072 (0.139541) | 0.581710 / 4.805227 (-4.223518) | 0.121572 / 6.500664 (-6.379092) | 0.058774 / 0.075469 (-0.016695) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.301880 / 1.841788 (-0.539908) | 18.287330 / 8.074308 (10.213021) | 14.939642 / 10.191392 (4.748250) | 0.153941 / 0.680424 (-0.526483) | 0.018345 / 0.534201 (-0.515856) | 0.335986 / 0.579283 (-0.243297) | 0.384264 / 0.434364 (-0.050099) | 0.393115 / 0.540337 (-0.147223) | 0.573343 / 1.386936 (-0.813594) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d54b6459f4ed0b2519ddec605dd71956d2d1d3e4 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004805 / 0.011353 (-0.006548) | 0.003261 / 0.011008 (-0.007747) | 0.061585 / 0.038508 (0.023077) | 0.030236 / 0.023109 (0.007127) | 0.234767 / 0.275898 (-0.041131) | 0.260478 / 0.323480 (-0.063002) | 0.004121 / 0.007986 (-0.003865) | 0.002525 / 0.004328 (-0.001803) | 0.048213 / 0.004250 (0.043962) | 0.045229 / 0.037052 (0.008176) | 0.245143 / 0.258489 (-0.013346) | 0.271818 / 0.293841 (-0.022023) | 0.023594 / 0.128546 (-0.104952) | 0.007335 / 0.075646 (-0.068311) | 0.206246 / 0.419271 (-0.213026) | 0.060783 / 0.043533 (0.017250) | 0.238588 / 0.255139 (-0.016551) | 0.274985 / 0.283200 (-0.008214) | 0.018342 / 0.141683 (-0.123341) | 1.135445 / 1.452155 (-0.316710) | 1.184836 / 1.492716 (-0.307881) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095603 / 0.018006 (0.077597) | 0.290340 / 0.000490 (0.289850) | 0.000219 / 0.000200 (0.000019) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018804 / 0.037411 (-0.018607) | 0.062525 / 0.014526 (0.047999) | 0.074797 / 0.176557 (-0.101760) | 0.120360 / 0.737135 (-0.616775) | 0.076182 / 0.296338 (-0.220156) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.274981 / 0.215209 (0.059772) | 2.684931 / 2.077655 (0.607276) | 1.453845 / 1.504120 (-0.050275) | 1.348361 / 1.541195 (-0.192834) | 1.402820 / 1.468490 (-0.065670) | 0.396311 / 4.584777 (-4.188466) | 2.396314 / 3.745712 (-1.349398) | 2.744379 / 5.269862 (-2.525482) | 1.615268 / 4.565676 (-2.950409) | 0.045920 / 0.424275 (-0.378355) | 0.004844 / 0.007607 (-0.002763) | 0.331132 / 0.226044 (0.105087) | 3.325484 / 2.268929 (1.056556) | 1.845734 / 55.444624 (-53.598890) | 1.537268 / 6.876477 (-5.339209) | 1.565155 / 2.142072 (-0.576918) | 0.480032 / 4.805227 (-4.325195) | 0.099917 / 6.500664 (-6.400747) | 0.042276 / 0.075469 (-0.033193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.973128 / 1.841788 (-0.868660) | 12.643790 / 8.074308 (4.569482) | 10.319586 / 10.191392 (0.128194) | 0.131733 / 0.680424 (-0.548691) | 0.014849 / 0.534201 (-0.519352) | 0.270960 / 0.579283 (-0.308323) | 0.265409 / 0.434364 (-0.168955) | 0.309073 / 0.540337 (-0.231264) | 0.466204 / 1.386936 (-0.920732) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005067 / 0.011353 (-0.006286) | 0.003344 / 0.011008 (-0.007665) | 0.047917 / 0.038508 (0.009409) | 0.059556 / 0.023109 (0.036447) | 0.275777 / 0.275898 (-0.000121) | 0.299703 / 0.323480 (-0.023777) | 0.004185 / 0.007986 (-0.003801) | 0.002602 / 0.004328 (-0.001726) | 0.048723 / 0.004250 (0.044472) | 0.040686 / 0.037052 (0.003634) | 0.281078 / 0.258489 (0.022589) | 0.314725 / 0.293841 (0.020885) | 0.024645 / 0.128546 (-0.103901) | 0.007465 / 0.075646 (-0.068182) | 0.053827 / 0.419271 (-0.365445) | 0.033395 / 0.043533 (-0.010138) | 0.273675 / 0.255139 (0.018536) | 0.291261 / 0.283200 (0.008062) | 0.019733 / 0.141683 (-0.121950) | 1.134084 / 1.452155 (-0.318071) | 1.189186 / 1.492716 (-0.303531) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.114960 / 0.018006 (0.096954) | 0.308800 / 0.000490 (0.308311) | 0.000237 / 0.000200 (0.000037) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021633 / 0.037411 (-0.015778) | 0.073192 / 0.014526 (0.058666) | 0.081598 / 0.176557 (-0.094959) | 0.123085 / 0.737135 (-0.614050) | 0.088677 / 0.296338 (-0.207661) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300865 / 0.215209 (0.085656) | 2.956847 / 2.077655 (0.879192) | 1.613890 / 1.504120 (0.109770) | 1.494074 / 1.541195 (-0.047121) | 1.550345 / 1.468490 (0.081855) | 0.408880 / 
4.584777 (-4.175897) | 2.422848 / 3.745712 (-1.322865) | 2.690623 / 5.269862 (-2.579239) | 1.546922 / 4.565676 (-3.018755) | 0.047192 / 0.424275 (-0.377083) | 0.004882 / 0.007607 (-0.002725) | 0.360625 / 0.226044 (0.134580) | 3.512678 / 2.268929 (1.243749) | 1.978633 / 55.444624 (-53.465992) | 1.686927 / 6.876477 (-5.189549) | 1.748387 / 2.142072 (-0.393685) | 0.480780 / 4.805227 (-4.324447) | 0.099163 / 6.500664 (-6.401501) | 0.041194 / 0.075469 (-0.034275) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.989087 / 1.841788 (-0.852700) | 12.341951 / 8.074308 (4.267643) | 11.109329 / 10.191392 (0.917936) | 0.143329 / 0.680424 (-0.537095) | 0.015565 / 0.534201 (-0.518636) | 0.269532 / 0.579283 (-0.309751) | 0.274899 / 0.434364 (-0.159465) | 0.309308 / 0.540337 (-0.231030) | 0.439651 / 1.386936 (-0.947285) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#04a3f006a1a88c894ea10610d66dfddd73ad1490 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007880 / 0.011353 (-0.003473) | 0.004386 / 0.011008 (-0.006622) | 0.099067 / 0.038508 (0.060559) | 0.048036 / 0.023109 (0.024927) | 0.368349 / 0.275898 (0.092451) | 0.400052 / 0.323480 (0.076572) | 0.004493 / 0.007986 (-0.003493) | 0.003732 / 0.004328 (-0.000597) | 0.076153 / 0.004250 (0.071902) | 0.071024 / 0.037052 (0.033972) | 0.379771 / 0.258489 (0.121282) | 0.425005 / 0.293841 (0.131164) | 0.036092 / 0.128546 (-0.092454) | 0.009825 / 0.075646 (-0.065822) | 0.340217 / 0.419271 (-0.079055) | 0.089571 / 0.043533 (0.046038) | 0.371426 / 0.255139 (0.116287) | 0.397864 / 0.283200 (0.114664) | 0.029440 / 0.141683 (-0.112243) | 1.778100 / 1.452155 (0.325945) | 1.857202 / 1.492716 (0.364486) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.254022 / 0.018006 (0.236015) | 0.549844 / 0.000490 (0.549354) | 0.012824 / 0.000200 (0.012624) | 0.000378 / 
0.000054 (0.000324) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032334 / 0.037411 (-0.005077) | 0.096101 / 0.014526 (0.081576) | 0.117825 / 0.176557 (-0.058731) | 0.179277 / 0.737135 (-0.557858) | 0.112614 / 0.296338 (-0.183724) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455051 / 0.215209 (0.239842) | 4.537086 / 2.077655 (2.459431) | 2.198662 / 1.504120 (0.694542) | 1.982772 / 1.541195 (0.441578) | 2.058673 / 1.468490 (0.590182) | 0.569268 / 4.584777 (-4.015509) | 4.095000 / 3.745712 (0.349288) | 3.891680 / 5.269862 (-1.378182) | 2.345129 / 4.565676 (-2.220548) | 0.066974 / 0.424275 (-0.357301) | 0.008557 / 0.007607 (0.000950) | 0.545290 / 0.226044 (0.319245) | 5.453377 / 2.268929 (3.184448) | 2.858688 / 55.444624 (-52.585936) | 2.502367 / 6.876477 (-4.374109) | 2.515658 / 2.142072 (0.373586) | 0.681423 / 4.805227 (-4.123804) | 0.155975 / 6.500664 (-6.344689) | 0.070872 / 0.075469 (-0.004597) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.474674 / 1.841788 (-0.367114) | 21.653619 / 8.074308 (13.579311) | 16.277111 / 10.191392 (6.085719) | 0.166445 / 0.680424 (-0.513979) | 0.021676 / 0.534201 (-0.512525) | 0.466949 / 0.579283 (-0.112334) | 0.500953 / 0.434364 (0.066589) | 0.540413 / 0.540337 (0.000076) | 0.792989 / 1.386936 (-0.593947) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007633 / 0.011353 (-0.003720) | 0.004468 / 0.011008 (-0.006540) | 0.075573 / 0.038508 (0.037065) | 0.081174 / 0.023109 (0.058064) | 0.440741 / 0.275898 (0.164843) | 0.489493 / 0.323480 (0.166013) | 0.006180 / 0.007986 (-0.001805) | 0.003693 / 0.004328 (-0.000636) | 0.074692 / 0.004250 (0.070441) | 0.061732 / 0.037052 (0.024680) | 0.460391 / 0.258489 (0.201902) | 0.505575 / 0.293841 (0.211734) | 0.037692 / 0.128546 (-0.090854) | 0.009870 / 0.075646 (-0.065776) | 0.083830 / 0.419271 (-0.335442) | 0.056255 / 0.043533 (0.012723) | 0.439330 / 0.255139 (0.184191) | 0.475598 / 0.283200 (0.192399) | 0.026626 / 0.141683 (-0.115056) | 1.794410 / 1.452155 (0.342255) | 1.882510 / 1.492716 (0.389794) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236194 / 0.018006 (0.218187) | 0.486109 / 0.000490 (0.485619) | 0.006652 / 0.000200 (0.006453) | 0.000108 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037277 / 0.037411 (-0.000134) | 0.108904 / 0.014526 (0.094378) | 0.122699 / 0.176557 (-0.053857) | 0.182388 / 0.737135 (-0.554747) | 0.122826 / 0.296338 (-0.173512) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.485989 / 0.215209 (0.270780) | 4.913263 / 2.077655 (2.835609) | 2.571618 / 1.504120 (1.067498) | 2.401248 / 1.541195 (0.860054) | 2.501117 / 1.468490 (1.032627) | 0.570989 / 4.584777 (-4.013788) | 4.107420 / 3.745712 (0.361708) | 3.814977 / 5.269862 (-1.454885) | 2.282539 / 4.565676 (-2.283138) | 0.067765 / 0.424275 (-0.356511) | 0.008561 / 0.007607 (0.000954) | 0.584515 / 0.226044 (0.358471) | 5.817821 / 2.268929 (3.548893) | 3.211202 / 55.444624 (-52.233422) | 2.764480 / 6.876477 (-4.111996) | 2.807301 / 2.142072 (0.665229) | 0.676882 / 4.805227 (-4.128346) | 0.150124 / 6.500664 (-6.350540) | 0.067205 / 0.075469 (-0.008265) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.594945 / 1.841788 (-0.246843) | 22.533511 / 8.074308 (14.459203) | 17.099693 / 10.191392 (6.908301) | 0.195954 / 0.680424 (-0.484470) | 0.023968 / 0.534201 (-0.510233) | 0.471337 / 0.579283 (-0.107946) | 0.491017 / 0.434364 (0.056653) | 0.561342 / 0.540337 (0.021004) | 0.797116 / 1.386936 (-0.589820) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#98871b9ba46e89e75e9d0dddc49f4241373c575d \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006235 / 0.011353 (-0.005118) | 0.003688 / 0.011008 (-0.007321) | 0.080801 / 0.038508 (0.042293) | 0.036243 / 0.023109 (0.013134) | 0.312173 / 0.275898 (0.036275) | 0.346239 / 0.323480 (0.022759) | 0.003429 / 0.007986 (-0.004556) | 0.003806 / 0.004328 (-0.000523) | 0.063236 / 0.004250 (0.058986) | 0.053229 / 0.037052 (0.016177) | 0.315184 / 0.258489 (0.056695) | 0.360124 / 0.293841 (0.066283) | 0.027447 / 0.128546 (-0.101099) | 0.008029 / 0.075646 (-0.067618) | 0.262766 / 0.419271 (-0.156505) | 0.068421 / 0.043533 (0.024888) | 0.309028 / 0.255139 (0.053889) | 0.345859 / 0.283200 (0.062659) | 0.021388 / 0.141683 (-0.120295) | 1.452807 / 1.452155 (0.000652) | 1.502803 / 1.492716 (0.010087) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211297 / 0.018006 (0.193291) | 0.423364 / 0.000490 (0.422874) | 0.004574 / 0.000200 (0.004374) | 0.000272 / 0.000054 (0.000218) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023805 / 0.037411 (-0.013606) | 0.072309 / 0.014526 (0.057783) | 0.083274 / 0.176557 (-0.093283) | 0.143594 / 0.737135 (-0.593541) | 0.083777 / 0.296338 (-0.212561) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415691 / 0.215209 
(0.200482) | 4.128621 / 2.077655 (2.050967) | 1.931128 / 1.504120 (0.427008) | 1.737486 / 1.541195 (0.196292) | 1.806314 / 1.468490 (0.337823) | 0.501405 / 4.584777 (-4.083372) | 3.082042 / 3.745712 (-0.663670) | 2.980224 / 5.269862 (-2.289637) | 1.879780 / 4.565676 (-2.685897) | 0.057546 / 0.424275 (-0.366729) | 0.006422 / 0.007607 (-0.001186) | 0.479813 / 0.226044 (0.253768) | 4.854497 / 2.268929 (2.585568) | 2.529674 / 55.444624 (-52.914950) | 2.283041 / 6.876477 (-4.593436) | 2.377173 / 2.142072 (0.235101) | 0.589654 / 4.805227 (-4.215573) | 0.126190 / 6.500664 (-6.374474) | 0.062391 / 0.075469 (-0.013079) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.232023 / 1.841788 (-0.609764) | 17.576621 / 8.074308 (9.502313) | 13.437075 / 10.191392 (3.245683) | 0.143367 / 0.680424 (-0.537057) | 0.016638 / 0.534201 (-0.517563) | 0.332806 / 0.579283 (-0.246477) | 0.356029 / 0.434364 (-0.078335) | 0.385610 / 0.540337 (-0.154727) | 0.563268 / 1.386936 (-0.823668) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006293 / 0.011353 (-0.005060) | 0.003692 / 0.011008 (-0.007317) | 0.062075 / 0.038508 (0.023567) | 0.062104 / 0.023109 (0.038995) | 0.407478 / 0.275898 (0.131580) | 0.434982 / 0.323480 (0.111502) | 0.004889 / 0.007986 (-0.003097) | 0.002915 / 0.004328 (-0.001413) | 0.061426 / 0.004250 (0.057176) | 0.048027 / 0.037052 (0.010974) | 0.410504 / 0.258489 (0.152015) | 0.435383 / 0.293841 (0.141542) | 0.029419 / 0.128546 (-0.099127) | 0.008275 / 0.075646 (-0.067371) | 0.067796 / 0.419271 (-0.351476) | 0.041696 / 0.043533 (-0.001837) | 0.398882 / 0.255139 (0.143743) | 0.419480 / 0.283200 (0.136281) | 0.021519 / 0.141683 (-0.120164) | 1.436961 / 1.452155 (-0.015194) | 1.507961 / 1.492716 (0.015245) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223190 / 0.018006 (0.205184) | 0.416281 / 0.000490 (0.415791) | 0.003370 / 0.000200 (0.003170) | 0.000080 / 0.000054 
(0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025923 / 0.037411 (-0.011488) | 0.079989 / 0.014526 (0.065463) | 0.091289 / 0.176557 (-0.085268) | 0.141212 / 0.737135 (-0.595923) | 0.091717 / 0.296338 (-0.204622) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434640 / 0.215209 (0.219431) | 4.326154 / 2.077655 (2.248500) | 2.364845 / 1.504120 (0.860725) | 2.194040 / 1.541195 (0.652846) | 2.276665 / 1.468490 (0.808175) | 0.501879 / 4.584777 (-4.082898) | 3.073307 / 3.745712 (-0.672405) | 2.893823 / 5.269862 (-2.376039) | 1.820594 / 4.565676 (-2.745083) | 0.057595 / 0.424275 (-0.366680) | 0.006516 / 0.007607 (-0.001091) | 0.513633 / 0.226044 (0.287589) | 5.104799 / 2.268929 (2.835870) | 2.845025 / 55.444624 (-52.599599) | 2.513852 / 6.876477 (-4.362624) | 2.561044 / 2.142072 (0.418972) | 0.582711 / 4.805227 (-4.222516) | 0.120631 / 6.500664 (-6.380034) | 0.056738 / 0.075469 (-0.018731) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303370 / 1.841788 (-0.538418) | 18.023568 / 8.074308 (9.949259) | 14.637973 / 10.191392 (4.446581) | 0.145182 / 0.680424 (-0.535241) | 0.018061 / 0.534201 (-0.516140) | 0.333219 / 0.579283 (-0.246065) | 0.373184 / 0.434364 (-0.061180) | 0.388176 / 0.540337 (-0.152161) | 0.564752 / 1.386936 (-0.822184) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#aecdc94580d105d4b70c94e8e238ce097f97af90 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007230 / 0.011353 (-0.004122) | 0.003727 / 0.011008 (-0.007281) | 0.078893 / 0.038508 (0.040385) | 0.042600 / 0.023109 (0.019491) | 0.301905 / 0.275898 (0.026007) | 0.328478 / 0.323480 (0.004998) | 0.003960 / 0.007986 (-0.004026) | 0.004530 / 0.004328 (0.000201) | 0.059446 / 0.004250 (0.055196) | 0.061241 / 0.037052 (0.024189) | 0.301878 / 0.258489 (0.043389) | 0.340935 / 0.293841 (0.047095) | 0.030559 / 0.128546 (-0.097988) | 0.008016 / 0.075646 (-0.067630) | 0.305174 / 0.419271 (-0.114097) | 0.080374 / 0.043533 (0.036842) | 0.307162 / 0.255139 (0.052023) | 0.342459 / 0.283200 (0.059259) | 0.025881 / 0.141683 (-0.115801) | 1.443311 / 1.452155 (-0.008844) | 1.631060 / 1.492716 (0.138344) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242676 / 0.018006 (0.224670) | 0.463941 / 0.000490 (0.463451) | 0.007762 / 0.000200 (0.007562) | 0.000582 / 0.000054 (0.000527) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027334 / 0.037411 (-0.010077) | 0.078910 / 0.014526 (0.064384) | 0.091399 / 0.176557 (-0.085157) | 0.143318 / 0.737135 (-0.593818) | 0.089761 / 0.296338 (-0.206577) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463002 / 0.215209 (0.247793) | 4.627235 / 2.077655 (2.549580) | 2.256699 / 1.504120 (0.752579) | 2.057615 / 1.541195 (0.516421) | 2.126424 / 1.468490 (0.657934) | 0.571969 / 4.584777 (-4.012808) | 4.130260 / 3.745712 (0.384548) | 3.833521 / 5.269862 (-1.436341) | 2.320141 / 4.565676 (-2.245535) | 0.067587 / 0.424275 (-0.356688) | 0.008452 / 0.007607 (0.000845) | 0.546478 / 0.226044 (0.320433) | 5.070678 / 2.268929 (2.801750) | 2.325387 / 55.444624 (-53.119237) | 2.044041 / 6.876477 (-4.832435) | 2.019714 / 2.142072 (-0.122358) | 0.563589 / 4.805227 (-4.241639) | 0.135269 / 6.500664 (-6.365395) | 0.058208 / 0.075469 (-0.017261) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.283156 / 1.841788 (-0.558631) | 18.617776 / 8.074308 (10.543468) | 13.360700 / 10.191392 (3.169308) | 0.160001 / 0.680424 (-0.520423) | 0.021538 / 0.534201 (-0.512663) | 0.384169 / 0.579283 (-0.195114) | 0.407517 / 0.434364 (-0.026847) | 
0.427295 / 0.540337 (-0.113042) | 0.655288 / 1.386936 (-0.731648) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006854 / 0.011353 (-0.004499) | 0.003442 / 0.011008 (-0.007566) | 0.060622 / 0.038508 (0.022114) | 0.074649 / 0.023109 (0.051540) | 0.341733 / 0.275898 (0.065835) | 0.360096 / 0.323480 (0.036616) | 0.006235 / 0.007986 (-0.001751) | 0.003447 / 0.004328 (-0.000882) | 0.057301 / 0.004250 (0.053051) | 0.059022 / 0.037052 (0.021970) | 0.369523 / 0.258489 (0.111034) | 0.386280 / 0.293841 (0.092439) | 0.034319 / 0.128546 (-0.094228) | 0.008291 / 0.075646 (-0.067355) | 0.070403 / 0.419271 (-0.348868) | 0.050433 / 0.043533 (0.006901) | 0.347262 / 0.255139 (0.092123) | 0.380543 / 0.283200 (0.097343) | 0.024492 / 0.141683 (-0.117191) | 1.446721 / 1.452155 (-0.005433) | 1.541614 / 1.492716 (0.048898) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226148 / 0.018006 (0.208142) | 0.442150 / 0.000490 (0.441660) | 0.004997 / 0.000200 (0.004797) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032866 / 0.037411 (-0.004546) | 0.088097 / 0.014526 (0.073571) | 0.102178 / 0.176557 (-0.074379) | 0.151129 / 0.737135 (-0.586006) | 0.103953 / 0.296338 (-0.192386) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.376701 / 0.215209 (0.161492) | 3.886997 / 2.077655 (1.809342) | 2.027143 / 1.504120 (0.523023) | 1.808647 / 1.541195 
(0.267453) | 1.867664 / 1.468490 (0.399173) | 0.459487 / 4.584777 (-4.125290) | 3.640801 / 3.745712 (-0.104911) | 3.242512 / 5.269862 (-2.027350) | 1.889174 / 4.565676 (-2.676503) | 0.052415 / 0.424275 (-0.371860) | 0.007479 / 0.007607 (-0.000128) | 0.457706 / 0.226044 (0.231662) | 4.815041 / 2.268929 (2.546112) | 2.542470 / 55.444624 (-52.902154) | 2.137084 / 6.876477 (-4.739392) | 2.122867 / 2.142072 (-0.019205) | 0.553756 / 4.805227 (-4.251471) | 0.118902 / 6.500664 (-6.381763) | 0.058149 / 0.075469 (-0.017320) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272615 / 1.841788 (-0.569173) | 19.455709 / 8.074308 (11.381401) | 14.111693 / 10.191392 (3.920301) | 0.165741 / 0.680424 (-0.514683) | 0.023680 / 0.534201 (-0.510521) | 0.431458 / 0.579283 (-0.147825) | 0.433612 / 0.434364 (-0.000752) | 0.465615 / 0.540337 (-0.074722) | 0.678177 / 1.386936 (-0.708759) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#998623fa51991320740b945d0853ee20807304d7 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004870 / 0.011353 (-0.006483) | 0.002834 / 0.011008 (-0.008175) | 0.061359 / 0.038508 (0.022851) | 0.031286 / 0.023109 (0.008177) | 0.236701 / 0.275898 (-0.039197) | 0.258139 / 0.323480 (-0.065341) | 0.002943 / 0.007986 (-0.005043) | 0.002989 / 0.004328 (-0.001339) | 0.048046 / 0.004250 (0.043796) | 0.044927 / 0.037052 (0.007874) | 0.241339 / 0.258489 (-0.017151) | 0.273912 / 0.293841 (-0.019929) | 0.023427 / 0.128546 (-0.105119) | 0.007251 / 0.075646 (-0.068395) | 0.202730 / 0.419271 (-0.216542) | 0.056223 / 0.043533 (0.012691) | 0.239908 / 0.255139 (-0.015231) | 0.254723 / 0.283200 (-0.028476) | 0.018223 / 0.141683 (-0.123460) | 1.119691 / 1.452155 (-0.332464) | 1.163802 / 1.492716 (-0.328915) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091303 / 0.018006 (0.073297) | 0.302097 / 
0.000490 (0.301607) | 0.000214 / 0.000200 (0.000014) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018201 / 0.037411 (-0.019210) | 0.062092 / 0.014526 (0.047566) | 0.074806 / 0.176557 (-0.101751) | 0.119625 / 0.737135 (-0.617510) | 0.074680 / 0.296338 (-0.221659) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281140 / 0.215209 (0.065931) | 2.752094 / 2.077655 (0.674439) | 1.436813 / 1.504120 (-0.067307) | 1.312947 / 1.541195 (-0.228247) | 1.331022 / 1.468490 (-0.137468) | 0.396579 / 4.584777 (-4.188198) | 2.406181 / 3.745712 (-1.339531) | 2.597180 / 5.269862 (-2.672682) | 1.565879 / 4.565676 (-2.999798) | 0.046330 / 0.424275 (-0.377945) | 0.004776 / 0.007607 (-0.002831) | 0.339681 / 0.226044 (0.113637) | 3.279533 / 2.268929 (1.010605) | 1.793352 / 55.444624 (-53.651272) | 1.493910 / 6.876477 (-5.382567) | 1.514494 / 2.142072 (-0.627579) | 0.467955 / 4.805227 (-4.337272) | 0.097764 / 6.500664 (-6.402900) | 0.041659 / 0.075469 (-0.033810) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.943204 / 1.841788 (-0.898583) | 11.350848 / 8.074308 (3.276540) | 10.169944 / 10.191392 (-0.021448) | 0.130882 / 0.680424 (-0.549542) | 0.013804 / 0.534201 (-0.520397) | 0.269107 / 0.579283 (-0.310177) | 0.261685 / 0.434364 (-0.172679) | 0.305610 / 0.540337 (-0.234727) | 0.430586 / 1.386936 (-0.956350) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | 
write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004835 / 0.011353 (-0.006518) | 0.002530 / 0.011008 (-0.008479) | 0.047383 / 0.038508 (0.008875) | 0.052559 / 0.023109 (0.029450) | 0.265015 / 0.275898 (-0.010883) | 0.286955 / 0.323480 (-0.036525) | 0.003931 / 0.007986 (-0.004054) | 0.002038 / 0.004328 (-0.002290) | 0.047458 / 0.004250 (0.043207) | 0.038257 / 0.037052 (0.001205) | 0.270569 / 0.258489 (0.012080) | 0.298968 / 0.293841 (0.005127) | 0.024615 / 0.128546 (-0.103932) | 0.006969 / 0.075646 (-0.068677) | 0.052361 / 0.419271 (-0.366911) | 0.032701 / 0.043533 (-0.010832) | 0.269126 / 0.255139 (0.013987) | 0.285934 / 0.283200 (0.002735) | 0.018121 / 0.141683 (-0.123562) | 1.129796 / 1.452155 (-0.322359) | 1.272831 / 1.492716 (-0.219885) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092058 / 0.018006 (0.074051) | 0.303544 / 0.000490 (0.303054) | 0.000232 / 0.000200 (0.000032) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020983 / 0.037411 (-0.016428) | 0.069798 / 0.014526 (0.055272) | 0.081410 / 0.176557 (-0.095146) | 0.120403 / 0.737135 (-0.616732) | 0.082813 / 0.296338 (-0.213525) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295943 / 0.215209 (0.080734) | 2.895761 / 2.077655 (0.818106) | 1.583534 / 1.504120 (0.079414) | 1.458397 / 1.541195 (-0.082798) | 1.492113 / 1.468490 (0.023623) | 0.402364 / 4.584777 (-4.182413) | 2.469777 / 3.745712 (-1.275935) | 2.565262 / 5.269862 (-2.704599) | 1.525914 / 4.565676 (-3.039763) | 0.047168 / 0.424275 (-0.377107) | 0.004800 / 0.007607 (-0.002808) | 0.348356 / 0.226044 (0.122311) | 3.463184 / 2.268929 (1.194255) | 1.930240 / 55.444624 (-53.514385) | 1.644312 / 6.876477 (-5.232165) | 1.625477 / 2.142072 (-0.516596) | 0.480781 / 4.805227 (-4.324446) | 0.098431 / 6.500664 (-6.402233) | 0.041071 / 0.075469 (-0.034398) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.973633 / 1.841788 (-0.868154) | 11.952261 / 8.074308 (3.877953) | 11.038222 / 10.191392 (0.846830) | 0.142755 / 0.680424 (-0.537669) | 0.015389 / 0.534201 (-0.518812) | 0.274144 / 0.579283 (-0.305139) | 0.282319 / 0.434364 (-0.152045) | 0.314330 / 0.540337 (-0.226007) | 0.435315 / 1.386936 
(-0.951621) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#05200c0a4f8f02c3890ab79a10b44ab0bcf11629 \"CML watermark\")\n", "The red CI job is unrelated to this PR. It appeared 5 days ago. See:\r\n- https://github.com/huggingface/datasets/pull/6390#pullrequestreview-1721070927\r\n- https://github.com/huggingface/datasets/issues/6406", "Let's do a new release once this is merged ? cc @mariosasko as well let us know if the fix sounds good to you", "@lhoestq Yes, this sounds good to me!", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004932 / 0.011353 (-0.006421) | 0.002956 / 0.011008 (-0.008052) | 0.061999 / 0.038508 (0.023491) | 0.030174 / 0.023109 (0.007065) | 0.241483 / 0.275898 (-0.034415) | 0.261578 / 0.323480 (-0.061902) | 0.002881 / 0.007986 (-0.005105) | 0.002451 / 0.004328 (-0.001878) | 0.048176 / 0.004250 (0.043925) | 0.045028 / 0.037052 (0.007976) | 0.244304 / 0.258489 (-0.014185) | 0.275834 / 0.293841 (-0.018007) | 0.023312 / 0.128546 (-0.105234) | 0.007361 / 0.075646 (-0.068286) | 0.204433 / 0.419271 (-0.214838) | 0.054561 / 0.043533 (0.011028) | 0.236902 / 0.255139 (-0.018237) | 0.269358 / 0.283200 (-0.013842) | 0.017736 / 0.141683 (-0.123947) | 1.112444 / 1.452155 (-0.339711) | 1.170260 / 1.492716 (-0.322456) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093081 / 0.018006 (0.075074) | 0.311470 / 0.000490 (0.310981) | 0.000212 / 0.000200 (0.000013) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018654 / 0.037411 (-0.018757) | 0.063239 / 0.014526 (0.048714) | 0.073759 / 0.176557 (-0.102798) | 0.120279 / 0.737135 (-0.616857) | 0.076214 / 0.296338 (-0.220124) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 
| shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287219 / 0.215209 (0.072010) | 2.765378 / 2.077655 (0.687723) | 1.459733 / 1.504120 (-0.044387) | 1.325999 / 1.541195 (-0.215196) | 1.349957 / 1.468490 (-0.118533) | 0.413093 / 4.584777 (-4.171684) | 2.394758 / 3.745712 (-1.350954) | 2.633916 / 5.269862 (-2.635945) | 1.621629 / 4.565676 (-2.944047) | 0.046839 / 0.424275 (-0.377436) | 0.004786 / 0.007607 (-0.002822) | 0.336261 / 0.226044 (0.110217) | 3.348196 / 2.268929 (1.079267) | 1.853050 / 55.444624 (-53.591574) | 1.543926 / 6.876477 (-5.332551) | 1.573675 / 2.142072 (-0.568398) | 0.484088 / 4.805227 (-4.321139) | 0.100820 / 6.500664 (-6.399845) | 0.042194 / 0.075469 (-0.033275) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945186 / 1.841788 (-0.896601) | 11.859855 / 8.074308 (3.785547) | 10.459883 / 10.191392 (0.268491) | 0.142024 / 0.680424 (-0.538400) | 0.013882 / 0.534201 (-0.520319) | 0.269584 / 0.579283 (-0.309699) | 0.264353 / 0.434364 (-0.170011) | 0.307988 / 0.540337 (-0.232349) | 0.423655 / 1.386936 (-0.963281) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004891 / 0.011353 (-0.006461) | 0.003087 / 0.011008 (-0.007921) | 0.048206 / 0.038508 (0.009697) | 0.058570 / 0.023109 (0.035461) | 0.268552 / 0.275898 (-0.007346) | 0.287839 / 0.323480 (-0.035641) | 0.004044 / 0.007986 (-0.003942) | 0.002388 / 0.004328 (-0.001940) | 0.048186 / 0.004250 (0.043935) | 0.038719 / 0.037052 (0.001667) | 0.271940 / 0.258489 (0.013451) | 0.299716 / 0.293841 (0.005875) | 0.027166 / 0.128546 (-0.101380) | 0.007388 / 0.075646 (-0.068258) | 0.053885 / 0.419271 (-0.365387) | 0.032804 / 0.043533 (-0.010729) | 0.271664 / 0.255139 (0.016525) | 0.284613 / 0.283200 (0.001414) | 0.018488 / 0.141683 (-0.123195) | 1.125854 
/ 1.452155 (-0.326301) | 1.195896 / 1.492716 (-0.296820) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092438 / 0.018006 (0.074431) | 0.315265 / 0.000490 (0.314775) | 0.000228 / 0.000200 (0.000028) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021373 / 0.037411 (-0.016038) | 0.070611 / 0.014526 (0.056085) | 0.080391 / 0.176557 (-0.096165) | 0.118749 / 0.737135 (-0.618386) | 0.082340 / 0.296338 (-0.213999) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295583 / 0.215209 (0.080374) | 2.882152 / 2.077655 (0.804497) | 1.565088 / 1.504120 (0.060968) | 1.451954 / 1.541195 (-0.089241) | 1.505783 / 1.468490 (0.037293) | 0.404699 / 4.584777 (-4.180078) | 2.451703 / 3.745712 (-1.294009) | 2.596301 / 5.269862 (-2.673560) | 1.547014 / 4.565676 (-3.018662) | 0.047750 / 0.424275 (-0.376525) | 0.004850 / 0.007607 (-0.002757) | 0.346893 / 0.226044 (0.120849) | 3.383355 / 2.268929 (1.114426) | 1.943933 / 55.444624 (-53.500692) | 1.657513 / 6.876477 (-5.218964) | 1.687166 / 2.142072 (-0.454906) | 0.478543 / 4.805227 (-4.326685) | 0.097804 / 6.500664 (-6.402860) | 0.041392 / 0.075469 (-0.034078) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983894 / 1.841788 (-0.857893) | 12.446443 / 8.074308 (4.372135) | 10.973461 / 10.191392 (0.782069) | 0.131630 / 0.680424 (-0.548794) | 0.017196 / 0.534201 (-0.517005) | 0.270873 / 0.579283 (-0.308411) | 0.284379 / 0.434364 (-0.149985) | 0.306103 / 0.540337 (-0.234234) | 0.413762 / 1.386936 (-0.973174) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#980ad4c6e6e33f0129db8745e84de8c298741aa2 \"CML watermark\")\n", "Note I had to add `pa.ExtensionType.__reduce__` because this is used by `copy.deepcopy` when using `.with_format`. 
See error below.\r\n\r\nThis method was added in pyarrow-13.0.0: https://github.com/apache/arrow/pull/36170\r\n- We need to re-implement it as long we support lower pyarrow versions\r\n\r\nErrors: https://github.com/huggingface/datasets/actions/runs/6861278161/job/18656665772\r\n```\r\n ____________________________ test_dataset_map[True] ____________________________\r\n[gw1] linux -- Python 3.8.18 /opt/hostedtoolcache/Python/3.8.18/x64/bin/python\r\n\r\n> ???\r\nE KeyError: 'extension<datasets.features.features.array3dextensiontype<array3dextensiontype>>'\r\n\r\npyarrow/types.pxi:3155: KeyError\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nwith_none = True\r\n\r\n @pytest.mark.parametrize(\"with_none\", [False, True])\r\n def test_dataset_map(with_none):\r\n ds = datasets.Dataset.from_dict({\"path\": [\"path1\", \"path2\"]})\r\n \r\n def process_data(batch):\r\n batch = {\r\n \"image\": [\r\n np.array(\r\n [\r\n [[1, 2, 3], [4, 5, 6], [7, 8, 9]],\r\n [[10, 20, 30], [40, 50, 60], [70, 80, 90]],\r\n [[100, 200, 300], [400, 500, 600], [700, 800, 900]],\r\n ]\r\n )\r\n for _ in batch[\"path\"]\r\n ]\r\n }\r\n if with_none:\r\n batch[\"image\"][0] = None\r\n return batch\r\n \r\n features = datasets.Features({\"image\": Array3D(dtype=\"int32\", shape=(3, 3, 3))})\r\n processed_ds = ds.map(process_data, batched=True, remove_columns=ds.column_names, features=features)\r\n assert processed_ds.shape == (2, 1)\r\n> with processed_ds.with_format(\"numpy\") as pds:\r\n\r\ntests/features/test_array_xd.py:459: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/arrow_dataset.py:2669: in with_format\r\n dataset = copy.deepcopy(self)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:172: in deepcopy\r\n y = _reconstruct(x, memo, *rv)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:270: in _reconstruct\r\n state = deepcopy(state, memo)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:146: in deepcopy\r\n y = copier(x, memo)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:230: in _deepcopy_dict\r\n y[deepcopy(key, memo)] = deepcopy(value, memo)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:153: in deepcopy\r\n y = copier(memo)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/table.py:188: in __deepcopy__\r\n return _deepcopy(self, memo)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/table.py:86: in _deepcopy\r\n setattr(result, k, copy.deepcopy(v, memo))\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:172: in deepcopy\r\n y = _reconstruct(x, memo, *rv)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:264: in _reconstruct\r\n y = func(*args)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:263: in <genexpr>\r\n args = (deepcopy(arg, memo) for arg in args)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:146: in deepcopy\r\n y = copier(x, memo)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:205: in _deepcopy_list\r\n append(deepcopy(a, memo))\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:172: in deepcopy\r\n y = _reconstruct(x, memo, *rv)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:264: in _reconstruct\r\n y = func(*args)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:263: in <genexpr>\r\n args 
= (deepcopy(arg, memo) for arg in args)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:172: in deepcopy\r\n y = _reconstruct(x, memo, *rv)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:264: in _reconstruct\r\n y = func(*args)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\n> ???\r\nE ValueError: No type alias for extension<datasets.features.features.array3dextensiontype<array3dextensiontype>>\r\n\r\npyarrow/types.pxi:3157: ValueError\r\n```\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_class_encode_column_on_disk - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_dummy_dataset_on_disk - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_tf_dataset_conversion_in_memory - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_tf_dataset_conversion_on_disk - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_tf_dataset_options_in_memory - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_tf_dataset_options_on_disk - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_to_csv_on_disk - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_to_parquet_on_disk - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_to_sql_on_disk - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::test_map_cases[True] - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::test_map_cases[False] - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::test_map_cases[mix] - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/features/test_array_xd.py::ArrayXDDynamicTest::test_map_dataset - ValueError: No type alias for extension<datasets.features.features.array3dextensiontype<array3dextensiontype>>\r\nFAILED tests/features/test_array_xd.py::test_dataset_map[False] - ValueError: No type alias for extension<datasets.features.features.array3dextensiontype<array3dextensiontype>>\r\nFAILED tests/features/test_array_xd.py::test_dataset_map[True] - ValueError: No type alias for extension<datasets.features.features.array3dextensiontype<array3dextensiontype>>\r\n===== 15 failed,\r\n```", "<details>\n<summary>Show 
benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007338 / 0.011353 (-0.004015) | 0.004308 / 0.011008 (-0.006700) | 0.088788 / 0.038508 (0.050280) | 0.039369 / 0.023109 (0.016260) | 0.334527 / 0.275898 (0.058629) | 0.373748 / 0.323480 (0.050268) | 0.005550 / 0.007986 (-0.002435) | 0.003606 / 0.004328 (-0.000723) | 0.072238 / 0.004250 (0.067988) | 0.061271 / 0.037052 (0.024218) | 0.336333 / 0.258489 (0.077844) | 0.398256 / 0.293841 (0.104415) | 0.041941 / 0.128546 (-0.086605) | 0.013372 / 0.075646 (-0.062274) | 0.336221 / 0.419271 (-0.083050) | 0.083013 / 0.043533 (0.039480) | 0.334743 / 0.255139 (0.079604) | 0.362572 / 0.283200 (0.079373) | 0.031161 / 0.141683 (-0.110521) | 1.563441 / 1.452155 (0.111287) | 1.704059 / 1.492716 (0.211343) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252978 / 0.018006 (0.234972) | 0.506348 / 0.000490 (0.505859) | 0.011679 / 0.000200 (0.011479) | 0.000104 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026257 / 0.037411 (-0.011154) | 0.085936 / 0.014526 (0.071410) | 0.098542 / 0.176557 (-0.078015) | 0.154507 / 0.737135 (-0.582628) | 0.111493 / 0.296338 (-0.184845) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.575941 / 0.215209 (0.360732) | 5.590230 / 2.077655 (3.512576) | 2.463330 / 1.504120 (0.959211) | 2.125760 / 1.541195 (0.584565) | 2.095933 / 1.468490 (0.627443) | 0.844768 / 
4.584777 (-3.740009) | 4.768995 / 3.745712 (1.023282) | 4.670484 / 5.269862 (-0.599377) | 2.630386 / 4.565676 (-1.935290) | 0.085996 / 0.424275 (-0.338279) | 0.007900 / 0.007607 (0.000293) | 0.685463 / 0.226044 (0.459419) | 6.699310 / 2.268929 (4.430381) | 3.132542 / 55.444624 (-52.312083) | 2.527963 / 6.876477 (-4.348513) | 2.381835 / 2.142072 (0.239763) | 0.909668 / 4.805227 (-3.895559) | 0.209979 / 6.500664 (-6.290685) | 0.079222 / 0.075469 (0.003753) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.444895 / 1.841788 (-0.396892) | 20.388140 / 8.074308 (12.313832) | 19.354148 / 10.191392 (9.162756) | 0.222433 / 0.680424 (-0.457991) | 0.029710 / 0.534201 (-0.504491) | 0.427153 / 0.579283 (-0.152130) | 0.537500 / 0.434364 (0.103136) | 0.506917 / 0.540337 (-0.033421) | 0.726088 / 1.386936 (-0.660848) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007652 / 0.011353 (-0.003701) | 0.004320 / 0.011008 (-0.006688) | 0.072721 / 0.038508 (0.034212) | 0.068204 / 0.023109 (0.045095) | 0.392087 / 0.275898 (0.116189) | 0.431638 / 0.323480 (0.108158) | 0.005419 / 0.007986 (-0.002566) | 0.004305 / 0.004328 (-0.000023) | 0.069042 / 0.004250 (0.064791) | 0.051555 / 0.037052 (0.014503) | 0.412141 / 0.258489 (0.153651) | 0.438802 / 0.293841 (0.144961) | 0.043631 / 0.128546 (-0.084915) | 0.014169 / 0.075646 (-0.061478) | 0.079571 / 0.419271 (-0.339701) | 0.056707 / 0.043533 (0.013174) | 0.413698 / 0.255139 (0.158559) | 0.414127 / 0.283200 (0.130928) | 0.031380 / 0.141683 (-0.110303) | 1.677157 / 1.452155 (0.225003) | 1.755155 / 1.492716 (0.262439) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257236 / 0.018006 (0.239230) | 0.521347 / 0.000490 (0.520858) | 0.006282 / 0.000200 (0.006082) | 0.000139 / 0.000054 (0.000085) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028433 / 0.037411 (-0.008978) | 0.087698 / 0.014526 (0.073172) | 0.108840 / 0.176557 (-0.067716) | 0.157432 / 0.737135 (-0.579704) | 0.103144 / 0.296338 (-0.193195) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.598745 / 0.215209 (0.383536) | 5.981460 / 2.077655 (3.903805) | 2.556931 / 1.504120 (1.052811) | 2.179915 / 1.541195 (0.638720) | 2.240841 / 1.468490 (0.772351) | 0.811501 / 4.584777 (-3.773276) | 4.718282 / 3.745712 (0.972570) | 4.365738 / 5.269862 (-0.904124) | 2.669798 / 4.565676 (-1.895878) | 0.099135 / 0.424275 (-0.325140) | 0.007369 / 0.007607 (-0.000238) | 0.669491 / 0.226044 (0.443447) | 6.700389 / 2.268929 (4.431461) | 3.155328 / 55.444624 (-52.289296) | 2.563375 / 6.876477 (-4.313102) | 2.545191 / 2.142072 (0.403119) | 0.961359 / 4.805227 (-3.843868) | 0.189391 / 6.500664 (-6.311273) | 0.061597 / 0.075469 (-0.013873) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.564008 / 1.841788 (-0.277780) | 21.401307 / 8.074308 (13.326999) | 20.693441 / 10.191392 (10.502049) | 0.229340 / 0.680424 (-0.451084) | 0.033637 / 0.534201 (-0.500564) | 0.429394 / 0.579283 (-0.149889) | 0.557202 / 0.434364 (0.122838) | 0.510284 / 0.540337 (-0.030054) | 0.725661 / 1.386936 (-0.661276) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#45abe297c178b829afcee853f9958b0c5a6469aa \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004820 / 0.011353 (-0.006533) | 0.003152 / 0.011008 (-0.007856) | 0.061842 / 0.038508 (0.023334) | 0.030127 / 0.023109 (0.007018) | 0.257409 / 0.275898 (-0.018489) | 0.269382 / 0.323480 (-0.054097) | 0.004288 / 0.007986 (-0.003698) | 0.002500 / 0.004328 (-0.001829) | 0.048520 / 0.004250 (0.044270) | 0.046815 / 0.037052 (0.009763) | 0.245858 / 0.258489 (-0.012631) | 0.289636 / 0.293841 (-0.004205) | 0.023983 / 0.128546 (-0.104563) | 0.007336 / 0.075646 (-0.068310) | 0.202347 / 0.419271 (-0.216924) | 0.057737 / 0.043533 (0.014204) | 0.245922 / 0.255139 (-0.009217) | 0.268788 / 0.283200 (-0.014412) | 0.017819 / 0.141683 (-0.123864) | 1.149889 / 1.452155 (-0.302265) | 1.227192 / 1.492716 (-0.265524) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092234 / 0.018006 (0.074228) | 0.310259 / 0.000490 (0.309769) | 0.000223 / 0.000200 (0.000023) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019059 / 0.037411 (-0.018352) | 0.064904 / 0.014526 (0.050378) | 0.073531 / 0.176557 (-0.103026) | 0.120879 / 0.737135 (-0.616257) | 0.075410 / 0.296338 (-0.220929) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275364 / 0.215209 (0.060155) | 2.724379 / 2.077655 (0.646725) | 1.447617 / 1.504120 (-0.056503) | 1.366794 / 1.541195 (-0.174401) | 1.345849 / 1.468490 (-0.122641) | 0.411205 / 4.584777 (-4.173572) | 2.412712 / 3.745712 (-1.333000) | 2.612469 / 5.269862 (-2.657393) | 1.552113 / 4.565676 (-3.013564) | 0.045783 / 0.424275 (-0.378492) | 0.004782 / 0.007607 (-0.002825) | 0.339218 / 0.226044 (0.113174) | 3.359540 / 2.268929 (1.090612) | 1.821369 / 55.444624 (-53.623256) | 1.540742 / 6.876477 (-5.335734) | 1.531845 / 2.142072 (-0.610227) | 0.462009 / 4.805227 (-4.343218) | 0.097794 / 6.500664 (-6.402870) | 0.041222 / 0.075469 (-0.034247) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.938319 / 1.841788 (-0.903469) | 11.712003 / 8.074308 (3.637695) | 10.325317 / 10.191392 (0.133925) | 0.126812 / 0.680424 (-0.553612) | 0.013734 / 0.534201 (-0.520467) | 0.279509 / 0.579283 (-0.299774) | 0.269265 / 0.434364 (-0.165099) | 0.322033 / 0.540337 (-0.218304) | 
0.441610 / 1.386936 (-0.945326) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004882 / 0.011353 (-0.006471) | 0.002984 / 0.011008 (-0.008024) | 0.048318 / 0.038508 (0.009810) | 0.054642 / 0.023109 (0.031533) | 0.268599 / 0.275898 (-0.007299) | 0.292916 / 0.323480 (-0.030564) | 0.004108 / 0.007986 (-0.003878) | 0.002500 / 0.004328 (-0.001829) | 0.048452 / 0.004250 (0.044202) | 0.038835 / 0.037052 (0.001782) | 0.275410 / 0.258489 (0.016921) | 0.307284 / 0.293841 (0.013443) | 0.024720 / 0.128546 (-0.103826) | 0.007274 / 0.075646 (-0.068372) | 0.054419 / 0.419271 (-0.364853) | 0.032815 / 0.043533 (-0.010718) | 0.273660 / 0.255139 (0.018521) | 0.289183 / 0.283200 (0.005984) | 0.017746 / 0.141683 (-0.123937) | 1.153876 / 1.452155 (-0.298278) | 1.212778 / 1.492716 (-0.279938) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095286 / 0.018006 (0.077280) | 0.305185 / 0.000490 (0.304696) | 0.000230 / 0.000200 (0.000030) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021556 / 0.037411 (-0.015855) | 0.071029 / 0.014526 (0.056503) | 0.081914 / 0.176557 (-0.094643) | 0.120553 / 0.737135 (-0.616582) | 0.086696 / 0.296338 (-0.209642) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289750 / 0.215209 (0.074541) | 2.794247 / 2.077655 (0.716592) | 1.577105 / 1.504120 (0.072985) | 1.457706 / 1.541195 (-0.083489) | 1.500481 / 
1.468490 (0.031991) | 0.403834 / 4.584777 (-4.180943) | 2.466810 / 3.745712 (-1.278902) | 2.701008 / 5.269862 (-2.568854) | 1.634821 / 4.565676 (-2.930856) | 0.046954 / 0.424275 (-0.377322) | 0.004811 / 0.007607 (-0.002796) | 0.347622 / 0.226044 (0.121578) | 3.407125 / 2.268929 (1.138197) | 1.987121 / 55.444624 (-53.457504) | 1.689978 / 6.876477 (-5.186499) | 1.731801 / 2.142072 (-0.410271) | 0.478926 / 4.805227 (-4.326301) | 0.100730 / 6.500664 (-6.399934) | 0.043078 / 0.075469 (-0.032391) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963575 / 1.841788 (-0.878212) | 12.675331 / 8.074308 (4.601023) | 11.167584 / 10.191392 (0.976192) | 0.131199 / 0.680424 (-0.549225) | 0.016030 / 0.534201 (-0.518171) | 0.277783 / 0.579283 (-0.301500) | 0.278693 / 0.434364 (-0.155671) | 0.315141 / 0.540337 (-0.225196) | 0.429104 / 1.386936 (-0.957832) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#825c1d25835b64fc3533a63d60bd237f4465f15e \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004807 / 0.011353 (-0.006546) | 0.002925 / 0.011008 (-0.008083) | 0.062560 / 0.038508 (0.024052) | 0.029926 / 0.023109 (0.006817) | 0.264708 / 0.275898 (-0.011190) | 0.273464 / 0.323480 (-0.050016) | 0.003197 / 0.007986 (-0.004788) | 0.002544 / 0.004328 (-0.001784) | 0.048230 / 0.004250 (0.043980) | 0.046552 / 0.037052 (0.009500) | 0.249553 / 0.258489 (-0.008936) | 0.282078 / 0.293841 (-0.011762) | 0.023201 / 0.128546 (-0.105346) | 0.007306 / 0.075646 (-0.068340) | 0.241361 / 0.419271 (-0.177910) | 0.058286 / 0.043533 (0.014753) | 0.245854 / 0.255139 (-0.009285) | 0.266053 / 0.283200 (-0.017146) | 0.020294 / 0.141683 (-0.121388) | 1.102215 / 1.452155 (-0.349939) | 1.170733 / 1.492716 (-0.321984) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094647 / 0.018006 (0.076641) | 0.303819 / 0.000490 (0.303329) | 0.000250 
/ 0.000200 (0.000050) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019036 / 0.037411 (-0.018375) | 0.064729 / 0.014526 (0.050203) | 0.074143 / 0.176557 (-0.102414) | 0.120082 / 0.737135 (-0.617054) | 0.076835 / 0.296338 (-0.219503) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283786 / 0.215209 (0.068577) | 2.751446 / 2.077655 (0.673791) | 1.473789 / 1.504120 (-0.030331) | 1.336968 / 1.541195 (-0.204226) | 1.384148 / 1.468490 (-0.084342) | 0.397452 / 4.584777 (-4.187325) | 2.388042 / 3.745712 (-1.357670) | 2.661291 / 5.269862 (-2.608571) | 1.595454 / 4.565676 (-2.970223) | 0.045919 / 0.424275 (-0.378356) | 0.004879 / 0.007607 (-0.002728) | 0.337862 / 0.226044 (0.111818) | 3.355665 / 2.268929 (1.086737) | 1.875261 / 55.444624 (-53.569363) | 1.540874 / 6.876477 (-5.335603) | 1.653632 / 2.142072 (-0.488440) | 0.473090 / 4.805227 (-4.332138) | 0.100151 / 6.500664 (-6.400513) | 0.042357 / 0.075469 (-0.033112) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.959550 / 1.841788 (-0.882238) | 12.307145 / 8.074308 (4.232837) | 10.719321 / 10.191392 (0.527929) | 0.128376 / 0.680424 (-0.552048) | 0.014406 / 0.534201 (-0.519795) | 0.295208 / 0.579283 (-0.284075) | 0.268891 / 0.434364 (-0.165473) | 0.305446 / 0.540337 (-0.234892) | 0.429591 / 1.386936 (-0.957345) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005189 / 0.011353 (-0.006164) | 0.003082 / 0.011008 (-0.007926) | 0.048956 / 0.038508 (0.010448) | 0.063403 / 0.023109 (0.040294) | 0.272858 / 0.275898 (-0.003040) | 0.295207 / 0.323480 (-0.028273) | 0.004253 / 0.007986 (-0.003733) | 0.002552 / 0.004328 (-0.001776) | 0.048042 / 0.004250 (0.043792) | 0.040429 / 0.037052 (0.003377) | 0.269614 / 0.258489 (0.011125) | 0.307205 / 0.293841 (0.013364) | 0.027912 / 0.128546 (-0.100634) | 0.007621 / 0.075646 (-0.068026) | 0.054020 / 0.419271 (-0.365251) | 0.036958 / 0.043533 (-0.006574) | 0.272457 / 0.255139 (0.017318) | 0.287966 / 0.283200 (0.004766) | 0.019542 / 0.141683 (-0.122141) | 1.116742 / 1.452155 (-0.335413) | 1.194739 / 1.492716 (-0.297977) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093532 / 0.018006 (0.075526) | 0.303262 / 0.000490 (0.302773) | 0.000217 / 0.000200 (0.000017) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021984 / 0.037411 (-0.015428) | 0.075024 / 0.014526 (0.060498) | 0.080959 / 0.176557 (-0.095598) | 0.121780 / 0.737135 (-0.615356) | 0.082817 / 0.296338 (-0.213522) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292766 / 0.215209 (0.077557) | 2.857457 / 2.077655 (0.779802) | 1.621860 / 1.504120 (0.117740) | 1.473783 / 1.541195 (-0.067412) | 1.535211 / 1.468490 (0.066721) | 0.402212 / 4.584777 (-4.182565) | 2.467143 / 3.745712 (-1.278569) | 2.618162 / 5.269862 (-2.651700) | 1.568682 / 4.565676 (-2.996994) | 0.047123 / 0.424275 (-0.377152) | 0.004780 / 0.007607 (-0.002827) | 0.346959 / 0.226044 (0.120914) | 3.395196 / 2.268929 (1.126268) | 1.957835 / 55.444624 (-53.486789) | 1.674287 / 6.876477 (-5.202190) | 1.715879 / 2.142072 (-0.426193) | 0.479481 / 4.805227 (-4.325746) | 0.100043 / 6.500664 (-6.400621) | 0.041289 / 0.075469 (-0.034180) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965418 / 1.841788 (-0.876370) | 12.703830 / 8.074308 (4.629522) | 11.301401 / 10.191392 (1.110009) | 0.131429 / 0.680424 (-0.548995) | 0.016597 / 0.534201 (-0.517604) | 0.273290 / 0.579283 (-0.305993) | 0.285400 / 0.434364 (-0.148964) | 0.307327 / 0.540337 (-0.233011) | 0.434186 / 1.386936 (-0.952750) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c096bd288d07ed86f340ae090e5d4d9c5351f76f \"CML watermark\")\n" ]
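The `pa.ExtensionType.__reduce__` note in the comment thread above is worth a concrete illustration. Below is a minimal sketch, assuming a toy extension type — `ToyArrayExtensionType`, its storage type, and its empty serialization payload are all hypothetical, not the real `datasets` Array2D/Array3D types. `copy.deepcopy` goes through `__reduce__`, so on pyarrow < 13.0.0, where the method is missing, deep-copying a table that holds extension arrays fails as in the traceback quoted above.

```python
# Minimal sketch of re-implementing pa.ExtensionType.__reduce__ for
# pyarrow < 13.0.0. The type name, storage type, and empty serialization
# payload below are illustrative assumptions, not the real datasets types.
import copy

import pyarrow as pa


class ToyArrayExtensionType(pa.ExtensionType):
    def __init__(self):
        pa.ExtensionType.__init__(self, pa.list_(pa.int32()), "toy.array")

    def __arrow_ext_serialize__(self):
        return b""  # this toy type has no parameters to serialize

    @classmethod
    def __arrow_ext_deserialize__(cls, storage_type, serialized):
        return cls()

    def __reduce__(self):
        # Rebuild the type from its serialized form on unpickling/deepcopy,
        # mirroring what pyarrow >= 13 provides natively (apache/arrow#36170).
        return self.__arrow_ext_deserialize__, (
            self.storage_type,
            self.__arrow_ext_serialize__(),
        )


t = ToyArrayExtensionType()
assert isinstance(copy.deepcopy(t), ToyArrayExtensionType)
```

Returning the deserializer plus the serialized payload mirrors what newer pyarrow versions do natively, so the same objects round-trip under both pickling and `deepcopy`.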
"2023-11-13T09:15:39"
"2023-11-13T17:58:37"
null
MEMBER
null
Support `pyarrow` 14.0.1. Fix #6396.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6404/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6404/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6404.diff", "html_url": "https://github.com/huggingface/datasets/pull/6404", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6404.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6404" }
true
https://api.github.com/repos/huggingface/datasets/issues/6403
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6403/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6403/comments
https://api.github.com/repos/huggingface/datasets/issues/6403/events
https://github.com/huggingface/datasets/issues/6403
1,990,098,817
I_kwDODunzps52nn-B
6,403
Cannot import datasets on google colab (python 3.10.12)
{ "avatar_url": "https://avatars.githubusercontent.com/u/15389235?v=4", "events_url": "https://api.github.com/users/nabilaannisa/events{/privacy}", "followers_url": "https://api.github.com/users/nabilaannisa/followers", "following_url": "https://api.github.com/users/nabilaannisa/following{/other_user}", "gists_url": "https://api.github.com/users/nabilaannisa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nabilaannisa", "id": 15389235, "login": "nabilaannisa", "node_id": "MDQ6VXNlcjE1Mzg5MjM1", "organizations_url": "https://api.github.com/users/nabilaannisa/orgs", "received_events_url": "https://api.github.com/users/nabilaannisa/received_events", "repos_url": "https://api.github.com/users/nabilaannisa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nabilaannisa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nabilaannisa/subscriptions", "type": "User", "url": "https://api.github.com/users/nabilaannisa" }
[]
open
false
null
[]
null
[ "You are most likely using an outdated version of `datasets` in the notebook, which can be verified with the `!datasets-cli env` command. You can run `!pip install -U datasets to update the installation." ]
"2023-11-13T08:14:43"
"2023-11-13T08:14:43"
null
NONE
null
### Describe the bug I'm trying a full Colab demo notebook of zero-shot distillation from https://github.com/huggingface/transformers/tree/main/examples/research_projects/zero-shot-distillation but I got this error when importing datasets on my Google Colab (Python version 3.10.12) ![image](https://github.com/huggingface/datasets/assets/15389235/6f7758a2-681d-4436-87d0-5e557838e368) I found the same problem that was solved in [#3326], but it still errors on Google Colab. I can't try locally in a Jupyter notebook because my laptop doesn't meet the resource requirements. Can anyone please help me solve this problem? Thank you 😅 ### Steps to reproduce the bug Error: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) [<ipython-input-8-b6e092f83978>](https://localhost:8080/#) in <cell line: 1>() ----> 1 from datasets import load_dataset 2 3 # Print all the available datasets 4 from huggingface_hub import list_datasets 5 print([dataset.id for dataset in list_datasets()]) 6 frames [/usr/lib/python3.10/functools.py](https://localhost:8080/#) in update_wrapper(wrapper, wrapped, assigned, updated) 59 # Issue #17482: set __wrapped__ last so we don't inadvertently copy it 60 # from the wrapped function when updating __dict__ ---> 61 wrapper.__wrapped__ = wrapped 62 # Return the wrapper so this can be used as a decorator via partial() 63 return wrapper AttributeError: readonly attribute ``` ### Expected behavior Runs successfully on Google Colab (free) ### Environment info Windows 11 x64, Google Colab free
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6403/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6403/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6402
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6402/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6402/comments
https://api.github.com/repos/huggingface/datasets/issues/6402/events
https://github.com/huggingface/datasets/pull/6402
1,989,094,542
PR_kwDODunzps5fOBdK
6,402
Update torch_formatter.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/32204417?v=4", "events_url": "https://api.github.com/users/VarunNSrivastava/events{/privacy}", "followers_url": "https://api.github.com/users/VarunNSrivastava/followers", "following_url": "https://api.github.com/users/VarunNSrivastava/following{/other_user}", "gists_url": "https://api.github.com/users/VarunNSrivastava/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VarunNSrivastava", "id": 32204417, "login": "VarunNSrivastava", "node_id": "MDQ6VXNlcjMyMjA0NDE3", "organizations_url": "https://api.github.com/users/VarunNSrivastava/orgs", "received_events_url": "https://api.github.com/users/VarunNSrivastava/received_events", "repos_url": "https://api.github.com/users/VarunNSrivastava/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VarunNSrivastava/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VarunNSrivastava/subscriptions", "type": "User", "url": "https://api.github.com/users/VarunNSrivastava" }
[]
open
false
null
[]
null
[]
"2023-11-11T19:40:41"
"2023-11-11T19:41:53"
null
NONE
null
Ensure PyTorch images are converted to (C, H, W) instead of (H, W, C). See #6394 for motivation.
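A sketch of the kind of conversion such a fix might apply inside the formatter's tensorization step; this is an illustration under assumptions, not the actual diff, and `tensorize_image` is a hypothetical helper:

```python
import numpy as np
import torch

def tensorize_image(value: np.ndarray) -> torch.Tensor:
    # Decoded images arrive as (H, W, C) NumPy arrays; PyTorch convention is (C, H, W).
    tensor = torch.from_numpy(value)
    if tensor.ndim == 3:
        tensor = tensor.permute(2, 0, 1)
    return tensor
```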
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6402/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6402/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6402.diff", "html_url": "https://github.com/huggingface/datasets/pull/6402", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6402.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6402" }
true
https://api.github.com/repos/huggingface/datasets/issues/6401
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6401/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6401/comments
https://api.github.com/repos/huggingface/datasets/issues/6401/events
https://github.com/huggingface/datasets/issues/6401
1,988,710,061
I_kwDODunzps52iU6t
6,401
dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text") not working
{ "avatar_url": "https://avatars.githubusercontent.com/u/47074021?v=4", "events_url": "https://api.github.com/users/userbox020/events{/privacy}", "followers_url": "https://api.github.com/users/userbox020/followers", "following_url": "https://api.github.com/users/userbox020/following{/other_user}", "gists_url": "https://api.github.com/users/userbox020/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/userbox020", "id": 47074021, "login": "userbox020", "node_id": "MDQ6VXNlcjQ3MDc0MDIx", "organizations_url": "https://api.github.com/users/userbox020/orgs", "received_events_url": "https://api.github.com/users/userbox020/received_events", "repos_url": "https://api.github.com/users/userbox020/repos", "site_admin": false, "starred_url": "https://api.github.com/users/userbox020/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/userbox020/subscriptions", "type": "User", "url": "https://api.github.com/users/userbox020" }
[]
open
false
null
[]
null
[ "Seems like it's a problem with the dataset, since in the [README](https://huggingface.co/datasets/Hyperspace-Technologies/scp-wiki-text/blob/main/README.md) the validation is not specified. Try cloning the dataset, removing the README (or validation split), and loading it locally/ " ]
"2023-11-11T04:09:07"
"2023-11-11T22:51:34"
null
NONE
null
### Describe the bug ``` (datasets) mruserbox@guru-X99:/media/10TB_HHD/_LLM_DATASETS$ python dataset.py Downloading readme: 100%|███████████████████████████████████| 360/360 [00:00<00:00, 2.16MB/s] Downloading data: 100%|█████████████████████████████████| 65.1M/65.1M [00:19<00:00, 3.38MB/s] Downloading data: 100%|█████████████████████████████████| 6.35k/6.35k [00:00<00:00, 20.7kB/s] Downloading data: 100%|█████████████████████████████████| 7.29M/7.29M [00:01<00:00, 3.99MB/s] Downloading data files: 100%|██████████████████████████████████| 3/3 [00:21<00:00, 7.14s/it] Extracting data files: 100%|█████████████████████████████████| 3/3 [00:00<00:00, 1624.23it/s] Generating train split: 100%|█████████████| 314294/314294 [00:00<00:00, 668186.58 examples/s] Generating validation split: 120 examples [00:00, 100422.28 examples/s] Generating test split: 100%|████████████████| 34922/34922 [00:00<00:00, 754683.41 examples/s] Traceback (most recent call last): File "/media/10TB_HHD/_LLM_DATASETS/dataset.py", line 3, in <module> dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text") File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/load.py", line 2153, in load_dataset builder_instance.download_and_prepare( File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/builder.py", line 954, in download_and_prepare self._download_and_prepare( File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/builder.py", line 1067, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/utils/info_utils.py", line 93, in verify_splits raise UnexpectedSplits(str(set(recorded_splits) - set(expected_splits))) datasets.utils.info_utils.UnexpectedSplits: {'validation'} ``` ### Steps to reproduce the bug Name: `dataset.py` Code: ``` from datasets import load_dataset dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text") ``` ### Expected behavior Run without errors ### Environment info ``` name: datasets channels: - defaults dependencies: - _libgcc_mutex=0.1=main - _openmp_mutex=5.1=1_gnu - bzip2=1.0.8=h7b6447c_0 - ca-certificates=2023.08.22=h06a4308_0 - ld_impl_linux-64=2.38=h1181459_1 - libffi=3.4.4=h6a678d5_0 - libgcc-ng=11.2.0=h1234567_1 - libgomp=11.2.0=h1234567_1 - libstdcxx-ng=11.2.0=h1234567_1 - libuuid=1.41.5=h5eee18b_0 - ncurses=6.4=h6a678d5_0 - openssl=3.0.12=h7f8727e_0 - python=3.10.13=h955ad1f_0 - readline=8.2=h5eee18b_0 - setuptools=68.0.0=py310h06a4308_0 - sqlite=3.41.2=h5eee18b_0 - tk=8.6.12=h1ccaba5_0 - wheel=0.41.2=py310h06a4308_0 - xz=5.4.2=h5eee18b_0 - zlib=1.2.13=h5eee18b_0 - pip: - aiohttp==3.8.6 - aiosignal==1.3.1 - async-timeout==4.0.3 - attrs==23.1.0 - certifi==2023.7.22 - charset-normalizer==3.3.2 - click==8.1.7 - datasets==2.14.6 - dill==0.3.7 - filelock==3.13.1 - frozenlist==1.4.0 - fsspec==2023.10.0 - huggingface-hub==0.19.0 - idna==3.4 - multidict==6.0.4 - multiprocess==0.70.15 - numpy==1.26.1 - openai==0.27.8 - packaging==23.2 - pandas==2.1.3 - pip==23.3.1 - platformdirs==4.0.0 - pyarrow==14.0.1 - python-dateutil==2.8.2 - pytz==2023.3.post1 - pyyaml==6.0.1 - requests==2.31.0 - six==1.16.0 - tomli==2.0.1 - tqdm==4.66.1 - typer==0.9.0 - typing-extensions==4.8.0 - tzdata==2023.3 - urllib3==2.0.7 - xxhash==3.4.1 - yarl==1.9.2 prefix: /home/mruserbox/miniconda3/envs/datasets ```
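Besides fixing the dataset's split metadata, a possible client-side workaround is to skip split verification. `verification_mode` is a real `load_dataset` parameter in `datasets` 2.14, though this sketch is untested against this particular repo:

```python
from datasets import load_dataset

# Skip the recorded-vs-generated split check that raises UnexpectedSplits.
dataset = load_dataset(
    "Hyperspace-Technologies/scp-wiki-text",
    verification_mode="no_checks",
)
```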
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6401/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6401/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6400
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6400/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6400/comments
https://api.github.com/repos/huggingface/datasets/issues/6400/events
https://github.com/huggingface/datasets/issues/6400
1,988,571,317
I_kwDODunzps52hzC1
6,400
Safely load datasets by disabling execution of dataset loading script
{ "avatar_url": "https://avatars.githubusercontent.com/u/14367635?v=4", "events_url": "https://api.github.com/users/irenedea/events{/privacy}", "followers_url": "https://api.github.com/users/irenedea/followers", "following_url": "https://api.github.com/users/irenedea/following{/other_user}", "gists_url": "https://api.github.com/users/irenedea/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/irenedea", "id": 14367635, "login": "irenedea", "node_id": "MDQ6VXNlcjE0MzY3NjM1", "organizations_url": "https://api.github.com/users/irenedea/orgs", "received_events_url": "https://api.github.com/users/irenedea/received_events", "repos_url": "https://api.github.com/users/irenedea/repos", "site_admin": false, "starred_url": "https://api.github.com/users/irenedea/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/irenedea/subscriptions", "type": "User", "url": "https://api.github.com/users/irenedea" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "great idea IMO\r\n\r\nthis could be a `trust_remote_code=True` flag like in transformers. We could also default to loading the Parquet conversion rather than executing code (for dataset repos that have both)", "@julien-c that would be great!" ]
"2023-11-10T23:48:29"
"2023-11-13T10:13:22"
null
NONE
null
### Feature request Is there a way to disable execution of the dataset loading script when using `load_dataset`? This is a security vulnerability that could lead to arbitrary code execution. Any suggested workarounds are welcome as well. ### Motivation Executing untrusted loading scripts is a security vulnerability that could lead to arbitrary code execution. ### Your contribution n/a
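Until a `trust_remote_code`-style flag exists (as suggested in the comments above), one stopgap is to refuse repos that ship Python files before calling `load_dataset`. `list_repo_files` is a real `huggingface_hub` API; the policy itself is only a sketch and may reject repos whose `.py` files are harmless:

```python
from huggingface_hub import list_repo_files
from datasets import load_dataset

def load_dataset_without_scripts(repo_id: str, **kwargs):
    # A .py file in a dataset repo usually means a loading script would be executed.
    files = list_repo_files(repo_id, repo_type="dataset")
    if any(name.endswith(".py") for name in files):
        raise RuntimeError(f"{repo_id} ships Python code; refusing to execute it.")
    return load_dataset(repo_id, **kwargs)
```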
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6400/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6400/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6399
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6399/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6399/comments
https://api.github.com/repos/huggingface/datasets/issues/6399/events
https://github.com/huggingface/datasets/issues/6399
1,988,368,503
I_kwDODunzps52hBh3
6,399
TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array
{ "avatar_url": "https://avatars.githubusercontent.com/u/76236359?v=4", "events_url": "https://api.github.com/users/y-hwang/events{/privacy}", "followers_url": "https://api.github.com/users/y-hwang/followers", "following_url": "https://api.github.com/users/y-hwang/following{/other_user}", "gists_url": "https://api.github.com/users/y-hwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/y-hwang", "id": 76236359, "login": "y-hwang", "node_id": "MDQ6VXNlcjc2MjM2MzU5", "organizations_url": "https://api.github.com/users/y-hwang/orgs", "received_events_url": "https://api.github.com/users/y-hwang/received_events", "repos_url": "https://api.github.com/users/y-hwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/y-hwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/y-hwang/subscriptions", "type": "User", "url": "https://api.github.com/users/y-hwang" }
[]
open
false
null
[]
null
[]
"2023-11-10T20:48:46"
"2023-11-10T20:48:46"
null
NONE
null
### Describe the bug Hi, I am preprocessing a large custom dataset with numpy arrays. I am running into this TypeError during writing inside a dataset.map() call. I've tried decreasing the writer batch size, but the error persists. It does not occur for smaller datasets. Thank you! ### Steps to reproduce the bug Traceback (most recent call last): File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1354, in _write_generator_to_queue for i, result in enumerate(func(**kwargs)): File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3493, in _map_single writer.write_batch(batch) File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/arrow_writer.py", line 555, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 243, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/arrow_writer.py", line 184, in __arrow_array__ out = numpy_to_pyarrow_listarray(data) File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/features/features.py", line 1394, in numpy_to_pyarrow_listarray values = pa.ListArray.from_arrays(offsets, values) File "pyarrow/array.pxi", line 2004, in pyarrow.lib.ListArray.from_arrays TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array ### Expected behavior The values passed to pa.ListArray.from_arrays should be a pyarrow Array, not a ChunkedArray ### Environment info datasets v2.14.5 arrow v1.2.3 pyarrow v12.0.1
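For context, this is the shape of the failing call and the knobs the reporter already tried. Both parameters are real `Dataset.map` arguments; `dataset` and `preprocess` are hypothetical stand-ins, and shrinking the batch sizes reportedly did not help here:

```python
# The traceback originates while the writer flushes a batch produced by map().
processed = dataset.map(
    preprocess,
    batched=True,
    batch_size=100,         # smaller input batches
    writer_batch_size=100,  # smaller Arrow write batches (already tried by the reporter)
    num_proc=8,             # the traceback shows a multiprocess worker
)
```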
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6399/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6399/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6398
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6398/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6398/comments
https://api.github.com/repos/huggingface/datasets/issues/6398/events
https://github.com/huggingface/datasets/pull/6398
1,987,786,446
PR_kwDODunzps5fJlP7
6,398
Remove redundant condition in builders
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
open
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004475 / 0.011353 (-0.006878) | 0.002840 / 0.011008 (-0.008168) | 0.061544 / 0.038508 (0.023036) | 0.031237 / 0.023109 (0.008128) | 0.243270 / 0.275898 (-0.032628) | 0.271903 / 0.323480 (-0.051577) | 0.002906 / 0.007986 (-0.005080) | 0.003118 / 0.004328 (-0.001210) | 0.047362 / 0.004250 (0.043112) | 0.047840 / 0.037052 (0.010788) | 0.244044 / 0.258489 (-0.014445) | 0.279310 / 0.293841 (-0.014531) | 0.023408 / 0.128546 (-0.105138) | 0.007110 / 0.075646 (-0.068536) | 0.207328 / 0.419271 (-0.211943) | 0.058463 / 0.043533 (0.014930) | 0.245631 / 0.255139 (-0.009508) | 0.267755 / 0.283200 (-0.015445) | 0.018147 / 0.141683 (-0.123536) | 1.086877 / 1.452155 (-0.365278) | 1.155380 / 1.492716 (-0.337337) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091925 / 0.018006 (0.073919) | 0.299858 / 0.000490 (0.299368) | 0.000232 / 0.000200 (0.000032) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018416 / 0.037411 (-0.018995) | 0.062608 / 0.014526 (0.048082) | 0.073897 / 0.176557 (-0.102660) | 0.120216 / 0.737135 (-0.616919) | 0.075788 / 0.296338 (-0.220550) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287823 / 0.215209 (0.072614) | 2.797546 / 2.077655 (0.719891) | 1.470878 / 1.504120 (-0.033242) | 1.347497 / 1.541195 (-0.193698) | 1.363837 / 
1.468490 (-0.104653) | 0.400069 / 4.584777 (-4.184708) | 2.338870 / 3.745712 (-1.406842) | 2.564075 / 5.269862 (-2.705787) | 1.568454 / 4.565676 (-2.997222) | 0.047103 / 0.424275 (-0.377172) | 0.004783 / 0.007607 (-0.002824) | 0.345244 / 0.226044 (0.119200) | 3.407752 / 2.268929 (1.138823) | 1.826552 / 55.444624 (-53.618073) | 1.536714 / 6.876477 (-5.339763) | 1.543138 / 2.142072 (-0.598934) | 0.478996 / 4.805227 (-4.326232) | 0.099580 / 6.500664 (-6.401085) | 0.041994 / 0.075469 (-0.033475) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.947106 / 1.841788 (-0.894682) | 11.391262 / 8.074308 (3.316954) | 10.531141 / 10.191392 (0.339749) | 0.141497 / 0.680424 (-0.538927) | 0.014214 / 0.534201 (-0.519987) | 0.269346 / 0.579283 (-0.309937) | 0.268129 / 0.434364 (-0.166235) | 0.309496 / 0.540337 (-0.230841) | 0.429207 / 1.386936 (-0.957729) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004770 / 0.011353 (-0.006583) | 0.002878 / 0.011008 (-0.008130) | 0.048248 / 0.038508 (0.009740) | 0.051068 / 0.023109 (0.027959) | 0.272076 / 0.275898 (-0.003822) | 0.292423 / 0.323480 (-0.031057) | 0.004016 / 0.007986 (-0.003970) | 0.002522 / 0.004328 (-0.001807) | 0.047617 / 0.004250 (0.043367) | 0.038168 / 0.037052 (0.001115) | 0.275236 / 0.258489 (0.016746) | 0.303811 / 0.293841 (0.009970) | 0.023816 / 0.128546 (-0.104730) | 0.007177 / 0.075646 (-0.068469) | 0.053453 / 0.419271 (-0.365818) | 0.032425 / 0.043533 (-0.011108) | 0.271620 / 0.255139 (0.016481) | 0.289618 / 0.283200 (0.006418) | 0.017986 / 0.141683 (-0.123697) | 1.154225 / 1.452155 (-0.297930) | 1.224244 / 1.492716 (-0.268472) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090477 / 0.018006 (0.072471) | 0.299461 / 0.000490 (0.298971) | 0.000224 / 0.000200 (0.000024) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022043 / 0.037411 (-0.015369) | 0.070327 / 0.014526 (0.055801) | 0.080132 / 0.176557 (-0.096425) | 0.120007 / 0.737135 (-0.617128) | 0.083037 / 0.296338 (-0.213301) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294538 / 0.215209 (0.079329) | 2.882791 / 2.077655 (0.805136) | 1.582923 / 1.504120 (0.078803) | 1.457091 / 1.541195 (-0.084104) | 1.536149 / 1.468490 (0.067659) | 0.401539 / 4.584777 (-4.183238) | 2.440919 / 3.745712 (-1.304793) | 2.503108 / 5.269862 (-2.766753) | 1.509216 / 4.565676 (-3.056460) | 0.046267 / 0.424275 (-0.378008) | 0.004790 / 0.007607 (-0.002817) | 0.336137 / 0.226044 (0.110093) | 3.331655 / 2.268929 (1.062726) | 1.954228 / 55.444624 (-53.490396) | 1.686637 / 6.876477 (-5.189840) | 1.650278 / 2.142072 (-0.491794) | 0.473895 / 4.805227 (-4.331333) | 0.096908 / 6.500664 (-6.403756) | 0.040387 / 0.075469 (-0.035082) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972999 / 1.841788 (-0.868789) | 11.978367 / 8.074308 (3.904059) | 10.861092 / 10.191392 (0.669699) | 0.129054 / 0.680424 (-0.551369) | 0.015988 / 0.534201 (-0.518213) | 0.268827 / 0.579283 (-0.310456) | 0.271714 / 0.434364 (-0.162649) | 0.304045 / 0.540337 (-0.236293) | 0.413158 / 1.386936 (-0.973778) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9e4348a233a75907c305b3159ac9cb183cf30ea5 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005286 / 0.011353 (-0.006067) | 0.002860 / 0.011008 (-0.008149) | 0.062449 / 0.038508 (0.023941) | 0.035346 / 0.023109 (0.012237) | 0.241685 / 0.275898 (-0.034213) | 0.268116 / 0.323480 (-0.055364) | 0.003050 / 0.007986 (-0.004935) | 0.003134 / 0.004328 (-0.001194) | 0.048818 / 0.004250 (0.044567) | 0.049187 / 0.037052 (0.012135) | 0.247395 / 0.258489 (-0.011094) | 0.280301 / 0.293841 (-0.013540) | 0.023801 / 0.128546 (-0.104745) | 0.007653 / 0.075646 (-0.067994) | 0.204185 / 0.419271 (-0.215087) | 0.071251 / 0.043533 (0.027718) | 0.244409 / 0.255139 (-0.010730) | 0.262363 / 0.283200 (-0.020836) | 0.018631 / 0.141683 (-0.123052) | 1.110152 / 1.452155 (-0.342003) | 1.165093 / 1.492716 (-0.327624) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099536 / 0.018006 (0.081530) | 0.309598 / 0.000490 (0.309109) | 0.000207 / 0.000200 (0.000007) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019213 / 0.037411 (-0.018198) | 0.069296 / 0.014526 (0.054770) | 0.074752 / 0.176557 (-0.101804) | 0.121314 / 0.737135 (-0.615822) | 0.081274 / 0.296338 (-0.215065) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281345 / 0.215209 (0.066136) | 2.755435 / 2.077655 (0.677780) | 1.453358 / 1.504120 (-0.050762) | 1.328222 / 1.541195 (-0.212973) | 1.392281 / 1.468490 (-0.076209) | 0.410539 / 4.584777 (-4.174238) | 2.452072 / 3.745712 (-1.293640) | 2.777757 / 5.269862 (-2.492105) | 1.656719 / 4.565676 (-2.908958) | 0.046844 / 0.424275 (-0.377431) | 0.004785 / 0.007607 (-0.002822) | 0.336567 / 0.226044 (0.110522) | 3.317564 / 2.268929 (1.048635) | 1.830737 / 55.444624 (-53.613888) | 1.528464 / 6.876477 (-5.348013) | 1.620527 / 2.142072 (-0.521545) | 0.480662 / 4.805227 (-4.324565) | 0.100819 / 6.500664 (-6.399845) | 0.042501 / 0.075469 (-0.032968) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962593 / 1.841788 (-0.879195) | 12.508048 / 8.074308 (4.433740) | 11.117398 / 10.191392 (0.926006) | 0.131265 / 0.680424 (-0.549159) | 0.014469 / 0.534201 (-0.519732) | 0.271627 / 0.579283 (-0.307656) | 0.274966 / 0.434364 (-0.159398) | 0.313260 / 
0.540337 (-0.227077) | 0.444741 / 1.386936 (-0.942195) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004974 / 0.011353 (-0.006379) | 0.003383 / 0.011008 (-0.007626) | 0.048792 / 0.038508 (0.010284) | 0.052821 / 0.023109 (0.029712) | 0.267123 / 0.275898 (-0.008775) | 0.293604 / 0.323480 (-0.029876) | 0.003968 / 0.007986 (-0.004018) | 0.002594 / 0.004328 (-0.001735) | 0.047690 / 0.004250 (0.043439) | 0.040236 / 0.037052 (0.003183) | 0.267805 / 0.258489 (0.009315) | 0.310543 / 0.293841 (0.016702) | 0.025707 / 0.128546 (-0.102839) | 0.008012 / 0.075646 (-0.067634) | 0.054460 / 0.419271 (-0.364812) | 0.033545 / 0.043533 (-0.009988) | 0.270166 / 0.255139 (0.015027) | 0.285965 / 0.283200 (0.002765) | 0.019391 / 0.141683 (-0.122292) | 1.144991 / 1.452155 (-0.307164) | 1.198491 / 1.492716 (-0.294225) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094757 / 0.018006 (0.076751) | 0.306712 / 0.000490 (0.306222) | 0.000218 / 0.000200 (0.000018) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020995 / 0.037411 (-0.016417) | 0.070293 / 0.014526 (0.055767) | 0.081441 / 0.176557 (-0.095116) | 0.119538 / 0.737135 (-0.617597) | 0.081454 / 0.296338 (-0.214885) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293451 / 0.215209 (0.078242) | 2.880378 / 2.077655 (0.802723) | 1.572547 / 1.504120 (0.068427) | 1.439172 / 1.541195 
(-0.102023) | 1.506343 / 1.468490 (0.037853) | 0.402764 / 4.584777 (-4.182013) | 2.501341 / 3.745712 (-1.244371) | 2.538494 / 5.269862 (-2.731367) | 1.524306 / 4.565676 (-3.041371) | 0.046401 / 0.424275 (-0.377874) | 0.004781 / 0.007607 (-0.002826) | 0.349448 / 0.226044 (0.123404) | 3.416181 / 2.268929 (1.147252) | 1.964204 / 55.444624 (-53.480420) | 1.648564 / 6.876477 (-5.227912) | 1.675977 / 2.142072 (-0.466095) | 0.475717 / 4.805227 (-4.329511) | 0.098416 / 6.500664 (-6.402248) | 0.041212 / 0.075469 (-0.034257) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975928 / 1.841788 (-0.865860) | 12.066648 / 8.074308 (3.992340) | 10.943181 / 10.191392 (0.751789) | 0.149687 / 0.680424 (-0.530736) | 0.015107 / 0.534201 (-0.519094) | 0.268950 / 0.579283 (-0.310333) | 0.280419 / 0.434364 (-0.153945) | 0.305263 / 0.540337 (-0.235074) | 0.408486 / 1.386936 (-0.978450) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#344086a7a1707ef20b57399f813ef64ce679e956 \"CML watermark\")\n" ]
"2023-11-10T14:56:43"
"2023-11-10T14:56:43"
null
MEMBER
null
Minor refactoring: remove a redundant condition.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6398/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6398/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6398.diff", "html_url": "https://github.com/huggingface/datasets/pull/6398", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6398.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6398" }
true
https://api.github.com/repos/huggingface/datasets/issues/6397
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6397/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6397/comments
https://api.github.com/repos/huggingface/datasets/issues/6397/events
https://github.com/huggingface/datasets/issues/6397
1,987,622,152
I_kwDODunzps52eLUI
6,397
Raise a different exception for a nonexistent dataset vs. files without a known extension
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[]
open
false
null
[]
null
[]
"2023-11-10T13:22:14"
"2023-11-10T13:22:14"
null
CONTRIBUTOR
null
See https://github.com/huggingface/datasets-server/issues/2082#issuecomment-1805716557 We have the same error for: - https://huggingface.co/datasets/severo/a_dataset_that_does_not_exist: a dataset that does not exist - https://huggingface.co/datasets/severo/test_files_without_extension: a dataset with files without a known extension ``` >>> import datasets >>> datasets.get_dataset_config_names('severo/a_dataset_that_does_not_exist') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names dataset_module = dataset_module_factory( File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1508, in dataset_module_factory raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at /home/slesage/hf/datasets-server/services/worker/severo/a_dataset_that_does_not_exist/a_dataset_that_does_not_exist.py or any data file in the same directory. Couldn't find 'severo/a_dataset_that_does_not_exist' on the Hugging Face Hub either: FileNotFoundError: Dataset 'severo/a_dataset_that_does_not_exist' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`. >>> datasets.get_dataset_config_names('severo/test_files_without_extension') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names dataset_module = dataset_module_factory( File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1508, in dataset_module_factory raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at /home/slesage/hf/datasets-server/services/worker/severo/test_files_without_extension/test_files_without_extension.py or any data file in the same directory. Couldn't find 'severo/test_files_without_extension' on the Hugging Face Hub either: FileNotFoundError: No (supported) data files or dataset script found in severo/test_files_without_extension. ``` To differentiate, we must parse the error message (only the end is different). We should have a different exception for these two errors.
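Until distinct exceptions exist, callers are stuck parsing the message tail. A stopgap sketch using the fragments visible in the two tracebacks above (the fragments are copied from those messages, not from any documented contract):

```python
from datasets import get_dataset_config_names

def get_config_names(repo_id: str):
    try:
        return get_dataset_config_names(repo_id)
    except FileNotFoundError as err:
        # Both failure modes raise FileNotFoundError; only the message tail differs.
        if "doesn't exist on the Hub" in str(err):
            raise ValueError(f"{repo_id}: dataset repo not found") from err
        if "No (supported) data files" in str(err):
            raise ValueError(f"{repo_id}: no files with a known extension") from err
        raise
```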
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6397/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6397/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6396
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6396/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6396/comments
https://api.github.com/repos/huggingface/datasets/issues/6396/events
https://github.com/huggingface/datasets/issues/6396
1,987,308,077
I_kwDODunzps52c-ot
6,396
Issue with pyarrow 14.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[]
open
false
null
[]
null
[ "Looks like we should stop using `PyExtensionType` and use `ExtensionType` instead\r\n\r\nsee https://github.com/apache/arrow/commit/f14170976372436ec1d03a724d8d3f3925484ecf", "https://github.com/huggingface/datasets-server/pull/2089#pullrequestreview-1724449532\r\n\r\n> Yes, I understand now: they have disabled their `PyExtensionType` and we use it in `datasets` for arrays... ", "related?\r\n\r\nhttps://huggingface.co/datasets/ssbuild/tools_data/discussions/1#654e663b77c8ec680d10479c", "> related?\r\n>\r\n> https://huggingface.co/datasets/ssbuild/tools_data/discussions/1#654e663b77c8ec680d10479c\r\n\r\nNo, related to https://github.com/huggingface/datasets/issues/5706", "Running the following is a workaround:\r\n\r\n```\r\nimport pyarrow\r\npyarrow.PyExtensionType.set_auto_load(True)\r\n```" ]
"2023-11-10T10:02:12"
"2023-11-12T00:22:32"
null
CONTRIBUTOR
null
See https://github.com/huggingface/datasets-server/pull/2089 for reference ``` from datasets import (Array2D, Dataset, Features) feature_type = Array2D(shape=(2, 2), dtype="float32") content = [[0.0, 0.0], [0.0, 0.0]] features = Features({"col": feature_type}) dataset = Dataset.from_dict({"col": [content]}, features=features) ``` generates ``` /home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py:648: FutureWarning: pyarrow.PyExtensionType is deprecated and will refuse deserialization by default. Instead, please derive from pyarrow.ExtensionType and implement your own serialization mechanism. pa.PyExtensionType.__init__(self, self.storage_dtype) /home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py:1661: RuntimeWarning: pickle-based deserialization of pyarrow.PyExtensionType subclasses is disabled by default; if you only ingest trusted data files, you may re-enable this using `pyarrow.PyExtensionType.set_auto_load(True)`. In the future, Python-defined extension subclasses should derive from pyarrow.ExtensionType (not pyarrow.PyExtensionType) and implement their own serialization mechanism. obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} /home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py:1661: FutureWarning: pyarrow.PyExtensionType is deprecated and will refuse deserialization by default. Instead, please derive from pyarrow.ExtensionType and implement your own serialization mechanism. obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 924, in from_dict return cls(pa_table, info=info, split=split) File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 693, in __init__ inferred_features = Features.from_arrow_schema(arrow_table.schema) File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1661, in from_arrow_schema obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1661, in <dictcomp> obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1381, in generate_from_arrow_type return Value(dtype=_arrow_to_datasets_dtype(pa_type)) File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 111, in _arrow_to_datasets_dtype raise ValueError(f"Arrow type {arrow_type} does not have a datasets dtype equivalent.") ValueError: Arrow type extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>> does not have a datasets dtype equivalent. ```
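The workaround quoted in the comments above, re-enabling pickle-based extension-type loading, slots in before the repro; per pyarrow's own warning this should only be used with trusted data files:

```python
import pyarrow

# Opt back in to PyExtensionType deserialization (trusted data only).
pyarrow.PyExtensionType.set_auto_load(True)

from datasets import Array2D, Dataset, Features

features = Features({"col": Array2D(shape=(2, 2), dtype="float32")})
dataset = Dataset.from_dict({"col": [[[0.0, 0.0], [0.0, 0.0]]]}, features=features)
```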
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6396/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6396/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6395
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6395/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6395/comments
https://api.github.com/repos/huggingface/datasets/issues/6395/events
https://github.com/huggingface/datasets/issues/6395
1,986,484,124
I_kwDODunzps52Z1ec
6,395
Add ability to set lock type
{ "avatar_url": "https://avatars.githubusercontent.com/u/37735580?v=4", "events_url": "https://api.github.com/users/leoleoasd/events{/privacy}", "followers_url": "https://api.github.com/users/leoleoasd/followers", "following_url": "https://api.github.com/users/leoleoasd/following{/other_user}", "gists_url": "https://api.github.com/users/leoleoasd/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leoleoasd", "id": 37735580, "login": "leoleoasd", "node_id": "MDQ6VXNlcjM3NzM1NTgw", "organizations_url": "https://api.github.com/users/leoleoasd/orgs", "received_events_url": "https://api.github.com/users/leoleoasd/received_events", "repos_url": "https://api.github.com/users/leoleoasd/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leoleoasd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leoleoasd/subscriptions", "type": "User", "url": "https://api.github.com/users/leoleoasd" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
"2023-11-09T22:12:30"
"2023-11-09T22:13:13"
null
NONE
null
### Feature request Allow setting the file lock type, maybe from an environment variable. Currently, it only depends on whether fcntl is available: https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/utils/filelock.py#L463-L470C16 ### Motivation In my environment, flock isn't supported on a network-attached drive. ### Your contribution I'll be happy to submit a PR.
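Pending an official switch, a hypothetical monkeypatch (not a supported API) would rebind the alias the linked code selects at import time; it has to run before any lock objects are created:

```python
import datasets.utils.filelock as filelock

# SoftFileLock only checks for the lock file's existence, so it can work on
# network filesystems where fcntl/flock-based locking is unsupported.
filelock.FileLock = filelock.SoftFileLock
```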
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6395/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6395/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6394
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6394/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6394/comments
https://api.github.com/repos/huggingface/datasets/issues/6394/events
https://github.com/huggingface/datasets/issues/6394
1,985,947,116
I_kwDODunzps52XyXs
6,394
TorchFormatter returns images in (H, W, C) instead of (C, H, W) format
{ "avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4", "events_url": "https://api.github.com/users/Modexus/events{/privacy}", "followers_url": "https://api.github.com/users/Modexus/followers", "following_url": "https://api.github.com/users/Modexus/following{/other_user}", "gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Modexus", "id": 37351874, "login": "Modexus", "node_id": "MDQ6VXNlcjM3MzUxODc0", "organizations_url": "https://api.github.com/users/Modexus/orgs", "received_events_url": "https://api.github.com/users/Modexus/received_events", "repos_url": "https://api.github.com/users/Modexus/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Modexus/subscriptions", "type": "User", "url": "https://api.github.com/users/Modexus" }
[]
open
false
null
[]
null
[ "Here's a PR for that. https://github.com/huggingface/datasets/pull/6402\r\n\r\nIt's not backward compatible, unfortunately. " ]
"2023-11-09T16:02:15"
"2023-11-11T19:41:03"
null
NONE
null
### Describe the bug Using `.set_format("torch")` leads to images having shape (H, W, C), the same as in NumPy, whereas PyTorch normally uses the (C, H, W) format. Maybe I'm missing something, but this makes the format a lot less useful, since I then have to permute the tensor anyway. Without the format it is possible to apply torchvision transforms directly, but then any non-transformed value will not be a tensor. Is there a reason for this choice? ### Steps to reproduce the bug ```python from datasets import Dataset, Features, Audio, Image images = ["path/to/image.png"] * 10 features = Features({"image": Image()}) ds = Dataset.from_dict({"image": images}, features=features) ds = ds.with_format("torch") ds[0]["image"].shape ``` ```python torch.Size([512, 512, 4]) ``` ### Expected behavior ```python from datasets import Dataset, Features, Audio, Image images = ["path/to/image.png"] * 10 features = Features({"image": Image()}) ds = Dataset.from_dict({"image": images}, features=features) ds = ds.with_format("torch") ds[0]["image"].shape ``` ```python torch.Size([4, 512, 512]) ``` ### Environment info - `datasets` version: 2.14.6 - Platform: Linux-6.5.9-100.fc37.x86_64-x86_64-with-glibc2.31 - Python version: 3.11.6 - Huggingface_hub version: 0.18.0 - PyArrow version: 14.0.1 - Pandas version: 2.1.2
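Until the formatter changes (see the PR linked in the comment above), the tensors can be permuted after indexing; a minimal sketch of the workaround the report alludes to, reusing the repro above:

```python
from datasets import Dataset, Features, Image

ds = Dataset.from_dict({"image": ["path/to/image.png"] * 10},
                       features=Features({"image": Image()}))
ds = ds.with_format("torch")

img = ds[0]["image"].permute(2, 0, 1)  # (H, W, C) -> (C, H, W)
print(img.shape)  # torch.Size([4, 512, 512]) for the RGBA example above
```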
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6394/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6394/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6393
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6393/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6393/comments
https://api.github.com/repos/huggingface/datasets/issues/6393/events
https://github.com/huggingface/datasets/issues/6393
1,984,913,259
I_kwDODunzps52T19r
6,393
Filter occasionally hangs
{ "avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4", "events_url": "https://api.github.com/users/dakinggg/events{/privacy}", "followers_url": "https://api.github.com/users/dakinggg/followers", "following_url": "https://api.github.com/users/dakinggg/following{/other_user}", "gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dakinggg", "id": 43149077, "login": "dakinggg", "node_id": "MDQ6VXNlcjQzMTQ5MDc3", "organizations_url": "https://api.github.com/users/dakinggg/orgs", "received_events_url": "https://api.github.com/users/dakinggg/received_events", "repos_url": "https://api.github.com/users/dakinggg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions", "type": "User", "url": "https://api.github.com/users/dakinggg" }
[]
open
false
null
[]
null
[ "It looks like I may not be the first to encounter this: https://github.com/huggingface/datasets/issues/3172", "Adding some more information, it seems to occur more frequently with large (millions of samples) datasets.", "More information. My code is structured as (1) load (2) map (3) filter (4) filter. It was always the second filter that failed. Combining the two filters into one seems to reliably work." ]
"2023-11-09T06:18:30"
"2023-11-09T23:36:28"
null
NONE
null
### Describe the bug A call to `.filter` occasionally hangs (after the filter is complete, according to tqdm) There is a trace produced ``` Exception ignored in: <function Dataset.__del__ at 0x7efb48130c10> Traceback (most recent call last): File "/usr/lib/python3/dist-packages/datasets/arrow_dataset.py", line 1366, in __del__ if hasattr(self, "_indices"): File "/usr/lib/python3/dist-packages/composer/core/engine.py", line 123, in sigterm_handler sys.exit(128 + signal) SystemExit: 143 ``` but I'm not sure if the trace is actually from `datasets`, or from surrounding code that is trying to clean up after datasets gets stuck. Unfortunately I can't reproduce this issue anywhere close to reliably. It happens infrequently when using `num_proc > 1`. Anecdotally I started seeing it when using larger datasets (~10M samples). ### Steps to reproduce the bug N/A see description ### Expected behavior map/filter calls always complete successfully ### Environment info - `datasets` version: 2.14.6 - Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31 - Python version: 3.10.13 - Huggingface_hub version: 0.17.3 - PyArrow version: 13.0.0 - Pandas version: 2.1.2
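Based on the reporter's follow-up comment, fusing chained filters into one call avoided the hang; a sketch with hypothetical predicates `keep_long` and `keep_clean`:

```python
# Reported to hang occasionally when chained with num_proc > 1:
# ds = ds.filter(keep_long, num_proc=8).filter(keep_clean, num_proc=8)

# Reported workaround: combine both predicates into a single filter pass.
ds = ds.filter(lambda ex: keep_long(ex) and keep_clean(ex), num_proc=8)
```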
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6393/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6393/timeline
null
null
null
null
false

Dataset Card for "github-issues"

More Information needed
