| Column | Type | Range / values |
|--------|------|----------------|
| url | stringlengths | 58-61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72-75 |
| comments_url | stringlengths | 67-70 |
| events_url | stringlengths | 65-68 |
| html_url | stringlengths | 46-51 |
| id | int64 | 599M-1.9B |
| node_id | stringlengths | 18-32 |
| number | int64 | 1-6.24k |
| title | stringlengths | 1-290 |
| user | dict | |
| labels | listlengths | 0-4 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0-4 |
| milestone | dict | |
| comments | sequencelengths | 0-30 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | stringclasses | 3 values |
| active_lock_reason | float64 | |
| body | stringlengths | 0-228k |
| reactions | dict | |
| timeline_url | stringlengths | 67-70 |
| performed_via_github_app | float64 | |
| state_reason | stringclasses | 3 values |
| draft | float64 | 0-1 |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |

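The rows below follow this schema. As a rough illustration of how such a dump can be inspected with the `datasets` library, here is a minimal sketch; the repository id `"<user>/datasets-issues"` is a placeholder, not the actual location of this data.

```python
from datasets import load_dataset

# Placeholder repository id; point this at wherever the issues dump is hosted.
issues = load_dataset("<user>/datasets-issues", split="train")

# The column names and feature types match the schema table above.
print(issues.column_names)
print(issues.features)

# Example: separate plain issues from pull requests using the `is_pull_request` flag.
pull_requests = issues.filter(lambda row: row["is_pull_request"])
plain_issues = issues.filter(lambda row: not row["is_pull_request"])
print(len(pull_requests), len(plain_issues))
```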

url: https://api.github.com/repos/huggingface/datasets/issues/5825
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5825/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5825/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5825/events
html_url: https://github.com/huggingface/datasets/issues/5825
id: 1,697,327,483
node_id: I_kwDODunzps5lKyl7
number: 5,825
title: FileNotFound even though exists
{ "avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4", "events_url": "https://api.github.com/users/Muennighoff/events{/privacy}", "followers_url": "https://api.github.com/users/Muennighoff/followers", "following_url": "https://api.github.com/users/Muennighoff/following{/other_user}", "gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Muennighoff", "id": 62820084, "login": "Muennighoff", "node_id": "MDQ6VXNlcjYyODIwMDg0", "organizations_url": "https://api.github.com/users/Muennighoff/orgs", "received_events_url": "https://api.github.com/users/Muennighoff/received_events", "repos_url": "https://api.github.com/users/Muennighoff/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions", "type": "User", "url": "https://api.github.com/users/Muennighoff" }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "Hi! \r\n\r\nThis would only work if `bigscience/xP3` was a no-code dataset, but it isn't (it has a Python builder script).\r\n\r\nBut this should work: \r\n```python\r\nload_dataset(\"json\", data_files=\"https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl\")\r\n```\r\n\r\n", "I see, it's not compatible w/ regex right?\r\ne.g.\r\n`load_dataset(\"json\", data_files=\"https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/*\")`", "> I see, it's not compatible w/ regex right? e.g. `load_dataset(\"json\", data_files=\"https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/*\")`\r\n\r\nIt should work for patterns that \"reference\" the local filesystem, but to make this work with the Hub, we must implement https://github.com/huggingface/datasets/issues/5281 first.\r\n\r\nIn the meantime, you can fetch these glob files with `HfFileSystem` and pass them as a list to `load_dataset`:\r\n```python\r\nfrom datasets import load_dataset\r\nfrom huggingface_hub import HfFileSystem, hf_hub_url # `HfFileSystem` requires the latest version of `huggingface_hub`\r\n\r\nfs = HfFileSystem()\r\nglob_files = fs.glob(\"datasets/bigscience/xP3/ur/*\")\r\n# convert fsspec URLs to HTTP URLs\r\nresolved_paths = [fs.resolve_path(file) for file in glob_files]\r\ndata_files = [hf_hub_url(resolved_path.repo_id, resolved_path.path_in_repo, repo_type=resolved_path.repo_type) for resolved_path in resolved_paths]\r\n\r\nds = load_dataset(\"json\", data_files=data_files)\r\n```", "This works using `load_dataset(\"json\", data_files=\"hf://datasets/bigscience/xP3/ur/*\")` now, closing" ]
"2023-05-05T09:49:55Z"
"2023-08-16T10:02:01Z"
"2023-08-16T10:02:01Z"
CONTRIBUTOR
null
### Describe the bug I'm trying to download https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl which works fine in my webbrowser, but somehow not with datasets. Am I doing sth wrong? ``` Downloading builder script: 100% 2.82k/2.82k [00:00<00:00, 64.2kB/s] Downloading readme: 100% 12.6k/12.6k [00:00<00:00, 585kB/s] --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) [<ipython-input-2-4b45446a91d5>](https://localhost:8080/#) in <cell line: 4>() 2 lang = "ur" 3 fname = "xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl" ----> 4 dataset = load_dataset("bigscience/xP3", data_files=f"{lang}/{fname}") 6 frames [/usr/local/lib/python3.10/dist-packages/datasets/data_files.py](https://localhost:8080/#) in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions) 291 if allowed_extensions is not None: 292 error_msg += f" with any supported extension {list(allowed_extensions)}" --> 293 raise FileNotFoundError(error_msg) 294 return sorted(out) 295 FileNotFoundError: Unable to find 'https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl' at /content/https:/huggingface.co/datasets/bigscience/xP3/resolve/main ``` ### Steps to reproduce the bug ``` !pip install -q datasets from datasets import load_dataset lang = "ur" fname = "xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl" dataset = load_dataset("bigscience/xP3", data_files=f"{lang}/{fname}") ``` ### Expected behavior Correctly downloads ### Environment info latest versions
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5825/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5825/timeline
null
completed
null
null
false
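
The fix described in the comments of this issue, sketched end to end; it simply restates the commands from the thread and assumes a recent `datasets` release with `hf://` path support.

```python
from datasets import load_dataset

# Load a single JSONL file from the xP3 repo with the generic "json" builder,
# bypassing the dataset's Python builder script.
single = load_dataset(
    "json",
    data_files="https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/"
    "xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl",
)

# On recent versions, globbing over a Hub directory works directly via hf:// paths,
# which is how the issue was eventually closed.
globbed = load_dataset("json", data_files="hf://datasets/bigscience/xP3/ur/*")
```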

url: https://api.github.com/repos/huggingface/datasets/issues/5824
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5824/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5824/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5824/events
html_url: https://github.com/huggingface/datasets/pull/5824
id: 1,697,152,148
node_id: PR_kwDODunzps5P1rIZ
number: 5,824
title: Fix incomplete docstring for `BuilderConfig`
{ "avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4", "events_url": "https://api.github.com/users/Laurent2916/events{/privacy}", "followers_url": "https://api.github.com/users/Laurent2916/followers", "following_url": "https://api.github.com/users/Laurent2916/following{/other_user}", "gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Laurent2916", "id": 21087104, "login": "Laurent2916", "node_id": "MDQ6VXNlcjIxMDg3MTA0", "organizations_url": "https://api.github.com/users/Laurent2916/orgs", "received_events_url": "https://api.github.com/users/Laurent2916/received_events", "repos_url": "https://api.github.com/users/Laurent2916/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions", "type": "User", "url": "https://api.github.com/users/Laurent2916" }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007658 / 0.011353 (-0.003695) | 0.005497 / 0.011008 (-0.005511) | 0.097142 / 0.038508 (0.058633) | 0.034602 / 0.023109 (0.011493) | 0.304191 / 0.275898 (0.028293) | 0.329103 / 0.323480 (0.005624) | 0.005936 / 0.007986 (-0.002049) | 0.004324 / 0.004328 (-0.000004) | 0.073387 / 0.004250 (0.069137) | 0.049657 / 0.037052 (0.012604) | 0.301352 / 0.258489 (0.042863) | 0.343095 / 0.293841 (0.049254) | 0.036767 / 0.128546 (-0.091779) | 0.012438 / 0.075646 (-0.063208) | 0.333804 / 0.419271 (-0.085468) | 0.064557 / 0.043533 (0.021024) | 0.302397 / 0.255139 (0.047258) | 0.319739 / 0.283200 (0.036540) | 0.119264 / 0.141683 (-0.022418) | 1.465309 / 1.452155 (0.013155) | 1.578194 / 1.492716 (0.085478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256552 / 0.018006 (0.238545) | 0.555344 / 0.000490 (0.554854) | 0.004845 / 0.000200 (0.004645) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027215 / 0.037411 (-0.010197) | 0.107071 / 0.014526 (0.092545) | 0.116343 / 0.176557 (-0.060213) | 0.172646 / 0.737135 (-0.564490) | 0.123366 / 0.296338 (-0.172973) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411421 / 0.215209 (0.196212) | 4.126028 / 2.077655 (2.048373) | 
1.975826 / 1.504120 (0.471706) | 1.784404 / 1.541195 (0.243210) | 1.848697 / 1.468490 (0.380207) | 0.686400 / 4.584777 (-3.898377) | 3.677649 / 3.745712 (-0.068063) | 2.077787 / 5.269862 (-3.192075) | 1.310912 / 4.565676 (-3.254764) | 0.083980 / 0.424275 (-0.340295) | 0.012183 / 0.007607 (0.004575) | 0.506969 / 0.226044 (0.280924) | 5.094730 / 2.268929 (2.825802) | 2.419790 / 55.444624 (-53.024834) | 2.106592 / 6.876477 (-4.769884) | 2.244309 / 2.142072 (0.102237) | 0.814312 / 4.805227 (-3.990915) | 0.167872 / 6.500664 (-6.332792) | 0.065339 / 0.075469 (-0.010130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.193314 / 1.841788 (-0.648474) | 14.980621 / 8.074308 (6.906313) | 14.352452 / 10.191392 (4.161060) | 0.164531 / 0.680424 (-0.515893) | 0.017432 / 0.534201 (-0.516769) | 0.422193 / 0.579283 (-0.157090) | 0.410047 / 0.434364 (-0.024317) | 0.497011 / 0.540337 (-0.043326) | 0.581395 / 1.386936 (-0.805541) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007214 / 0.011353 (-0.004139) | 0.005449 / 0.011008 (-0.005559) | 0.074320 / 0.038508 (0.035812) | 0.034261 / 0.023109 (0.011152) | 0.378265 / 0.275898 (0.102367) | 0.414419 / 0.323480 (0.090939) | 0.005804 / 0.007986 (-0.002182) | 0.004205 / 0.004328 (-0.000124) | 0.073266 / 0.004250 (0.069015) | 0.050444 / 0.037052 (0.013392) | 0.372999 / 0.258489 (0.114510) | 0.436032 / 0.293841 (0.142191) | 0.035432 / 0.128546 (-0.093114) | 0.012581 / 0.075646 (-0.063065) | 0.085777 / 0.419271 (-0.333495) | 0.046902 / 0.043533 (0.003369) | 0.378732 / 0.255139 (0.123593) | 0.401746 / 0.283200 (0.118547) | 0.113398 / 0.141683 (-0.028285) | 1.463851 / 1.452155 (0.011696) | 1.566387 / 1.492716 (0.073670) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261246 / 0.018006 (0.243240) | 0.546730 / 0.000490 (0.546241) | 0.005245 / 0.000200 (0.005045) | 0.000103 / 0.000054 (0.000048) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029441 / 0.037411 (-0.007970) | 0.111834 / 0.014526 (0.097308) | 0.122411 / 0.176557 (-0.054145) | 0.171288 / 0.737135 (-0.565847) | 0.130338 / 0.296338 (-0.166001) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433405 / 0.215209 (0.218196) | 4.315790 / 2.077655 (2.238135) | 2.121934 / 1.504120 (0.617814) | 1.924123 / 1.541195 (0.382928) | 2.029077 / 1.468490 (0.560587) | 0.710245 / 4.584777 (-3.874532) | 3.844393 / 3.745712 (0.098681) | 3.576580 / 5.269862 (-1.693281) | 1.930985 / 4.565676 (-2.634691) | 0.092186 / 0.424275 (-0.332090) | 0.012307 / 0.007607 (0.004700) | 0.533722 / 0.226044 (0.307677) | 5.324447 / 2.268929 (3.055519) | 2.615451 / 55.444624 (-52.829174) | 2.282310 / 6.876477 (-4.594167) | 2.319847 / 2.142072 (0.177774) | 0.849364 / 4.805227 (-3.955864) | 0.172722 / 6.500664 (-6.327942) | 0.064721 / 0.075469 (-0.010748) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.289942 / 1.841788 (-0.551846) | 15.875062 / 8.074308 (7.800754) | 14.784682 / 10.191392 (4.593290) | 0.144432 / 0.680424 (-0.535991) | 0.017703 / 0.534201 (-0.516498) | 0.424357 / 0.579283 (-0.154926) | 0.419078 / 0.434364 (-0.015286) | 0.489331 / 0.540337 (-0.051006) | 0.585284 / 1.386936 (-0.801652) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e3f4f124a1b118a5bfff5bae76b25a68aedbebbc \"CML watermark\")\n" ]
"2023-05-05T07:34:28Z"
"2023-05-05T12:39:14Z"
"2023-05-05T12:31:54Z"
CONTRIBUTOR
null
Fixes #5820 Also fixed a couple of typos I spotted
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5824/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5824/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5824.diff", "html_url": "https://github.com/huggingface/datasets/pull/5824", "merged_at": "2023-05-05T12:31:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/5824.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5824" }
true

url: https://api.github.com/repos/huggingface/datasets/issues/5823
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5823/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5823/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5823/events
html_url: https://github.com/huggingface/datasets/issues/5823
id: 1,697,024,789
node_id: I_kwDODunzps5lJosV
number: 5,823
title: [2.12.0] DatasetDict.save_to_disk not saving to S3
{ "avatar_url": "https://avatars.githubusercontent.com/u/5233185?v=4", "events_url": "https://api.github.com/users/thejamesmarq/events{/privacy}", "followers_url": "https://api.github.com/users/thejamesmarq/followers", "following_url": "https://api.github.com/users/thejamesmarq/following{/other_user}", "gists_url": "https://api.github.com/users/thejamesmarq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thejamesmarq", "id": 5233185, "login": "thejamesmarq", "node_id": "MDQ6VXNlcjUyMzMxODU=", "organizations_url": "https://api.github.com/users/thejamesmarq/orgs", "received_events_url": "https://api.github.com/users/thejamesmarq/received_events", "repos_url": "https://api.github.com/users/thejamesmarq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thejamesmarq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thejamesmarq/subscriptions", "type": "User", "url": "https://api.github.com/users/thejamesmarq" }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Hi ! Can you try adding the `s3://` prefix ?\r\n```python\r\nf\"s3://{s3_bucket}/{s3_dir}/{dataset_name}\"\r\n```", "Ugh, yeah that was it. Thank you!" ]
created_at: "2023-05-05T05:22:59Z"
updated_at: "2023-05-05T15:01:18Z"
closed_at: "2023-05-05T15:01:17Z"
author_association: NONE
active_lock_reason: null
### Describe the bug When trying to save a `DatasetDict` to a private S3 bucket using `save_to_disk`, the artifacts are instead saved locally, and not in the S3 bucket. I have tried using the deprecated `fs` as well as the `storage_options` arguments and I get the same results. ### Steps to reproduce the bug 1. Create a DatsetDict `dataset` 2. Create a S3FileSystem object `s3 = datasets.filesystems.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key)` 3. Save using `dataset_dict.save_to_disk(f"{s3_bucket}/{s3_dir}/{dataset_name}", storage_options=s3.storage_options)` or `dataset_dict.save_to_disk(f"{s3_bucket}/{s3_dir}/{dataset_name}", fs=s3)` 4. Check the corresponding S3 bucket and verify nothing has been uploaded 5. Check the path at f"{s3_bucket}/{s3_dir}/{dataset_name}" and verify that files have been saved there ### Expected behavior Artifacts are uploaded at the f"{s3_bucket}/{s3_dir}/{dataset_name}" S3 location. ### Environment info - `datasets` version: 2.12.0 - Platform: macOS-13.3.1-x86_64-i386-64bit - Python version: 3.11.2 - Huggingface_hub version: 0.14.1 - PyArrow version: 12.0.0 - Pandas version: 2.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5823/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5823/timeline
null
completed
null
null
false
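
A sketch of the resolution from the comment thread above: the destination path needs the explicit `s3://` scheme, otherwise `save_to_disk` writes to a local directory of that name. Credentials, bucket layout, and the example dataset below are placeholders.

```python
from datasets import load_dataset
from datasets.filesystems import S3FileSystem

# Placeholder credentials and bucket layout.
s3 = S3FileSystem(key="<aws_access_key_id>", secret="<aws_secret_access_key>")

dataset_dict = load_dataset("imdb")  # any DatasetDict stands in here

# The explicit s3:// prefix is what routes the write to the bucket.
dataset_dict.save_to_disk(
    "s3://<s3_bucket>/<s3_dir>/<dataset_name>",
    storage_options=s3.storage_options,
)
```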

url: https://api.github.com/repos/huggingface/datasets/issues/5822
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5822/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5822/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5822/events
html_url: https://github.com/huggingface/datasets/issues/5822
id: 1,696,627,308
node_id: I_kwDODunzps5lIHps
number: 5,822
title: Audio Dataset with_format torch problem
{ "avatar_url": "https://avatars.githubusercontent.com/u/20282916?v=4", "events_url": "https://api.github.com/users/paulbauriegel/events{/privacy}", "followers_url": "https://api.github.com/users/paulbauriegel/followers", "following_url": "https://api.github.com/users/paulbauriegel/following{/other_user}", "gists_url": "https://api.github.com/users/paulbauriegel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/paulbauriegel", "id": 20282916, "login": "paulbauriegel", "node_id": "MDQ6VXNlcjIwMjgyOTE2", "organizations_url": "https://api.github.com/users/paulbauriegel/orgs", "received_events_url": "https://api.github.com/users/paulbauriegel/received_events", "repos_url": "https://api.github.com/users/paulbauriegel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/paulbauriegel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/paulbauriegel/subscriptions", "type": "User", "url": "https://api.github.com/users/paulbauriegel" }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Hi ! Can you try with a more recent version of `datasets` ?", "Ok, yes it worked with the most recent version. Thanks" ]
created_at: "2023-05-04T20:07:51Z"
updated_at: "2023-05-11T20:45:53Z"
closed_at: "2023-05-11T20:45:53Z"
author_association: NONE
active_lock_reason: null
### Describe the bug Common Voice v10 Delta (German) Dataset from here https://commonvoice.mozilla.org/de/datasets ``` audio_dataset = \ (Dataset .from_dict({"audio": ('/tmp/cv-corpus-10.0-delta-2022-07-04/de/clips/' + df.path).to_list()}) .cast_column("audio", Audio(sampling_rate=16_000)) .with_format('numpy')) audio_dataset[0]["audio"] ``` works, but ``` audio_dataset = \ (Dataset .from_dict({"audio": ('/tmp/cv-corpus-10.0-delta-2022-07-04/de/clips/' + df.path).to_list()}) .cast_column("audio", Audio(sampling_rate=16_000)) .with_format('torch')) audio_dataset[0]["audio"] ``` does not instead I get ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[54], line 1 ----> 1 audio_dataset[0]["audio"] File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/arrow_dataset.py:2154, in Dataset.__getitem__(self, key) 2152 def __getitem__(self, key): # noqa: F811 2153 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" -> 2154 return self._getitem( 2155 key, 2156 ) File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/arrow_dataset.py:2139, in Dataset._getitem(self, key, decoded, **kwargs) 2137 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs) 2138 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) -> 2139 formatted_output = format_table( 2140 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 2141 ) 2142 return formatted_output File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns) 530 python_formatter = PythonFormatter(features=None) 531 if format_columns is None: --> 532 return formatter(pa_table, query_type=query_type) 533 elif query_type == "column": 534 if key in format_columns: File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type) 279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]: 280 if query_type == "row": --> 281 return self.format_row(pa_table) 282 elif query_type == "column": 283 return self.format_column(pa_table) File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:58, in TorchFormatter.format_row(self, pa_table) 56 def format_row(self, pa_table: pa.Table) -> dict: 57 row = self.numpy_arrow_extractor().extract_row(pa_table) ---> 58 return self.recursive_tensorize(row) File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:54, in TorchFormatter.recursive_tensorize(self, data_struct) 53 def recursive_tensorize(self, data_struct: dict): ---> 54 return map_nested(self._recursive_tensorize, data_struct, map_list=False) File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:356, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc) 354 num_proc = 1 355 if num_proc <= 1 or len(iterable) <= num_proc: --> 356 mapped = [ 357 _single_map_nested((function, obj, types, None, True, None)) 358 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) 359 ] 360 else: 361 split_kwds = [] # We organize the splits 
ourselve (contiguous splits) File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:357, in <listcomp>(.0) 354 num_proc = 1 355 if num_proc <= 1 or len(iterable) <= num_proc: 356 mapped = [ --> 357 _single_map_nested((function, obj, types, None, True, None)) 358 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) 359 ] 360 else: 361 split_kwds = [] # We organize the splits ourselve (contiguous splits) File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:309, in _single_map_nested(args) 306 pbar = logging.tqdm(pbar_iterable, disable=disable_tqdm, position=rank, unit="obj", desc=pbar_desc) 308 if isinstance(data_struct, dict): --> 309 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} 310 else: 311 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:309, in <dictcomp>(.0) 306 pbar = logging.tqdm(pbar_iterable, disable=disable_tqdm, position=rank, unit="obj", desc=pbar_desc) 308 if isinstance(data_struct, dict): --> 309 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} 310 else: 311 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:293, in _single_map_nested(args) 291 # Singleton first to spare some computation 292 if not isinstance(data_struct, dict) and not isinstance(data_struct, types): --> 293 return function(data_struct) 295 # Reduce logging to keep things readable in multiprocessing with tqdm 296 if rank is not None and logging.get_verbosity() < logging.WARNING: File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:51, in TorchFormatter._recursive_tensorize(self, data_struct) 49 if data_struct.dtype == np.object: # pytorch tensors cannot be instantied from an array of objects 50 return [self.recursive_tensorize(substruct) for substruct in data_struct] ---> 51 return self._tensorize(data_struct) File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:38, in TorchFormatter._tensorize(self, value) 35 import torch 37 default_dtype = {} ---> 38 if np.issubdtype(value.dtype, np.integer): 39 default_dtype = {"dtype": torch.int64} 40 elif np.issubdtype(value.dtype, np.floating): AttributeError: 'NoneType' object has no attribute 'dtype' ``` ### Steps to reproduce the bug 1. Download some audio dataset in this case I used Common Voice v10 Delta (German) Dataset from here https://commonvoice.mozilla.org/de/datasets 2. Try the Code from above ### Expected behavior It should work for torch ### Environment info pytorch: 2.0.0 datasets: 2.3.2 numpy: 1.21.6 Python: 3.8 Linux
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5822/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5822/timeline
null
completed
null
null
false
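
For reference, a minimal sketch of the pattern from the report above, which works once `datasets` is upgraded (per the comments in the record); the clip path is a placeholder.

```python
from datasets import Dataset, Audio

# Placeholder clip list; any readable audio files will do.
clips = ["/tmp/cv-corpus-10.0-delta-2022-07-04/de/clips/example.mp3"]

audio_dataset = (
    Dataset.from_dict({"audio": clips})
    .cast_column("audio", Audio(sampling_rate=16_000))
    .with_format("torch")  # decoded audio comes back as torch tensors on recent releases
)

print(audio_dataset[0]["audio"])
```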

url: https://api.github.com/repos/huggingface/datasets/issues/5821
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5821/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5821/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5821/events
html_url: https://github.com/huggingface/datasets/pull/5821
id: 1,696,400,343
node_id: PR_kwDODunzps5PzHLU
number: 5,821
title: IterableDataset Arrow formatting
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007593 / 0.011353 (-0.003760) | 0.005554 / 0.011008 (-0.005454) | 0.097663 / 0.038508 (0.059155) | 0.034915 / 0.023109 (0.011806) | 0.303116 / 0.275898 (0.027218) | 0.342376 / 0.323480 (0.018897) | 0.006044 / 0.007986 (-0.001942) | 0.004239 / 0.004328 (-0.000090) | 0.074561 / 0.004250 (0.070310) | 0.049109 / 0.037052 (0.012057) | 0.311302 / 0.258489 (0.052813) | 0.360717 / 0.293841 (0.066876) | 0.035119 / 0.128546 (-0.093428) | 0.012465 / 0.075646 (-0.063181) | 0.333648 / 0.419271 (-0.085624) | 0.051294 / 0.043533 (0.007762) | 0.297298 / 0.255139 (0.042159) | 0.321957 / 0.283200 (0.038757) | 0.108206 / 0.141683 (-0.033477) | 1.425023 / 1.452155 (-0.027132) | 1.526395 / 1.492716 (0.033678) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300694 / 0.018006 (0.282688) | 0.515141 / 0.000490 (0.514651) | 0.003965 / 0.000200 (0.003765) | 0.000260 / 0.000054 (0.000206) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029428 / 0.037411 (-0.007983) | 0.107634 / 0.014526 (0.093108) | 0.123662 / 0.176557 (-0.052895) | 0.182886 / 0.737135 (-0.554249) | 0.128361 / 0.296338 (-0.167977) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398809 / 0.215209 (0.183600) | 3.984428 / 2.077655 (1.906773) | 1.795337 / 1.504120 (0.291217) | 1.609235 / 1.541195 (0.068040) | 1.724825 / 1.468490 
(0.256335) | 0.698413 / 4.584777 (-3.886364) | 3.857479 / 3.745712 (0.111767) | 2.135203 / 5.269862 (-3.134659) | 1.348458 / 4.565676 (-3.217218) | 0.086445 / 0.424275 (-0.337830) | 0.012717 / 0.007607 (0.005110) | 0.498713 / 0.226044 (0.272668) | 4.988685 / 2.268929 (2.719757) | 2.284764 / 55.444624 (-53.159860) | 1.961162 / 6.876477 (-4.915315) | 2.147514 / 2.142072 (0.005441) | 0.850334 / 4.805227 (-3.954894) | 0.171664 / 6.500664 (-6.329000) | 0.065526 / 0.075469 (-0.009943) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.204398 / 1.841788 (-0.637390) | 15.625790 / 8.074308 (7.551482) | 14.614980 / 10.191392 (4.423588) | 0.167135 / 0.680424 (-0.513289) | 0.017631 / 0.534201 (-0.516570) | 0.427337 / 0.579283 (-0.151946) | 0.439203 / 0.434364 (0.004839) | 0.499670 / 0.540337 (-0.040668) | 0.587577 / 1.386936 (-0.799359) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007866 / 0.011353 (-0.003486) | 0.005798 / 0.011008 (-0.005210) | 0.075803 / 0.038508 (0.037295) | 0.035773 / 0.023109 (0.012664) | 0.361965 / 0.275898 (0.086067) | 0.402780 / 0.323480 (0.079300) | 0.006521 / 0.007986 (-0.001465) | 0.004613 / 0.004328 (0.000284) | 0.075196 / 0.004250 (0.070946) | 0.055324 / 0.037052 (0.018272) | 0.363468 / 0.258489 (0.104979) | 0.410344 / 0.293841 (0.116503) | 0.036324 / 0.128546 (-0.092222) | 0.012891 / 0.075646 (-0.062755) | 0.086991 / 0.419271 (-0.332280) | 0.048082 / 0.043533 (0.004549) | 0.357238 / 0.255139 (0.102099) | 0.377065 / 0.283200 (0.093865) | 0.118586 / 0.141683 (-0.023097) | 1.463161 / 1.452155 (0.011007) | 1.582686 / 1.492716 (0.089969) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267916 / 0.018006 (0.249909) | 0.540862 / 0.000490 (0.540373) | 0.003148 / 0.000200 (0.002948) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032290 / 0.037411 (-0.005122) | 0.115468 / 0.014526 (0.100943) | 0.125743 / 0.176557 (-0.050814) | 0.177469 / 0.737135 (-0.559667) | 0.133579 / 0.296338 (-0.162759) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446727 / 0.215209 (0.231518) | 4.467938 / 2.077655 (2.390284) | 2.330171 / 1.504120 (0.826052) | 2.165624 / 1.541195 (0.624429) | 2.298063 / 1.468490 (0.829573) | 0.702241 / 4.584777 (-3.882536) | 3.845302 / 3.745712 (0.099590) | 2.169278 / 5.269862 (-3.100584) | 1.401392 / 4.565676 (-3.164285) | 0.086672 / 0.424275 (-0.337603) | 0.012355 / 0.007607 (0.004748) | 0.543639 / 0.226044 (0.317595) | 5.425876 / 2.268929 (3.156947) | 2.781794 / 55.444624 (-52.662831) | 2.503724 / 6.876477 (-4.372752) | 2.622580 / 2.142072 (0.480507) | 0.847143 / 4.805227 (-3.958084) | 0.171721 / 6.500664 (-6.328943) | 0.067894 / 0.075469 (-0.007575) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.292194 / 1.841788 (-0.549594) | 15.497311 / 8.074308 (7.423003) | 15.002463 / 10.191392 (4.811071) | 0.152244 / 0.680424 (-0.528180) | 0.018085 / 0.534201 (-0.516116) | 0.445787 / 0.579283 (-0.133496) | 0.448960 / 0.434364 (0.014596) | 0.515319 / 0.540337 (-0.025019) | 0.623840 / 1.386936 (-0.763096) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f8417a41547ce0c939bd342398be621f5ce3e340 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006938 / 0.011353 (-0.004415) | 0.005100 / 0.011008 (-0.005909) | 0.096525 / 0.038508 (0.058017) | 0.033764 / 0.023109 (0.010655) | 0.301107 / 0.275898 (0.025209) | 0.333140 / 0.323480 (0.009660) | 0.005719 / 0.007986 (-0.002266) | 0.005192 / 0.004328 (0.000864) | 0.073685 / 0.004250 (0.069434) | 0.048149 / 0.037052 (0.011096) | 0.299244 / 0.258489 (0.040754) | 0.347518 / 0.293841 (0.053677) | 0.034810 / 0.128546 (-0.093736) | 0.012284 / 0.075646 (-0.063363) | 0.333600 / 0.419271 (-0.085672) | 0.050750 / 0.043533 (0.007217) | 0.299782 / 0.255139 (0.044643) | 0.322712 / 0.283200 (0.039512) | 0.105659 / 0.141683 (-0.036024) | 1.457536 / 1.452155 (0.005381) | 1.571604 / 1.492716 (0.078887) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207190 / 0.018006 (0.189184) | 0.439230 / 0.000490 (0.438740) | 0.006403 / 0.000200 (0.006203) | 0.000282 / 0.000054 (0.000228) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027424 / 0.037411 (-0.009987) | 0.107180 / 0.014526 (0.092655) | 0.118356 / 0.176557 (-0.058201) | 0.175557 / 0.737135 (-0.561579) | 0.125671 / 0.296338 (-0.170668) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411249 / 0.215209 (0.196039) | 4.094494 / 2.077655 (2.016839) | 1.946843 / 1.504120 (0.442723) | 1.766503 / 1.541195 (0.225308) | 1.831406 / 1.468490 (0.362916) | 0.704637 / 4.584777 (-3.880140) | 3.819204 / 3.745712 (0.073492) | 3.412598 / 5.269862 (-1.857263) | 1.796385 / 4.565676 (-2.769291) | 0.084591 / 0.424275 (-0.339684) | 0.012568 / 0.007607 (0.004961) | 0.506372 / 0.226044 (0.280327) | 5.049461 / 2.268929 (2.780532) | 2.409860 / 55.444624 (-53.034765) | 2.064514 / 6.876477 (-4.811963) | 2.192808 / 2.142072 (0.050735) | 0.833773 / 4.805227 (-3.971455) | 0.167948 / 6.500664 (-6.332716) | 0.064617 / 0.075469 (-0.010852) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.174739 / 1.841788 (-0.667048) | 14.605634 / 8.074308 (6.531326) | 14.321043 / 10.191392 (4.129651) | 0.145892 / 0.680424 (-0.534532) | 0.017413 / 0.534201 (-0.516788) | 0.444940 / 0.579283 (-0.134343) | 0.430792 / 0.434364 (-0.003572) | 0.539699 / 0.540337 (-0.000638) | 0.640279 / 
1.386936 (-0.746657) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007159 / 0.011353 (-0.004194) | 0.005313 / 0.011008 (-0.005695) | 0.073630 / 0.038508 (0.035122) | 0.033459 / 0.023109 (0.010350) | 0.356959 / 0.275898 (0.081061) | 0.385918 / 0.323480 (0.062438) | 0.005714 / 0.007986 (-0.002272) | 0.004074 / 0.004328 (-0.000254) | 0.073278 / 0.004250 (0.069028) | 0.047193 / 0.037052 (0.010140) | 0.360300 / 0.258489 (0.101811) | 0.398052 / 0.293841 (0.104212) | 0.035670 / 0.128546 (-0.092876) | 0.012499 / 0.075646 (-0.063147) | 0.086677 / 0.419271 (-0.332595) | 0.046534 / 0.043533 (0.003001) | 0.370029 / 0.255139 (0.114890) | 0.376040 / 0.283200 (0.092841) | 0.105184 / 0.141683 (-0.036499) | 1.419779 / 1.452155 (-0.032375) | 1.538925 / 1.492716 (0.046209) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220465 / 0.018006 (0.202459) | 0.438836 / 0.000490 (0.438346) | 0.000428 / 0.000200 (0.000228) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029114 / 0.037411 (-0.008298) | 0.111871 / 0.014526 (0.097345) | 0.124367 / 0.176557 (-0.052189) | 0.173737 / 0.737135 (-0.563398) | 0.128435 / 0.296338 (-0.167904) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440706 / 0.215209 (0.225497) | 4.414826 / 2.077655 (2.337171) | 2.128899 / 1.504120 (0.624780) | 1.929551 / 1.541195 (0.388357) | 2.013130 / 1.468490 (0.544640) | 
0.708566 / 4.584777 (-3.876211) | 3.846459 / 3.745712 (0.100747) | 2.158829 / 5.269862 (-3.111032) | 1.339454 / 4.565676 (-3.226223) | 0.086345 / 0.424275 (-0.337930) | 0.012085 / 0.007607 (0.004478) | 0.546360 / 0.226044 (0.320316) | 5.461612 / 2.268929 (3.192683) | 2.657388 / 55.444624 (-52.787237) | 2.298403 / 6.876477 (-4.578074) | 2.344572 / 2.142072 (0.202499) | 0.844276 / 4.805227 (-3.960951) | 0.170225 / 6.500664 (-6.330439) | 0.064684 / 0.075469 (-0.010785) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.265114 / 1.841788 (-0.576674) | 15.058156 / 8.074308 (6.983848) | 14.485182 / 10.191392 (4.293790) | 0.165960 / 0.680424 (-0.514464) | 0.017481 / 0.534201 (-0.516719) | 0.425141 / 0.579283 (-0.154142) | 0.434883 / 0.434364 (0.000519) | 0.506701 / 0.540337 (-0.033637) | 0.613240 / 1.386936 (-0.773697) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8f019dffffb214b44b30dd9ac56fdea12259e148 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007651 / 0.011353 (-0.003702) | 0.005503 / 0.011008 (-0.005505) | 0.098751 / 0.038508 (0.060243) | 0.036822 / 0.023109 (0.013713) | 0.340754 / 0.275898 (0.064856) | 0.387247 / 0.323480 (0.063767) | 0.006513 / 0.007986 (-0.001473) | 0.006135 / 0.004328 (0.001807) | 0.073656 / 0.004250 (0.069406) | 0.055508 / 0.037052 (0.018456) | 0.352493 / 0.258489 (0.094004) | 0.408003 / 0.293841 (0.114162) | 0.036346 / 0.128546 (-0.092201) | 0.012562 / 0.075646 (-0.063085) | 0.335111 / 0.419271 (-0.084160) | 0.051928 / 0.043533 (0.008395) | 0.339405 / 0.255139 (0.084266) | 0.366840 / 0.283200 (0.083640) | 0.114353 / 0.141683 (-0.027330) | 1.449062 / 1.452155 (-0.003092) | 1.567310 / 1.492716 (0.074594) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.262975 / 0.018006 (0.244968) | 0.570302 / 0.000490 (0.569813) | 0.003419 / 0.000200 (0.003219) | 0.000100 / 
0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027363 / 0.037411 (-0.010049) | 0.109033 / 0.014526 (0.094507) | 0.119048 / 0.176557 (-0.057509) | 0.175891 / 0.737135 (-0.561244) | 0.124577 / 0.296338 (-0.171762) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397988 / 0.215209 (0.182779) | 3.993210 / 2.077655 (1.915555) | 1.809275 / 1.504120 (0.305155) | 1.614664 / 1.541195 (0.073469) | 1.723650 / 1.468490 (0.255159) | 0.698484 / 4.584777 (-3.886293) | 3.914135 / 3.745712 (0.168423) | 2.142622 / 5.269862 (-3.127239) | 1.360215 / 4.565676 (-3.205461) | 0.086340 / 0.424275 (-0.337935) | 0.012836 / 0.007607 (0.005229) | 0.500728 / 0.226044 (0.274684) | 5.006744 / 2.268929 (2.737815) | 2.350668 / 55.444624 (-53.093956) | 1.979816 / 6.876477 (-4.896660) | 2.190159 / 2.142072 (0.048087) | 0.854063 / 4.805227 (-3.951164) | 0.170203 / 6.500664 (-6.330461) | 0.066903 / 0.075469 (-0.008566) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.184012 / 1.841788 (-0.657775) | 15.407350 / 8.074308 (7.333042) | 14.758180 / 10.191392 (4.566788) | 0.169280 / 0.680424 (-0.511144) | 0.017419 / 0.534201 (-0.516781) | 0.434359 / 0.579283 (-0.144925) | 0.442515 / 0.434364 (0.008151) | 0.503132 / 0.540337 (-0.037205) | 0.602589 / 1.386936 (-0.784347) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008022 / 0.011353 (-0.003331) | 0.005473 / 0.011008 (-0.005535) | 0.076106 / 0.038508 (0.037598) | 0.037065 / 0.023109 (0.013956) | 0.380039 / 0.275898 (0.104141) | 0.394205 / 0.323480 (0.070725) | 0.006447 / 0.007986 (-0.001539) | 0.006011 / 0.004328 (0.001682) | 0.075236 / 0.004250 (0.070985) | 0.054425 / 0.037052 (0.017372) | 0.381707 / 0.258489 (0.123218) | 0.411237 / 0.293841 (0.117396) | 0.037222 / 0.128546 (-0.091324) | 0.012627 / 0.075646 (-0.063020) | 0.086733 / 0.419271 (-0.332538) | 0.053857 / 0.043533 (0.010324) | 0.373374 / 0.255139 (0.118235) | 0.381680 / 0.283200 (0.098480) | 0.121962 / 0.141683 (-0.019721) | 1.430804 / 1.452155 (-0.021351) | 1.562517 / 1.492716 (0.069801) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.262034 / 0.018006 (0.244028) | 0.563497 / 0.000490 (0.563007) | 0.002726 / 0.000200 (0.002526) | 0.000099 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031071 / 0.037411 (-0.006341) | 0.111983 / 0.014526 (0.097457) | 0.126634 / 0.176557 (-0.049923) | 0.177511 / 0.737135 (-0.559625) | 0.132599 / 0.296338 (-0.163739) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436148 / 0.215209 (0.220939) | 4.344850 / 2.077655 (2.267195) | 2.105877 / 1.504120 (0.601757) | 1.920934 / 1.541195 (0.379739) | 2.072930 / 1.468490 (0.604440) | 0.701793 / 4.584777 (-3.882984) | 3.841621 / 3.745712 (0.095909) | 3.602550 / 5.269862 (-1.667311) | 1.775999 / 4.565676 (-2.789677) | 0.086024 / 0.424275 (-0.338251) | 0.012275 / 0.007607 (0.004668) | 0.532815 / 0.226044 (0.306770) | 5.336273 / 2.268929 (3.067344) | 2.638842 / 55.444624 (-52.805782) | 2.301842 / 6.876477 (-4.574635) | 2.407448 / 2.142072 (0.265376) | 0.855836 / 4.805227 (-3.949392) | 0.170348 / 6.500664 (-6.330317) | 0.066926 / 0.075469 (-0.008543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291515 / 1.841788 (-0.550272) | 15.869825 / 8.074308 (7.795517) | 15.068227 / 10.191392 (4.876835) | 0.156953 / 0.680424 (-0.523471) | 0.017761 / 0.534201 (-0.516440) | 0.429515 / 0.579283 (-0.149768) | 0.432758 / 0.434364 (-0.001605) | 0.500080 / 0.540337 (-0.040258) | 0.601451 / 1.386936 (-0.785485) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#00b148b09da2074fcaba0538a23c7f46d28d387c \"CML watermark\")\n", "Will need to take https://github.com/huggingface/datasets/pull/5810 into account if it gets merged before this one", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006914 / 0.011353 (-0.004439) | 0.004727 / 0.011008 (-0.006281) | 0.098880 / 0.038508 (0.060372) | 0.036663 / 0.023109 (0.013554) | 0.317575 / 0.275898 (0.041677) | 0.360301 / 0.323480 (0.036821) | 0.006084 / 0.007986 (-0.001901) | 0.004118 / 0.004328 (-0.000210) | 0.074330 / 0.004250 (0.070079) | 0.042422 / 0.037052 (0.005369) | 0.335625 / 0.258489 (0.077136) | 0.366616 / 0.293841 (0.072775) | 0.028523 / 0.128546 (-0.100023) | 0.008883 / 0.075646 (-0.066763) | 0.332475 / 0.419271 (-0.086797) | 0.051746 / 0.043533 (0.008214) | 0.324952 / 0.255139 (0.069813) | 0.339660 / 0.283200 (0.056460) | 0.103714 / 0.141683 (-0.037969) | 1.472130 / 1.452155 (0.019976) | 1.516548 / 1.492716 (0.023831) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229538 / 0.018006 (0.211532) | 0.449077 / 0.000490 (0.448588) | 0.003707 / 0.000200 (0.003507) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027897 / 0.037411 (-0.009514) | 0.115452 / 0.014526 (0.100926) | 0.118830 / 0.176557 (-0.057726) | 0.176228 / 0.737135 (-0.560907) | 0.125966 / 0.296338 (-0.170372) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436947 / 0.215209 (0.221738) | 4.355687 / 2.077655 (2.278033) | 2.195857 / 1.504120 (0.691737) | 2.028133 / 1.541195 (0.486938) | 2.119872 / 1.468490 (0.651382) | 0.524256 / 4.584777 (-4.060521) | 3.864064 / 3.745712 (0.118352) | 3.446181 / 5.269862 (-1.823680) | 1.610307 / 4.565676 (-2.955370) | 0.065981 / 0.424275 (-0.358294) | 0.012172 / 0.007607 (0.004565) | 0.545341 / 0.226044 (0.319297) | 5.451728 / 2.268929 (3.182800) | 2.690734 / 55.444624 (-52.753890) | 2.368203 / 6.876477 (-4.508274) | 2.549533 / 2.142072 (0.407460) | 0.651296 / 4.805227 (-4.153931) | 0.143697 / 6.500664 (-6.356968) | 0.065170 / 0.075469 (-0.010299) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198898 / 1.841788 (-0.642890) | 15.349348 / 8.074308 (7.275040) | 15.314467 / 10.191392 (5.123075) | 0.177219 / 0.680424 (-0.503205) | 0.018223 / 0.534201 (-0.515978) | 0.396209 / 0.579283 (-0.183074) | 0.427810 / 0.434364 (-0.006554) | 0.475107 / 0.540337 (-0.065230) | 0.561224 / 1.386936 (-0.825712) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007024 / 0.011353 (-0.004329) | 0.004851 / 0.011008 (-0.006157) | 0.075031 / 0.038508 (0.036523) | 0.036411 / 0.023109 (0.013302) | 0.375999 / 0.275898 (0.100101) | 0.433033 / 0.323480 (0.109553) | 0.006089 / 0.007986 (-0.001897) | 0.005638 / 0.004328 (0.001309) | 0.072599 / 0.004250 (0.068348) | 0.048489 / 0.037052 (0.011436) | 0.381807 / 0.258489 (0.123318) | 0.441531 / 0.293841 (0.147691) | 0.029044 / 0.128546 (-0.099503) | 0.009052 / 0.075646 (-0.066595) | 0.080086 / 0.419271 (-0.339186) | 0.046919 / 0.043533 (0.003386) | 0.360399 / 0.255139 (0.105260) | 0.405445 / 0.283200 (0.122245) | 0.108815 / 0.141683 (-0.032868) | 1.415168 / 1.452155 (-0.036987) | 1.511756 / 1.492716 (0.019040) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.210287 / 0.018006 (0.192281) | 0.445139 / 0.000490 (0.444650) | 0.000386 / 0.000200 (0.000186) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030457 / 0.037411 (-0.006954) | 0.117225 / 0.014526 (0.102699) | 0.122833 / 0.176557 (-0.053724) | 0.170441 / 0.737135 (-0.566694) | 0.131589 / 0.296338 (-0.164750) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446541 / 0.215209 (0.231332) | 4.471214 / 2.077655 (2.393560) | 2.145894 / 1.504120 (0.641774) | 1.958113 / 1.541195 (0.416919) | 2.069623 / 1.468490 (0.601132) | 0.527562 / 4.584777 (-4.057215) | 3.838285 / 3.745712 (0.092573) | 1.884780 / 5.269862 (-3.385081) | 1.088124 / 4.565676 (-3.477553) | 0.066099 / 0.424275 (-0.358176) | 0.011973 / 0.007607 (0.004366) | 0.540369 / 0.226044 (0.314325) | 5.403554 / 2.268929 (3.134626) | 2.749920 / 55.444624 (-52.694704) | 2.543169 / 6.876477 (-4.333308) | 2.403116 / 2.142072 (0.261043) | 0.638723 / 4.805227 (-4.166505) | 0.142232 / 6.500664 (-6.358432) | 0.065551 / 0.075469 (-0.009918) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.298307 / 1.841788 (-0.543481) | 15.986177 / 8.074308 (7.911869) | 15.530453 / 10.191392 (5.339061) | 0.160138 / 0.680424 (-0.520286) | 0.017988 / 0.534201 (-0.516213) | 0.397857 / 0.579283 (-0.181427) | 0.435071 / 0.434364 (0.000707) | 0.480096 / 0.540337 (-0.060241) | 0.589139 / 1.386936 (-0.797797) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5bd9c974e08e059ce36dc0843256747016e843c5 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | 
read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006976 / 0.011353 (-0.004377) | 0.005068 / 0.011008 (-0.005940) | 0.098178 / 0.038508 (0.059670) | 0.035167 / 0.023109 (0.012057) | 0.324093 / 0.275898 (0.048195) | 0.350749 / 0.323480 (0.027269) | 0.006128 / 0.007986 (-0.001858) | 0.004361 / 0.004328 (0.000033) | 0.075412 / 0.004250 (0.071161) | 0.052083 / 0.037052 (0.015031) | 0.326726 / 0.258489 (0.068237) | 0.371450 / 0.293841 (0.077609) | 0.028522 / 0.128546 (-0.100025) | 0.009210 / 0.075646 (-0.066436) | 0.329296 / 0.419271 (-0.089976) | 0.051182 / 0.043533 (0.007649) | 0.319863 / 0.255139 (0.064724) | 0.329140 / 0.283200 (0.045941) | 0.111653 / 0.141683 (-0.030030) | 1.464205 / 1.452155 (0.012050) | 1.555779 / 1.492716 (0.063062) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282372 / 0.018006 (0.264366) | 0.569227 / 0.000490 (0.568737) | 0.005289 / 0.000200 (0.005089) | 0.000095 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029875 / 0.037411 (-0.007537) | 0.111889 / 0.014526 (0.097364) | 0.125678 / 0.176557 (-0.050878) | 0.184695 / 0.737135 (-0.552441) | 0.129737 / 0.296338 (-0.166602) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417404 / 0.215209 (0.202195) | 4.172367 / 2.077655 (2.094712) | 2.008088 / 1.504120 (0.503968) | 1.813182 / 1.541195 (0.271988) | 1.882727 / 1.468490 (0.414237) | 0.525764 / 4.584777 (-4.059013) | 3.815202 / 3.745712 (0.069490) | 1.884197 / 5.269862 (-3.385664) | 1.073779 / 4.565676 (-3.491897) | 0.066125 / 0.424275 (-0.358150) | 0.012473 / 0.007607 (0.004866) | 0.522197 / 0.226044 (0.296153) | 5.218486 / 2.268929 (2.949557) | 2.413846 / 55.444624 (-53.030779) | 2.093298 / 6.876477 (-4.783179) | 2.320583 / 2.142072 (0.178511) | 0.648832 / 4.805227 (-4.156395) | 0.146168 / 6.500664 (-6.354496) | 0.065869 / 0.075469 (-0.009600) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.181859 / 1.841788 (-0.659929) | 15.369517 / 8.074308 (7.295209) | 14.896270 / 10.191392 (4.704878) | 
0.146793 / 0.680424 (-0.533630) | 0.017960 / 0.534201 (-0.516241) | 0.421801 / 0.579283 (-0.157482) | 0.438357 / 0.434364 (0.003993) | 0.524554 / 0.540337 (-0.015783) | 0.621041 / 1.386936 (-0.765895) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007104 / 0.011353 (-0.004249) | 0.004895 / 0.011008 (-0.006113) | 0.075641 / 0.038508 (0.037133) | 0.034821 / 0.023109 (0.011712) | 0.363875 / 0.275898 (0.087977) | 0.403042 / 0.323480 (0.079562) | 0.006747 / 0.007986 (-0.001238) | 0.005793 / 0.004328 (0.001465) | 0.074709 / 0.004250 (0.070458) | 0.058801 / 0.037052 (0.021749) | 0.366900 / 0.258489 (0.108411) | 0.414442 / 0.293841 (0.120601) | 0.029099 / 0.128546 (-0.099448) | 0.009394 / 0.075646 (-0.066253) | 0.082612 / 0.419271 (-0.336659) | 0.049076 / 0.043533 (0.005543) | 0.358828 / 0.255139 (0.103689) | 0.378261 / 0.283200 (0.095061) | 0.122147 / 0.141683 (-0.019535) | 1.454155 / 1.452155 (0.002000) | 1.572437 / 1.492716 (0.079720) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.293133 / 0.018006 (0.275127) | 0.536785 / 0.000490 (0.536295) | 0.000457 / 0.000200 (0.000257) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031046 / 0.037411 (-0.006366) | 0.113929 / 0.014526 (0.099403) | 0.126222 / 0.176557 (-0.050335) | 0.173992 / 0.737135 (-0.563143) | 0.129635 / 0.296338 (-0.166704) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.441984 / 0.215209 (0.226775) | 4.406002 / 2.077655 (2.328348) | 2.173912 / 1.504120 (0.669792) | 2.000507 / 1.541195 (0.459312) | 2.172766 / 1.468490 (0.704276) | 0.524530 / 4.584777 (-4.060247) | 3.758827 / 3.745712 (0.013115) | 1.886701 / 5.269862 (-3.383160) | 1.073601 / 4.565676 (-3.492075) | 0.066137 / 0.424275 (-0.358139) | 0.011926 / 0.007607 (0.004319) | 0.541103 / 0.226044 (0.315059) | 5.404162 / 2.268929 (3.135233) | 2.634271 / 55.444624 (-52.810354) | 2.366156 / 6.876477 (-4.510321) | 2.566877 / 2.142072 (0.424804) | 0.639088 / 4.805227 (-4.166139) | 0.141810 / 6.500664 (-6.358854) | 0.065446 / 0.075469 (-0.010023) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.288173 / 1.841788 (-0.553614) | 15.897051 / 8.074308 (7.822743) | 15.243404 / 10.191392 (5.052012) | 0.162380 / 0.680424 (-0.518043) | 0.017716 / 0.534201 (-0.516485) | 0.396400 / 0.579283 (-0.182883) | 0.420479 / 0.434364 (-0.013885) | 0.476238 / 0.540337 (-0.064099) | 0.583039 / 1.386936 (-0.803897) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bd373f69f12e926f4e2a489c14df36c38ce07bcc \"CML watermark\")\n", "I fixed the docstring and type hint", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006310 / 0.011353 (-0.005043) | 0.004297 / 0.011008 (-0.006711) | 0.098288 / 0.038508 (0.059780) | 0.029295 / 0.023109 (0.006185) | 0.386804 / 0.275898 (0.110906) | 0.425717 / 0.323480 (0.102237) | 0.005516 / 0.007986 (-0.002470) | 0.005058 / 0.004328 (0.000730) | 0.074318 / 0.004250 (0.070068) | 0.040609 / 0.037052 (0.003557) | 0.388159 / 0.258489 (0.129670) | 0.428683 / 0.293841 (0.134842) | 0.026207 / 0.128546 (-0.102340) | 0.008655 / 0.075646 (-0.066991) | 0.321601 / 0.419271 (-0.097671) | 0.055329 / 0.043533 (0.011796) | 0.390452 / 0.255139 (0.135313) | 0.409084 / 0.283200 (0.125884) | 0.099555 / 0.141683 (-0.042128) | 1.484289 / 1.452155 (0.032134) | 1.549892 / 1.492716 (0.057176) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | 
get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219466 / 0.018006 (0.201460) | 0.437288 / 0.000490 (0.436798) | 0.003556 / 0.000200 (0.003356) | 0.000080 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023876 / 0.037411 (-0.013535) | 0.100205 / 0.014526 (0.085679) | 0.106365 / 0.176557 (-0.070191) | 0.164353 / 0.737135 (-0.572782) | 0.109987 / 0.296338 (-0.186352) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418819 / 0.215209 (0.203610) | 4.168558 / 2.077655 (2.090903) | 1.862883 / 1.504120 (0.358764) | 1.673308 / 1.541195 (0.132114) | 1.742338 / 1.468490 (0.273848) | 0.550113 / 4.584777 (-4.034664) | 3.492085 / 3.745712 (-0.253627) | 1.734579 / 5.269862 (-3.535283) | 1.006876 / 4.565676 (-3.558801) | 0.068014 / 0.424275 (-0.356261) | 0.012242 / 0.007607 (0.004634) | 0.520633 / 0.226044 (0.294588) | 5.214095 / 2.268929 (2.945167) | 2.319282 / 55.444624 (-53.125343) | 1.979521 / 6.876477 (-4.896956) | 2.099595 / 2.142072 (-0.042477) | 0.659306 / 4.805227 (-4.145921) | 0.135282 / 6.500664 (-6.365382) | 0.067417 / 0.075469 (-0.008052) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.232099 / 1.841788 (-0.609689) | 13.967219 / 8.074308 (5.892910) | 14.347105 / 10.191392 (4.155713) | 0.146360 / 0.680424 (-0.534063) | 0.017021 / 0.534201 (-0.517180) | 0.363254 / 0.579283 (-0.216030) | 0.404391 / 0.434364 (-0.029973) | 0.428670 / 0.540337 (-0.111668) | 0.514942 / 1.386936 (-0.871994) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after 
write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006360 / 0.011353 (-0.004993) | 0.004160 / 0.011008 (-0.006848) | 0.074856 / 0.038508 (0.036347) | 0.028624 / 0.023109 (0.005515) | 0.355624 / 0.275898 (0.079726) | 0.403678 / 0.323480 (0.080198) | 0.005253 / 0.007986 (-0.002732) | 0.004808 / 0.004328 (0.000480) | 0.074215 / 0.004250 (0.069964) | 0.040641 / 0.037052 (0.003588) | 0.358473 / 0.258489 (0.099984) | 0.414442 / 0.293841 (0.120601) | 0.025595 / 0.128546 (-0.102951) | 0.008506 / 0.075646 (-0.067140) | 0.081547 / 0.419271 (-0.337725) | 0.039719 / 0.043533 (-0.003814) | 0.355420 / 0.255139 (0.100281) | 0.380953 / 0.283200 (0.097753) | 0.100064 / 0.141683 (-0.041618) | 1.459639 / 1.452155 (0.007484) | 1.557288 / 1.492716 (0.064572) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232837 / 0.018006 (0.214831) | 0.424788 / 0.000490 (0.424298) | 0.000397 / 0.000200 (0.000197) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026156 / 0.037411 (-0.011256) | 0.103633 / 0.014526 (0.089107) | 0.109633 / 0.176557 (-0.066923) | 0.159407 / 0.737135 (-0.577728) | 0.113874 / 0.296338 (-0.182465) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471980 / 0.215209 (0.256771) | 4.724424 / 2.077655 (2.646769) | 2.459950 / 1.504120 (0.955830) | 2.280926 / 1.541195 (0.739731) | 2.368478 / 1.468490 (0.899987) | 0.552809 / 4.584777 (-4.031968) | 3.461985 / 3.745712 (-0.283728) | 1.757060 / 5.269862 (-3.512802) | 1.009599 / 4.565676 (-3.556077) | 0.068407 / 0.424275 (-0.355868) | 0.012341 / 0.007607 (0.004734) | 0.576287 / 0.226044 (0.350242) | 5.767331 / 2.268929 (3.498402) | 2.965743 / 55.444624 (-52.478882) | 2.644935 / 6.876477 (-4.231542) | 2.699663 / 2.142072 (0.557591) | 0.656005 / 4.805227 (-4.149222) | 0.136315 / 6.500664 (-6.364349) | 0.068355 / 0.075469 (-0.007114) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.308301 / 1.841788 (-0.533486) | 14.587268 / 8.074308 (6.512960) | 14.385670 / 10.191392 (4.194278) | 0.148154 / 0.680424 (-0.532270) | 0.016798 / 0.534201 (-0.517402) | 
0.360761 / 0.579283 (-0.218523) | 0.392566 / 0.434364 (-0.041798) | 0.431604 / 0.540337 (-0.108734) | 0.528463 / 1.386936 (-0.858473) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f2778e1ab255545cb2171379fd2276c85768a2ad \"CML watermark\")\n", "let me know if it sounds good for you now @albertvillanova :)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008414 / 0.011353 (-0.002939) | 0.005320 / 0.011008 (-0.005688) | 0.115585 / 0.038508 (0.077077) | 0.040815 / 0.023109 (0.017706) | 0.363453 / 0.275898 (0.087555) | 0.385954 / 0.323480 (0.062474) | 0.006463 / 0.007986 (-0.001523) | 0.005571 / 0.004328 (0.001242) | 0.084831 / 0.004250 (0.080581) | 0.050294 / 0.037052 (0.013242) | 0.375684 / 0.258489 (0.117195) | 0.394672 / 0.293841 (0.100831) | 0.033618 / 0.128546 (-0.094928) | 0.010451 / 0.075646 (-0.065195) | 0.388937 / 0.419271 (-0.030334) | 0.059974 / 0.043533 (0.016441) | 0.360437 / 0.255139 (0.105298) | 0.375149 / 0.283200 (0.091950) | 0.118397 / 0.141683 (-0.023286) | 1.726759 / 1.452155 (0.274604) | 1.811928 / 1.492716 (0.319212) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239186 / 0.018006 (0.221180) | 0.483728 / 0.000490 (0.483238) | 0.003285 / 0.000200 (0.003085) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030514 / 0.037411 (-0.006898) | 0.127111 / 0.014526 (0.112585) | 0.136185 / 0.176557 (-0.040371) | 0.204541 / 0.737135 (-0.532594) | 0.143228 / 0.296338 (-0.153111) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch 
numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.465840 / 0.215209 (0.250631) | 4.611160 / 2.077655 (2.533506) | 2.119307 / 1.504120 (0.615187) | 1.882463 / 1.541195 (0.341268) | 1.946067 / 1.468490 (0.477577) | 0.602352 / 4.584777 (-3.982425) | 4.576313 / 3.745712 (0.830601) | 2.112860 / 5.269862 (-3.157001) | 1.224388 / 4.565676 (-3.341289) | 0.073808 / 0.424275 (-0.350467) | 0.013157 / 0.007607 (0.005550) | 0.592208 / 0.226044 (0.366163) | 5.948971 / 2.268929 (3.680042) | 2.690144 / 55.444624 (-52.754480) | 2.236489 / 6.876477 (-4.639987) | 2.423617 / 2.142072 (0.281545) | 0.752053 / 4.805227 (-4.053175) | 0.168185 / 6.500664 (-6.332480) | 0.075454 / 0.075469 (-0.000015) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.407432 / 1.841788 (-0.434356) | 17.054545 / 8.074308 (8.980236) | 15.661362 / 10.191392 (5.469970) | 0.175027 / 0.680424 (-0.505397) | 0.020262 / 0.534201 (-0.513939) | 0.479052 / 0.579283 (-0.100231) | 0.509829 / 0.434364 (0.075465) | 0.601935 / 0.540337 (0.061598) | 0.726754 / 1.386936 (-0.660182) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007698 / 0.011353 (-0.003655) | 0.005267 / 0.011008 (-0.005741) | 0.085832 / 0.038508 (0.047324) | 0.041974 / 0.023109 (0.018865) | 0.418966 / 0.275898 (0.143068) | 0.466314 / 0.323480 (0.142834) | 0.006580 / 0.007986 (-0.001406) | 0.007063 / 0.004328 (0.002735) | 0.087120 / 0.004250 (0.082870) | 0.054908 / 0.037052 (0.017856) | 0.423813 / 0.258489 (0.165323) | 0.489878 / 0.293841 (0.196037) | 0.032823 / 0.128546 (-0.095723) | 0.010471 / 0.075646 (-0.065175) | 0.095839 / 0.419271 (-0.323432) | 0.056421 / 0.043533 (0.012888) | 0.420526 / 0.255139 (0.165387) | 0.447975 / 0.283200 (0.164775) | 0.126604 / 0.141683 (-0.015079) | 1.723097 / 1.452155 (0.270942) | 1.819539 / 1.492716 (0.326822) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row 
| get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.279604 / 0.018006 (0.261598) | 0.496129 / 0.000490 (0.495639) | 0.005419 / 0.000200 (0.005219) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035069 / 0.037411 (-0.002343) | 0.133064 / 0.014526 (0.118538) | 0.145404 / 0.176557 (-0.031152) | 0.205237 / 0.737135 (-0.531898) | 0.150684 / 0.296338 (-0.145654) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.513596 / 0.215209 (0.298387) | 5.104861 / 2.077655 (3.027206) | 2.487908 / 1.504120 (0.983788) | 2.271383 / 1.541195 (0.730188) | 2.421043 / 1.468490 (0.952553) | 0.625204 / 4.584777 (-3.959573) | 4.555389 / 3.745712 (0.809677) | 4.181518 / 5.269862 (-1.088344) | 1.676059 / 4.565676 (-2.889617) | 0.078786 / 0.424275 (-0.345489) | 0.014186 / 0.007607 (0.006579) | 0.638360 / 0.226044 (0.412315) | 6.367915 / 2.268929 (4.098986) | 3.095175 / 55.444624 (-52.349449) | 2.706707 / 6.876477 (-4.169769) | 2.735907 / 2.142072 (0.593835) | 0.756323 / 4.805227 (-4.048905) | 0.164783 / 6.500664 (-6.335881) | 0.076291 / 0.075469 (0.000822) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.667058 / 1.841788 (-0.174730) | 18.687459 / 8.074308 (10.613151) | 17.111596 / 10.191392 (6.920204) | 0.167218 / 0.680424 (-0.513206) | 0.020995 / 0.534201 (-0.513206) | 0.463985 / 0.579283 (-0.115298) | 0.502705 / 0.434364 (0.068341) | 0.562877 / 0.540337 (0.022540) | 0.682249 / 1.386936 (-0.704687) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#028822a5d657f6c1251f61b56a701c4d7d2ab0a7 \"CML watermark\")\n", "> Maybe we should fix all the tests in test_iterable_dataset.py that contain .with_format(\"torch\")?\r\n\r\nthey're updated in https://github.com/huggingface/datasets/pull/5852", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after 
write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005931 / 0.011353 (-0.005421) | 0.004004 / 0.011008 (-0.007004) | 0.098632 / 0.038508 (0.060124) | 0.027820 / 0.023109 (0.004711) | 0.302944 / 0.275898 (0.027046) | 0.332684 / 0.323480 (0.009204) | 0.005529 / 0.007986 (-0.002457) | 0.004814 / 0.004328 (0.000485) | 0.074477 / 0.004250 (0.070227) | 0.034875 / 0.037052 (-0.002178) | 0.304542 / 0.258489 (0.046053) | 0.342853 / 0.293841 (0.049012) | 0.025263 / 0.128546 (-0.103283) | 0.008558 / 0.075646 (-0.067089) | 0.322522 / 0.419271 (-0.096750) | 0.043980 / 0.043533 (0.000447) | 0.306618 / 0.255139 (0.051479) | 0.331692 / 0.283200 (0.048492) | 0.087434 / 0.141683 (-0.054248) | 1.464686 / 1.452155 (0.012531) | 1.575038 / 1.492716 (0.082322) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221920 / 0.018006 (0.203914) | 0.417108 / 0.000490 (0.416619) | 0.004625 / 0.000200 (0.004425) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023493 / 0.037411 (-0.013918) | 0.096684 / 0.014526 (0.082158) | 0.102035 / 0.176557 (-0.074522) | 0.166609 / 0.737135 (-0.570526) | 0.107456 / 0.296338 (-0.188883) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418713 / 0.215209 (0.203504) | 4.156913 / 2.077655 (2.079258) | 1.869064 / 1.504120 (0.364944) | 1.666219 / 1.541195 (0.125024) | 1.676491 / 1.468490 (0.208001) | 0.553843 / 4.584777 (-4.030934) | 3.380471 / 3.745712 (-0.365241) | 2.970370 / 5.269862 (-2.299491) | 1.421597 / 4.565676 (-3.144080) | 0.068019 / 0.424275 (-0.356256) | 0.012995 / 0.007607 (0.005387) | 0.519410 / 0.226044 (0.293365) | 5.198251 / 2.268929 (2.929323) | 2.352969 / 55.444624 (-53.091655) | 2.008981 / 6.876477 (-4.867496) | 2.066519 / 2.142072 (-0.075553) | 0.658982 / 4.805227 (-4.146245) | 0.134341 / 6.500664 (-6.366323) | 0.065893 / 0.075469 (-0.009576) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map 
no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.207509 / 1.841788 (-0.634279) | 13.863838 / 8.074308 (5.789530) | 13.363359 / 10.191392 (3.171967) | 0.129076 / 0.680424 (-0.551348) | 0.016818 / 0.534201 (-0.517383) | 0.357956 / 0.579283 (-0.221327) | 0.386174 / 0.434364 (-0.048189) | 0.418663 / 0.540337 (-0.121674) | 0.498708 / 1.386936 (-0.888228) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006132 / 0.011353 (-0.005220) | 0.004335 / 0.011008 (-0.006673) | 0.078517 / 0.038508 (0.040009) | 0.027685 / 0.023109 (0.004576) | 0.357956 / 0.275898 (0.082058) | 0.392397 / 0.323480 (0.068918) | 0.005364 / 0.007986 (-0.002622) | 0.004922 / 0.004328 (0.000593) | 0.078061 / 0.004250 (0.073810) | 0.038889 / 0.037052 (0.001837) | 0.360952 / 0.258489 (0.102463) | 0.402790 / 0.293841 (0.108949) | 0.025542 / 0.128546 (-0.103004) | 0.008718 / 0.075646 (-0.066929) | 0.085799 / 0.419271 (-0.333472) | 0.044256 / 0.043533 (0.000723) | 0.358366 / 0.255139 (0.103227) | 0.393500 / 0.283200 (0.110300) | 0.096382 / 0.141683 (-0.045301) | 1.530889 / 1.452155 (0.078735) | 1.621007 / 1.492716 (0.128291) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.180572 / 0.018006 (0.162566) | 0.429478 / 0.000490 (0.428988) | 0.002966 / 0.000200 (0.002766) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024530 / 0.037411 (-0.012881) | 0.101401 / 0.014526 (0.086875) | 0.108208 / 0.176557 (-0.068349) | 0.159582 / 0.737135 (-0.577554) | 0.111170 / 0.296338 (-0.185168) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | 
shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.465768 / 0.215209 (0.250559) | 4.706311 / 2.077655 (2.628656) | 2.437756 / 1.504120 (0.933636) | 2.245694 / 1.541195 (0.704499) | 2.282637 / 1.468490 (0.814147) | 0.552752 / 4.584777 (-4.032025) | 3.432992 / 3.745712 (-0.312720) | 1.800054 / 5.269862 (-3.469808) | 1.037852 / 4.565676 (-3.527824) | 0.068240 / 0.424275 (-0.356035) | 0.012433 / 0.007607 (0.004826) | 0.574867 / 0.226044 (0.348822) | 5.707623 / 2.268929 (3.438695) | 2.909746 / 55.444624 (-52.534878) | 2.585423 / 6.876477 (-4.291054) | 2.636801 / 2.142072 (0.494729) | 0.686593 / 4.805227 (-4.118634) | 0.136633 / 6.500664 (-6.364031) | 0.068598 / 0.075469 (-0.006871) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.286628 / 1.841788 (-0.555159) | 14.333258 / 8.074308 (6.258949) | 14.355793 / 10.191392 (4.164401) | 0.133459 / 0.680424 (-0.546965) | 0.017090 / 0.534201 (-0.517111) | 0.358852 / 0.579283 (-0.220431) | 0.399929 / 0.434364 (-0.034435) | 0.422838 / 0.540337 (-0.117500) | 0.515199 / 1.386936 (-0.871737) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7437d0f676da8634b5655a227cb8c3508c7372a2 \"CML watermark\")\n" ]
"2023-05-04T17:23:43Z"
"2023-05-31T09:43:26Z"
"2023-05-31T09:36:18Z"
MEMBER
null
Adding an optional `.iter_arrow` to examples iterable. This allows using Arrow formatting in map/filter. This will also be useful for torch formatting, since we can reuse the TorchFormatter that converts Arrow data to torch tensors. Related to https://github.com/huggingface/datasets/issues/5793 and https://github.com/huggingface/datasets/issues/3444 Required for https://github.com/huggingface/datasets/pull/5852 ### Example: Speed x10 in map ```python from datasets import Dataset import pyarrow.compute as pc import time ds = Dataset.from_dict({"a": range(100_000)}) ids = ds.to_iterable_dataset() ids = ids.map(lambda x: {"a": [a + 10 for a in x["a"]]}, batched=True) _start = time.time() print(f"Python ({sum(1 for _ in ids)} items):\t{(time.time() - _start) * 1000:.1f}ms") # Python (100000 items): 695.7ms ids = ds.to_iterable_dataset().with_format("arrow") ids = ids.map(lambda t: t.set_column(0, "a", pc.add(t[0], 10)), batched=True) ids = ids.with_format(None) _start = time.time() print(f"Arrow ({sum(1 for _ in ids)} items):\t{(time.time() - _start) * 1000:.1f}ms)") # Arrow (100000 items): 81.0ms) ``` ### Implementation details I added an optional `iter_arrow` method to examples iterable. If an example iterable has this method, then it can be used to iterate on the examples by batch of arrow tables.
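To make the implementation detail above more concrete, here is a conceptual sketch of an examples iterable that exposes an optional `iter_arrow` fast path yielding batches of Arrow tables. This is an illustration only, not the actual `datasets` internals from this PR; the class name, the `(key, batch)` pairing, and the fixed batch size are assumptions.

```python
import pyarrow as pa

class InMemoryExamplesIterable:
    """Toy iterable over an in-memory pyarrow Table (illustrative only)."""

    def __init__(self, table: pa.Table, batch_size: int = 1000):
        self.table = table
        self.batch_size = batch_size

    def __iter__(self):
        # Slow path: one python dict per example.
        for i, row in enumerate(self.table.to_pylist()):
            yield i, row

    def iter_arrow(self):
        # Optional fast path: yield pyarrow Tables in batches, so map/filter
        # can operate on Arrow data directly without python conversion.
        for offset in range(0, self.table.num_rows, self.batch_size):
            yield offset, self.table.slice(offset, self.batch_size)

it = InMemoryExamplesIterable(pa.table({"a": list(range(10))}), batch_size=4)
for key, batch in it.iter_arrow():
    print(key, batch.num_rows)  # 0 4 / 4 4 / 8 2
```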
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/5821/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5821/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5821.diff", "html_url": "https://github.com/huggingface/datasets/pull/5821", "merged_at": "2023-05-31T09:36:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/5821.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5821" }
true
https://api.github.com/repos/huggingface/datasets/issues/5820
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5820/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5820/comments
https://api.github.com/repos/huggingface/datasets/issues/5820/events
https://github.com/huggingface/datasets/issues/5820
1,695,892,811
I_kwDODunzps5lFUVL
5,820
Incomplete docstring for `BuilderConfig`
{ "avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4", "events_url": "https://api.github.com/users/Laurent2916/events{/privacy}", "followers_url": "https://api.github.com/users/Laurent2916/followers", "following_url": "https://api.github.com/users/Laurent2916/following{/other_user}", "gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Laurent2916", "id": 21087104, "login": "Laurent2916", "node_id": "MDQ6VXNlcjIxMDg3MTA0", "organizations_url": "https://api.github.com/users/Laurent2916/orgs", "received_events_url": "https://api.github.com/users/Laurent2916/received_events", "repos_url": "https://api.github.com/users/Laurent2916/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions", "type": "User", "url": "https://api.github.com/users/Laurent2916" }
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
null
[]
null
[ "Thanks for reporting! You are more than welcome to improve `BuilderConfig`'s docstring.\r\n\r\nThis class serves an identical purpose as `tensorflow_datasets`'s `BuilderConfig`, and its docstring is [here](https://github.com/tensorflow/datasets/blob/a95e38b5bb018312c3d3720619c2a8ef83ebf57f/tensorflow_datasets/core/dataset_builder.py#L81), so feel free to re-use parts of it." ]
"2023-05-04T12:14:34Z"
"2023-05-05T12:31:56Z"
"2023-05-05T12:31:56Z"
CONTRIBUTOR
null
Hi guys! I stumbled upon this docstring while working on a project. Some of the attributes have missing descriptions. https://github.com/huggingface/datasets/blob/bc5fef5b6d91f009e4101684adcb374df2c170f6/src/datasets/builder.py#L104-L117
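For illustration, a rough sketch of what the completed docstring could look like, loosely following the `tensorflow_datasets` wording suggested in the comments; the attribute types and wording are simplified assumptions, not the actual patch that closed the issue.

```python
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class BuilderConfig:
    """Base class for `DatasetBuilder` data configuration.

    Attributes:
        name (str): Name of the configuration.
        version (str, optional): Version of the configuration.
        data_dir (str, optional): Path to the directory containing the source data.
        data_files (str or list or dict, optional): Path(s) to the source data file(s).
        description (str, optional): Human-readable description of the configuration.
    """

    name: str = "default"
    version: Optional[str] = "0.0.0"
    data_dir: Optional[str] = None
    data_files: Optional[Union[str, list, dict]] = None
    description: Optional[str] = None
```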
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5820/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5820/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5819
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5819/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5819/comments
https://api.github.com/repos/huggingface/datasets/issues/5819/events
https://github.com/huggingface/datasets/issues/5819
1,695,536,738
I_kwDODunzps5lD9Zi
5,819
Cannot pickle error in Dataset.from_generator()
{ "avatar_url": "https://avatars.githubusercontent.com/u/50691954?v=4", "events_url": "https://api.github.com/users/xinghaow99/events{/privacy}", "followers_url": "https://api.github.com/users/xinghaow99/followers", "following_url": "https://api.github.com/users/xinghaow99/following{/other_user}", "gists_url": "https://api.github.com/users/xinghaow99/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xinghaow99", "id": 50691954, "login": "xinghaow99", "node_id": "MDQ6VXNlcjUwNjkxOTU0", "organizations_url": "https://api.github.com/users/xinghaow99/orgs", "received_events_url": "https://api.github.com/users/xinghaow99/received_events", "repos_url": "https://api.github.com/users/xinghaow99/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xinghaow99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xinghaow99/subscriptions", "type": "User", "url": "https://api.github.com/users/xinghaow99" }
[]
closed
false
null
[]
null
[ "Hi! It should work if you put `model = torch.compile(model)` inside the `generate_data` function. If a referenced object is outside, it needs to be pickable, and that's not the case for the compiled models (or functions). ", "> Hi! It should work if you put `model = torch.compile(model)` inside the `generate_data` function. If a referenced object is outside, it needs to be pickable, and that's not the case for the compiled models (or functions).\r\n\r\nHi! Thank you for your reply! Everything works perfectly with your suggestion!\r\n\r\nClosing the issue.\r\n" ]
"2023-05-04T08:39:09Z"
"2023-05-05T19:20:59Z"
"2023-05-05T19:20:58Z"
NONE
null
### Describe the bug I'm trying to use Dataset.from_generator() to generate a large dataset. ### Steps to reproduce the bug Code to reproduce: ``` from transformers import T5Tokenizer, T5ForConditionalGeneration, GenerationConfig import torch from tqdm import tqdm from datasets import load_dataset tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small", device_map="auto") model = torch.compile(model) def generate_data(data_loader): model.eval() for batch in tqdm(data_loader): input_ids = tokenizer(batch['instruction'], return_tensors='pt', padding=True, truncation=True).input_ids.to("cuda:0") with torch.no_grad(): outputs = model.generate(input_ids, generation_config=generation_config) decoder_hidden_states = outputs.decoder_hidden_states for i, h in zip(batch['instruction'], decoder_hidden_states): yield {"instruction": i, "decoder_hidden_states": h} generation_config = GenerationConfig( temperature=1, max_new_tokens=1024, do_sample=False, num_return_sequences=1, return_dict_in_generate=True, output_scores=True, output_hidden_states=True, ) from datasets import Dataset, load_dataset from torch.utils.data import DataLoader dataset = load_dataset("HuggingFaceH4/databricks_dolly_15k") train_loader = DataLoader(dataset['train'], batch_size=2, shuffle=True) dataset = Dataset.from_generator(generator=generate_data, gen_kwargs={"data_loader": train_loader}) dataset.save_to_disk("data/flant5_small_generation") ``` ### Expected behavior The dataset should be generated and saved. But the following error occurred: ``` Traceback (most recent call last): File "/remote-home/xhwang/alpaca-lora/data_collection_t5.py", line 46, in <module> dataset = Dataset.from_generator(generator=generate_data, gen_kwargs={"data_loader": train_loader}) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1035, in from_generator return GeneratorDatasetInputStream( File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/io/generator.py", line 28, in __init__ self.builder = Generator( File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 336, in __init__ self.config, self.config_id = self._create_builder_config( File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 505, in _create_builder_config config_id = builder_config.create_config_id( File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 179, in create_config_id suffix = Hasher.hash(config_kwargs_to_add_to_suffix) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/fingerprint.py", line 236, in hash return cls.hash_default(value) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/fingerprint.py", line 229, in hash_default return cls.hash_bytes(dumps(value)) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 726, in dumps dump(obj, file) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 701, in dump Pickler(file, recurse=True).dump(obj) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 394, in dump StockPickler.dump(self, obj) File 
"/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 487, in dump self.save(obj) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict StockPickler.save_dict(pickler, obj) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict self._batch_setitems(obj.items()) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems save(v) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function dill._dill._save_with_postproc( File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1084, in _save_with_postproc pickler._batch_setitems(iter(source.items())) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems save(v) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 603, in save self.save_reduce(obj=obj, *rv) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce save(state) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict StockPickler.save_dict(pickler, obj) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict self._batch_setitems(obj.items()) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in 
_batch_setitems save(v) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 603, in save self.save_reduce(obj=obj, *rv) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce save(state) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict StockPickler.save_dict(pickler, obj) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict self._batch_setitems(obj.items()) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems save(v) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function dill._dill._save_with_postproc( File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1070, in _save_with_postproc pickler.save_reduce(*reduction, obj=obj) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce save(state) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 887, in save_tuple save(element) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File 
"/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict StockPickler.save_dict(pickler, obj) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict self._batch_setitems(obj.items()) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems save(v) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function dill._dill._save_with_postproc( File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1070, in _save_with_postproc pickler.save_reduce(*reduction, obj=obj) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce save(state) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 887, in save_tuple save(element) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict StockPickler.save_dict(pickler, obj) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict self._batch_setitems(obj.items()) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 1003, in _batch_setitems save(v) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", 
line 560, in save f(self, obj) # Call unbound method with explicit self File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function dill._dill._save_with_postproc( File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1084, in _save_with_postproc pickler._batch_setitems(iter(source.items())) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems save(v) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 578, in save rv = reduce(self.proto) TypeError: cannot pickle 'ConfigModuleInstance' object ``` ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-4.15.0-156-generic-x86_64-with-glibc2.31 - Python version: 3.10.10 - Huggingface_hub version: 0.13.2 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
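The failure above happens while `datasets` hashes the generator and its `gen_kwargs` to build a cache fingerprint, and the `torch.compile`-wrapped model captured by the closure cannot be pickled. A minimal sketch of one way to sidestep this (an assumption-laden workaround, not an officially documented fix) is to create the heavy objects inside the generator itself and pass only plain, picklable values through `gen_kwargs`; the `generated_ids` field and the `max_new_tokens` value below are illustrative choices, not taken from the report:

```py
from datasets import Dataset, load_dataset

def generate_data(model_name: str, batch_size: int):
    # Heavy / unpicklable objects are created lazily inside the generator,
    # so only the picklable arguments below are hashed for the cache fingerprint.
    import torch
    from torch.utils.data import DataLoader
    from transformers import T5Tokenizer, T5ForConditionalGeneration

    tokenizer = T5Tokenizer.from_pretrained(model_name)
    model = T5ForConditionalGeneration.from_pretrained(model_name, device_map="auto")
    model.eval()

    source = load_dataset("HuggingFaceH4/databricks_dolly_15k", split="train")
    loader = DataLoader(source, batch_size=batch_size, shuffle=True)
    for batch in loader:
        inputs = tokenizer(
            batch["instruction"], return_tensors="pt", padding=True, truncation=True
        ).input_ids.to(model.device)
        with torch.no_grad():
            outputs = model.generate(inputs, max_new_tokens=32)
        for instruction, ids in zip(batch["instruction"], outputs):
            yield {"instruction": instruction, "generated_ids": ids.tolist()}

# Only strings and ints end up in gen_kwargs, so fingerprinting can pickle them.
ds = Dataset.from_generator(
    generate_data,
    gen_kwargs={"model_name": "google/flan-t5-small", "batch_size": 2},
)
```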
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5819/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5819/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5818
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5818/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5818/comments
https://api.github.com/repos/huggingface/datasets/issues/5818/events
https://github.com/huggingface/datasets/issues/5818
1,695,052,555
I_kwDODunzps5lCHML
5,818
Ability to update a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4", "events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}", "followers_url": "https://api.github.com/users/davidgilbertson/followers", "following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}", "gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davidgilbertson", "id": 4443482, "login": "davidgilbertson", "node_id": "MDQ6VXNlcjQ0NDM0ODI=", "organizations_url": "https://api.github.com/users/davidgilbertson/orgs", "received_events_url": "https://api.github.com/users/davidgilbertson/received_events", "repos_url": "https://api.github.com/users/davidgilbertson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions", "type": "User", "url": "https://api.github.com/users/davidgilbertson" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "This [reply](https://discuss.huggingface.co/t/how-do-i-add-things-rows-to-an-already-saved-dataset/27423) from @mariosasko on the forums may be useful :)", "In this case, I think we can avoid the `PermissionError` by unpacking the underlying `ConcatenationTable` and saving only the newly added data blocks (in new files).", "Thanks @stevhliu and @mariosasko , so saving to individual files then loading them later, concatenating again and saving again is the recommended way. Good to know.\r\n\r\nQuestion that I hope doesn't sound rude: is this sort of thing (processing a dataset that doesn't fit in memory) outside of `datasets`'s core area of focus? Are there other tools you would recommend to do this sort of thing that play nice with `datasets`? Or is it just that I've found myself in a niche situation that hasn't specifically been catered for?" ]
"2023-05-04T01:08:13Z"
"2023-05-04T20:43:39Z"
null
NONE
null
### Feature request The ability to load a dataset, add or change something, and save it back to disk. Maybe it's possible, but I can't work out how to do it, e.g. this fails: ```py import datasets dataset = datasets.load_from_disk("data/test1") dataset = dataset.add_item({"text": "A new item"}) dataset.save_to_disk("data/test1") ``` With the error: ``` PermissionError: Tried to overwrite /mnt/c/Users/david/py/learning/mini_projects/data_sorting_and_filtering/data/test1 but a dataset can't overwrite itself. ``` ### Motivation My use case is that I want to process a dataset in a particular way but it doesn't fit in memory if I do it in one go. So I want to perform a loop and at each step in the loop, process one shard and append it to an ever-growing dataset. The code in the loop will load a dataset, add some rows, then save it again. Maybe I'm just thinking about things incorrectly and there's a better approach. FWIW I can't use `dataset.map()` to do the task because that doesn't work with `num_proc` when adding rows, so is confined to a single process which is too slow. The only other way I can think of is to create a new file each time, but surely that's not how people do this sort of thing. ### Your contribution na
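A minimal sketch of the shard-then-concatenate pattern suggested in the comments above, assuming each step fits in memory on its own and that writing to a fresh directory (rather than overwriting in place) is acceptable; `my_shards` is a hypothetical placeholder for whatever yields one manageable chunk at a time:

```py
import datasets

shard_dirs = []
for i, shard in enumerate(my_shards):              # my_shards: hypothetical iterable of lists of dicts
    processed = datasets.Dataset.from_list(shard)  # or any per-shard processing step
    path = f"data/processed_shard_{i}"
    processed.save_to_disk(path)                   # each shard lives in its own directory
    shard_dirs.append(path)

# Later: stitch the shards back together and save once, to a *new* location.
full = datasets.concatenate_datasets([datasets.load_from_disk(p) for p in shard_dirs])
full.save_to_disk("data/test1_v2")                 # avoids the "can't overwrite itself" error
```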
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5818/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5818/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5817
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5817/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5817/comments
https://api.github.com/repos/huggingface/datasets/issues/5817/events
https://github.com/huggingface/datasets/issues/5817
1,694,891,866
I_kwDODunzps5lBf9a
5,817
Setting `num_proc` errors when `.map` returns additional items.
{ "avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4", "events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}", "followers_url": "https://api.github.com/users/davidgilbertson/followers", "following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}", "gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davidgilbertson", "id": 4443482, "login": "davidgilbertson", "node_id": "MDQ6VXNlcjQ0NDM0ODI=", "organizations_url": "https://api.github.com/users/davidgilbertson/orgs", "received_events_url": "https://api.github.com/users/davidgilbertson/received_events", "repos_url": "https://api.github.com/users/davidgilbertson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions", "type": "User", "url": "https://api.github.com/users/davidgilbertson" }
[]
closed
false
null
[]
null
[ "Hi ! Unfortunately I couldn't reproduce on my side locally and with datasets 2.11 and python 3.10.11 on colab.\r\nWhat version of `multiprocess` are you using ?", "I've got `multiprocess` version `0.70.14`.\r\n\r\nI've done some more testing and the error only occurs in PyCharm's Python Console. It seems to be [this PyCharm bug](https://youtrack.jetbrains.com/issue/PY-51922/Multiprocessing-bug.-Can-only-run-in-debugger.), I'll close this.", "For other users facing this, my workaround is to conditionally set `num_proc` so I can work interactively in the PyCharm Python Console while developing, then when I'm ready to run on the whole dataset, run it as a script and use multiprocessing.\r\n\r\n```py\r\nmapped_ds = ds.map(\r\n my_map_function,\r\n batched=True,\r\n remove_columns=ds.column_names,\r\n num_proc=1 if \"PYCHARM_HOSTED\" in os.environ else 8,\r\n)\r\n```" ]
"2023-05-03T21:46:53Z"
"2023-05-04T21:14:21Z"
"2023-05-04T20:22:25Z"
NONE
null
### Describe the bug I'm using a map function that returns more rows than are passed in. If I try to use `num_proc` I get: ``` File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 563, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 528, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3097, in map for rank, done, content in iflatmap_unordered( File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1372, in iflatmap_unordered yield queue.get(timeout=0.05) File "<string>", line 2, in get File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/managers.py", line 818, in _callmethod kind, result = conn.recv() File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/connection.py", line 258, in recv buf = self._recv_bytes() File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/connection.py", line 422, in _recv_bytes buf = self._recv(4) File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/connection.py", line 391, in _recv raise EOFError EOFError ``` ### Steps to reproduce the bug This is copied from the [Datasets docs](https://huggingface.co/docs/datasets/v2.12.0/en/process#batch-processing), with `num_proc` added, and will error. ```py import datasets dataset = ... # any old dataset def chunk_examples(examples): chunks = [] for sentence in examples["text"]: chunks += [sentence[i : i + 50] for i in range(0, len(sentence), 50)] return {"chunks": chunks} chunked_dataset = dataset.map( chunk_examples, batched=True, remove_columns=dataset.column_names, num_proc=2, # Remove and it works ) ``` ### Expected behavior Should work fine. On a related note, multi-processing also fails if there is a Meta class anywhere in scope (and there are plenty in the standard library). This is the fault of `dill` and is a long standing issue. Have you considered using Loky for multiprocessing? I've found that the built-in `datasets` multi-processing breaks more than it works so have written my own function using `loky`, for reference: ```py import datasets import loky def fast_loop(dataset: datasets.Dataset, func, num_proc=None): if num_proc is None: import os num_proc = len(os.sched_getaffinity(0)) shards = [ dataset.shard(num_shards=num_proc, index=i, contiguous=True) for i in range(num_proc) ] executor = loky.get_reusable_executor(max_workers=num_proc) results = executor.map(func, shards) return datasets.combine.concatenate_datasets(list(results)) ``` ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.31 - Python version: 3.10.8 - Huggingface_hub version: 0.12.1 - PyArrow version: 11.0.0 - Pandas version: 2.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5817/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5817/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5816
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5816/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5816/comments
https://api.github.com/repos/huggingface/datasets/issues/5816/events
https://github.com/huggingface/datasets/pull/5816
1,694,590,856
PR_kwDODunzps5Ps4t9
5,816
Preserve `stopping_strategy` of shuffled interleaved dataset (random cycling case)
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007862 / 0.011353 (-0.003491) | 0.005747 / 0.011008 (-0.005261) | 0.106818 / 0.038508 (0.068310) | 0.036630 / 0.023109 (0.013521) | 0.344218 / 0.275898 (0.068320) | 0.398803 / 0.323480 (0.075324) | 0.006187 / 0.007986 (-0.001799) | 0.005686 / 0.004328 (0.001358) | 0.078568 / 0.004250 (0.074318) | 0.051786 / 0.037052 (0.014734) | 0.361736 / 0.258489 (0.103247) | 0.396323 / 0.293841 (0.102482) | 0.037943 / 0.128546 (-0.090603) | 0.013957 / 0.075646 (-0.061689) | 0.366782 / 0.419271 (-0.052490) | 0.054700 / 0.043533 (0.011167) | 0.349692 / 0.255139 (0.094553) | 0.366481 / 0.283200 (0.083281) | 0.117394 / 0.141683 (-0.024289) | 1.593156 / 1.452155 (0.141001) | 1.708864 / 1.492716 (0.216148) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229529 / 0.018006 (0.211523) | 0.490531 / 0.000490 (0.490042) | 0.002934 / 0.000200 (0.002734) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028074 / 0.037411 (-0.009337) | 0.122321 / 0.014526 (0.107795) | 0.129120 / 0.176557 (-0.047436) | 0.188413 / 0.737135 (-0.548722) | 0.138983 / 0.296338 (-0.157355) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.479350 / 0.215209 (0.264141) | 4.926201 / 2.077655 (2.848546) | 2.265557 / 1.504120 (0.761437) | 2.014580 / 1.541195 (0.473386) | 2.120517 / 1.468490 
(0.652027) | 0.795334 / 4.584777 (-3.789443) | 4.509754 / 3.745712 (0.764042) | 4.328313 / 5.269862 (-0.941548) | 2.153304 / 4.565676 (-2.412373) | 0.102942 / 0.424275 (-0.321333) | 0.053504 / 0.007607 (0.045896) | 0.609392 / 0.226044 (0.383347) | 6.114048 / 2.268929 (3.845119) | 2.773306 / 55.444624 (-52.671318) | 2.443434 / 6.876477 (-4.433042) | 2.612005 / 2.142072 (0.469932) | 0.950435 / 4.805227 (-3.854792) | 0.194081 / 6.500664 (-6.306583) | 0.074513 / 0.075469 (-0.000956) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.402897 / 1.841788 (-0.438891) | 18.263033 / 8.074308 (10.188724) | 16.579809 / 10.191392 (6.388417) | 0.212319 / 0.680424 (-0.468104) | 0.020468 / 0.534201 (-0.513733) | 0.494850 / 0.579283 (-0.084433) | 0.483790 / 0.434364 (0.049426) | 0.572073 / 0.540337 (0.031735) | 0.684353 / 1.386936 (-0.702583) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009732 / 0.011353 (-0.001621) | 0.005901 / 0.011008 (-0.005107) | 0.084568 / 0.038508 (0.046060) | 0.038743 / 0.023109 (0.015634) | 0.431323 / 0.275898 (0.155425) | 0.472124 / 0.323480 (0.148644) | 0.006255 / 0.007986 (-0.001731) | 0.005892 / 0.004328 (0.001563) | 0.081913 / 0.004250 (0.077662) | 0.055560 / 0.037052 (0.018507) | 0.442857 / 0.258489 (0.184368) | 0.481887 / 0.293841 (0.188046) | 0.040730 / 0.128546 (-0.087816) | 0.014339 / 0.075646 (-0.061307) | 0.099258 / 0.419271 (-0.320013) | 0.054692 / 0.043533 (0.011159) | 0.436323 / 0.255139 (0.181184) | 0.461046 / 0.283200 (0.177846) | 0.125972 / 0.141683 (-0.015710) | 1.673173 / 1.452155 (0.221018) | 1.781364 / 1.492716 (0.288648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271450 / 0.018006 (0.253444) | 0.514484 / 0.000490 (0.513994) | 0.000455 / 0.000200 (0.000255) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036104 / 0.037411 (-0.001308) | 0.143306 / 0.014526 (0.128780) | 0.151105 / 0.176557 (-0.025451) | 0.210737 / 0.737135 (-0.526399) | 0.151404 / 0.296338 (-0.144934) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.573613 / 0.215209 (0.358404) | 5.828222 / 2.077655 (3.750567) | 2.993028 / 1.504120 (1.488908) | 2.617900 / 1.541195 (1.076706) | 2.754673 / 1.468490 (1.286183) | 1.010624 / 4.584777 (-3.574152) | 4.971261 / 3.745712 (1.225549) | 4.382017 / 5.269862 (-0.887845) | 1.971894 / 4.565676 (-2.593782) | 0.104404 / 0.424275 (-0.319871) | 0.014595 / 0.007607 (0.006988) | 0.657684 / 0.226044 (0.431639) | 6.566151 / 2.268929 (4.297222) | 3.221378 / 55.444624 (-52.223246) | 2.809402 / 6.876477 (-4.067075) | 2.882426 / 2.142072 (0.740354) | 1.006134 / 4.805227 (-3.799093) | 0.204469 / 6.500664 (-6.296196) | 0.078147 / 0.075469 (0.002678) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.574768 / 1.841788 (-0.267020) | 18.193335 / 8.074308 (10.119027) | 17.275353 / 10.191392 (7.083961) | 0.166890 / 0.680424 (-0.513534) | 0.020612 / 0.534201 (-0.513589) | 0.496179 / 0.579283 (-0.083104) | 0.507824 / 0.434364 (0.073460) | 0.620984 / 0.540337 (0.080647) | 0.749727 / 1.386936 (-0.637209) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#06988d3e01820b93ebcdc76158339fd6f67329dc \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006534 / 0.011353 (-0.004819) | 0.004456 / 0.011008 (-0.006553) | 0.097978 / 0.038508 (0.059470) | 0.027614 / 0.023109 (0.004505) | 0.309833 / 0.275898 (0.033935) | 0.337006 / 0.323480 (0.013526) | 0.004986 / 0.007986 (-0.002999) | 0.004521 / 0.004328 (0.000193) | 0.075053 / 0.004250 (0.070803) | 0.037095 / 0.037052 (0.000043) | 0.305430 / 0.258489 (0.046941) | 0.345298 / 0.293841 (0.051457) | 0.029784 / 0.128546 (-0.098762) | 0.011449 / 0.075646 (-0.064197) | 0.323346 / 0.419271 (-0.095925) | 0.042188 / 0.043533 (-0.001345) | 0.318653 / 0.255139 (0.063514) | 0.333799 / 0.283200 (0.050599) | 0.088194 / 0.141683 (-0.053488) | 1.511012 / 1.452155 (0.058857) | 1.578205 / 1.492716 (0.085489) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229695 / 0.018006 (0.211689) | 0.413276 / 0.000490 (0.412786) | 0.009142 / 0.000200 (0.008942) | 0.000537 / 0.000054 (0.000482) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024327 / 0.037411 (-0.013084) | 0.097953 / 0.014526 (0.083427) | 0.105551 / 0.176557 (-0.071005) | 0.169397 / 0.737135 (-0.567738) | 0.109784 / 0.296338 (-0.186554) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417713 / 0.215209 (0.202504) | 4.190703 / 2.077655 (2.113048) | 1.873504 / 1.504120 (0.369384) | 1.664540 / 1.541195 (0.123346) | 1.704539 / 1.468490 (0.236049) | 0.699840 / 4.584777 (-3.884937) | 3.480605 / 3.745712 (-0.265107) | 1.844229 / 5.269862 (-3.425633) | 1.155793 / 4.565676 (-3.409883) | 0.083013 / 0.424275 (-0.341262) | 0.012414 / 0.007607 (0.004807) | 0.518357 / 0.226044 (0.292313) | 5.186136 / 2.268929 (2.917207) | 2.329263 / 55.444624 (-53.115361) | 1.991395 / 6.876477 (-4.885081) | 2.074563 / 2.142072 (-0.067509) | 0.801388 / 4.805227 (-4.003839) | 0.152236 / 6.500664 (-6.348428) | 0.067414 / 0.075469 (-0.008055) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197290 / 1.841788 (-0.644497) | 13.666537 / 8.074308 (5.592229) | 13.017190 / 10.191392 (2.825798) | 0.142109 / 0.680424 (-0.538314) | 0.016321 / 0.534201 (-0.517880) | 0.378434 / 0.579283 (-0.200849) | 0.381101 / 0.434364 (-0.053263) | 0.444113 / 0.540337 (-0.096225) | 0.521448 / 
1.386936 (-0.865488) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006273 / 0.011353 (-0.005080) | 0.004408 / 0.011008 (-0.006600) | 0.077100 / 0.038508 (0.038592) | 0.027361 / 0.023109 (0.004251) | 0.358170 / 0.275898 (0.082272) | 0.390125 / 0.323480 (0.066646) | 0.004736 / 0.007986 (-0.003250) | 0.004663 / 0.004328 (0.000334) | 0.077626 / 0.004250 (0.073376) | 0.037103 / 0.037052 (0.000051) | 0.360044 / 0.258489 (0.101555) | 0.411539 / 0.293841 (0.117698) | 0.030173 / 0.128546 (-0.098373) | 0.011618 / 0.075646 (-0.064028) | 0.086036 / 0.419271 (-0.333235) | 0.039077 / 0.043533 (-0.004456) | 0.382223 / 0.255139 (0.127084) | 0.384817 / 0.283200 (0.101618) | 0.094591 / 0.141683 (-0.047092) | 1.494961 / 1.452155 (0.042807) | 1.583769 / 1.492716 (0.091053) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227467 / 0.018006 (0.209460) | 0.396648 / 0.000490 (0.396159) | 0.000382 / 0.000200 (0.000182) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025346 / 0.037411 (-0.012065) | 0.102086 / 0.014526 (0.087560) | 0.108570 / 0.176557 (-0.067986) | 0.158777 / 0.737135 (-0.578359) | 0.112885 / 0.296338 (-0.183453) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.460731 / 0.215209 (0.245522) | 4.556450 / 2.077655 (2.478795) | 2.258185 / 1.504120 (0.754065) | 2.122584 / 1.541195 (0.581389) | 2.224638 / 1.468490 (0.756148) | 
0.691909 / 4.584777 (-3.892868) | 3.482634 / 3.745712 (-0.263078) | 2.772837 / 5.269862 (-2.497024) | 1.533897 / 4.565676 (-3.031780) | 0.083025 / 0.424275 (-0.341250) | 0.012629 / 0.007607 (0.005022) | 0.548397 / 0.226044 (0.322352) | 5.492005 / 2.268929 (3.223077) | 2.669841 / 55.444624 (-52.774784) | 2.366947 / 6.876477 (-4.509529) | 2.496795 / 2.142072 (0.354722) | 0.804868 / 4.805227 (-4.000359) | 0.151686 / 6.500664 (-6.348978) | 0.068333 / 0.075469 (-0.007136) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.320414 / 1.841788 (-0.521374) | 14.367567 / 8.074308 (6.293258) | 14.047702 / 10.191392 (3.856310) | 0.129087 / 0.680424 (-0.551337) | 0.016658 / 0.534201 (-0.517543) | 0.381949 / 0.579283 (-0.197335) | 0.390105 / 0.434364 (-0.044258) | 0.445947 / 0.540337 (-0.094390) | 0.531074 / 1.386936 (-0.855862) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c67c9f3797ecc231b34d87ddef489c1238ec4046 \"CML watermark\")\n" ]
"2023-05-03T18:34:18Z"
"2023-05-04T14:31:55Z"
"2023-05-04T14:24:49Z"
CONTRIBUTOR
null
Preserve the `stopping_strategy` in the `RandomlyCyclingMultiSourcesExamplesIterable.shard_data_sources` to fix shuffling a dataset interleaved (from multiple sources) with probabilities. Fix #5812
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5816/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5816/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5816.diff", "html_url": "https://github.com/huggingface/datasets/pull/5816", "merged_at": "2023-05-04T14:24:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/5816.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5816" }
true
https://api.github.com/repos/huggingface/datasets/issues/5814
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5814/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5814/comments
https://api.github.com/repos/huggingface/datasets/issues/5814/events
https://github.com/huggingface/datasets/pull/5814
1,693,216,778
PR_kwDODunzps5PoOQ9
5,814
Repro windows crash
{ "avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4", "events_url": "https://api.github.com/users/maddiedawson/events{/privacy}", "followers_url": "https://api.github.com/users/maddiedawson/followers", "following_url": "https://api.github.com/users/maddiedawson/following{/other_user}", "gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/maddiedawson", "id": 106995444, "login": "maddiedawson", "node_id": "U_kgDOBmCe9A", "organizations_url": "https://api.github.com/users/maddiedawson/orgs", "received_events_url": "https://api.github.com/users/maddiedawson/received_events", "repos_url": "https://api.github.com/users/maddiedawson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions", "type": "User", "url": "https://api.github.com/users/maddiedawson" }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5814). All of your documentation changes will be reflected on that endpoint." ]
"2023-05-02T23:30:18Z"
"2023-05-02T23:47:07Z"
null
CONTRIBUTOR
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5814/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5814/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5814.diff", "html_url": "https://github.com/huggingface/datasets/pull/5814", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5814.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5814" }
true
https://api.github.com/repos/huggingface/datasets/issues/5815
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5815/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5815/comments
https://api.github.com/repos/huggingface/datasets/issues/5815/events
https://github.com/huggingface/datasets/issues/5815
1,693,701,743
I_kwDODunzps5k89Zv
5,815
Easy way to create a Kaggle dataset from a Huggingface dataset?
{ "avatar_url": "https://avatars.githubusercontent.com/u/5355286?v=4", "events_url": "https://api.github.com/users/hrbigelow/events{/privacy}", "followers_url": "https://api.github.com/users/hrbigelow/followers", "following_url": "https://api.github.com/users/hrbigelow/following{/other_user}", "gists_url": "https://api.github.com/users/hrbigelow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hrbigelow", "id": 5355286, "login": "hrbigelow", "node_id": "MDQ6VXNlcjUzNTUyODY=", "organizations_url": "https://api.github.com/users/hrbigelow/orgs", "received_events_url": "https://api.github.com/users/hrbigelow/received_events", "repos_url": "https://api.github.com/users/hrbigelow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hrbigelow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hrbigelow/subscriptions", "type": "User", "url": "https://api.github.com/users/hrbigelow" }
[]
open
false
null
[]
null
[ "Hi @hrbigelow , I'm no expert for such a question so I'll ping @lhoestq from the `datasets` library (also this issue could be moved there if someone with permission can do it :) )", "Hi ! Many datasets are made of several files, and how they are parsed often requires a python script. Because of that, datasets like wmt14 are not available as a single file on HF. Though you can create this file using `datasets`:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"wmt14\", \"de-en\", split=\"train\")\r\n\r\nds.to_json(\"wmt14-train.json\")\r\n# OR to parquet, which is compressed:\r\n# ds.to_parquet(\"wmt14-train.parquet\")\r\n```\r\n\r\nWe are also working on providing parquet exports for all datasets, but wmt14 is not supported yet (we're rolling it out for datasets <1GB first). They're usually available in the `refs/convert/parquet` branch (empty for wmt14):\r\n\r\n<img width=\"267\" alt=\"image\" src=\"https://user-images.githubusercontent.com/42851186/235878909-7339f5a4-be19-4ada-85d8-8a50d23acf35.png\">\r\n", "also cc @nateraw for visibility on this (and cc @osanseviero too)", "I've requested support for creating a Kaggle dataset from an imported HF dataset repo on their \"forum\" here: https://www.kaggle.com/discussions/product-feedback/427142 (upvotes appreciated 🙂)" ]
"2023-05-02T21:43:33Z"
"2023-07-26T16:13:31Z"
null
NONE
null
I'm not sure whether this is more appropriately addressed with HuggingFace or Kaggle. I would like to somehow directly create a Kaggle dataset from a HuggingFace Dataset. While Kaggle does provide the option to create a dataset from a URI, that URI must point to a single file. For example: ![image](https://user-images.githubusercontent.com/5355286/235792394-7c559d07-4aff-45b7-ad2b-9c5280c88415.png) Is there some mechanism from huggingface to represent a dataset (such as that from `load_dataset('wmt14', 'de-en', split='train')`) as a single file? Or, some other way to get that into a Kaggle dataset so that I can use the huggingface `datasets` module to process and consume it inside of a Kaggle notebook? Thanks in advance!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5815/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5815/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5813
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5813/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5813/comments
https://api.github.com/repos/huggingface/datasets/issues/5813/events
https://github.com/huggingface/datasets/pull/5813
1,691,908,535
PR_kwDODunzps5Pj0_E
5,813
[DO-NOT-MERGE] Debug Windows issue at #3
{ "avatar_url": "https://avatars.githubusercontent.com/u/6477701?v=4", "events_url": "https://api.github.com/users/HyukjinKwon/events{/privacy}", "followers_url": "https://api.github.com/users/HyukjinKwon/followers", "following_url": "https://api.github.com/users/HyukjinKwon/following{/other_user}", "gists_url": "https://api.github.com/users/HyukjinKwon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HyukjinKwon", "id": 6477701, "login": "HyukjinKwon", "node_id": "MDQ6VXNlcjY0Nzc3MDE=", "organizations_url": "https://api.github.com/users/HyukjinKwon/orgs", "received_events_url": "https://api.github.com/users/HyukjinKwon/received_events", "repos_url": "https://api.github.com/users/HyukjinKwon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HyukjinKwon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HyukjinKwon/subscriptions", "type": "User", "url": "https://api.github.com/users/HyukjinKwon" }
[]
closed
false
null
[]
null
[]
"2023-05-02T07:19:34Z"
"2023-05-02T07:21:30Z"
"2023-05-02T07:21:30Z"
NONE
null
TBD
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5813/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5813/timeline
null
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/5813.diff", "html_url": "https://github.com/huggingface/datasets/pull/5813", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5813.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5813" }
true
https://api.github.com/repos/huggingface/datasets/issues/5812
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5812/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5812/comments
https://api.github.com/repos/huggingface/datasets/issues/5812/events
https://github.com/huggingface/datasets/issues/5812
1,691,798,169
I_kwDODunzps5k1sqZ
5,812
Cannot shuffle interleaved IterableDataset with "all_exhausted" stopping strategy
{ "avatar_url": "https://avatars.githubusercontent.com/u/15215732?v=4", "events_url": "https://api.github.com/users/off99555/events{/privacy}", "followers_url": "https://api.github.com/users/off99555/followers", "following_url": "https://api.github.com/users/off99555/following{/other_user}", "gists_url": "https://api.github.com/users/off99555/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/off99555", "id": 15215732, "login": "off99555", "node_id": "MDQ6VXNlcjE1MjE1NzMy", "organizations_url": "https://api.github.com/users/off99555/orgs", "received_events_url": "https://api.github.com/users/off99555/received_events", "repos_url": "https://api.github.com/users/off99555/repos", "site_admin": false, "starred_url": "https://api.github.com/users/off99555/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/off99555/subscriptions", "type": "User", "url": "https://api.github.com/users/off99555" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "fef2c0", "default": false, "description": "", "id": 3287858981, "name": "streaming", "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
[]
"2023-05-02T05:26:17Z"
"2023-05-04T14:24:51Z"
"2023-05-04T14:24:51Z"
NONE
null
### Describe the bug Shuffling interleaved `IterableDataset` with "all_exhausted" strategy yields non-exhaustive sampling. ### Steps to reproduce the bug ```py from datasets import IterableDataset, interleave_datasets def gen(bias, length): for i in range(length): yield dict(a=bias+i) seed = 42 probabilities = [0.2, 0.6, 0.2] d1 = IterableDataset.from_generator(lambda: gen(0, 3)) d2 = IterableDataset.from_generator(lambda: gen(10, 4)) d3 = IterableDataset.from_generator(lambda: gen(20, 3)) ds = interleave_datasets([d1, d2, d3], probabilities=probabilities, seed=seed, stopping_strategy='all_exhausted') ds = ds.shuffle(buffer_size=1000) for x in ds: print(x) ``` This code produces ``` {'a': 0} {'a': 22} {'a': 20} {'a': 21} {'a': 10} {'a': 1} ``` ### Expected behavior It should produce a longer list of examples to exhaust all the datasets. If you comment out the shuffle line, it will exhaust all the datasets properly. Here is the output if you comment out shuffling: ``` {'a': 10} {'a': 11} {'a': 20} {'a': 12} {'a': 0} {'a': 21} {'a': 13} {'a': 10} {'a': 1} {'a': 11} {'a': 12} {'a': 22} {'a': 13} {'a': 20} {'a': 10} {'a': 11} {'a': 12} {'a': 2} ``` ### Environment info - `datasets` version: 2.12.0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - PyArrow version: 9.0.0 - Pandas version: 1.5.3 This was run on Google Colab.
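On versions affected by this bug (before the fix in #5816), one possible workaround, sketched below and not an officially documented recipe, is to shuffle each source stream before interleaving, so the interleaved iterable carrying the `all_exhausted` strategy is never re-created by a later `shuffle` call. Note that this only approximates shuffling the mixture: examples are shuffled within each source, while the interleaving order still follows the sampling probabilities.

```py
from datasets import IterableDataset, interleave_datasets

def gen(bias, length):
    for i in range(length):
        yield dict(a=bias + i)

seed = 42
d1 = IterableDataset.from_generator(lambda: gen(0, 3)).shuffle(seed=seed, buffer_size=1000)
d2 = IterableDataset.from_generator(lambda: gen(10, 4)).shuffle(seed=seed, buffer_size=1000)
d3 = IterableDataset.from_generator(lambda: gen(20, 3)).shuffle(seed=seed, buffer_size=1000)

# Shuffling happens per source, so the stopping strategy of the interleaved
# dataset is left untouched and all three sources are still exhausted.
ds = interleave_datasets(
    [d1, d2, d3],
    probabilities=[0.2, 0.6, 0.2],
    seed=seed,
    stopping_strategy="all_exhausted",
)
for x in ds:
    print(x)
```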
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5812/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5812/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5811
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5811/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5811/comments
https://api.github.com/repos/huggingface/datasets/issues/5811/events
https://github.com/huggingface/datasets/issues/5811
1,689,919,046
I_kwDODunzps5kuh5G
5,811
load_dataset: TypeError: 'NoneType' object is not callable, on local dataset filename changes
{ "avatar_url": "https://avatars.githubusercontent.com/u/50685483?v=4", "events_url": "https://api.github.com/users/durapensa/events{/privacy}", "followers_url": "https://api.github.com/users/durapensa/followers", "following_url": "https://api.github.com/users/durapensa/following{/other_user}", "gists_url": "https://api.github.com/users/durapensa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/durapensa", "id": 50685483, "login": "durapensa", "node_id": "MDQ6VXNlcjUwNjg1NDgz", "organizations_url": "https://api.github.com/users/durapensa/orgs", "received_events_url": "https://api.github.com/users/durapensa/received_events", "repos_url": "https://api.github.com/users/durapensa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/durapensa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/durapensa/subscriptions", "type": "User", "url": "https://api.github.com/users/durapensa" }
[]
open
false
null
[]
null
[ "This error means a `DatasetBuilder` subclass that generates the dataset could not be found inside the script, so make sure `dushowxa-characters/dushowxa-characters.py `is a valid dataset script (assuming `path_or_dataset` is `dushowxa-characters`)\r\n\r\nAlso, we should improve the error to make it more obvious what the problem is." ]
"2023-04-30T13:27:17Z"
"2023-05-05T17:44:03Z"
null
NONE
null
### Describe the bug I've adapted Databricks' [train_dolly.py](/databrickslabs/dolly/blob/master/train_dolly.py) to train using a local dataset, which has been working. Upon changing the filenames of the `.json` & `.py` files in my local dataset directory, `dataset = load_dataset(path_or_dataset)["train"]` throws the error: ```python 2023-04-30 09:10:52 INFO [training.trainer] Loading dataset from dushowxa-characters Traceback (most recent call last): File "/data/dushowxa-dolly/train_dushowxa.py", line 26, in <module> load_training_dataset() File "/data/dushowxa-dolly/training/trainer.py", line 89, in load_training_dataset dataset = load_dataset(path_or_dataset)["train"] File "/data/dushowxa-dolly/.venv/lib/python3.10/site-packages/datasets/load.py", line 1773, in load_dataset builder_instance = load_dataset_builder( File "/data/dushowxa-dolly/.venv/lib/python3.10/site-packages/datasets/load.py", line 1528, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( TypeError: 'NoneType' object is not callable ``` The local dataset filenames were of the form `dushowxa-characters/expanse-dushowxa-characters.json` and are now of the form `dushowxa-characters/dushowxa-characters.json` (the word `expanse-` was removed from the filenames). Is this perhaps a dataset caching issue? I have attempted to manually clear caches, but to no effect: ```sh rm -rfv ~/.cache/huggingface/datasets/* rm -rfv ~/.cache/huggingface/modules/* ``` ### Steps to reproduce the bug Run `python3 train_dushowxa.py` (adapted from Databricks' [train_dolly.py](/databrickslabs/dolly/blob/master/train_dolly.py)). ### Expected behavior Training succeeds as it did before the local dataset filenames were changed. ### Environment info Ubuntu 22.04, Python 3.10.6, venv ```python accelerate>=0.16.0,<1 click>=8.0.4,<9 datasets>=2.10.0,<3 deepspeed>=0.9.0,<1 transformers[torch]>=4.28.1,<5 langchain>=0.0.139 ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5811/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5811/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5810
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5810/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5810/comments
https://api.github.com/repos/huggingface/datasets/issues/5810/events
https://github.com/huggingface/datasets/pull/5810
1,689,917,822
PR_kwDODunzps5PdJHI
5,810
Add `fn_kwargs` to `map` and `filter` of `IterableDataset` and `IterableDatasetDict`
{ "avatar_url": "https://avatars.githubusercontent.com/u/3927621?v=4", "events_url": "https://api.github.com/users/yuukicammy/events{/privacy}", "followers_url": "https://api.github.com/users/yuukicammy/followers", "following_url": "https://api.github.com/users/yuukicammy/following{/other_user}", "gists_url": "https://api.github.com/users/yuukicammy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yuukicammy", "id": 3927621, "login": "yuukicammy", "node_id": "MDQ6VXNlcjM5Mjc2MjE=", "organizations_url": "https://api.github.com/users/yuukicammy/orgs", "received_events_url": "https://api.github.com/users/yuukicammy/received_events", "repos_url": "https://api.github.com/users/yuukicammy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yuukicammy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuukicammy/subscriptions", "type": "User", "url": "https://api.github.com/users/yuukicammy" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Sorry, the local test passed because it was inadvertently testing the main branch. I am currently fixing where the test failed.", "- I have fixed the bug and addressed the above two points.\r\n- I have tested locally and confirmed that the test passes.\r\n\r\nPlease check the contents. @lhoestq \r\n\r\n5715a7e64bdd2951e6705aee58d592392e1538d6", "Cool ! You can run `make style` to fix code formatting to fix the ci", "I had forgotten about it. I did it. @lhoestq \r\n00248926a37c6f1387614aa388c36fdc105a59f5", "Thanks for putting this together @yuukicammy ! Looking forward to using this new addition ASAP. \r\n@lhoestq - sorry to bother you with this, but if this looks good to you, any chance we could get this merged in? \r\n\r\nThanks again to you both! ", "Yup there's just one test to remove and we can merge", "Sorry for my understanding wrong! Correspondence has been addressed. @lhoestq \r\n ca511b7b29fdde51ffd69b58bda79220472e9e94\r\n\r\nThanks for your comment! @brianhill11 ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006788 / 0.011353 (-0.004564) | 0.004372 / 0.011008 (-0.006636) | 0.097746 / 0.038508 (0.059238) | 0.034858 / 0.023109 (0.011749) | 0.298122 / 0.275898 (0.022224) | 0.335272 / 0.323480 (0.011792) | 0.005810 / 0.007986 (-0.002175) | 0.004944 / 0.004328 (0.000616) | 0.072352 / 0.004250 (0.068101) | 0.041730 / 0.037052 (0.004678) | 0.316482 / 0.258489 (0.057992) | 0.338710 / 0.293841 (0.044869) | 0.027975 / 0.128546 (-0.100571) | 0.008746 / 0.075646 (-0.066901) | 0.329336 / 0.419271 (-0.089935) | 0.051327 / 0.043533 (0.007794) | 0.300695 / 0.255139 (0.045556) | 0.322813 / 0.283200 (0.039613) | 0.101133 / 0.141683 (-0.040550) | 1.422767 / 1.452155 (-0.029388) | 1.538364 / 1.492716 (0.045648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.016698 / 0.018006 (-0.001308) | 0.447042 / 0.000490 (0.446552) | 0.007609 / 0.000200 (0.007409) | 0.000277 / 0.000054 (0.000223) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026732 / 0.037411 (-0.010679) | 0.108295 / 0.014526 (0.093769) | 0.116905 / 0.176557 (-0.059652) | 0.173166 / 0.737135 (-0.563969) | 0.122560 / 0.296338 (-0.173779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.394893 / 0.215209 (0.179683) | 3.950314 / 2.077655 (1.872659) | 1.780576 / 1.504120 (0.276456) | 1.579855 / 1.541195 (0.038660) | 1.711197 / 1.468490 (0.242707) | 0.521469 / 4.584777 (-4.063308) | 3.838850 / 3.745712 (0.093138) | 3.101095 / 5.269862 (-2.168767) | 1.531574 / 4.565676 (-3.034102) | 0.065291 / 0.424275 (-0.358984) | 0.011979 / 0.007607 (0.004372) | 0.496543 / 0.226044 (0.270498) | 4.965446 / 2.268929 (2.696517) | 2.250788 / 55.444624 (-53.193837) | 1.923231 / 6.876477 (-4.953245) | 2.075372 / 2.142072 (-0.066700) | 0.638708 / 4.805227 (-4.166519) | 0.142048 / 6.500664 (-6.358616) | 0.064225 / 0.075469 (-0.011244) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211799 / 1.841788 (-0.629989) | 14.791822 / 8.074308 (6.717514) | 14.274993 / 10.191392 (4.083601) | 0.163942 / 0.680424 (-0.516482) | 0.017541 / 0.534201 (-0.516660) | 0.396440 / 0.579283 (-0.182843) | 0.427502 / 0.434364 (-0.006861) | 0.494273 / 0.540337 (-0.046064) | 0.586877 / 1.386936 (-0.800059) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006846 / 0.011353 (-0.004506) | 0.004854 / 0.011008 
(-0.006154) | 0.075654 / 0.038508 (0.037146) | 0.034295 / 0.023109 (0.011186) | 0.378095 / 0.275898 (0.102197) | 0.407833 / 0.323480 (0.084353) | 0.006155 / 0.007986 (-0.001830) | 0.004259 / 0.004328 (-0.000070) | 0.076195 / 0.004250 (0.071944) | 0.051901 / 0.037052 (0.014849) | 0.375027 / 0.258489 (0.116538) | 0.428189 / 0.293841 (0.134348) | 0.028814 / 0.128546 (-0.099733) | 0.009209 / 0.075646 (-0.066438) | 0.083681 / 0.419271 (-0.335591) | 0.049158 / 0.043533 (0.005625) | 0.366669 / 0.255139 (0.111530) | 0.388767 / 0.283200 (0.105568) | 0.107837 / 0.141683 (-0.033845) | 1.476354 / 1.452155 (0.024199) | 1.580160 / 1.492716 (0.087443) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218900 / 0.018006 (0.200894) | 0.445475 / 0.000490 (0.444985) | 0.000423 / 0.000200 (0.000223) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029740 / 0.037411 (-0.007671) | 0.115192 / 0.014526 (0.100666) | 0.122439 / 0.176557 (-0.054118) | 0.170639 / 0.737135 (-0.566496) | 0.128085 / 0.296338 (-0.168254) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437745 / 0.215209 (0.222536) | 4.385695 / 2.077655 (2.308040) | 2.189893 / 1.504120 (0.685773) | 2.023160 / 1.541195 (0.481965) | 2.112798 / 1.468490 (0.644308) | 0.522497 / 4.584777 (-4.062280) | 3.881356 / 3.745712 (0.135644) | 3.206090 / 5.269862 (-2.063772) | 1.308241 / 4.565676 (-3.257435) | 0.065635 / 0.424275 (-0.358640) | 0.012288 / 0.007607 (0.004681) | 0.537265 / 0.226044 (0.311220) | 5.361641 / 2.268929 (3.092712) | 2.638941 / 55.444624 (-52.805684) | 2.344717 / 6.876477 (-4.531759) | 2.437619 / 2.142072 (0.295546) | 0.645079 / 4.805227 (-4.160149) | 0.143852 / 6.500664 (-6.356812) | 0.065796 / 0.075469 (-0.009673) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276588 / 1.841788 (-0.565200) | 15.239396 / 8.074308 (7.165088) | 13.150591 / 10.191392 (2.959199) | 0.163635 / 0.680424 (-0.516789) | 0.017533 / 0.534201 (-0.516668) | 0.397659 / 0.579283 (-0.181624) | 0.425589 / 0.434364 (-0.008774) | 0.466570 / 0.540337 (-0.073768) | 0.563953 / 1.386936 (-0.822983) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#807d5c5ed4f8db7761b92bed498b2193acce8fb7 \"CML watermark\")\n" ]
"2023-04-30T13:23:01Z"
"2023-05-22T08:12:39Z"
"2023-05-22T08:05:31Z"
CONTRIBUTOR
null
# Overview I've added an argument `fn_kwargs` for the map and filter methods of the `IterableDataset` and `IterableDatasetDict` classes. # Details Currently, the map and filter methods of some classes related to `IterableDataset` do not allow specifying the arguments passed to the function. This pull request adds `fn_kwargs` to pass arguments to the mapping function. This allows users to preprocess data more flexibly. Added `fn_kwargs` to the following classes and methods (description of the argument is also added). 1. class `FilteredExamplesIterable` 2. method `filter` of class `IterableDataset` 3. method `map` of class `IterableDatasetDict` 4. method `filter` of class `IterableDatasetDict` # Example of changes Here's an example of how to use the new functionality: ```python from datasets import IterableDatasetDict def preprocess_function(example, a=None, b=None): # do something return example dataset = IterableDatasetDict(...) dataset = dataset.map(preprocess_function, fn_kwargs={"a": 1, "b": 2}) ``` # Related Issues This pull request is related to the following issue: https://github.com/huggingface/datasets/issues/3444 . # Testing I have added unit tests to test the new functionality. In test_iterable_dataset.py - Added `test_filtered_examples_iterable_with_fn_kwargs` for [1](#details). - Added `test_iterable_dataset_filter` for [2](#details). - Added `test_iterable_dataset_map_with_fn_kwargs`. This is not a newly added feature, but was added because it was not tested. In test_dataset_dict.py - Added `_create_dummy_iterable_dataset` for [3](#details) and [4](#details). - Added `_create_dummy_iterable_dataset_dict` for [3](#details) and [4](#details). - Added `test_iterable_map` for [3](#details). - Added `test_iterable_filter` for [4](#details). Note that there is no test for `IterableDatasetDict` on the current main branch. I thought about writing tests for `IterableDatasetDict` in a new file, but I decided to add them to the test file for `DatasetDict` (test_dataset_dict.py). # Checklist - [x] Format the code. - [x] Added tests. - [x] Passed tests locally.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5810/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5810/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5810.diff", "html_url": "https://github.com/huggingface/datasets/pull/5810", "merged_at": "2023-05-22T08:05:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/5810.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5810" }
true
https://api.github.com/repos/huggingface/datasets/issues/5809
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5809/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5809/comments
https://api.github.com/repos/huggingface/datasets/issues/5809/events
https://github.com/huggingface/datasets/issues/5809
1,689,797,293
I_kwDODunzps5kuEKt
5,809
wiki_dpr details for Open Domain Question Answering tasks
{ "avatar_url": "https://avatars.githubusercontent.com/u/64122846?v=4", "events_url": "https://api.github.com/users/yulgok22/events{/privacy}", "followers_url": "https://api.github.com/users/yulgok22/followers", "following_url": "https://api.github.com/users/yulgok22/following{/other_user}", "gists_url": "https://api.github.com/users/yulgok22/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yulgok22", "id": 64122846, "login": "yulgok22", "node_id": "MDQ6VXNlcjY0MTIyODQ2", "organizations_url": "https://api.github.com/users/yulgok22/orgs", "received_events_url": "https://api.github.com/users/yulgok22/received_events", "repos_url": "https://api.github.com/users/yulgok22/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yulgok22/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yulgok22/subscriptions", "type": "User", "url": "https://api.github.com/users/yulgok22" }
[]
closed
false
null
[]
null
[ "Hi ! I don't remember exactly how it was done, but maybe you have to embed `f\"{title}<sep>{text}\"` ?\r\n\r\nUsing a HF tokenizer it corresponds to doing\r\n```python\r\ntokenized = tokenizer(titles, texts)\r\n```" ]
"2023-04-30T06:12:04Z"
"2023-07-21T14:11:00Z"
"2023-07-21T14:11:00Z"
NONE
null
Hey guys! Thanks for creating the wiki_dpr dataset! I am currently trying to combine wiki_dpr and my own datasets, but I don't know how to compute the embedding values the same way as wiki_dpr. As an experiment, I embedded the text of id="7" from wiki_dpr, but the result was very different from the embedding stored in wiki_dpr.
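To make the question concrete, here is a minimal sketch of how a passage embedding comparable to wiki_dpr's can be computed, following the suggestion in the maintainer's comment (title and text passed as a pair). The checkpoint name, placeholder title/text, and truncation settings are assumptions for illustration, not the confirmed wiki_dpr setup:

```python
# Minimal sketch: encode a passage with a DPR context encoder so the vector is
# comparable to the precomputed wiki_dpr embeddings. The checkpoint below is an
# assumption about which encoder was used to build wiki_dpr.
import torch
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

model_name = "facebook/dpr-ctx_encoder-single-nq-base"  # assumed checkpoint
tokenizer = DPRContextEncoderTokenizer.from_pretrained(model_name)
encoder = DPRContextEncoder.from_pretrained(model_name).eval()

title = "Example title"          # placeholder passage title
text = "Example passage text."   # placeholder passage body

# Title and text are encoded as a pair: "[CLS] title [SEP] text [SEP]"
inputs = tokenizer(title, text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    embedding = encoder(**inputs).pooler_output[0]  # 768-dimensional vector
```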
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5809/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5809/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5807
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5807/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5807/comments
https://api.github.com/repos/huggingface/datasets/issues/5807/events
https://github.com/huggingface/datasets/pull/5807
1,688,977,237
PR_kwDODunzps5PaKRE
5,807
Support parallelized downloading in load_dataset with Spark
{ "avatar_url": "https://avatars.githubusercontent.com/u/12763339?v=4", "events_url": "https://api.github.com/users/es94129/events{/privacy}", "followers_url": "https://api.github.com/users/es94129/followers", "following_url": "https://api.github.com/users/es94129/following{/other_user}", "gists_url": "https://api.github.com/users/es94129/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/es94129", "id": 12763339, "login": "es94129", "node_id": "MDQ6VXNlcjEyNzYzMzM5", "organizations_url": "https://api.github.com/users/es94129/orgs", "received_events_url": "https://api.github.com/users/es94129/received_events", "repos_url": "https://api.github.com/users/es94129/repos", "site_admin": false, "starred_url": "https://api.github.com/users/es94129/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/es94129/subscriptions", "type": "User", "url": "https://api.github.com/users/es94129" }
[]
closed
false
null
[]
null
[ "Hi @lhoestq or other maintainers, this is ready for review, could you please take a look?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5807). All of your documentation changes will be reflected on that endpoint.", "Per the discussion in #5798, will implement with `joblibspark` instead." ]
"2023-04-28T18:34:32Z"
"2023-05-25T16:54:14Z"
"2023-05-25T16:54:14Z"
CONTRIBUTOR
null
As proposed in https://github.com/huggingface/datasets/issues/5798, this adds support for parallelized downloading in `load_dataset` with Spark, which can speed up the process by distributing the workload across worker nodes. Parallelized dataset processing is not supported in this PR.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5807/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5807/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5807.diff", "html_url": "https://github.com/huggingface/datasets/pull/5807", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5807.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5807" }
true
https://api.github.com/repos/huggingface/datasets/issues/5806
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5806/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5806/comments
https://api.github.com/repos/huggingface/datasets/issues/5806/events
https://github.com/huggingface/datasets/issues/5806
1,688,598,095
I_kwDODunzps5kpfZP
5,806
Return the name of the currently loaded file in the load_dataset function.
{ "avatar_url": "https://avatars.githubusercontent.com/u/16948304?v=4", "events_url": "https://api.github.com/users/s-JoL/events{/privacy}", "followers_url": "https://api.github.com/users/s-JoL/followers", "following_url": "https://api.github.com/users/s-JoL/following{/other_user}", "gists_url": "https://api.github.com/users/s-JoL/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/s-JoL", "id": 16948304, "login": "s-JoL", "node_id": "MDQ6VXNlcjE2OTQ4MzA0", "organizations_url": "https://api.github.com/users/s-JoL/orgs", "received_events_url": "https://api.github.com/users/s-JoL/received_events", "repos_url": "https://api.github.com/users/s-JoL/repos", "site_admin": false, "starred_url": "https://api.github.com/users/s-JoL/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/s-JoL/subscriptions", "type": "User", "url": "https://api.github.com/users/s-JoL" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/49894149?v=4", "events_url": "https://api.github.com/users/tsabbir96/events{/privacy}", "followers_url": "https://api.github.com/users/tsabbir96/followers", "following_url": "https://api.github.com/users/tsabbir96/following{/other_user}", "gists_url": "https://api.github.com/users/tsabbir96/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tsabbir96", "id": 49894149, "login": "tsabbir96", "node_id": "MDQ6VXNlcjQ5ODk0MTQ5", "organizations_url": "https://api.github.com/users/tsabbir96/orgs", "received_events_url": "https://api.github.com/users/tsabbir96/received_events", "repos_url": "https://api.github.com/users/tsabbir96/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tsabbir96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tsabbir96/subscriptions", "type": "User", "url": "https://api.github.com/users/tsabbir96" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/49894149?v=4", "events_url": "https://api.github.com/users/tsabbir96/events{/privacy}", "followers_url": "https://api.github.com/users/tsabbir96/followers", "following_url": "https://api.github.com/users/tsabbir96/following{/other_user}", "gists_url": "https://api.github.com/users/tsabbir96/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tsabbir96", "id": 49894149, "login": "tsabbir96", "node_id": "MDQ6VXNlcjQ5ODk0MTQ5", "organizations_url": "https://api.github.com/users/tsabbir96/orgs", "received_events_url": "https://api.github.com/users/tsabbir96/received_events", "repos_url": "https://api.github.com/users/tsabbir96/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tsabbir96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tsabbir96/subscriptions", "type": "User", "url": "https://api.github.com/users/tsabbir96" } ]
null
[ "Implementing this makes sense (e.g., `tensorflow_datasets`' imagefolder returns image filenames). Also, in Datasets 3.0, we plan only to store the bytes of an image/audio, not its path, so this feature would be useful when the path info is still needed.", "Hey @mariosasko, Can I work on this issue, this one seems interesting to implement. I have contributed to jupyterlab recently, and would love to contribute here as well. ", "@tsabbir96 if you are planning to start working on this, you can take on this issue by writing a comment with only the keyword: #self-assign", "#self-assign", "@albertvillanova thank you for letting me contribute here. \r\n@albertvillanova @mariosasko As I am totally new to this repo, could you tell me something more about this issue or perhaps give me some idea on how I can proceed with it? Thanks!", "Hello there, is this issue resolved? @tsabbir96 are you still working on it? Otherwise I would love to give it a try", "@EduardoPach This issue is still relevant, so feel free to work on it.", "Hey @mariosasko, I've taken the time to take a look at how we load the datasets usually. My main question now is about the final solution.\r\n\r\nSo the idea is that whenever we load the datasets we also add a new column in the _generate_tables() method from the builders called filename (or file_name) that should be related files contained in each split, right?\r\n\r\nDo you have any suggestions of where I could add that? " ]
"2023-04-28T13:50:15Z"
"2023-07-28T22:08:18Z"
null
NONE
null
### Feature request Add an optional parameter `return_file_name` to the `load_dataset` function. When it is set to True, the function will include the name of the file corresponding to the current line as a feature in the returned output. ### Motivation When training large language models, machine failures may interrupt the training process. In such cases, it is common to load a previously saved checkpoint to resume training. I would like to be able to obtain the names of the data shards that have already been trained on, so that I can skip those parts of the data when resuming training to avoid overfitting and redundant training time. ### Your contribution I currently use a dataset in jsonl format, so I am primarily interested in the json format. I suggest adding the file name to the returned table here https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/json/json.py#L92.
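Until something like `return_file_name` exists, one possible workaround is to load each file separately, tag its rows with the file name, and concatenate the parts. This is only a sketch under assumed paths and a hypothetical `file_name` column, not the proposed implementation:

```python
# Sketch of a workaround: load each jsonl shard on its own, record the shard's
# file name in a new column, then concatenate everything into one dataset.
from pathlib import Path

from datasets import concatenate_datasets, load_dataset

shard_paths = sorted(Path("data").glob("*.jsonl"))  # hypothetical shard directory
parts = []
for shard_path in shard_paths:
    part = load_dataset("json", data_files=str(shard_path), split="train")
    part = part.add_column("file_name", [shard_path.name] * len(part))
    parts.append(part)

dataset = concatenate_datasets(parts)  # each row now carries its source file name
```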
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5806/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5806/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5805
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5805/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5805/comments
https://api.github.com/repos/huggingface/datasets/issues/5805/events
https://github.com/huggingface/datasets/issues/5805
1,688,558,577
I_kwDODunzps5kpVvx
5,805
Improve `Create a dataset` tutorial
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
open
false
null
[]
null
[ "I can work on this. The link to the tutorial seems to be broken though @polinaeterna. ", "@isunitha98selvan would be great, thank you! which link are you talking about? I think it should work: https://huggingface.co/docs/datasets/create_dataset" ]
"2023-04-28T13:26:22Z"
"2023-06-23T14:58:44Z"
null
CONTRIBUTOR
null
Our [tutorial on how to create a dataset](https://huggingface.co/docs/datasets/create_dataset) is a bit misleading. 1. In the **Folder-based builders** section it says that we have two folder-based builders as standard builders, but we also have similar builders (datasets that can be created from a directory with data in the required format) for `csv`, `json/jsonl`, `parquet` and `txt` files. We have info about these loaders in a separate [guide for loading](https://huggingface.co/docs/datasets/loading#local-and-remote-files), but it's worth briefly mentioning them in this introductory tutorial because they are more common, and for consistency. It would be helpful to add a link to the full guide. 2. The **From local files** section lists methods for creating a dataset from in-memory data, which are also described in the [loading guide](https://huggingface.co/docs/datasets/loading#inmemory-data). Maybe we should actually rethink and restructure this tutorial somehow.
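For reference, a short sketch of the builders mentioned in point 1 that the tutorial could show side by side; all paths below are placeholders:

```python
# Folder-based builders and file-format builders that load_dataset supports out
# of the box; the data_dir / data_files values are placeholders.
from datasets import load_dataset

images = load_dataset("imagefolder", data_dir="path/to/images")      # folder-based
audio = load_dataset("audiofolder", data_dir="path/to/audio")        # folder-based
csv_ds = load_dataset("csv", data_files="path/to/data.csv")
json_ds = load_dataset("json", data_files="path/to/data.jsonl")
parquet_ds = load_dataset("parquet", data_files="path/to/data.parquet")
text_ds = load_dataset("text", data_files="path/to/data.txt")
```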
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5805/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5805/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5804
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5804/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5804/comments
https://api.github.com/repos/huggingface/datasets/issues/5804/events
https://github.com/huggingface/datasets/pull/5804
1,688,285,666
PR_kwDODunzps5PX0Dk
5,804
Set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5804). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006448 / 0.011353 (-0.004905) | 0.004440 / 0.011008 (-0.006568) | 0.097837 / 0.038508 (0.059328) | 0.027754 / 0.023109 (0.004645) | 0.306462 / 0.275898 (0.030564) | 0.332454 / 0.323480 (0.008975) | 0.004984 / 0.007986 (-0.003001) | 0.004703 / 0.004328 (0.000375) | 0.075213 / 0.004250 (0.070962) | 0.036524 / 0.037052 (-0.000529) | 0.310149 / 0.258489 (0.051659) | 0.346392 / 0.293841 (0.052552) | 0.031012 / 0.128546 (-0.097534) | 0.011598 / 0.075646 (-0.064049) | 0.323066 / 0.419271 (-0.096206) | 0.042945 / 0.043533 (-0.000588) | 0.302286 / 0.255139 (0.047147) | 0.327813 / 0.283200 (0.044614) | 0.092540 / 0.141683 (-0.049143) | 1.532893 / 1.452155 (0.080739) | 1.556676 / 1.492716 (0.063960) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195126 / 0.018006 (0.177120) | 0.399623 / 0.000490 (0.399133) | 0.003176 / 0.000200 (0.002976) | 0.000068 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023612 / 0.037411 (-0.013799) | 0.097794 / 0.014526 (0.083268) | 0.104665 / 0.176557 (-0.071891) | 0.167145 / 0.737135 (-0.569990) | 0.108769 / 0.296338 (-0.187570) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.437818 / 0.215209 (0.222608) | 4.354896 / 2.077655 (2.277242) | 2.092832 / 1.504120 (0.588712) | 1.957630 / 1.541195 (0.416435) | 2.033135 / 1.468490 (0.564645) | 0.702316 / 4.584777 (-3.882461) | 3.448035 / 3.745712 (-0.297678) | 1.906762 / 5.269862 (-3.363100) | 1.253274 / 4.565676 (-3.312402) | 0.082486 / 0.424275 (-0.341789) | 0.012442 / 0.007607 (0.004835) | 0.532096 / 0.226044 (0.306052) | 5.366580 / 2.268929 (3.097652) | 2.441904 / 55.444624 (-53.002720) | 2.112116 / 6.876477 (-4.764361) | 2.185471 / 2.142072 (0.043398) | 0.797905 / 4.805227 (-4.007322) | 0.149811 / 6.500664 (-6.350853) | 0.066507 / 0.075469 (-0.008962) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206300 / 1.841788 (-0.635487) | 13.620851 / 8.074308 (5.546543) | 14.190666 / 10.191392 (3.999274) | 0.142343 / 0.680424 (-0.538081) | 0.016867 / 0.534201 (-0.517334) | 0.381557 / 0.579283 (-0.197726) | 0.373935 / 0.434364 (-0.060429) | 0.437856 / 0.540337 (-0.102481) | 0.525235 / 1.386936 (-0.861701) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006598 / 0.011353 (-0.004755) | 0.004487 / 0.011008 (-0.006522) | 0.077582 / 0.038508 (0.039073) | 0.028008 / 0.023109 (0.004899) | 0.341602 / 0.275898 (0.065704) | 0.377105 / 0.323480 (0.053625) | 0.004999 / 0.007986 (-0.002986) | 0.004791 / 0.004328 (0.000462) | 0.076418 / 0.004250 (0.072167) | 0.038347 / 0.037052 (0.001295) | 0.343196 / 0.258489 (0.084707) | 0.382459 / 0.293841 (0.088618) | 0.030597 / 0.128546 (-0.097950) | 0.011579 / 0.075646 (-0.064067) | 0.085876 / 0.419271 (-0.333396) | 0.043241 / 0.043533 (-0.000292) | 0.343754 / 0.255139 (0.088615) | 0.380689 / 0.283200 (0.097489) | 0.096015 / 0.141683 (-0.045668) | 1.464419 / 1.452155 (0.012264) | 1.574010 / 1.492716 (0.081294) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.156433 / 0.018006 (0.138427) | 0.403179 / 0.000490 (0.402690) | 0.002415 / 0.000200 
(0.002215) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024946 / 0.037411 (-0.012465) | 0.100568 / 0.014526 (0.086042) | 0.106440 / 0.176557 (-0.070117) | 0.158457 / 0.737135 (-0.578678) | 0.110774 / 0.296338 (-0.185564) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434734 / 0.215209 (0.219525) | 4.343874 / 2.077655 (2.266220) | 2.059759 / 1.504120 (0.555639) | 1.855124 / 1.541195 (0.313930) | 1.908567 / 1.468490 (0.440077) | 0.695283 / 4.584777 (-3.889494) | 3.347724 / 3.745712 (-0.397988) | 2.979498 / 5.269862 (-2.290364) | 1.532040 / 4.565676 (-3.033636) | 0.083021 / 0.424275 (-0.341254) | 0.012522 / 0.007607 (0.004915) | 0.540934 / 0.226044 (0.314890) | 5.385690 / 2.268929 (3.116762) | 2.507409 / 55.444624 (-52.937216) | 2.160537 / 6.876477 (-4.715939) | 2.269195 / 2.142072 (0.127123) | 0.804718 / 4.805227 (-4.000509) | 0.152432 / 6.500664 (-6.348232) | 0.068783 / 0.075469 (-0.006686) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294698 / 1.841788 (-0.547090) | 14.152792 / 8.074308 (6.078484) | 14.233132 / 10.191392 (4.041740) | 0.143655 / 0.680424 (-0.536768) | 0.016844 / 0.534201 (-0.517357) | 0.380246 / 0.579283 (-0.199037) | 0.381730 / 0.434364 (-0.052633) | 0.456838 / 0.540337 (-0.083499) | 0.543677 / 1.386936 (-0.843259) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b28d5610887f2e107765f5f1557679184db08214 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008586 / 0.011353 (-0.002767) | 0.005886 / 0.011008 (-0.005122) | 0.114522 / 0.038508 (0.076014) | 0.040966 / 0.023109 (0.017857) | 0.366655 / 0.275898 (0.090757) | 0.408765 / 0.323480 (0.085285) | 0.006822 / 0.007986 (-0.001164) | 0.004508 / 0.004328 (0.000180) | 0.084715 / 0.004250 (0.080465) | 0.054007 / 0.037052 (0.016954) | 0.380500 / 0.258489 (0.122011) | 0.410377 / 0.293841 (0.116536) | 0.041040 / 0.128546 (-0.087507) | 0.013940 / 0.075646 (-0.061707) | 0.398456 / 0.419271 (-0.020816) | 0.059315 / 0.043533 (0.015782) | 0.353640 / 0.255139 (0.098501) | 0.388682 / 0.283200 (0.105482) | 0.121744 / 0.141683 (-0.019939) | 1.729306 / 1.452155 (0.277151) | 1.824768 / 1.492716 (0.332052) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228806 / 0.018006 (0.210800) | 0.492790 / 0.000490 (0.492300) | 0.010815 / 0.000200 (0.010615) | 0.000372 / 0.000054 (0.000318) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031750 / 0.037411 (-0.005662) | 0.127160 / 0.014526 (0.112635) | 0.136717 / 0.176557 (-0.039839) | 0.205590 / 0.737135 (-0.531545) | 0.142596 / 0.296338 (-0.153742) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.486419 / 0.215209 (0.271210) | 4.858572 / 2.077655 (2.780918) | 2.173867 / 1.504120 (0.669747) | 1.934619 / 1.541195 (0.393424) | 2.104185 / 1.468490 (0.635695) | 0.837913 / 4.584777 (-3.746864) | 4.552192 / 3.745712 (0.806480) | 2.565040 / 5.269862 (-2.704822) | 1.808499 / 4.565676 (-2.757178) | 0.103283 / 0.424275 (-0.320993) | 0.015040 / 0.007607 (0.007433) | 0.602325 / 0.226044 (0.376281) | 6.038655 / 2.268929 (3.769727) | 2.759789 / 55.444624 (-52.684835) | 2.330990 / 6.876477 (-4.545487) | 2.404111 / 2.142072 (0.262038) | 1.011637 / 4.805227 (-3.793590) | 0.202142 / 6.500664 (-6.298522) | 0.079496 / 0.075469 (0.004026) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.429543 / 1.841788 (-0.412245) | 18.052409 / 8.074308 (9.978101) | 16.989154 / 10.191392 (6.797762) | 0.208981 / 0.680424 (-0.471443) | 0.020490 / 0.534201 (-0.513711) | 0.502746 / 0.579283 (-0.076537) | 0.491769 / 
0.434364 (0.057405) | 0.581970 / 0.540337 (0.041632) | 0.695816 / 1.386936 (-0.691120) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008449 / 0.011353 (-0.002904) | 0.006633 / 0.011008 (-0.004375) | 0.088638 / 0.038508 (0.050130) | 0.040013 / 0.023109 (0.016904) | 0.413108 / 0.275898 (0.137210) | 0.446310 / 0.323480 (0.122830) | 0.006515 / 0.007986 (-0.001471) | 0.006223 / 0.004328 (0.001894) | 0.089823 / 0.004250 (0.085573) | 0.052029 / 0.037052 (0.014977) | 0.407263 / 0.258489 (0.148774) | 0.449416 / 0.293841 (0.155576) | 0.041810 / 0.128546 (-0.086736) | 0.014604 / 0.075646 (-0.061042) | 0.103728 / 0.419271 (-0.315543) | 0.058212 / 0.043533 (0.014679) | 0.408936 / 0.255139 (0.153797) | 0.436727 / 0.283200 (0.153528) | 0.124344 / 0.141683 (-0.017339) | 1.752112 / 1.452155 (0.299957) | 1.859104 / 1.492716 (0.366387) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231172 / 0.018006 (0.213166) | 0.502974 / 0.000490 (0.502485) | 0.005586 / 0.000200 (0.005386) | 0.000137 / 0.000054 (0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034097 / 0.037411 (-0.003314) | 0.133780 / 0.014526 (0.119254) | 0.142321 / 0.176557 (-0.034236) | 0.199807 / 0.737135 (-0.537329) | 0.150073 / 0.296338 (-0.146266) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.515658 / 0.215209 (0.300449) | 5.129783 / 2.077655 (3.052129) | 2.534767 / 1.504120 (1.030648) | 
2.352468 / 1.541195 (0.811274) | 2.430708 / 1.468490 (0.962218) | 0.850087 / 4.584777 (-3.734690) | 4.529622 / 3.745712 (0.783910) | 2.451986 / 5.269862 (-2.817876) | 1.569568 / 4.565676 (-2.996109) | 0.102907 / 0.424275 (-0.321368) | 0.014420 / 0.007607 (0.006813) | 0.635124 / 0.226044 (0.409080) | 6.260496 / 2.268929 (3.991568) | 3.094984 / 55.444624 (-52.349640) | 2.780629 / 6.876477 (-4.095847) | 2.947620 / 2.142072 (0.805548) | 1.002397 / 4.805227 (-3.802830) | 0.200502 / 6.500664 (-6.300162) | 0.076577 / 0.075469 (0.001107) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.505958 / 1.841788 (-0.335829) | 18.364986 / 8.074308 (10.290678) | 16.707214 / 10.191392 (6.515822) | 0.210976 / 0.680424 (-0.469447) | 0.022077 / 0.534201 (-0.512124) | 0.516174 / 0.579283 (-0.063109) | 0.502469 / 0.434364 (0.068105) | 0.626790 / 0.540337 (0.086453) | 0.747230 / 1.386936 (-0.639706) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bc5fef5b6d91f009e4101684adcb374df2c170f6 \"CML watermark\")\n" ]
"2023-04-28T10:10:01Z"
"2023-04-28T10:18:51Z"
"2023-04-28T10:10:29Z"
MEMBER
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5804/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5804/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5804.diff", "html_url": "https://github.com/huggingface/datasets/pull/5804", "merged_at": "2023-04-28T10:10:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/5804.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5804" }
true
https://api.github.com/repos/huggingface/datasets/issues/5803
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5803/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5803/comments
https://api.github.com/repos/huggingface/datasets/issues/5803/events
https://github.com/huggingface/datasets/pull/5803
1,688,256,290
PR_kwDODunzps5PXtte
5,803
Release: 2.12.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5803). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008303 / 0.011353 (-0.003050) | 0.005681 / 0.011008 (-0.005327) | 0.111830 / 0.038508 (0.073322) | 0.039222 / 0.023109 (0.016112) | 0.336773 / 0.275898 (0.060875) | 0.376673 / 0.323480 (0.053193) | 0.006756 / 0.007986 (-0.001230) | 0.006078 / 0.004328 (0.001749) | 0.083552 / 0.004250 (0.079301) | 0.054430 / 0.037052 (0.017377) | 0.337310 / 0.258489 (0.078821) | 0.386138 / 0.293841 (0.092297) | 0.040068 / 0.128546 (-0.088478) | 0.013895 / 0.075646 (-0.061751) | 0.384174 / 0.419271 (-0.035097) | 0.058244 / 0.043533 (0.014711) | 0.342410 / 0.255139 (0.087271) | 0.362417 / 0.283200 (0.079217) | 0.123470 / 0.141683 (-0.018213) | 1.662938 / 1.452155 (0.210784) | 1.786488 / 1.492716 (0.293771) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232629 / 0.018006 (0.214622) | 0.478252 / 0.000490 (0.477762) | 0.008519 / 0.000200 (0.008319) | 0.000111 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031222 / 0.037411 (-0.006190) | 0.125875 / 0.014526 (0.111350) | 0.138995 / 0.176557 (-0.037562) | 0.213073 / 0.737135 (-0.524062) | 0.141848 / 0.296338 (-0.154490) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.463648 / 0.215209 (0.248439) | 4.582969 / 2.077655 (2.505314) | 2.104622 / 1.504120 (0.600502) | 1.887697 / 1.541195 (0.346502) | 1.946096 / 1.468490 (0.477606) | 0.809008 / 4.584777 (-3.775769) | 4.527871 / 3.745712 (0.782159) | 4.862721 / 5.269862 (-0.407141) | 2.423257 / 4.565676 (-2.142419) | 0.101080 / 0.424275 (-0.323196) | 0.014767 / 0.007607 (0.007160) | 0.574471 / 0.226044 (0.348427) | 5.746445 / 2.268929 (3.477516) | 2.682584 / 55.444624 (-52.762040) | 2.320113 / 6.876477 (-4.556364) | 2.474530 / 2.142072 (0.332458) | 0.992979 / 4.805227 (-3.812249) | 0.200812 / 6.500664 (-6.299852) | 0.076291 / 0.075469 (0.000822) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.395533 / 1.841788 (-0.446254) | 17.418803 / 8.074308 (9.344495) | 16.584875 / 10.191392 (6.393483) | 0.167739 / 0.680424 (-0.512685) | 0.020923 / 0.534201 (-0.513278) | 0.500788 / 0.579283 (-0.078496) | 0.510270 / 0.434364 (0.075906) | 0.589608 / 0.540337 (0.049270) | 0.694233 / 1.386936 (-0.692703) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008440 / 0.011353 (-0.002913) | 0.005871 / 0.011008 (-0.005137) | 0.085805 / 0.038508 (0.047297) | 0.039324 / 0.023109 (0.016215) | 0.400587 / 0.275898 (0.124689) | 0.431729 / 0.323480 (0.108249) | 0.006557 / 0.007986 (-0.001429) | 0.005778 / 0.004328 (0.001450) | 0.084394 / 0.004250 (0.080144) | 0.055274 / 0.037052 (0.018222) | 0.410568 / 0.258489 (0.152079) | 0.439952 / 0.293841 (0.146111) | 0.040335 / 0.128546 (-0.088211) | 0.013968 / 0.075646 (-0.061679) | 0.098765 / 0.419271 (-0.320507) | 0.055897 / 0.043533 (0.012364) | 0.387584 / 0.255139 (0.132445) | 0.412568 / 0.283200 (0.129368) | 0.120393 / 0.141683 (-0.021290) | 1.730996 / 1.452155 (0.278841) | 1.821538 / 1.492716 (0.328822) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245688 / 0.018006 (0.227682) | 0.484888 / 0.000490 (0.484398) | 0.000485 / 0.000200 
(0.000285) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032340 / 0.037411 (-0.005072) | 0.130819 / 0.014526 (0.116293) | 0.138491 / 0.176557 (-0.038065) | 0.196902 / 0.737135 (-0.540233) | 0.145404 / 0.296338 (-0.150935) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.487643 / 0.215209 (0.272434) | 4.818956 / 2.077655 (2.741301) | 2.332316 / 1.504120 (0.828196) | 2.102018 / 1.541195 (0.560823) | 2.156743 / 1.468490 (0.688253) | 0.803365 / 4.584777 (-3.781412) | 4.308561 / 3.745712 (0.562849) | 2.373331 / 5.269862 (-2.896530) | 1.539474 / 4.565676 (-3.026202) | 0.099081 / 0.424275 (-0.325194) | 0.014627 / 0.007607 (0.007020) | 0.609883 / 0.226044 (0.383838) | 6.092402 / 2.268929 (3.823474) | 2.858137 / 55.444624 (-52.586488) | 2.463256 / 6.876477 (-4.413220) | 2.637048 / 2.142072 (0.494976) | 0.959552 / 4.805227 (-3.845676) | 0.194170 / 6.500664 (-6.306495) | 0.075231 / 0.075469 (-0.000238) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.516502 / 1.841788 (-0.325285) | 18.077893 / 8.074308 (10.003585) | 16.507961 / 10.191392 (6.316569) | 0.171643 / 0.680424 (-0.508780) | 0.020378 / 0.534201 (-0.513823) | 0.491508 / 0.579283 (-0.087775) | 0.492136 / 0.434364 (0.057772) | 0.602258 / 0.540337 (0.061920) | 0.719882 / 1.386936 (-0.667054) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#330ac3e95fd3f2d61bac31b5b9c24399a5b54723 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006572 / 0.011353 (-0.004781) | 0.004647 / 0.011008 (-0.006362) | 0.098277 / 0.038508 (0.059769) | 0.027937 / 0.023109 (0.004828) | 0.339833 / 0.275898 (0.063935) | 0.398305 / 0.323480 (0.074825) | 0.005093 / 0.007986 (-0.002893) | 0.003374 / 0.004328 (-0.000954) | 0.075287 / 0.004250 (0.071037) | 0.037355 / 0.037052 (0.000303) | 0.339779 / 0.258489 (0.081290) | 0.403756 / 0.293841 (0.109915) | 0.030705 / 0.128546 (-0.097841) | 0.011596 / 0.075646 (-0.064050) | 0.323809 / 0.419271 (-0.095463) | 0.043357 / 0.043533 (-0.000176) | 0.342817 / 0.255139 (0.087678) | 0.386330 / 0.283200 (0.103130) | 0.088229 / 0.141683 (-0.053454) | 1.466017 / 1.452155 (0.013862) | 1.566551 / 1.492716 (0.073835) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196276 / 0.018006 (0.178269) | 0.420321 / 0.000490 (0.419831) | 0.002234 / 0.000200 (0.002034) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023999 / 0.037411 (-0.013412) | 0.095117 / 0.014526 (0.080592) | 0.102544 / 0.176557 (-0.074013) | 0.164796 / 0.737135 (-0.572340) | 0.107030 / 0.296338 (-0.189309) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429299 / 0.215209 (0.214089) | 4.272503 / 2.077655 (2.194849) | 2.101890 / 1.504120 (0.597771) | 1.978907 / 1.541195 (0.437713) | 2.008993 / 1.468490 (0.540503) | 0.695171 / 4.584777 (-3.889606) | 3.427050 / 3.745712 (-0.318662) | 1.892945 / 5.269862 (-3.376917) | 1.247156 / 4.565676 (-3.318521) | 0.082576 / 0.424275 (-0.341699) | 0.012526 / 0.007607 (0.004918) | 0.526338 / 0.226044 (0.300293) | 5.313855 / 2.268929 (3.044927) | 2.421134 / 55.444624 (-53.023490) | 2.072026 / 6.876477 (-4.804451) | 2.159846 / 2.142072 (0.017773) | 0.800753 / 4.805227 (-4.004474) | 0.150507 / 6.500664 (-6.350157) | 0.066378 / 0.075469 (-0.009091) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218709 / 1.841788 (-0.623079) | 13.649239 / 8.074308 (5.574931) | 13.952762 / 10.191392 (3.761370) | 0.141967 / 0.680424 (-0.538457) | 0.016443 / 0.534201 (-0.517758) | 0.380408 / 0.579283 (-0.198875) | 0.377693 / 
0.434364 (-0.056671) | 0.439819 / 0.540337 (-0.100518) | 0.529667 / 1.386936 (-0.857269) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006722 / 0.011353 (-0.004630) | 0.004495 / 0.011008 (-0.006513) | 0.075459 / 0.038508 (0.036951) | 0.028135 / 0.023109 (0.005026) | 0.349904 / 0.275898 (0.074006) | 0.390620 / 0.323480 (0.067140) | 0.005175 / 0.007986 (-0.002810) | 0.004720 / 0.004328 (0.000392) | 0.074243 / 0.004250 (0.069993) | 0.039084 / 0.037052 (0.002032) | 0.352486 / 0.258489 (0.093997) | 0.397549 / 0.293841 (0.103708) | 0.030596 / 0.128546 (-0.097950) | 0.011627 / 0.075646 (-0.064020) | 0.083394 / 0.419271 (-0.335878) | 0.042155 / 0.043533 (-0.001378) | 0.345668 / 0.255139 (0.090529) | 0.383474 / 0.283200 (0.100275) | 0.096530 / 0.141683 (-0.045153) | 1.493360 / 1.452155 (0.041206) | 1.572259 / 1.492716 (0.079543) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.162605 / 0.018006 (0.144599) | 0.409513 / 0.000490 (0.409023) | 0.002029 / 0.000200 (0.001829) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025824 / 0.037411 (-0.011588) | 0.102439 / 0.014526 (0.087913) | 0.109515 / 0.176557 (-0.067041) | 0.160650 / 0.737135 (-0.576486) | 0.112971 / 0.296338 (-0.183367) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433293 / 0.215209 (0.218084) | 4.340286 / 2.077655 (2.262631) | 2.055857 / 1.504120 (0.551737) | 
1.854451 / 1.541195 (0.313256) | 1.912752 / 1.468490 (0.444261) | 0.700076 / 4.584777 (-3.884701) | 3.361542 / 3.745712 (-0.384170) | 2.760204 / 5.269862 (-2.509658) | 1.477395 / 4.565676 (-3.088282) | 0.082868 / 0.424275 (-0.341407) | 0.012479 / 0.007607 (0.004872) | 0.532749 / 0.226044 (0.306704) | 5.323701 / 2.268929 (3.054772) | 2.509524 / 55.444624 (-52.935100) | 2.168668 / 6.876477 (-4.707809) | 2.259112 / 2.142072 (0.117040) | 0.806686 / 4.805227 (-3.998542) | 0.154620 / 6.500664 (-6.346044) | 0.068348 / 0.075469 (-0.007121) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.316512 / 1.841788 (-0.525276) | 14.158143 / 8.074308 (6.083835) | 14.110643 / 10.191392 (3.919251) | 0.143760 / 0.680424 (-0.536664) | 0.016851 / 0.534201 (-0.517350) | 0.376594 / 0.579283 (-0.202689) | 0.386957 / 0.434364 (-0.047407) | 0.466185 / 0.540337 (-0.074152) | 0.550269 / 1.386936 (-0.836667) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8e1af7b30c94ce77abd9de732f19198e197d900c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009457 / 0.011353 (-0.001896) | 0.006453 / 0.011008 (-0.004555) | 0.136392 / 0.038508 (0.097884) | 0.038378 / 0.023109 (0.015269) | 0.413171 / 0.275898 (0.137273) | 0.451605 / 0.323480 (0.128126) | 0.007123 / 0.007986 (-0.000863) | 0.006316 / 0.004328 (0.001987) | 0.103009 / 0.004250 (0.098758) | 0.049182 / 0.037052 (0.012130) | 0.398635 / 0.258489 (0.140146) | 0.463146 / 0.293841 (0.169305) | 0.056247 / 0.128546 (-0.072299) | 0.019589 / 0.075646 (-0.056058) | 0.475882 / 0.419271 (0.056610) | 0.094918 / 0.043533 (0.051385) | 0.416502 / 0.255139 (0.161363) | 0.447129 / 0.283200 (0.163929) | 0.133314 / 0.141683 (-0.008369) | 2.132888 / 1.452155 (0.680733) | 2.073383 / 1.492716 (0.580667) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273037 / 0.018006 (0.255030) | 0.625675 / 
0.000490 (0.625185) | 0.003449 / 0.000200 (0.003249) | 0.000185 / 0.000054 (0.000130) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031889 / 0.037411 (-0.005523) | 0.131673 / 0.014526 (0.117148) | 0.141575 / 0.176557 (-0.034982) | 0.214978 / 0.737135 (-0.522158) | 0.145586 / 0.296338 (-0.150752) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.711135 / 0.215209 (0.495926) | 7.162492 / 2.077655 (5.084837) | 2.906028 / 1.504120 (1.401908) | 2.488855 / 1.541195 (0.947660) | 2.574628 / 1.468490 (1.106138) | 1.587824 / 4.584777 (-2.996953) | 6.332962 / 3.745712 (2.587250) | 5.419578 / 5.269862 (0.149717) | 2.935413 / 4.565676 (-1.630263) | 0.169159 / 0.424275 (-0.255116) | 0.015358 / 0.007607 (0.007751) | 0.862036 / 0.226044 (0.635992) | 8.559256 / 2.268929 (6.290328) | 3.530756 / 55.444624 (-51.913868) | 2.626288 / 6.876477 (-4.250188) | 2.770063 / 2.142072 (0.627990) | 1.500116 / 4.805227 (-3.305112) | 0.265109 / 6.500664 (-6.235555) | 0.084944 / 0.075469 (0.009475) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631060 / 1.841788 (-0.210728) | 19.022827 / 8.074308 (10.948519) | 22.973632 / 10.191392 (12.782240) | 0.296265 / 0.680424 (-0.384158) | 0.032317 / 0.534201 (-0.501884) | 0.624171 / 0.579283 (0.044888) | 0.690643 / 0.434364 (0.256279) | 0.691206 / 0.540337 (0.150869) | 0.758855 / 1.386936 (-0.628081) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | 
write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009441 / 0.011353 (-0.001912) | 0.006270 / 0.011008 (-0.004739) | 0.110284 / 0.038508 (0.071776) | 0.035952 / 0.023109 (0.012842) | 0.521894 / 0.275898 (0.245996) | 0.582624 / 0.323480 (0.259144) | 0.011400 / 0.007986 (0.003414) | 0.004677 / 0.004328 (0.000348) | 0.115721 / 0.004250 (0.111470) | 0.048521 / 0.037052 (0.011469) | 0.497142 / 0.258489 (0.238653) | 0.573733 / 0.293841 (0.279892) | 0.055788 / 0.128546 (-0.072759) | 0.020949 / 0.075646 (-0.054697) | 0.132968 / 0.419271 (-0.286303) | 0.063045 / 0.043533 (0.019512) | 0.537769 / 0.255139 (0.282630) | 0.527560 / 0.283200 (0.244361) | 0.123756 / 0.141683 (-0.017927) | 1.994111 / 1.452155 (0.541956) | 2.104623 / 1.492716 (0.611907) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.279057 / 0.018006 (0.261051) | 0.537342 / 0.000490 (0.536852) | 0.007782 / 0.000200 (0.007582) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032018 / 0.037411 (-0.005394) | 0.133456 / 0.014526 (0.118930) | 0.142039 / 0.176557 (-0.034517) | 0.213769 / 0.737135 (-0.523366) | 0.143811 / 0.296338 (-0.152527) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.680142 / 0.215209 (0.464933) | 6.450439 / 2.077655 (4.372784) | 2.820724 / 1.504120 (1.316604) | 2.520407 / 1.541195 (0.979212) | 2.568972 / 1.468490 (1.100482) | 1.250584 / 4.584777 (-3.334193) | 6.108222 / 3.745712 (2.362509) | 3.065965 / 5.269862 (-2.203897) | 2.108675 / 4.565676 (-2.457002) | 0.167870 / 0.424275 (-0.256405) | 0.015127 / 0.007607 (0.007520) | 0.849645 / 0.226044 (0.623600) | 8.508727 / 2.268929 (6.239799) | 3.707897 / 55.444624 (-51.736727) | 3.009279 / 6.876477 (-3.867198) | 3.067179 / 2.142072 (0.925106) | 1.516370 / 4.805227 (-3.288858) | 0.264845 / 6.500664 (-6.235819) | 0.095137 / 0.075469 (0.019668) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.826306 / 1.841788 (-0.015481) | 20.119641 / 8.074308 (12.045333) | 21.532158 / 10.191392 (11.340766) | 0.278631 / 0.680424 (-0.401793) | 0.029494 / 0.534201 (-0.504707) | 0.621887 / 0.579283 (0.042604) | 0.686864 / 0.434364 (0.252500) | 0.695412 / 0.540337 (0.155074) | 0.864829 / 1.386936 (-0.522108) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8e1af7b30c94ce77abd9de732f19198e197d900c \"CML watermark\")\n" ]
"2023-04-28T09:52:11Z"
"2023-04-28T10:18:56Z"
"2023-04-28T09:54:43Z"
MEMBER
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5803/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5803/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5803.diff", "html_url": "https://github.com/huggingface/datasets/pull/5803", "merged_at": "2023-04-28T09:54:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/5803.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5803" }
true
https://api.github.com/repos/huggingface/datasets/issues/5802
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5802/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5802/comments
https://api.github.com/repos/huggingface/datasets/issues/5802/events
https://github.com/huggingface/datasets/pull/5802
1,686,509,799
PR_kwDODunzps5PR199
5,802
Validate non-empty data_files
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007818 / 0.011353 (-0.003535) | 0.005456 / 0.011008 (-0.005552) | 0.114685 / 0.038508 (0.076177) | 0.038398 / 0.023109 (0.015289) | 0.351289 / 0.275898 (0.075391) | 0.389170 / 0.323480 (0.065690) | 0.006213 / 0.007986 (-0.001773) | 0.005796 / 0.004328 (0.001467) | 0.085315 / 0.004250 (0.081065) | 0.049251 / 0.037052 (0.012198) | 0.368119 / 0.258489 (0.109630) | 0.394725 / 0.293841 (0.100884) | 0.040390 / 0.128546 (-0.088157) | 0.014076 / 0.075646 (-0.061570) | 0.393771 / 0.419271 (-0.025500) | 0.058929 / 0.043533 (0.015397) | 0.349526 / 0.255139 (0.094387) | 0.378409 / 0.283200 (0.095210) | 0.114354 / 0.141683 (-0.027329) | 1.749244 / 1.452155 (0.297089) | 1.847946 / 1.492716 (0.355229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241648 / 0.018006 (0.223641) | 0.468419 / 0.000490 (0.467929) | 0.004311 / 0.000200 (0.004111) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029978 / 0.037411 (-0.007433) | 0.121832 / 0.014526 (0.107306) | 0.133516 / 0.176557 (-0.043041) | 0.199174 / 0.737135 (-0.537961) | 0.138181 / 0.296338 (-0.158158) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478346 / 0.215209 (0.263137) | 4.723967 / 2.077655 (2.646312) | 2.107724 
/ 1.504120 (0.603604) | 1.874810 / 1.541195 (0.333615) | 1.911568 / 1.468490 (0.443078) | 0.800966 / 4.584777 (-3.783811) | 4.399032 / 3.745712 (0.653320) | 2.346160 / 5.269862 (-2.923702) | 1.506673 / 4.565676 (-3.059004) | 0.099119 / 0.424275 (-0.325156) | 0.014055 / 0.007607 (0.006448) | 0.582419 / 0.226044 (0.356375) | 5.789147 / 2.268929 (3.520218) | 2.632443 / 55.444624 (-52.812182) | 2.217630 / 6.876477 (-4.658846) | 2.337709 / 2.142072 (0.195637) | 0.995345 / 4.805227 (-3.809882) | 0.200040 / 6.500664 (-6.300624) | 0.076855 / 0.075469 (0.001386) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.386104 / 1.841788 (-0.455683) | 17.109772 / 8.074308 (9.035464) | 16.147612 / 10.191392 (5.956220) | 0.162846 / 0.680424 (-0.517577) | 0.020692 / 0.534201 (-0.513509) | 0.495752 / 0.579283 (-0.083531) | 0.475715 / 0.434364 (0.041351) | 0.619826 / 0.540337 (0.079488) | 0.720745 / 1.386936 (-0.666191) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008255 / 0.011353 (-0.003098) | 0.006118 / 0.011008 (-0.004890) | 0.088004 / 0.038508 (0.049496) | 0.039225 / 0.023109 (0.016116) | 0.399290 / 0.275898 (0.123392) | 0.432272 / 0.323480 (0.108792) | 0.007382 / 0.007986 (-0.000603) | 0.004576 / 0.004328 (0.000248) | 0.086511 / 0.004250 (0.082260) | 0.050472 / 0.037052 (0.013420) | 0.404160 / 0.258489 (0.145671) | 0.445356 / 0.293841 (0.151515) | 0.041549 / 0.128546 (-0.086997) | 0.014148 / 0.075646 (-0.061498) | 0.101697 / 0.419271 (-0.317574) | 0.057474 / 0.043533 (0.013941) | 0.395093 / 0.255139 (0.139954) | 0.418613 / 0.283200 (0.135414) | 0.123217 / 0.141683 (-0.018466) | 1.726146 / 1.452155 (0.273991) | 1.852746 / 1.492716 (0.360029) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256876 / 0.018006 (0.238870) | 0.476336 / 0.000490 (0.475846) | 0.000465 / 0.000200 (0.000265) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034304 / 0.037411 (-0.003107) | 0.132617 / 0.014526 (0.118091) | 0.141712 / 0.176557 (-0.034845) | 0.198101 / 0.737135 (-0.539034) | 0.150877 / 0.296338 (-0.145461) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.504717 / 0.215209 (0.289508) | 5.035060 / 2.077655 (2.957405) | 2.494812 / 1.504120 (0.990692) | 2.306601 / 1.541195 (0.765406) | 2.481860 / 1.468490 (1.013370) | 0.826041 / 4.584777 (-3.758736) | 4.414748 / 3.745712 (0.669036) | 2.417899 / 5.269862 (-2.851963) | 1.574548 / 4.565676 (-2.991128) | 0.101712 / 0.424275 (-0.322563) | 0.014388 / 0.007607 (0.006781) | 0.616674 / 0.226044 (0.390630) | 6.180382 / 2.268929 (3.911453) | 2.969110 / 55.444624 (-52.475514) | 2.574383 / 6.876477 (-4.302094) | 2.711008 / 2.142072 (0.568935) | 0.997679 / 4.805227 (-3.807548) | 0.201241 / 6.500664 (-6.299423) | 0.076132 / 0.075469 (0.000663) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.542704 / 1.841788 (-0.299084) | 17.610700 / 8.074308 (9.536392) | 16.152973 / 10.191392 (5.961581) | 0.166040 / 0.680424 (-0.514384) | 0.020286 / 0.534201 (-0.513915) | 0.506724 / 0.579283 (-0.072559) | 0.484348 / 0.434364 (0.049984) | 0.606524 / 0.540337 (0.066187) | 0.734997 / 1.386936 (-0.651939) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a200ec9126a0879f3d38d4e9e3787633a23af42e \"CML watermark\")\n" ]
"2023-04-27T09:51:36Z"
"2023-04-27T14:59:47Z"
"2023-04-27T14:51:40Z"
MEMBER
null
This PR adds validation of `data_files`, so that they are non-empty (str, list, or dict) or `None` (default). See: https://github.com/huggingface/datasets/pull/5787#discussion_r1178862327
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5802/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5802/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5802.diff", "html_url": "https://github.com/huggingface/datasets/pull/5802", "merged_at": "2023-04-27T14:51:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/5802.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5802" }
true
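The `data_files` validation described in #5802 above can be illustrated with a short, hedged sketch. The helper name `_check_non_empty_data_files` and the exact error wording are assumptions for illustration only, not the actual implementation inside `datasets`.

```python
from typing import Dict, List, Optional, Union

# Hypothetical helper illustrating the idea behind #5802: reject empty
# str/list/dict values for `data_files` while still allowing None (the default).
def _check_non_empty_data_files(
    data_files: Optional[Union[str, List[str], Dict[str, Union[str, List[str]]]]]
) -> None:
    if data_files is not None and not data_files:
        raise ValueError(
            f"Empty `data_files`: {data_files!r}. It should be either non-empty or None (default)."
        )

# Examples of what such a check would accept or reject:
_check_non_empty_data_files(None)                 # ok: default
_check_non_empty_data_files("data/train.jsonl")   # ok: non-empty string
try:
    _check_non_empty_data_files([])               # rejected: empty list
except ValueError as err:
    print(err)
```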
https://api.github.com/repos/huggingface/datasets/issues/5800
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5800/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5800/comments
https://api.github.com/repos/huggingface/datasets/issues/5800/events
https://github.com/huggingface/datasets/pull/5800
1,686,348,096
PR_kwDODunzps5PRTRh
5,800
Change downloaded file permission based on umask
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2023-04-27T08:13:30Z"
"2023-04-27T09:33:05Z"
"2023-04-27T09:30:16Z"
MEMBER
null
This PR changes the permissions of files downloaded to the cache, so that the umask is taken into account. Related to: - #2157 Fix #5799. CC: @stas00
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5800/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5800/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5800.diff", "html_url": "https://github.com/huggingface/datasets/pull/5800", "merged_at": "2023-04-27T09:30:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/5800.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5800" }
true
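A minimal sketch of the idea behind #5800 (and the bug report in #5799 just below): after a file is written to the download cache, adjust its mode so the process umask is respected. The standalone function, the 0o666 base mode, and the example filename are assumptions for illustration; the real change lives inside the download manager in `datasets`, not in user code.

```python
import os

def apply_umask(path: str, base_mode: int = 0o666) -> None:
    # os.umask() returns the previous mask; set it back right away so we only read it.
    umask = os.umask(0)
    os.umask(umask)
    # Give the file the base permissions minus whatever the umask removes,
    # instead of the restrictive 0o600 a private temp file ends up with.
    os.chmod(path, base_mode & ~umask)

# Usage sketch: a freshly "downloaded" cache file created with mode 0o600.
with open("downloaded_file.bin", "wb") as f:
    f.write(b"...")
os.chmod("downloaded_file.bin", 0o600)
apply_umask("downloaded_file.bin")
print(oct(os.stat("downloaded_file.bin").st_mode & 0o777))  # e.g. 0o644 with umask 022
```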
https://api.github.com/repos/huggingface/datasets/issues/5799
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5799/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5799/comments
https://api.github.com/repos/huggingface/datasets/issues/5799/events
https://github.com/huggingface/datasets/issues/5799
1,686,334,572
I_kwDODunzps5kg2xs
5,799
Files downloaded to cache do not respect umask
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
"2023-04-27T08:06:05Z"
"2023-04-27T09:30:17Z"
"2023-04-27T09:30:17Z"
MEMBER
null
As reported by @stas00, files downloaded to the cache do not respect umask: ```bash $ ls -l /path/to/cache/datasets/downloads/ -rw------- 1 username username 150M Apr 25 16:41 5e646c1d600f065adaeb134e536f6f2f296a6d804bd1f0e1fdcd20ee28c185c6 ``` Related to: - #2065
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5799/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5799/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5798
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5798/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5798/comments
https://api.github.com/repos/huggingface/datasets/issues/5798/events
https://github.com/huggingface/datasets/issues/5798
1,685,904,526
I_kwDODunzps5kfNyO
5,798
Support parallelized downloading and processing in load_dataset with Spark
{ "avatar_url": "https://avatars.githubusercontent.com/u/12763339?v=4", "events_url": "https://api.github.com/users/es94129/events{/privacy}", "followers_url": "https://api.github.com/users/es94129/followers", "following_url": "https://api.github.com/users/es94129/following{/other_user}", "gists_url": "https://api.github.com/users/es94129/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/es94129", "id": 12763339, "login": "es94129", "node_id": "MDQ6VXNlcjEyNzYzMzM5", "organizations_url": "https://api.github.com/users/es94129/orgs", "received_events_url": "https://api.github.com/users/es94129/received_events", "repos_url": "https://api.github.com/users/es94129/repos", "site_admin": false, "starred_url": "https://api.github.com/users/es94129/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/es94129/subscriptions", "type": "User", "url": "https://api.github.com/users/es94129" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi ! We're using process pools for parallelism right now. I was wondering if there's a package that implements the same API as a process pool but runs with Spark under the hood ? That or something similar would be cool because users could use whatever distributed framework they want this way.\r\n\r\nFeel free to ping us when you'd like to open PRs for this kind of things, so that we can discuss this before you start working on it ^^", "Hi, thanks for taking a look and providing your input! I don't know of such packages, and even it exists, I don't think with the process pool API it's possible to run Spark as backend properly; otherwise I understand a unified API would be preferable.\r\n\r\nThe process pool API requires splitting the workload to a fixed number parts for multiprocessing; meanwhile distributed framework such as Spark has sophisticated scheduler to distribute the workload to the processes on multiple machines in a cluster, so the way of splitting things for `multiprocessing.pool` would not suit / be as flexible as directly calling the `sparkContext.parallelize` API.\r\n\r\nI think this could be a good addition to scale the `datasets` implementation to distributed workers, and from my benchmark results so far it looks promising compared with multiprocessing.", "I see ! I think we only need an equivalent of `pool.map`. We use it to run download and conversion of data files on disk. That would require less changes in the internal code - and therefore less tests to write ;)\r\n\r\nWe also use `pool.apply_async` in some places with a `Queue` to get progress updates of the running jobs. I'm mentioning this in case there's a way to get a python generator from a running spark job ? This is less important though", "For Spark, `rdd.map` (where `rdd` can be created by `sparkContext.parallelize`) is the most similar as `pool.map`, but it requires creating a Spark RDD first that is used for distributing the `iterable` and the actual parallelization is managed by the Spark framework; `pool.map` takes the splits of `iterable` that are split into `num_proc` parts by the Python code. You can also check my PR #5807 in the `src/datasets/utils/py_utils.py` file to compare the differences of the APIs, it might make more sense than the the above description.\r\n\r\nGiven the different inputs and mechanisms of calling the `map` functions, this is why I think it's not that feasible to reuse most of the `multiprocessing` code.\r\n\r\nProgress bar updating might be challenging with Spark, I'll consider it as a followup work.", "Indeed I think the current use of multiprocessing.Pool in `map_nested` can be rewritten to work like `sparkContext.parallelize` - without splitting the iterable.\r\n\r\nMaybe from the user's perspective it's ok to let multiprocessing.Pool or spark distribute the load on their own, as long as it takes a list and runs jobs in parallel in the end :)\r\n", "From your feedback, seems to me there are two paths to consider now for supporting spark's `map` function in `map_nested` now:\r\n1. Keep the current `pool.map` implementation, and add an if statement for the spark's `map` code (which is what I did in my current PR) -- the code change is just a few lines in the `map_nested` function, and it has been tested by unit tests + manual testing on real Spark clusters; if you have other concerns I'd also be happy to address them.\r\n2. 
Rewrite the current `pool.map` implementation to remove splitting the iterable, and we will still need to add an if statement to use either\r\n```python\r\nwith Pool(...) as pool:\r\n mapped = pool.map(_single_map_nested, iterable)\r\n```\r\nor\r\n```python\r\nrdd = spark.sparkContext.parallelize(iterable)\r\nmapped = rdd.map(lambda obj: _single_map_nested((function, obj, types, None, True, None))).collect()\r\n```\r\nbecause there is no unified API that supports both `pool.map` and `rdd.map`. This can be more unified and flexible in the long run, but might require more work, and it will change the existing multiprocessing behavior, which is why I'm not leaning towards this option.\r\n\r\nAm I understanding correctly?", "Yup correct ! I think it's a nice path because it would be possible for users to define whatever parallel processing backend they want. I think we still need to discuss how that would look like in the `datasets` API : how to specify it has to use the \"spark\" parallel backend ? And how to specify the spark session parameters (number of executors etc.) ? Maybe there is something more practical than `use_spark=True`\r\n\r\nI'll check with the team internally if they have some ideas, but feel free to share your thoughts here !", "Sure, please let me know if you have more updates regarding the API and implementation from the team.\r\n\r\nFor parameters we don't need to worry about setting them for Spark, because Spark will figure out the environment / number of worker nodes by itself, so it's preferable to just provide some parameter such as `use_spark` to use the RDD `map` function.", "Hi! I wanted to check in to see if there is any update from the team.\r\n\r\nA potential change of API I can think of is change the argument to `distributed_backend=...`, which accepts `str`, such as `load_dataset(..., distributed_backend=\"spark\")`.\r\n\r\nImplementation wise, we can add a class / function to abstract away the details of using multiprocessing vs. spark vs. other parallel processing frameworks in `map_nested` and `_prepare_split`.", "I found this quite interesting: https://github.com/joblib/joblib-spark with this syntax:\r\n\r\n```python\r\nwith parallel_backend('spark', n_jobs=3):\r\n ...\r\n```\r\n\r\ncc @lu-wang-dl who might know better", "Joblib spark is providing Spark backend for joblib. We can implement a general parallel backend like\r\n```\r\nwith parallel_backend(\"<parallel-backedn>\", n_jobs=..):\r\n```\r\n\r\nIt can support multiprocessing , spark, ray, and etc. https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend", "Thank you @lhoestq for finding this repo. I validated that it can distribute downloading jobs with Spark to arbitrary cluster worker nodes evenly with `n_jobs=-1`.\r\n\r\nFor the API, I think it makes sense to define it as\r\n```python\r\nload_dataset(..., parallel_backend=<str>)\r\n```\r\nwhere `parallel_backend` can be `spark`, `multiprocessing`, and potentially other supported joblib backends including `ray` and `dask`.\r\n\r\nImplementation-wise, do you think it is better to just use `joblib` for `spark` backend in `map_nested`, or also migrate the `multiprocessing.Pool` code to use `joblib`?", "Hello @lhoestq, I wanted to follow up on my previous comment with some prototyping code that demonstrates how `map_nested` would be like if we unify `multiprocessing` and `spark` with `joblib`. 
The snippet hasn't hashed out the details such as dealing with `tqdm` yet.\r\n\r\nIn terms of API, the way of using multiprocessing is still the same; for Spark, the user sets `parallel_backend='spark'` can reuse the `num_proc` argument to pass in the number of executors, or preferably, just set `num_proc=-1` and joblib is able to decide it (I've validated it by running it on a Spark cluster).\r\n\r\n```python\r\ndef map_nested(\r\n # ... same args\r\n parallel_backend: Optional[str] = None, # proposed new argument\r\n):\r\n\r\n # ... same code\r\n\r\n # allow user to specify num_proc=-1, so that joblib will optimize it\r\n if (num_proc <= 1 and num_proc != -1) or len(iterable) < parallel_min_length:\r\n # same code\r\n mapped = [\r\n _single_map_nested((function, obj, types, None, True, None))\r\n for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n ]\r\n else:\r\n if not parallel_backend:\r\n parallel_backend = 'loky' # 'loky' is joblib's own implementation of robust multiprocessing\r\n \r\n n_jobs = min(num_proc, len(iterable))\r\n\r\n if parallel_backend == 'spark':\r\n n_jobs = -1 # 'loky' is joblib's own implementation of robust multiprocessing\r\n from joblibspark import register_spark\r\n register_spark()\r\n\r\n # parallelized with the same API\r\n with joblib.parallel_backend(parallel_backend, n_jobs=n_jobs):\r\n mapped = joblib.Parallel()(\r\n joblib.delayed(\r\n _single_map_nested((function, obj, types, None, True, None))\r\n )(obj) for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n )\r\n \r\n # ... same code\r\n```\r\nWe can always `joblib` for Spark and other distributed backends such as Ray if people want to support them later. It's worth noting that some distributed backends do not currently have `joblib` implementations.\r\n\r\nI would appreciate your thoughts on this proposed new API. We can also discuss the pros and cons of migrating the `multiprocessing` code to `joblib` later.", "Nice ! It should be quite easy to make the change then :)\r\n\r\nI think adding spark support can actually be less than 20 lines of code and would roughly require one line of code to change in map_nested:\r\n\r\nMaybe we can define a new `datasets.parallel` submodule that has the `parallel_backend()` context manager and a `parallel_map()` function that uses `Pool.map` by default and `joblib` otherwise.\r\n\r\n`joblib` would be an optional dependency, and `joblib-spark` as well.\r\n\r\nThen whenever someone wants to use Spark, they can do something like this (similar to scikit-learn parallel_backend):\r\n\r\n```python\r\nfrom datasets.parallel import parallel_backend\r\n\r\nwith parallel_backend(\"spark\"):\r\n ds = load_dataset(...)\r\n```\r\n\r\nWhat do you think ?", "Although until we've switched to all the steps in `load_dataset` to use `datasets.parallel`, I would require the user to explicitly say which step should use Spark. 
Maybe something like this, but I'm not sure yet:\r\n\r\n```python\r\nfrom datasets.parallel import parallel_backend\r\n\r\nwith parallel_backend(\"spark\", steps=[\"download\"]):\r\n ds = load_dataset(...)\r\n```\r\nfor now some steps can be NotImplemented:\r\n```python\r\nfrom datasets.parallel import parallel_backend\r\n\r\nwith parallel_backend(\"spark\", steps=[\"download\", \"prepare\"]):\r\n# NotImplementedError: the \"prepare\" step that converts the raw data files to Arrow is not compatible with the \"spark\" backend yet\r\n```\r\n\r\nThis way we can progressively roll out Spark support for the other data loading/processing steps without breaking changes between `datasets` versions", "Sounds good! I like the partial rollout idea.\r\nSo for example `map_nested` would call `parallel_map` under the hood if `num_proc != 1` or `parallel_backend` is specified right?\r\nI would be happy to start a PR next week to explore this path.", "Awesome ! I think map_nested can call `parallel_map()` if num_proc > 1, and `parallel_map` can be responsible to use Pool.map by default or joblib." ]
"2023-04-27T00:16:11Z"
"2023-05-25T14:11:41Z"
null
CONTRIBUTOR
null
### Feature request When calling `load_dataset` for datasets that have multiple files, support using Spark to distribute the downloading and processing job to worker nodes when `cache_dir` is a cloud file system shared among nodes. ```python load_dataset(..., use_spark=True) ``` ### Motivation Further speed up `dl_manager.download` and `_prepare_split` by distributing the workloads to worker nodes. ### Your contribution I can submit a PR to support this.
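As a rough illustration of what the requested behaviour could do under the hood, the sketch below fans the download step out to Spark executors that all write into a shared cloud `cache_dir`. The URLs, the bucket path and the `download_one` helper are illustrative assumptions; this is not how `datasets` currently implements anything.

```python
# Illustrative sketch only: distribute downloads across Spark workers that all
# write into a cache_dir shared among nodes (e.g. an S3/GCS/DBFS path).
# The URLs, bucket and helper below are assumptions for the example.
import fsspec  # needs the relevant filesystem extras installed (e.g. s3fs)
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
cache_dir = "s3://my-bucket/hf-datasets-cache"  # hypothetical shared cache
urls = [
    "https://example.com/data/shard-00000.jsonl",
    "https://example.com/data/shard-00001.jsonl",
]

def download_one(url: str) -> str:
    out = f"{cache_dir}/{url.rsplit('/', 1)[-1]}"
    with fsspec.open(url, "rb") as src, fsspec.open(out, "wb") as dst:
        dst.write(src.read())
    return out

# each executor downloads its slice of the files into the shared cache
downloaded = spark.sparkContext.parallelize(urls, numSlices=len(urls)).map(download_one).collect()
print(downloaded)
```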
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5798/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5798/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5797
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5797/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5797/comments
https://api.github.com/repos/huggingface/datasets/issues/5797/events
https://github.com/huggingface/datasets/issues/5797
1,685,501,199
I_kwDODunzps5kdrUP
5,797
load_dataset is case sensitive?
{ "avatar_url": "https://avatars.githubusercontent.com/u/34729065?v=4", "events_url": "https://api.github.com/users/haonan-li/events{/privacy}", "followers_url": "https://api.github.com/users/haonan-li/followers", "following_url": "https://api.github.com/users/haonan-li/following{/other_user}", "gists_url": "https://api.github.com/users/haonan-li/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/haonan-li", "id": 34729065, "login": "haonan-li", "node_id": "MDQ6VXNlcjM0NzI5MDY1", "organizations_url": "https://api.github.com/users/haonan-li/orgs", "received_events_url": "https://api.github.com/users/haonan-li/received_events", "repos_url": "https://api.github.com/users/haonan-li/repos", "site_admin": false, "starred_url": "https://api.github.com/users/haonan-li/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/haonan-li/subscriptions", "type": "User", "url": "https://api.github.com/users/haonan-li" }
[]
open
false
null
[]
null
[ "Hi @haonan-li , thank you for the report! It seems to be a bug on the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) site, there is even no such dataset as `mbzuai/bactrian-x` on the Hub. I opened and [issue](https://github.com/huggingface/huggingface_hub/issues/1453) there.", "I think `load_dataset(\"mbzuai/bactrian-x\")` shouldn't be loaded at all and raise an error but because of [this fallback](https://github.com/huggingface/datasets/blob/main/src/datasets/load.py#L1194) to packaged loaders when no other options are applicable, it loads the dataset with standard `json` loader instead of the custom loading script." ]
"2023-04-26T18:19:04Z"
"2023-04-27T11:56:58Z"
null
NONE
null
### Describe the bug Is the load_dataset() function case sensitive? ### Steps to reproduce the bug The following two calls get totally different behavior: 1. load_dataset('mbzuai/bactrian-x','en') 2. load_dataset('MBZUAI/Bactrian-X','en') ### Expected behavior Compare 1 and 2. 1 will download all 52 subsets, shell output: ```Downloading and preparing dataset json/MBZUAI--bactrian-X to xxx``` 2 will only download a single subset, shell output: ```Downloading and preparing dataset bactrian-x/en to xxx``` ### Environment info Python 3.10.11 datasets Version: 2.11.0
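If the Hub resolves the lowercase name at all, one possible workaround is to ask the Hub for the canonical repo id first and pass that to `load_dataset`. This is a hedged sketch, not an official recommendation, and it assumes `huggingface_hub` is installed and that the lowercase name resolves to the canonical repository.

```python
# Workaround sketch (assumes huggingface_hub is installed and that the Hub
# resolves the lowercase name to the canonical repository).
from huggingface_hub import HfApi
from datasets import load_dataset

info = HfApi().dataset_info("mbzuai/bactrian-x")
print(info.id)  # canonical casing as reported by the Hub, e.g. "MBZUAI/Bactrian-X"

ds = load_dataset(info.id, "en")  # canonical id, so the dataset's loading script is used
```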
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5797/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5797/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5796
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5796/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5796/comments
https://api.github.com/repos/huggingface/datasets/issues/5796/events
https://github.com/huggingface/datasets/pull/5796
1,685,451,919
PR_kwDODunzps5PORm-
5,796
Spark docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010480 / 0.011353 (-0.000872) | 0.006743 / 0.011008 (-0.004265) | 0.126503 / 0.038508 (0.087995) | 0.036918 / 0.023109 (0.013808) | 0.387372 / 0.275898 (0.111474) | 0.456930 / 0.323480 (0.133450) | 0.008038 / 0.007986 (0.000052) | 0.005082 / 0.004328 (0.000753) | 0.093312 / 0.004250 (0.089062) | 0.065440 / 0.037052 (0.028387) | 0.378172 / 0.258489 (0.119683) | 0.430049 / 0.293841 (0.136208) | 0.054372 / 0.128546 (-0.074174) | 0.021875 / 0.075646 (-0.053772) | 0.441722 / 0.419271 (0.022450) | 0.063716 / 0.043533 (0.020183) | 0.375718 / 0.255139 (0.120579) | 0.413688 / 0.283200 (0.130488) | 0.122583 / 0.141683 (-0.019100) | 1.835992 / 1.452155 (0.383838) | 1.915862 / 1.492716 (0.423145) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275305 / 0.018006 (0.257299) | 0.617170 / 0.000490 (0.616680) | 0.006467 / 0.000200 (0.006267) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031057 / 0.037411 (-0.006354) | 0.135178 / 0.014526 (0.120653) | 0.139265 / 0.176557 (-0.037292) | 0.221597 / 0.737135 (-0.515538) | 0.147632 / 0.296338 (-0.148706) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.640621 / 0.215209 (0.425411) | 6.354359 / 2.077655 (4.276704) | 2.748945 / 
1.504120 (1.244825) | 2.396637 / 1.541195 (0.855442) | 2.395193 / 1.468490 (0.926703) | 1.209604 / 4.584777 (-3.375173) | 5.626901 / 3.745712 (1.881189) | 3.300941 / 5.269862 (-1.968920) | 2.123598 / 4.565676 (-2.442078) | 0.144270 / 0.424275 (-0.280005) | 0.015114 / 0.007607 (0.007507) | 0.812352 / 0.226044 (0.586307) | 8.024250 / 2.268929 (5.755322) | 3.557589 / 55.444624 (-51.887036) | 2.840632 / 6.876477 (-4.035845) | 3.152319 / 2.142072 (1.010246) | 1.447232 / 4.805227 (-3.357995) | 0.251740 / 6.500664 (-6.248924) | 0.083725 / 0.075469 (0.008256) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.568032 / 1.841788 (-0.273755) | 18.463860 / 8.074308 (10.389552) | 21.217395 / 10.191392 (11.026003) | 0.228457 / 0.680424 (-0.451967) | 0.031398 / 0.534201 (-0.502803) | 0.547627 / 0.579283 (-0.031656) | 0.642921 / 0.434364 (0.208557) | 0.687857 / 0.540337 (0.147520) | 0.800940 / 1.386936 (-0.585996) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009933 / 0.011353 (-0.001420) | 0.006065 / 0.011008 (-0.004943) | 0.102556 / 0.038508 (0.064048) | 0.034646 / 0.023109 (0.011537) | 0.437951 / 0.275898 (0.162053) | 0.482439 / 0.323480 (0.158959) | 0.007715 / 0.007986 (-0.000271) | 0.007426 / 0.004328 (0.003098) | 0.096427 / 0.004250 (0.092177) | 0.052983 / 0.037052 (0.015930) | 0.464533 / 0.258489 (0.206044) | 0.484848 / 0.293841 (0.191007) | 0.050415 / 0.128546 (-0.078131) | 0.021001 / 0.075646 (-0.054645) | 0.121214 / 0.419271 (-0.298058) | 0.061658 / 0.043533 (0.018125) | 0.431898 / 0.255139 (0.176759) | 0.482106 / 0.283200 (0.198907) | 0.128524 / 0.141683 (-0.013159) | 1.775714 / 1.452155 (0.323559) | 1.904738 / 1.492716 (0.412021) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287641 / 0.018006 (0.269635) | 0.600667 / 0.000490 (0.600178) | 0.005097 / 0.000200 (0.004897) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032836 / 0.037411 (-0.004575) | 0.133114 / 0.014526 (0.118588) | 0.150874 / 0.176557 (-0.025683) | 0.217069 / 0.737135 (-0.520066) | 0.160387 / 0.296338 (-0.135951) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.668444 / 0.215209 (0.453235) | 6.240015 / 2.077655 (4.162360) | 2.808661 / 1.504120 (1.304542) | 2.336550 / 1.541195 (0.795356) | 2.538973 / 1.468490 (1.070483) | 1.189292 / 4.584777 (-3.395485) | 5.781028 / 3.745712 (2.035315) | 3.149895 / 5.269862 (-2.119967) | 2.130646 / 4.565676 (-2.435030) | 0.144944 / 0.424275 (-0.279331) | 0.014650 / 0.007607 (0.007043) | 0.792313 / 0.226044 (0.566269) | 7.933108 / 2.268929 (5.664180) | 3.527527 / 55.444624 (-51.917098) | 2.864271 / 6.876477 (-4.012205) | 3.098330 / 2.142072 (0.956258) | 1.421208 / 4.805227 (-3.384019) | 0.255638 / 6.500664 (-6.245026) | 0.086971 / 0.075469 (0.011502) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.585317 / 1.841788 (-0.256471) | 18.643133 / 8.074308 (10.568825) | 21.921256 / 10.191392 (11.729864) | 0.215493 / 0.680424 (-0.464931) | 0.028348 / 0.534201 (-0.505853) | 0.556925 / 0.579283 (-0.022358) | 0.631480 / 0.434364 (0.197116) | 0.654026 / 0.540337 (0.113689) | 0.799727 / 1.386936 (-0.587209) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#62520514b524b5904c7e4f0beddab1971212a96a \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006516 / 0.011353 (-0.004837) | 0.004500 / 0.011008 (-0.006509) | 0.097639 / 0.038508 (0.059131) | 0.028336 / 0.023109 (0.005227) | 0.377263 / 0.275898 (0.101365) | 0.409209 / 0.323480 (0.085729) | 0.004832 / 0.007986 (-0.003154) | 0.004629 / 0.004328 (0.000301) | 0.075046 / 0.004250 (0.070795) | 0.034080 / 0.037052 (-0.002972) | 0.377565 / 0.258489 (0.119076) | 0.419204 / 0.293841 (0.125363) | 0.030343 / 0.128546 (-0.098203) | 0.011465 / 0.075646 (-0.064182) | 0.322777 / 0.419271 (-0.096494) | 0.043774 / 0.043533 (0.000241) | 0.375808 / 0.255139 (0.120669) | 0.402665 / 0.283200 (0.119465) | 0.086811 / 0.141683 (-0.054872) | 1.518686 / 1.452155 (0.066531) | 1.540381 / 1.492716 (0.047664) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197730 / 0.018006 (0.179724) | 0.409285 / 0.000490 (0.408795) | 0.004739 / 0.000200 (0.004539) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022974 / 0.037411 (-0.014437) | 0.096843 / 0.014526 (0.082317) | 0.103241 / 0.176557 (-0.073316) | 0.163691 / 0.737135 (-0.573444) | 0.107905 / 0.296338 (-0.188433) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.449408 / 0.215209 (0.234199) | 4.501375 / 2.077655 (2.423720) | 2.181491 / 1.504120 (0.677371) | 1.986153 / 1.541195 (0.444958) | 2.024735 / 1.468490 (0.556245) | 0.695368 / 4.584777 (-3.889409) | 3.416912 / 3.745712 (-0.328800) | 1.893343 / 5.269862 (-3.376519) | 1.275535 / 4.565676 (-3.290142) | 0.082772 / 0.424275 (-0.341503) | 0.012365 / 0.007607 (0.004758) | 0.553859 / 0.226044 (0.327814) | 5.540014 / 2.268929 (3.271085) | 2.634298 / 55.444624 (-52.810326) | 2.286686 / 6.876477 (-4.589790) | 2.384402 / 2.142072 (0.242330) | 0.806413 / 4.805227 (-3.998814) | 0.151757 / 6.500664 (-6.348907) | 0.067155 / 0.075469 (-0.008314) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198776 / 1.841788 (-0.643012) | 13.517434 / 8.074308 (5.443126) | 13.926300 / 10.191392 (3.734908) | 0.141887 / 0.680424 (-0.538537) | 0.016571 / 0.534201 (-0.517630) | 0.383179 / 0.579283 (-0.196104) | 0.395189 / 0.434364 (-0.039175) | 0.479635 / 0.540337 
(-0.060702) | 0.570576 / 1.386936 (-0.816360) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006691 / 0.011353 (-0.004662) | 0.004634 / 0.011008 (-0.006375) | 0.077087 / 0.038508 (0.038579) | 0.028281 / 0.023109 (0.005172) | 0.340108 / 0.275898 (0.064210) | 0.370611 / 0.323480 (0.047131) | 0.004997 / 0.007986 (-0.002988) | 0.003336 / 0.004328 (-0.000992) | 0.074814 / 0.004250 (0.070563) | 0.039001 / 0.037052 (0.001948) | 0.344225 / 0.258489 (0.085736) | 0.380621 / 0.293841 (0.086780) | 0.030858 / 0.128546 (-0.097689) | 0.011623 / 0.075646 (-0.064023) | 0.085016 / 0.419271 (-0.334256) | 0.042378 / 0.043533 (-0.001155) | 0.341428 / 0.255139 (0.086289) | 0.364823 / 0.283200 (0.081624) | 0.096695 / 0.141683 (-0.044988) | 1.527683 / 1.452155 (0.075528) | 1.585361 / 1.492716 (0.092645) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184280 / 0.018006 (0.166274) | 0.397845 / 0.000490 (0.397355) | 0.004415 / 0.000200 (0.004215) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024296 / 0.037411 (-0.013115) | 0.101053 / 0.014526 (0.086527) | 0.108968 / 0.176557 (-0.067589) | 0.155732 / 0.737135 (-0.581403) | 0.112604 / 0.296338 (-0.183735) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440819 / 0.215209 (0.225609) | 4.394017 / 2.077655 (2.316363) | 2.092456 / 1.504120 (0.588336) | 1.880186 / 1.541195 (0.338991) | 1.918035 
/ 1.468490 (0.449545) | 0.698059 / 4.584777 (-3.886718) | 3.422598 / 3.745712 (-0.323114) | 1.860465 / 5.269862 (-3.409396) | 1.157788 / 4.565676 (-3.407889) | 0.083566 / 0.424275 (-0.340709) | 0.012440 / 0.007607 (0.004832) | 0.549526 / 0.226044 (0.323481) | 5.500623 / 2.268929 (3.231694) | 2.546980 / 55.444624 (-52.897644) | 2.199527 / 6.876477 (-4.676949) | 2.297276 / 2.142072 (0.155203) | 0.801580 / 4.805227 (-4.003648) | 0.151842 / 6.500664 (-6.348822) | 0.067165 / 0.075469 (-0.008305) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.329097 / 1.841788 (-0.512691) | 13.830354 / 8.074308 (5.756046) | 14.155250 / 10.191392 (3.963858) | 0.144517 / 0.680424 (-0.535907) | 0.016738 / 0.534201 (-0.517463) | 0.379337 / 0.579283 (-0.199946) | 0.391382 / 0.434364 (-0.042982) | 0.459153 / 0.540337 (-0.081184) | 0.547287 / 1.386936 (-0.839649) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2efb0289c887ec60d54e0715cd85c111cb45f9ee \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007176 / 0.011353 (-0.004177) | 0.005125 / 0.011008 (-0.005883) | 0.096060 / 0.038508 (0.057552) | 0.033262 / 0.023109 (0.010152) | 0.311461 / 0.275898 (0.035563) | 0.340673 / 0.323480 (0.017193) | 0.005700 / 0.007986 (-0.002286) | 0.005223 / 0.004328 (0.000894) | 0.072812 / 0.004250 (0.068561) | 0.042078 / 0.037052 (0.005025) | 0.320042 / 0.258489 (0.061553) | 0.346539 / 0.293841 (0.052698) | 0.035284 / 0.128546 (-0.093262) | 0.012021 / 0.075646 (-0.063625) | 0.331555 / 0.419271 (-0.087717) | 0.051058 / 0.043533 (0.007525) | 0.303001 / 0.255139 (0.047862) | 0.328431 / 0.283200 (0.045231) | 0.100954 / 0.141683 (-0.040729) | 1.407445 / 1.452155 (-0.044710) | 1.512826 / 1.492716 (0.020110) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216442 / 0.018006 (0.198436) | 0.446298 / 0.000490 (0.445809) | 0.004701 / 
0.000200 (0.004501) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028088 / 0.037411 (-0.009324) | 0.108669 / 0.014526 (0.094144) | 0.119597 / 0.176557 (-0.056960) | 0.178249 / 0.737135 (-0.558886) | 0.123914 / 0.296338 (-0.172424) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413437 / 0.215209 (0.198228) | 4.136602 / 2.077655 (2.058947) | 1.875872 / 1.504120 (0.371752) | 1.680783 / 1.541195 (0.139588) | 1.757059 / 1.468490 (0.288569) | 0.711080 / 4.584777 (-3.873697) | 3.791701 / 3.745712 (0.045989) | 2.111612 / 5.269862 (-3.158250) | 1.351204 / 4.565676 (-3.214473) | 0.086477 / 0.424275 (-0.337798) | 0.012359 / 0.007607 (0.004752) | 0.504984 / 0.226044 (0.278940) | 5.040456 / 2.268929 (2.771527) | 2.266946 / 55.444624 (-53.177679) | 1.957827 / 6.876477 (-4.918650) | 2.120490 / 2.142072 (-0.021583) | 0.856148 / 4.805227 (-3.949079) | 0.172414 / 6.500664 (-6.328250) | 0.066833 / 0.075469 (-0.008636) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198163 / 1.841788 (-0.643625) | 14.944930 / 8.074308 (6.870622) | 14.317196 / 10.191392 (4.125804) | 0.166104 / 0.680424 (-0.514320) | 0.017443 / 0.534201 (-0.516758) | 0.423025 / 0.579283 (-0.156258) | 0.437476 / 0.434364 (0.003112) | 0.500156 / 0.540337 (-0.040181) | 0.606226 / 1.386936 (-0.780710) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007417 / 0.011353 (-0.003936) | 0.005143 / 0.011008 (-0.005865) | 0.076401 / 0.038508 (0.037893) | 0.034818 / 0.023109 (0.011709) | 0.339633 / 0.275898 (0.063735) | 0.373839 / 0.323480 (0.050359) | 0.006004 / 0.007986 (-0.001982) | 0.005403 / 0.004328 (0.001075) | 0.074150 / 0.004250 (0.069899) | 0.050489 / 0.037052 (0.013436) | 0.343357 / 0.258489 (0.084868) | 0.377009 / 0.293841 (0.083168) | 0.035921 / 0.128546 (-0.092625) | 0.012197 / 0.075646 (-0.063449) | 0.087992 / 0.419271 (-0.331279) | 0.049452 / 0.043533 (0.005919) | 0.340495 / 0.255139 (0.085356) | 0.360277 / 0.283200 (0.077077) | 0.111114 / 0.141683 (-0.030569) | 1.463888 / 1.452155 (0.011734) | 1.548320 / 1.492716 (0.055604) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228437 / 0.018006 (0.210431) | 0.445120 / 0.000490 (0.444631) | 0.000392 / 0.000200 (0.000192) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029965 / 0.037411 (-0.007446) | 0.113484 / 0.014526 (0.098958) | 0.125249 / 0.176557 (-0.051308) | 0.177201 / 0.737135 (-0.559934) | 0.128750 / 0.296338 (-0.167589) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420089 / 0.215209 (0.204880) | 4.195772 / 2.077655 (2.118117) | 2.021539 / 1.504120 (0.517419) | 1.825118 / 1.541195 (0.283924) | 1.904090 / 1.468490 (0.435600) | 0.716276 / 4.584777 (-3.868501) | 3.742257 / 3.745712 (-0.003455) | 3.368880 / 5.269862 (-1.900981) | 1.728285 / 4.565676 (-2.837392) | 0.087656 / 0.424275 (-0.336619) | 0.012263 / 0.007607 (0.004656) | 0.524321 / 0.226044 (0.298277) | 5.217610 / 2.268929 (2.948682) | 2.474670 / 55.444624 (-52.969955) | 2.135452 / 6.876477 (-4.741025) | 2.292578 / 2.142072 (0.150505) | 0.852109 / 4.805227 (-3.953119) | 0.172031 / 6.500664 (-6.328633) | 0.065230 / 0.075469 (-0.010240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260494 / 1.841788 (-0.581293) | 15.019167 / 8.074308 (6.944859) | 14.647586 / 10.191392 (4.456193) | 0.170578 / 0.680424 (-0.509846) | 0.017619 / 0.534201 (-0.516582) | 0.423116 / 0.579283 (-0.156167) | 0.426680 / 0.434364 (-0.007684) | 0.519563 / 0.540337 (-0.020775) | 0.619335 / 1.386936 (-0.767601) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e210dc20c19b5e6af05df9ca6e82984dfb42465f \"CML watermark\")\n" ]
"2023-04-26T17:39:43Z"
"2023-04-27T16:41:50Z"
"2023-04-27T16:34:45Z"
MEMBER
null
Added a "Use with Spark" doc page to document `Dataset.from_spark` following https://github.com/huggingface/datasets/pull/5701 cc @maddiedawson
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5796/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5796/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5796.diff", "html_url": "https://github.com/huggingface/datasets/pull/5796", "merged_at": "2023-04-27T16:34:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/5796.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5796" }
true
https://api.github.com/repos/huggingface/datasets/issues/5795
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5795/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5795/comments
https://api.github.com/repos/huggingface/datasets/issues/5795/events
https://github.com/huggingface/datasets/pull/5795
1,685,414,505
PR_kwDODunzps5POJo8
5,795
Fix spark imports
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010844 / 0.011353 (-0.000509) | 0.007329 / 0.011008 (-0.003680) | 0.133764 / 0.038508 (0.095256) | 0.040213 / 0.023109 (0.017103) | 0.413466 / 0.275898 (0.137568) | 0.452860 / 0.323480 (0.129380) | 0.008109 / 0.007986 (0.000123) | 0.005773 / 0.004328 (0.001444) | 0.109969 / 0.004250 (0.105718) | 0.053001 / 0.037052 (0.015949) | 0.416377 / 0.258489 (0.157888) | 0.477486 / 0.293841 (0.183645) | 0.056556 / 0.128546 (-0.071990) | 0.024322 / 0.075646 (-0.051324) | 0.437750 / 0.419271 (0.018479) | 0.087732 / 0.043533 (0.044199) | 0.421540 / 0.255139 (0.166401) | 0.429143 / 0.283200 (0.145944) | 0.144864 / 0.141683 (0.003181) | 1.882785 / 1.452155 (0.430631) | 1.980721 / 1.492716 (0.488005) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.285497 / 0.018006 (0.267491) | 0.601820 / 0.000490 (0.601331) | 0.005003 / 0.000200 (0.004804) | 0.000122 / 0.000054 (0.000067) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030673 / 0.037411 (-0.006739) | 0.126883 / 0.014526 (0.112357) | 0.137677 / 0.176557 (-0.038880) | 0.211504 / 0.737135 (-0.525632) | 0.144752 / 0.296338 (-0.151587) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.665845 / 0.215209 (0.450636) | 6.369040 / 2.077655 (4.291385) | 2.708979 / 
1.504120 (1.204859) | 2.370842 / 1.541195 (0.829647) | 2.445987 / 1.468490 (0.977497) | 1.260806 / 4.584777 (-3.323971) | 5.979216 / 3.745712 (2.233504) | 3.334350 / 5.269862 (-1.935512) | 2.187298 / 4.565676 (-2.378379) | 0.155494 / 0.424275 (-0.268781) | 0.017351 / 0.007607 (0.009744) | 0.853626 / 0.226044 (0.627581) | 8.375001 / 2.268929 (6.106072) | 3.528312 / 55.444624 (-51.916313) | 2.890509 / 6.876477 (-3.985968) | 3.051016 / 2.142072 (0.908944) | 1.529811 / 4.805227 (-3.275416) | 0.273883 / 6.500664 (-6.226781) | 0.086617 / 0.075469 (0.011148) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.648231 / 1.841788 (-0.193557) | 19.487109 / 8.074308 (11.412801) | 23.474621 / 10.191392 (13.283229) | 0.221392 / 0.680424 (-0.459032) | 0.028878 / 0.534201 (-0.505323) | 0.582302 / 0.579283 (0.003019) | 0.615059 / 0.434364 (0.180695) | 0.656082 / 0.540337 (0.115745) | 0.740544 / 1.386936 (-0.646392) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010687 / 0.011353 (-0.000665) | 0.007114 / 0.011008 (-0.003894) | 0.135426 / 0.038508 (0.096918) | 0.041027 / 0.023109 (0.017918) | 0.466441 / 0.275898 (0.190543) | 0.503545 / 0.323480 (0.180065) | 0.009418 / 0.007986 (0.001432) | 0.004976 / 0.004328 (0.000647) | 0.101342 / 0.004250 (0.097092) | 0.058289 / 0.037052 (0.021237) | 0.473715 / 0.258489 (0.215226) | 0.539556 / 0.293841 (0.245715) | 0.063138 / 0.128546 (-0.065408) | 0.020429 / 0.075646 (-0.055217) | 0.124179 / 0.419271 (-0.295093) | 0.066400 / 0.043533 (0.022867) | 0.450793 / 0.255139 (0.195654) | 0.494163 / 0.283200 (0.210964) | 0.131179 / 0.141683 (-0.010504) | 1.876396 / 1.452155 (0.424241) | 1.974148 / 1.492716 (0.481432) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313362 / 0.018006 (0.295356) | 0.602618 / 0.000490 (0.602129) | 0.008279 / 0.000200 (0.008079) | 0.000155 / 0.000054 (0.000101) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| 
metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037250 / 0.037411 (-0.000161) | 0.144151 / 0.014526 (0.129625) | 0.155733 / 0.176557 (-0.020824) | 0.214334 / 0.737135 (-0.522801) | 0.167124 / 0.296338 (-0.129214) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.686471 / 0.215209 (0.471262) | 6.749174 / 2.077655 (4.671520) | 3.024941 / 1.504120 (1.520821) | 2.553363 / 1.541195 (1.012168) | 2.679107 / 1.468490 (1.210617) | 1.317212 / 4.584777 (-3.267565) | 5.917575 / 3.745712 (2.171862) | 3.412715 / 5.269862 (-1.857146) | 2.203478 / 4.565676 (-2.362198) | 0.150387 / 0.424275 (-0.273888) | 0.015977 / 0.007607 (0.008370) | 0.862999 / 0.226044 (0.636954) | 8.706459 / 2.268929 (6.437530) | 3.762648 / 55.444624 (-51.681977) | 2.992544 / 6.876477 (-3.883933) | 3.135796 / 2.142072 (0.993724) | 1.504140 / 4.805227 (-3.301088) | 0.268265 / 6.500664 (-6.232399) | 0.083297 / 0.075469 (0.007828) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.690193 / 1.841788 (-0.151594) | 19.912854 / 8.074308 (11.838546) | 23.568217 / 10.191392 (13.376825) | 0.285125 / 0.680424 (-0.395299) | 0.030593 / 0.534201 (-0.503608) | 0.565305 / 0.579283 (-0.013978) | 0.659283 / 0.434364 (0.224919) | 0.678864 / 0.540337 (0.138527) | 0.793634 / 1.386936 (-0.593302) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9d0edbe3f3258b7e580d1b58c0eea6637b5e22b2 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | 
write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011615 / 0.011353 (0.000262) | 0.006716 / 0.011008 (-0.004292) | 0.146868 / 0.038508 (0.108360) | 0.037621 / 0.023109 (0.014512) | 0.425563 / 0.275898 (0.149664) | 0.483217 / 0.323480 (0.159737) | 0.007830 / 0.007986 (-0.000156) | 0.005940 / 0.004328 (0.001612) | 0.100771 / 0.004250 (0.096521) | 0.063907 / 0.037052 (0.026854) | 0.422993 / 0.258489 (0.164503) | 0.496514 / 0.293841 (0.202673) | 0.056004 / 0.128546 (-0.072542) | 0.021441 / 0.075646 (-0.054206) | 0.453589 / 0.419271 (0.034317) | 0.067555 / 0.043533 (0.024022) | 0.442490 / 0.255139 (0.187351) | 0.503941 / 0.283200 (0.220742) | 0.134023 / 0.141683 (-0.007660) | 1.886329 / 1.452155 (0.434175) | 2.030867 / 1.492716 (0.538150) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288063 / 0.018006 (0.270057) | 0.627177 / 0.000490 (0.626687) | 0.006335 / 0.000200 (0.006135) | 0.000171 / 0.000054 (0.000116) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032424 / 0.037411 (-0.004987) | 0.132749 / 0.014526 (0.118223) | 0.144727 / 0.176557 (-0.031829) | 0.232577 / 0.737135 (-0.504558) | 0.157315 / 0.296338 (-0.139024) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.623058 / 0.215209 (0.407849) | 6.272447 / 2.077655 (4.194792) | 2.506778 / 1.504120 (1.002658) | 2.203094 / 1.541195 (0.661899) | 2.346972 / 1.468490 (0.878482) | 1.358498 / 4.584777 (-3.226279) | 5.879670 / 3.745712 (2.133958) | 5.818406 / 5.269862 (0.548545) | 3.231936 / 4.565676 (-1.333741) | 0.154013 / 0.424275 (-0.270263) | 0.021541 / 0.007607 (0.013934) | 0.823746 / 0.226044 (0.597702) | 8.140304 / 2.268929 (5.871375) | 3.366911 / 55.444624 (-52.077714) | 2.696856 / 6.876477 (-4.179621) | 2.845743 / 2.142072 (0.703671) | 1.522363 / 4.805227 (-3.282864) | 0.278938 / 6.500664 (-6.221726) | 0.085044 / 0.075469 (0.009575) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.681348 / 1.841788 (-0.160440) | 19.686703 / 8.074308 (11.612395) | 22.995655 / 10.191392 (12.804263) | 0.218876 / 0.680424 (-0.461548) | 0.029334 / 0.534201 (-0.504867) | 0.560846 / 0.579283 (-0.018438) | 0.645210 / 0.434364 (0.210846) | 0.697842 / 0.540337 (0.157505) | 0.832875 / 1.386936 (-0.554061) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009509 / 0.011353 (-0.001844) | 0.006471 / 0.011008 (-0.004537) | 0.101477 / 0.038508 (0.062969) | 0.035281 / 0.023109 (0.012171) | 0.470032 / 0.275898 (0.194134) | 0.501475 / 0.323480 (0.177995) | 0.007641 / 0.007986 (-0.000344) | 0.006784 / 0.004328 (0.002455) | 0.096111 / 0.004250 (0.091861) | 0.055199 / 0.037052 (0.018146) | 0.470095 / 0.258489 (0.211606) | 0.530955 / 0.293841 (0.237114) | 0.056161 / 0.128546 (-0.072385) | 0.022055 / 0.075646 (-0.053591) | 0.121585 / 0.419271 (-0.297686) | 0.063736 / 0.043533 (0.020203) | 0.470771 / 0.255139 (0.215632) | 0.490546 / 0.283200 (0.207346) | 0.128825 / 0.141683 (-0.012858) | 1.898639 / 1.452155 (0.446484) | 2.052305 / 1.492716 (0.559589) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.322526 / 0.018006 (0.304520) | 0.628096 / 0.000490 (0.627607) | 0.006837 / 0.000200 (0.006637) | 0.000199 / 0.000054 (0.000145) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033830 / 0.037411 (-0.003581) | 0.136217 / 0.014526 (0.121691) | 0.147006 / 0.176557 (-0.029551) | 0.203950 / 0.737135 (-0.533185) | 0.150327 / 0.296338 (-0.146011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.654287 / 0.215209 (0.439078) | 6.430306 / 2.077655 (4.352651) | 2.881750 / 1.504120 (1.377630) | 2.489505 / 1.541195 (0.948310) | 2.543037 / 1.468490 (1.074547) | 1.226682 / 4.584777 
(-3.358094) | 5.902076 / 3.745712 (2.156364) | 3.335344 / 5.269862 (-1.934518) | 2.156738 / 4.565676 (-2.408939) | 0.151804 / 0.424275 (-0.272472) | 0.015238 / 0.007607 (0.007631) | 0.816364 / 0.226044 (0.590319) | 8.126367 / 2.268929 (5.857438) | 3.653222 / 55.444624 (-51.791402) | 2.886667 / 6.876477 (-3.989809) | 3.120852 / 2.142072 (0.978779) | 1.421423 / 4.805227 (-3.383804) | 0.264590 / 6.500664 (-6.236074) | 0.085716 / 0.075469 (0.010247) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.745258 / 1.841788 (-0.096530) | 19.379253 / 8.074308 (11.304945) | 23.827046 / 10.191392 (13.635654) | 0.267702 / 0.680424 (-0.412722) | 0.030253 / 0.534201 (-0.503948) | 0.542037 / 0.579283 (-0.037246) | 0.655946 / 0.434364 (0.221582) | 0.683525 / 0.540337 (0.143188) | 0.831333 / 1.386936 (-0.555603) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5b011a258329375aa4dc7b414bd4e7b6363c5357 \"CML watermark\")\n" ]
"2023-04-26T17:09:32Z"
"2023-04-26T17:49:03Z"
"2023-04-26T17:39:12Z"
MEMBER
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5795/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5795/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5795.diff", "html_url": "https://github.com/huggingface/datasets/pull/5795", "merged_at": "2023-04-26T17:39:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/5795.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5795" }
true
https://api.github.com/repos/huggingface/datasets/issues/5794
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5794/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5794/comments
https://api.github.com/repos/huggingface/datasets/issues/5794/events
https://github.com/huggingface/datasets/issues/5794
1,685,196,061
I_kwDODunzps5kcg0d
5,794
CI ZeroDivisionError
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[]
"2023-04-26T14:55:23Z"
"2023-04-26T14:55:23Z"
null
MEMBER
null
Sometimes when running our CI on Windows, we get a ZeroDivisionError: ``` FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore - ZeroDivisionError: float division by zero ``` See for example: - https://github.com/huggingface/datasets/actions/runs/4809358266/jobs/8560513110 - https://github.com/huggingface/datasets/actions/runs/4798359836/jobs/8536573688 ``` _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ split = 'test', start_time = 1682516718.8236516, num_samples = 2, num_steps = 1 def speed_metrics(split, start_time, num_samples=None, num_steps=None): """ Measure and return speed performance metrics. This function requires a time snapshot `start_time` before the operation to be measured starts and this function should be run immediately after the operation to be measured has completed. Args: - split: name to prefix metric (like train, eval, test...) - start_time: operation start time - num_samples: number of samples processed """ runtime = time.time() - start_time result = {f"{split}_runtime": round(runtime, 4)} if num_samples is not None: > samples_per_second = num_samples / runtime E ZeroDivisionError: float division by zero C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\transformers\trainer_utils.py:354: ZeroDivisionError ```
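For illustration only, a minimal sketch of how such a division could be guarded on the caller side. This is not the actual `transformers` fix; the helper name `speed_metrics_safe` is hypothetical, and only the argument names are taken from the traceback above. The likely trigger is that `time.time()` has coarse resolution on Windows, so two close calls can return the same timestamp and `runtime` becomes 0.0.

```python
import time

def speed_metrics_safe(split, start_time, num_samples=None, num_steps=None):
    # Same contract as the speed_metrics helper shown in the traceback above,
    # but it tolerates runtime == 0.0, which can happen on Windows when the
    # measured operation finishes within the clock resolution.
    runtime = time.time() - start_time
    result = {f"{split}_runtime": round(runtime, 4)}
    if runtime <= 0.0:
        # Not enough clock resolution to compute a meaningful throughput.
        return result
    if num_samples is not None:
        result[f"{split}_samples_per_second"] = round(num_samples / runtime, 3)
    if num_steps is not None:
        result[f"{split}_steps_per_second"] = round(num_steps / runtime, 3)
    return result
```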
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5794/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5794/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5793
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5793/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5793/comments
https://api.github.com/repos/huggingface/datasets/issues/5793/events
https://github.com/huggingface/datasets/issues/5793
1,684,777,320
I_kwDODunzps5ka6lo
5,793
IterableDataset.with_format("torch") not working
{ "avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4", "events_url": "https://api.github.com/users/jiangwy99/events{/privacy}", "followers_url": "https://api.github.com/users/jiangwy99/followers", "following_url": "https://api.github.com/users/jiangwy99/following{/other_user}", "gists_url": "https://api.github.com/users/jiangwy99/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jiangwy99", "id": 39762734, "login": "jiangwy99", "node_id": "MDQ6VXNlcjM5NzYyNzM0", "organizations_url": "https://api.github.com/users/jiangwy99/orgs", "received_events_url": "https://api.github.com/users/jiangwy99/received_events", "repos_url": "https://api.github.com/users/jiangwy99/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jiangwy99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiangwy99/subscriptions", "type": "User", "url": "https://api.github.com/users/jiangwy99" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "fef2c0", "default": false, "description": "", "id": 3287858981, "name": "streaming", "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "Hi ! Thanks for reporting, I'm working on it ;)" ]
"2023-04-26T10:50:23Z"
"2023-06-13T15:57:06Z"
"2023-06-13T15:57:06Z"
NONE
null
### Describe the bug After calling the with_format("torch") method on an IterableDataset instance, the data format is unchanged. ### Steps to reproduce the bug ```python from datasets import IterableDataset def gen(): for i in range(4): yield {"a": [i] * 4} dataset = IterableDataset.from_generator(gen).with_format("torch") next(iter(dataset)) ``` ### Expected behavior `{"a": torch.tensor([0, 0, 0, 0])}` is expected, but `{"a": [0, 0, 0, 0]}` is observed. ### Environment info ```bash platform==ubuntu 22.04.01 python==3.10.9 datasets==2.11.0 ```
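A possible interim workaround, sketched under the assumption that the formatting bug is still present: convert the values to tensors manually while iterating instead of relying on `with_format("torch")`. This is not the fix that later landed in `datasets`, just a stopgap.

```python
import torch
from datasets import IterableDataset

def gen():
    for i in range(4):
        yield {"a": [i] * 4}

dataset = IterableDataset.from_generator(gen)

# Build the tensors by hand, since with_format("torch") currently leaves
# plain Python lists as reported above.
for example in dataset:
    tensors = {key: torch.tensor(value) for key, value in example.items()}
    print(tensors)  # first example: {'a': tensor([0, 0, 0, 0])}
```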
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5793/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5793/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5791
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5791/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5791/comments
https://api.github.com/repos/huggingface/datasets/issues/5791/events
https://github.com/huggingface/datasets/issues/5791
1,683,473,943
I_kwDODunzps5kV8YX
5,791
TIFF/TIF support
{ "avatar_url": "https://avatars.githubusercontent.com/u/31293221?v=4", "events_url": "https://api.github.com/users/sebasmos/events{/privacy}", "followers_url": "https://api.github.com/users/sebasmos/followers", "following_url": "https://api.github.com/users/sebasmos/following{/other_user}", "gists_url": "https://api.github.com/users/sebasmos/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sebasmos", "id": 31293221, "login": "sebasmos", "node_id": "MDQ6VXNlcjMxMjkzMjIx", "organizations_url": "https://api.github.com/users/sebasmos/orgs", "received_events_url": "https://api.github.com/users/sebasmos/received_events", "repos_url": "https://api.github.com/users/sebasmos/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sebasmos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sebasmos/subscriptions", "type": "User", "url": "https://api.github.com/users/sebasmos" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "The issue with multichannel TIFF images has already been reported in Pillow (https://github.com/python-pillow/Pillow/issues/1888). We can't do much about it on our side.\r\n\r\nStill, to avoid the error, you can bypass the default Pillow decoding and define a custom one as follows:\r\n```python\r\nimport tifffile # pip install tifffile\r\n\r\ndset = dset.cast_column(\"image\", datasets.Image(decode=False))\r\n\r\ndef decode_mutlichannel_tiff(batch):\r\n batch[\"image\"] = [tifffile.imread(image[\"path\"]) for image in batch[\"image\"]]\r\n return batch\r\n\r\ndset.set_transform(decode_mutlichannel_tiff)\r\n```\r\n\r\nRegarding the annotations, in which format are they? In the COCO format? I think this is a bit too specific to have a built-in loader for it." ]
"2023-04-25T16:14:18Z"
"2023-05-05T16:22:50Z"
null
NONE
null
### Feature request I currently have a dataset (with TIFF and JSON files) where I have to do this: `wget path_to_data/images.zip && unzip images.zip` `wget path_to_data/annotations.zip && unzip annotations.zip` Would it make sense to contribute support for these types of files? ### Motivation Instead of using `load_dataset`, I have to use wget because these files are not supported: JSON for the annotations and TIFF for the images. In addition, the PIL decoding in datasets does not read the image channels of the TIFF format correctly, so multichannel support might be necessary as well (my data, for example, has more than 3 channels). ### Your contribution 1. Support multichannel TIFF images 2. Support JSON annotations
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5791/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5791/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5790
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5790/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5790/comments
https://api.github.com/repos/huggingface/datasets/issues/5790/events
https://github.com/huggingface/datasets/pull/5790
1,683,229,126
PR_kwDODunzps5PG0mJ
5,790
Allow to run CI on push to ci-branch
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007852 / 0.011353 (-0.003500) | 0.005804 / 0.011008 (-0.005204) | 0.098268 / 0.038508 (0.059760) | 0.036440 / 0.023109 (0.013331) | 0.299952 / 0.275898 (0.024054) | 0.335590 / 0.323480 (0.012111) | 0.006332 / 0.007986 (-0.001653) | 0.004218 / 0.004328 (-0.000110) | 0.074733 / 0.004250 (0.070483) | 0.055252 / 0.037052 (0.018200) | 0.300854 / 0.258489 (0.042365) | 0.353442 / 0.293841 (0.059601) | 0.036447 / 0.128546 (-0.092099) | 0.012638 / 0.075646 (-0.063009) | 0.336680 / 0.419271 (-0.082591) | 0.052436 / 0.043533 (0.008903) | 0.292606 / 0.255139 (0.037467) | 0.319676 / 0.283200 (0.036476) | 0.111137 / 0.141683 (-0.030546) | 1.449569 / 1.452155 (-0.002586) | 1.558110 / 1.492716 (0.065394) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.306043 / 0.018006 (0.288037) | 0.563174 / 0.000490 (0.562684) | 0.032227 / 0.000200 (0.032027) | 0.000491 / 0.000054 (0.000436) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029874 / 0.037411 (-0.007537) | 0.109330 / 0.014526 (0.094805) | 0.122579 / 0.176557 (-0.053978) | 0.181398 / 0.737135 (-0.555737) | 0.127124 / 0.296338 (-0.169215) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417950 / 0.215209 (0.202741) | 4.163883 / 2.077655 (2.086228) | 
1.985209 / 1.504120 (0.481089) | 1.793660 / 1.541195 (0.252465) | 1.895193 / 1.468490 (0.426703) | 0.694331 / 4.584777 (-3.890446) | 3.820170 / 3.745712 (0.074458) | 2.180556 / 5.269862 (-3.089305) | 1.490671 / 4.565676 (-3.075006) | 0.086132 / 0.424275 (-0.338143) | 0.012289 / 0.007607 (0.004682) | 0.511182 / 0.226044 (0.285137) | 5.117855 / 2.268929 (2.848927) | 2.403914 / 55.444624 (-53.040710) | 2.071107 / 6.876477 (-4.805369) | 2.184108 / 2.142072 (0.042036) | 0.835028 / 4.805227 (-3.970199) | 0.167707 / 6.500664 (-6.332957) | 0.066724 / 0.075469 (-0.008746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.203921 / 1.841788 (-0.637867) | 15.214676 / 8.074308 (7.140368) | 14.971337 / 10.191392 (4.779945) | 0.170225 / 0.680424 (-0.510199) | 0.017924 / 0.534201 (-0.516277) | 0.428532 / 0.579283 (-0.150751) | 0.449157 / 0.434364 (0.014793) | 0.507723 / 0.540337 (-0.032614) | 0.615331 / 1.386936 (-0.771605) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008172 / 0.011353 (-0.003181) | 0.005405 / 0.011008 (-0.005603) | 0.074684 / 0.038508 (0.036176) | 0.039133 / 0.023109 (0.016024) | 0.342598 / 0.275898 (0.066700) | 0.377752 / 0.323480 (0.054272) | 0.006655 / 0.007986 (-0.001331) | 0.005788 / 0.004328 (0.001459) | 0.074014 / 0.004250 (0.069763) | 0.056225 / 0.037052 (0.019173) | 0.342330 / 0.258489 (0.083841) | 0.381052 / 0.293841 (0.087211) | 0.036574 / 0.128546 (-0.091973) | 0.012472 / 0.075646 (-0.063174) | 0.087574 / 0.419271 (-0.331698) | 0.050178 / 0.043533 (0.006646) | 0.351116 / 0.255139 (0.095977) | 0.363772 / 0.283200 (0.080572) | 0.118313 / 0.141683 (-0.023370) | 1.436691 / 1.452155 (-0.015463) | 1.551397 / 1.492716 (0.058680) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265201 / 0.018006 (0.247195) | 0.561855 / 0.000490 (0.561366) | 0.000463 / 0.000200 (0.000263) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030540 / 0.037411 (-0.006871) | 0.118815 / 0.014526 (0.104289) | 0.127689 / 0.176557 (-0.048868) | 0.176211 / 0.737135 (-0.560924) | 0.133130 / 0.296338 (-0.163208) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416318 / 0.215209 (0.201109) | 4.146806 / 2.077655 (2.069151) | 1.983437 / 1.504120 (0.479317) | 1.799733 / 1.541195 (0.258539) | 1.889026 / 1.468490 (0.420536) | 0.723330 / 4.584777 (-3.861447) | 3.817795 / 3.745712 (0.072083) | 2.158449 / 5.269862 (-3.111413) | 1.377348 / 4.565676 (-3.188328) | 0.088504 / 0.424275 (-0.335771) | 0.012560 / 0.007607 (0.004953) | 0.530382 / 0.226044 (0.304337) | 5.308529 / 2.268929 (3.039600) | 2.469655 / 55.444624 (-52.974970) | 2.136209 / 6.876477 (-4.740267) | 2.322997 / 2.142072 (0.180924) | 0.861396 / 4.805227 (-3.943831) | 0.172747 / 6.500664 (-6.327917) | 0.067617 / 0.075469 (-0.007852) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263225 / 1.841788 (-0.578563) | 15.878025 / 8.074308 (7.803717) | 14.815627 / 10.191392 (4.624235) | 0.148722 / 0.680424 (-0.531702) | 0.018071 / 0.534201 (-0.516130) | 0.428389 / 0.579283 (-0.150894) | 0.428635 / 0.434364 (-0.005729) | 0.496953 / 0.540337 (-0.043385) | 0.592783 / 1.386936 (-0.794153) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d2e5568dc7a47f9a99678d2889bd2e3c33afdd00 \"CML watermark\")\n" ]
"2023-04-25T13:57:26Z"
"2023-04-26T13:43:08Z"
"2023-04-26T13:35:47Z"
MEMBER
null
This PR allows running the CI on push to a branch named "ci-*", without needing to open a PR. - This will allow running CI tests without opening a PR, e.g., for future `huggingface-hub` releases or future dependency releases (like `fsspec`, `pandas`,...) Note that for building the documentation, we already allow this on push to a branch named "doc-builder*". See: - #5788 CC: @Wauplin
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5790/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5790/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5790.diff", "html_url": "https://github.com/huggingface/datasets/pull/5790", "merged_at": "2023-04-26T13:35:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/5790.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5790" }
true
https://api.github.com/repos/huggingface/datasets/issues/5789
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5789/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5789/comments
https://api.github.com/repos/huggingface/datasets/issues/5789/events
https://github.com/huggingface/datasets/issues/5789
1,682,611,179
I_kwDODunzps5kSpvr
5,789
Support streaming datasets that use jsonlines
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
"2023-04-25T07:40:02Z"
"2023-04-25T07:40:03Z"
null
MEMBER
null
Extend support for streaming datasets that use `jsonlines.open`. Currently, if `jsonlines` is installed, `datasets` raises a `FileNotFoundError`: ``` FileNotFoundError: [Errno 2] No such file or directory: 'https://...' ``` See: - https://huggingface.co/datasets/masakhane/afriqa/discussions/1
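As a stopgap until this is supported, the JSON Lines file can be read without `jsonlines`, since plain line-by-line reading streams fine over HTTP. A hedged sketch using `fsspec` (the URL below is a placeholder, not the dataset linked above):

```python
import json
import fsspec

# Placeholder URL: replace with the actual remote .jsonl file.
url = "https://example.com/data.jsonl"

# fsspec opens remote files lazily, so the file is not downloaded in full
# before reading; each non-empty line is one JSON object.
with fsspec.open(url, "rt") as f:
    for line in f:
        line = line.strip()
        if line:
            record = json.loads(line)
            print(record)
```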
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5789/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5789/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5788
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5788/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5788/comments
https://api.github.com/repos/huggingface/datasets/issues/5788/events
https://github.com/huggingface/datasets/pull/5788
1,681,136,256
PR_kwDODunzps5O_v4B
5,788
Prepare tests for hfh 0.14
{ "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Wauplin", "id": 11801849, "login": "Wauplin", "node_id": "MDQ6VXNlcjExODAxODQ5", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "repos_url": "https://api.github.com/users/Wauplin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "type": "User", "url": "https://api.github.com/users/Wauplin" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007343 / 0.011353 (-0.004010) | 0.005145 / 0.011008 (-0.005863) | 0.099820 / 0.038508 (0.061312) | 0.033487 / 0.023109 (0.010378) | 0.313069 / 0.275898 (0.037171) | 0.335420 / 0.323480 (0.011940) | 0.005959 / 0.007986 (-0.002027) | 0.005373 / 0.004328 (0.001044) | 0.076568 / 0.004250 (0.072317) | 0.048702 / 0.037052 (0.011650) | 0.322957 / 0.258489 (0.064468) | 0.363044 / 0.293841 (0.069203) | 0.035070 / 0.128546 (-0.093476) | 0.012029 / 0.075646 (-0.063618) | 0.334664 / 0.419271 (-0.084607) | 0.050549 / 0.043533 (0.007017) | 0.310113 / 0.255139 (0.054974) | 0.324405 / 0.283200 (0.041205) | 0.097596 / 0.141683 (-0.044087) | 1.440741 / 1.452155 (-0.011414) | 1.531194 / 1.492716 (0.038478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220799 / 0.018006 (0.202793) | 0.438158 / 0.000490 (0.437668) | 0.007737 / 0.000200 (0.007537) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026888 / 0.037411 (-0.010523) | 0.106281 / 0.014526 (0.091755) | 0.117419 / 0.176557 (-0.059138) | 0.179144 / 0.737135 (-0.557992) | 0.122477 / 0.296338 (-0.173861) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412667 / 0.215209 (0.197458) | 4.108784 / 2.077655 (2.031129) | 
1.834300 / 1.504120 (0.330180) | 1.627256 / 1.541195 (0.086061) | 1.691036 / 1.468490 (0.222546) | 0.713405 / 4.584777 (-3.871372) | 3.839262 / 3.745712 (0.093550) | 2.108453 / 5.269862 (-3.161408) | 1.340740 / 4.565676 (-3.224936) | 0.087776 / 0.424275 (-0.336499) | 0.012730 / 0.007607 (0.005123) | 0.505323 / 0.226044 (0.279279) | 5.085176 / 2.268929 (2.816247) | 2.307165 / 55.444624 (-53.137459) | 1.936771 / 6.876477 (-4.939706) | 2.097391 / 2.142072 (-0.044681) | 0.856215 / 4.805227 (-3.949012) | 0.171826 / 6.500664 (-6.328838) | 0.066603 / 0.075469 (-0.008866) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.202126 / 1.841788 (-0.639661) | 15.173598 / 8.074308 (7.099290) | 15.012645 / 10.191392 (4.821253) | 0.162187 / 0.680424 (-0.518237) | 0.017462 / 0.534201 (-0.516739) | 0.423895 / 0.579283 (-0.155388) | 0.432010 / 0.434364 (-0.002354) | 0.503234 / 0.540337 (-0.037104) | 0.598948 / 1.386936 (-0.787988) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007099 / 0.011353 (-0.004254) | 0.005167 / 0.011008 (-0.005841) | 0.075551 / 0.038508 (0.037043) | 0.033050 / 0.023109 (0.009940) | 0.339629 / 0.275898 (0.063731) | 0.380486 / 0.323480 (0.057006) | 0.005776 / 0.007986 (-0.002209) | 0.004029 / 0.004328 (-0.000299) | 0.075074 / 0.004250 (0.070823) | 0.046709 / 0.037052 (0.009656) | 0.340203 / 0.258489 (0.081714) | 0.380849 / 0.293841 (0.087008) | 0.035027 / 0.128546 (-0.093519) | 0.012226 / 0.075646 (-0.063420) | 0.087525 / 0.419271 (-0.331747) | 0.049361 / 0.043533 (0.005828) | 0.341854 / 0.255139 (0.086715) | 0.359590 / 0.283200 (0.076390) | 0.100102 / 0.141683 (-0.041581) | 1.482759 / 1.452155 (0.030605) | 1.569905 / 1.492716 (0.077189) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213615 / 0.018006 (0.195609) | 0.441117 / 0.000490 (0.440628) | 0.004932 / 0.000200 (0.004732) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031313 / 0.037411 (-0.006098) | 0.110191 / 0.014526 (0.095665) | 0.125320 / 0.176557 (-0.051237) | 0.177658 / 0.737135 (-0.559477) | 0.127928 / 0.296338 (-0.168410) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426952 / 0.215209 (0.211743) | 4.247731 / 2.077655 (2.170076) | 2.107318 / 1.504120 (0.603198) | 1.843845 / 1.541195 (0.302650) | 1.894822 / 1.468490 (0.426332) | 0.696232 / 4.584777 (-3.888545) | 3.826516 / 3.745712 (0.080804) | 2.126688 / 5.269862 (-3.143174) | 1.327062 / 4.565676 (-3.238615) | 0.085693 / 0.424275 (-0.338582) | 0.012226 / 0.007607 (0.004619) | 0.521904 / 0.226044 (0.295859) | 5.219798 / 2.268929 (2.950869) | 2.524908 / 55.444624 (-52.919716) | 2.212078 / 6.876477 (-4.664399) | 2.373944 / 2.142072 (0.231871) | 0.833846 / 4.805227 (-3.971381) | 0.169639 / 6.500664 (-6.331025) | 0.064538 / 0.075469 (-0.010931) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254930 / 1.841788 (-0.586858) | 15.585277 / 8.074308 (7.510969) | 14.762857 / 10.191392 (4.571465) | 0.146959 / 0.680424 (-0.533465) | 0.017451 / 0.534201 (-0.516750) | 0.424469 / 0.579283 (-0.154814) | 0.422359 / 0.434364 (-0.012004) | 0.489930 / 0.540337 (-0.050408) | 0.595856 / 1.386936 (-0.791080) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#213c72f52ae52b662f967d3218f66c70a3043048 \"CML watermark\")\n", "@albertvillanova thanks for the review. As you prefer for the github CI config. I just took it from @lhoestq's branch when testing hfh==0.14.0. I think it's still relevant for next releases. 
In any case, I let you handle merging the PR :)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008371 / 0.011353 (-0.002982) | 0.005210 / 0.011008 (-0.005798) | 0.105639 / 0.038508 (0.067131) | 0.045903 / 0.023109 (0.022794) | 0.391231 / 0.275898 (0.115333) | 0.438824 / 0.323480 (0.115345) | 0.006270 / 0.007986 (-0.001715) | 0.005950 / 0.004328 (0.001621) | 0.079685 / 0.004250 (0.075434) | 0.052121 / 0.037052 (0.015069) | 0.387787 / 0.258489 (0.129298) | 0.434322 / 0.293841 (0.140481) | 0.032598 / 0.128546 (-0.095948) | 0.012126 / 0.075646 (-0.063520) | 0.359658 / 0.419271 (-0.059613) | 0.046686 / 0.043533 (0.003154) | 0.391973 / 0.255139 (0.136834) | 0.421149 / 0.283200 (0.137949) | 0.105920 / 0.141683 (-0.035763) | 1.483008 / 1.452155 (0.030854) | 1.617010 / 1.492716 (0.124294) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199111 / 0.018006 (0.181105) | 0.407995 / 0.000490 (0.407505) | 0.006706 / 0.000200 (0.006506) | 0.000229 / 0.000054 (0.000175) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030247 / 0.037411 (-0.007164) | 0.115977 / 0.014526 (0.101451) | 0.118112 / 0.176557 (-0.058444) | 0.182710 / 0.737135 (-0.554426) | 0.122483 / 0.296338 (-0.173855) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430455 / 0.215209 (0.215246) | 4.314298 / 2.077655 (2.236643) | 1.898124 / 1.504120 (0.394005) | 
1.734909 / 1.541195 (0.193715) | 1.802400 / 1.468490 (0.333910) | 0.717237 / 4.584777 (-3.867539) | 4.004705 / 3.745712 (0.258993) | 2.138901 / 5.269862 (-3.130960) | 1.254037 / 4.565676 (-3.311640) | 0.085594 / 0.424275 (-0.338681) | 0.013774 / 0.007607 (0.006166) | 0.535218 / 0.226044 (0.309174) | 5.373730 / 2.268929 (3.104801) | 2.371194 / 55.444624 (-53.073430) | 2.111206 / 6.876477 (-4.765270) | 2.225137 / 2.142072 (0.083064) | 0.838325 / 4.805227 (-3.966902) | 0.159176 / 6.500664 (-6.341488) | 0.072285 / 0.075469 (-0.003184) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.352232 / 1.841788 (-0.489555) | 16.926722 / 8.074308 (8.852414) | 16.709531 / 10.191392 (6.518139) | 0.159249 / 0.680424 (-0.521175) | 0.017667 / 0.534201 (-0.516534) | 0.426894 / 0.579283 (-0.152390) | 0.539903 / 0.434364 (0.105539) | 0.537471 / 0.540337 (-0.002866) | 0.619592 / 1.386936 (-0.767344) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008354 / 0.011353 (-0.002999) | 0.005366 / 0.011008 (-0.005642) | 0.080961 / 0.038508 (0.042453) | 0.046574 / 0.023109 (0.023465) | 0.345949 / 0.275898 (0.070051) | 0.394041 / 0.323480 (0.070562) | 0.006209 / 0.007986 (-0.001777) | 0.005980 / 0.004328 (0.001651) | 0.076235 / 0.004250 (0.071984) | 0.051833 / 0.037052 (0.014780) | 0.348786 / 0.258489 (0.090297) | 0.397421 / 0.293841 (0.103580) | 0.033026 / 0.128546 (-0.095520) | 0.012217 / 0.075646 (-0.063429) | 0.087439 / 0.419271 (-0.331832) | 0.045488 / 0.043533 (0.001955) | 0.352160 / 0.255139 (0.097021) | 0.379079 / 0.283200 (0.095879) | 0.116111 / 0.141683 (-0.025572) | 1.470177 / 1.452155 (0.018022) | 1.587499 / 1.492716 (0.094783) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296149 / 0.018006 (0.278143) | 0.592362 / 0.000490 (0.591872) | 0.000492 / 0.000200 (0.000292) | 0.000064 / 0.000054 (0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | 
shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036599 / 0.037411 (-0.000813) | 0.113768 / 0.014526 (0.099242) | 0.116198 / 0.176557 (-0.060358) | 0.180329 / 0.737135 (-0.556806) | 0.123942 / 0.296338 (-0.172396) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452445 / 0.215209 (0.237236) | 4.504330 / 2.077655 (2.426675) | 2.275645 / 1.504120 (0.771525) | 2.107765 / 1.541195 (0.566571) | 2.086363 / 1.468490 (0.617873) | 0.723721 / 4.584777 (-3.861056) | 3.825330 / 3.745712 (0.079618) | 2.162743 / 5.269862 (-3.107119) | 1.255953 / 4.565676 (-3.309724) | 0.085860 / 0.424275 (-0.338415) | 0.013790 / 0.007607 (0.006183) | 0.560257 / 0.226044 (0.334213) | 5.618180 / 2.268929 (3.349251) | 2.625423 / 55.444624 (-52.819202) | 2.374381 / 6.876477 (-4.502095) | 2.496560 / 2.142072 (0.354488) | 0.841120 / 4.805227 (-3.964107) | 0.161541 / 6.500664 (-6.339123) | 0.075270 / 0.075469 (-0.000199) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.432916 / 1.841788 (-0.408872) | 14.858534 / 8.074308 (6.784226) | 14.973521 / 10.191392 (4.782129) | 0.148312 / 0.680424 (-0.532112) | 0.016811 / 0.534201 (-0.517390) | 0.382623 / 0.579283 (-0.196660) | 0.389767 / 0.434364 (-0.044596) | 0.449657 / 0.540337 (-0.090680) | 0.533723 / 1.386936 (-0.853214) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f8344350f15265a585188ac986ae49a8ed8289fe \"CML watermark\")\n", "I agree it is good to have a way to run the CI on push, without needing to open a PR.\r\n\r\nBut I think the branch name should be more generic (and this is not specific to this PR). 
See:\r\n- #5790 ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007208 / 0.011353 (-0.004145) | 0.005600 / 0.011008 (-0.005408) | 0.096129 / 0.038508 (0.057621) | 0.027834 / 0.023109 (0.004725) | 0.295106 / 0.275898 (0.019208) | 0.323983 / 0.323480 (0.000503) | 0.005164 / 0.007986 (-0.002822) | 0.003962 / 0.004328 (-0.000366) | 0.078339 / 0.004250 (0.074089) | 0.036974 / 0.037052 (-0.000078) | 0.310315 / 0.258489 (0.051826) | 0.338036 / 0.293841 (0.044195) | 0.042124 / 0.128546 (-0.086422) | 0.015886 / 0.075646 (-0.059760) | 0.337961 / 0.419271 (-0.081310) | 0.051507 / 0.043533 (0.007974) | 0.297505 / 0.255139 (0.042366) | 0.310728 / 0.283200 (0.027528) | 0.086312 / 0.141683 (-0.055371) | 1.356923 / 1.452155 (-0.095232) | 1.429366 / 1.492716 (-0.063350) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205495 / 0.018006 (0.187489) | 0.460639 / 0.000490 (0.460149) | 0.003996 / 0.000200 (0.003796) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021970 / 0.037411 (-0.015442) | 0.090283 / 0.014526 (0.075757) | 0.098579 / 0.176557 (-0.077978) | 0.160437 / 0.737135 (-0.576699) | 0.102738 / 0.296338 (-0.193600) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.494474 / 0.215209 (0.279265) | 4.967453 / 2.077655 (2.889799) | 2.045852 / 1.504120 (0.541732) | 1.858022 / 1.541195 (0.316827) | 
1.771874 / 1.468490 (0.303384) | 1.186368 / 4.584777 (-3.398408) | 4.974762 / 3.745712 (1.229050) | 2.616225 / 5.269862 (-2.653636) | 1.702971 / 4.565676 (-2.862705) | 0.124929 / 0.424275 (-0.299346) | 0.011774 / 0.007607 (0.004167) | 0.569643 / 0.226044 (0.343598) | 5.793114 / 2.268929 (3.524186) | 2.441561 / 55.444624 (-53.003064) | 1.862233 / 6.876477 (-5.014243) | 1.931142 / 2.142072 (-0.210931) | 1.148915 / 4.805227 (-3.656313) | 0.203914 / 6.500664 (-6.296750) | 0.062468 / 0.075469 (-0.013001) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188708 / 1.841788 (-0.653080) | 13.710830 / 8.074308 (5.636522) | 15.695153 / 10.191392 (5.503761) | 0.171467 / 0.680424 (-0.508957) | 0.024509 / 0.534201 (-0.509692) | 0.450270 / 0.579283 (-0.129014) | 0.500712 / 0.434364 (0.066348) | 0.488632 / 0.540337 (-0.051706) | 0.574893 / 1.386936 (-0.812043) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007254 / 0.011353 (-0.004099) | 0.006199 / 0.011008 (-0.004809) | 0.072079 / 0.038508 (0.033571) | 0.026909 / 0.023109 (0.003800) | 0.355538 / 0.275898 (0.079640) | 0.358625 / 0.323480 (0.035145) | 0.005564 / 0.007986 (-0.002421) | 0.005278 / 0.004328 (0.000950) | 0.076469 / 0.004250 (0.072219) | 0.038269 / 0.037052 (0.001216) | 0.355214 / 0.258489 (0.096725) | 0.383219 / 0.293841 (0.089378) | 0.046516 / 0.128546 (-0.082030) | 0.015393 / 0.075646 (-0.060254) | 0.088506 / 0.419271 (-0.330765) | 0.050326 / 0.043533 (0.006793) | 0.327265 / 0.255139 (0.072126) | 0.370176 / 0.283200 (0.086976) | 0.102438 / 0.141683 (-0.039245) | 1.378969 / 1.452155 (-0.073186) | 1.441998 / 1.492716 (-0.050719) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209044 / 0.018006 (0.191038) | 0.455733 / 0.000490 (0.455243) | 0.005856 / 0.000200 (0.005656) | 0.000116 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025336 / 0.037411 (-0.012075) | 0.097449 / 0.014526 (0.082923) | 0.106301 / 0.176557 (-0.070255) | 0.153053 / 0.737135 (-0.584082) | 0.107938 / 0.296338 (-0.188401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.491070 / 0.215209 (0.275861) | 5.049637 / 2.077655 (2.971982) | 2.064709 / 1.504120 (0.560589) | 1.782266 / 1.541195 (0.241072) | 1.798570 / 1.468490 (0.330080) | 0.988886 / 4.584777 (-3.595891) | 4.690324 / 3.745712 (0.944612) | 4.317355 / 5.269862 (-0.952507) | 2.347596 / 4.565676 (-2.218081) | 0.117249 / 0.424275 (-0.307026) | 0.011614 / 0.007607 (0.004007) | 0.630033 / 0.226044 (0.403988) | 6.140108 / 2.268929 (3.871180) | 2.638080 / 55.444624 (-52.806545) | 2.133017 / 6.876477 (-4.743459) | 2.123392 / 2.142072 (-0.018680) | 1.178056 / 4.805227 (-3.627171) | 0.209465 / 6.500664 (-6.291199) | 0.063234 / 0.075469 (-0.012235) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.238089 / 1.841788 (-0.603699) | 14.066866 / 8.074308 (5.992558) | 16.225480 / 10.191392 (6.034088) | 0.206466 / 0.680424 (-0.473958) | 0.027279 / 0.534201 (-0.506922) | 0.443006 / 0.579283 (-0.136277) | 0.509512 / 0.434364 (0.075148) | 0.479075 / 0.540337 (-0.061263) | 0.573546 / 1.386936 (-0.813390) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c6015a070c66a5bbd84603d415ccc57cb668b44b \"CML watermark\")\n" ]
"2023-04-24T12:13:03Z"
"2023-04-25T14:32:56Z"
"2023-04-25T14:25:30Z"
CONTRIBUTOR
null
Related to the coming release of `huggingface_hub==0.14.0`. It will break some internal tests. The PR fixes these tests. Let's double-check the CI but I expect the fixed tests to be running fine with both `hfh<=0.13.4` and `hfh==0.14`. Worst case scenario, existing PRs will have to be rebased once this fix is merged. See related [discussion](https://huggingface.slack.com/archives/C02V5EA0A95/p1682337463368609?thread_ts=1681994202.635609&cid=C02V5EA0A95) (private slack). cc @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5788/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5788/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5788.diff", "html_url": "https://github.com/huggingface/datasets/pull/5788", "merged_at": "2023-04-25T14:25:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/5788.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5788" }
true
https://api.github.com/repos/huggingface/datasets/issues/5787
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5787/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5787/comments
https://api.github.com/repos/huggingface/datasets/issues/5787/events
https://github.com/huggingface/datasets/pull/5787
1,680,965,959
PR_kwDODunzps5O_KNU
5,787
Fix inferring module for unsupported data files
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I think you can revert the last commit - it should fail if data_files={} IMO", "The validation of non-empty data_files is addressed in this PR:\r\n- #5802", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008622 / 0.011353 (-0.002730) | 0.005970 / 0.011008 (-0.005038) | 0.117797 / 0.038508 (0.079289) | 0.040955 / 0.023109 (0.017846) | 0.419538 / 0.275898 (0.143640) | 0.455816 / 0.323480 (0.132336) | 0.006481 / 0.007986 (-0.001505) | 0.004507 / 0.004328 (0.000178) | 0.089073 / 0.004250 (0.084822) | 0.052389 / 0.037052 (0.015337) | 0.420053 / 0.258489 (0.161564) | 0.466886 / 0.293841 (0.173045) | 0.042660 / 0.128546 (-0.085886) | 0.014673 / 0.075646 (-0.060973) | 0.411229 / 0.419271 (-0.008042) | 0.076993 / 0.043533 (0.033460) | 0.431693 / 0.255139 (0.176554) | 0.446283 / 0.283200 (0.163084) | 0.131408 / 0.141683 (-0.010275) | 1.820339 / 1.452155 (0.368184) | 1.952946 / 1.492716 (0.460230) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246543 / 0.018006 (0.228537) | 0.489806 / 0.000490 (0.489317) | 0.013999 / 0.000200 (0.013800) | 0.000323 / 0.000054 (0.000269) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032541 / 0.037411 (-0.004870) | 0.130569 / 0.014526 (0.116043) | 0.139630 / 0.176557 (-0.036926) | 0.217018 / 0.737135 (-0.520118) | 0.147914 / 0.296338 (-0.148425) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.494767 / 0.215209 (0.279558) | 4.949313 / 2.077655 (2.871658) | 2.277023 / 1.504120 (0.772903) | 2.036677 / 1.541195 (0.495482) | 2.064461 / 1.468490 (0.595970) | 0.842484 / 4.584777 (-3.742293) | 4.720646 / 3.745712 (0.974934) | 4.025673 / 5.269862 (-1.244189) | 2.198606 / 4.565676 (-2.367070) | 0.103042 / 0.424275 (-0.321233) | 0.014794 / 0.007607 (0.007187) | 0.617867 / 0.226044 (0.391822) | 6.197146 / 2.268929 (3.928218) | 2.804927 / 55.444624 (-52.639697) | 2.426420 / 6.876477 (-4.450057) | 2.515182 / 2.142072 (0.373109) | 1.008098 / 4.805227 (-3.797129) | 0.204982 / 6.500664 (-6.295682) | 0.078643 / 0.075469 (0.003174) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.490790 / 1.841788 (-0.350997) | 17.268042 / 8.074308 (9.193734) | 17.129647 / 10.191392 (6.938255) | 0.170351 / 0.680424 (-0.510073) | 0.021317 / 0.534201 (-0.512884) | 0.517068 / 0.579283 (-0.062215) | 0.500200 / 0.434364 (0.065836) | 0.641974 / 0.540337 (0.101637) | 0.763984 / 1.386936 (-0.622952) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008358 / 0.011353 (-0.002995) | 0.005710 / 0.011008 (-0.005298) | 0.091077 / 0.038508 (0.052569) | 0.040413 / 0.023109 (0.017303) | 0.416634 / 0.275898 (0.140736) | 0.451122 / 0.323480 (0.127642) | 0.006417 / 0.007986 (-0.001569) | 0.004360 / 0.004328 (0.000032) | 0.089543 / 0.004250 (0.085292) | 0.051137 / 0.037052 (0.014085) | 0.420228 / 0.258489 (0.161739) | 0.458649 / 0.293841 (0.164808) | 0.041828 / 0.128546 (-0.086718) | 0.014268 / 0.075646 (-0.061379) | 0.105301 / 0.419271 (-0.313970) | 0.058931 / 0.043533 (0.015398) | 0.413445 / 0.255139 (0.158306) | 0.443882 / 0.283200 (0.160682) | 0.124946 / 0.141683 (-0.016737) | 1.842259 / 1.452155 (0.390104) | 1.948162 / 1.492716 (0.455445) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.235799 / 0.018006 (0.217792) | 0.487667 / 0.000490 (0.487177) | 0.001112 / 0.000200 (0.000912) | 0.000094 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034233 / 0.037411 (-0.003178) | 0.136593 / 0.014526 (0.122068) | 0.145598 / 0.176557 (-0.030959) | 0.206545 / 0.737135 (-0.530590) | 0.150781 / 0.296338 (-0.145558) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.522345 / 0.215209 (0.307136) | 5.192092 / 2.077655 (3.114438) | 2.543182 / 1.504120 (1.039062) | 2.285212 / 1.541195 (0.744018) | 2.312803 / 1.468490 (0.844313) | 0.859334 / 4.584777 (-3.725443) | 4.620235 / 3.745712 (0.874523) | 3.964060 / 5.269862 (-1.305802) | 2.046347 / 4.565676 (-2.519330) | 0.105284 / 0.424275 (-0.318991) | 0.015051 / 0.007607 (0.007444) | 0.646530 / 0.226044 (0.420485) | 6.386396 / 2.268929 (4.117467) | 3.131833 / 55.444624 (-52.312791) | 2.761898 / 6.876477 (-4.114579) | 2.833216 / 2.142072 (0.691143) | 1.026024 / 4.805227 (-3.779204) | 0.206776 / 6.500664 (-6.293888) | 0.078845 / 0.075469 (0.003376) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.580851 / 1.841788 (-0.260937) | 17.826213 / 8.074308 (9.751905) | 16.929460 / 10.191392 (6.738068) | 0.232483 / 0.680424 (-0.447941) | 0.021123 / 0.534201 (-0.513078) | 0.522196 / 0.579283 (-0.057087) | 0.503495 / 0.434364 (0.069131) | 0.622777 / 0.540337 (0.082440) | 0.753272 / 1.386936 (-0.633664) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3f9dfbd93707665132abc862b14bb9b50597b739 \"CML watermark\")\n" ]
"2023-04-24T10:44:50Z"
"2023-04-27T13:06:01Z"
"2023-04-27T12:57:28Z"
MEMBER
null
This PR raises a FileNotFoundError instead: ``` FileNotFoundError: No (supported) data files or dataset script found in <dataset_name> ``` Fix #5785.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5787/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5787/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5787.diff", "html_url": "https://github.com/huggingface/datasets/pull/5787", "merged_at": "2023-04-27T12:57:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/5787.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5787" }
true
https://api.github.com/repos/huggingface/datasets/issues/5786
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5786/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5786/comments
https://api.github.com/repos/huggingface/datasets/issues/5786/events
https://github.com/huggingface/datasets/issues/5786
1,680,957,070
I_kwDODunzps5kMV6O
5,786
Multiprocessing in a `filter` or `map` function with a Pytorch model
{ "avatar_url": "https://avatars.githubusercontent.com/u/44556846?v=4", "events_url": "https://api.github.com/users/HugoLaurencon/events{/privacy}", "followers_url": "https://api.github.com/users/HugoLaurencon/followers", "following_url": "https://api.github.com/users/HugoLaurencon/following{/other_user}", "gists_url": "https://api.github.com/users/HugoLaurencon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HugoLaurencon", "id": 44556846, "login": "HugoLaurencon", "node_id": "MDQ6VXNlcjQ0NTU2ODQ2", "organizations_url": "https://api.github.com/users/HugoLaurencon/orgs", "received_events_url": "https://api.github.com/users/HugoLaurencon/received_events", "repos_url": "https://api.github.com/users/HugoLaurencon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HugoLaurencon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HugoLaurencon/subscriptions", "type": "User", "url": "https://api.github.com/users/HugoLaurencon" }
[]
closed
false
null
[]
null
[ "Hi ! PyTorch may hang when calling `load_state_dict()` in a subprocess. To fix that, set the multiprocessing start method to \"spawn\". Since `datasets` uses `multiprocess`, you should do:\r\n\r\n```python\r\n# Required to avoid issues with pytorch (otherwise hangs during load_state_dict in multiprocessing)\r\nimport multiprocess.context as ctx\r\nctx._force_start_method('spawn')\r\n```\r\n\r\nAlso make sure to run your main code in `if __name__ == \"__main__\":` to avoid issues with python multiprocesing", "Thanks!", "@lhoestq Hello, I also encountered this problem but maybe with another reason. Here is my code:\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir, model_max_length=training_args.model_max_length)\r\ndata = load_dataset(\"json\", data_files=data_args.train_file, cache_dir=data_args.data_cache_dir)\r\ndef func(samples):\r\n # main operation\r\n for sentence_value in samples:\r\n sentence_ids = tokenizer.encode(sentence_value, add_special_tokens=False, max_length=tokenizer.model_max_length, truncation=True)\r\n ... ...\r\ntrain_data = data[\"train\"].shuffle().map(func, num_proc=os.cpu_count())\r\n```\r\nIt hangs after the progress reaches 100%. Could you help me point out the reason?", "@SkyAndCloud your issue doesn't seem related to the original post - could you open a new issue and provide more details ? (size of the dataset, number of cpus, how much time it took to run, `datasets` version)", "@lhoestq Hi, I just solved this problem. Because the input is extremely long and the tokenizer requests a large amount of memory, which leads to a OOM error and may eventually causes the hang. I didn't filter those too-long sentences because I thought `tokenizer` would stop once the length exceeds the `max_length`. However, it actually firstly complete the tokenization of entire sentence and then truncate it." ]
"2023-04-24T10:38:07Z"
"2023-05-30T09:56:30Z"
"2023-04-24T10:43:58Z"
MEMBER
null
### Describe the bug I am trying to use a Pytorch model loaded on CPUs with multiple processes with a `.map` or a `.filter` method. Usually, when dealing with models that are non-pickable, creating a class such that the `map` function is the method `__call__`, and adding `reduce` helps to solve the problem. However, here, the command hangs without throwing an error. ### Steps to reproduce the bug ``` from datasets import Dataset import torch from torch import nn from torchvision import models ​ ​ class FilterFunction: #__slots__ = ("path_model", "model") # Doesn't change anything uncommented def __init__(self, path_model): self.path_model = path_model model = models.resnet50() model.fc = nn.Sequential( nn.Linear(2048, 512), nn.ReLU(), nn.Dropout(0.2), nn.Linear(512, 10), nn.LogSoftmax(dim=1) ) model.load_state_dict(torch.load(path_model, map_location=torch.device("cpu"))) model.eval() self.model = model def __call__(self, batch): return [True] * len(batch["id"]) # Comment this to have an error def __reduce__(self): return (self.__class__, (self.path_model,)) ​ ​ dataset = Dataset.from_dict({"id": [0, 1, 2, 4]}) ​ # Download (100 MB) at https://github.com/emiliantolo/pytorch_nsfw_model/raw/master/ResNet50_nsfw_model.pth path_model = "/fsx/hugo/nsfw_image/ResNet50_nsfw_model.pth" ​ filter_function = FilterFunction(path_model=path_model) ​ # Works filtered_dataset = dataset.filter(filter_function, num_proc=1, batched=True, batch_size=2) # Doesn't work filtered_dataset = dataset.filter(filter_function, num_proc=2, batched=True, batch_size=2) ``` ### Expected behavior The command `filtered_dataset = dataset.filter(filter_function, num_proc=2, batched=True, batch_size=2)` should work and not hang. ### Environment info Datasets: 2.11.0 Pyarrow: 11.0.0 Ubuntu
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5786/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5786/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5785
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5785/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5785/comments
https://api.github.com/repos/huggingface/datasets/issues/5785/events
https://github.com/huggingface/datasets/issues/5785
1,680,956,964
I_kwDODunzps5kMV4k
5,785
Unsupported data files raise TypeError: 'NoneType' object is not iterable
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
"2023-04-24T10:38:03Z"
"2023-04-27T12:57:30Z"
"2023-04-27T12:57:30Z"
MEMBER
null
Currently, we raise a TypeError for unsupported data files: ``` TypeError: 'NoneType' object is not iterable ``` See: - https://github.com/huggingface/datasets-server/issues/1073 We should give a more informative error message.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5785/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5785/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5784
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5784/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5784/comments
https://api.github.com/repos/huggingface/datasets/issues/5784/events
https://github.com/huggingface/datasets/pull/5784
1,680,950,726
PR_kwDODunzps5O_G9S
5,784
Raise subprocesses traceback when interrupting
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008959 / 0.011353 (-0.002394) | 0.005804 / 0.011008 (-0.005204) | 0.112663 / 0.038508 (0.074155) | 0.043406 / 0.023109 (0.020297) | 0.348582 / 0.275898 (0.072684) | 0.382332 / 0.323480 (0.058852) | 0.007469 / 0.007986 (-0.000517) | 0.006211 / 0.004328 (0.001883) | 0.086576 / 0.004250 (0.082326) | 0.059223 / 0.037052 (0.022170) | 0.361051 / 0.258489 (0.102562) | 0.411359 / 0.293841 (0.117518) | 0.043640 / 0.128546 (-0.084906) | 0.014239 / 0.075646 (-0.061408) | 0.389729 / 0.419271 (-0.029542) | 0.072319 / 0.043533 (0.028786) | 0.351025 / 0.255139 (0.095886) | 0.371893 / 0.283200 (0.088693) | 0.125994 / 0.141683 (-0.015688) | 1.675249 / 1.452155 (0.223094) | 1.808740 / 1.492716 (0.316024) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255172 / 0.018006 (0.237166) | 0.536003 / 0.000490 (0.535514) | 0.000365 / 0.000200 (0.000165) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031989 / 0.037411 (-0.005423) | 0.126854 / 0.014526 (0.112328) | 0.142458 / 0.176557 (-0.034098) | 0.207821 / 0.737135 (-0.529314) | 0.145610 / 0.296338 (-0.150728) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.468924 / 0.215209 (0.253715) | 4.696677 / 2.077655 (2.619023) | 2.183133 
/ 1.504120 (0.679013) | 1.994219 / 1.541195 (0.453024) | 2.101375 / 1.468490 (0.632885) | 0.827168 / 4.584777 (-3.757609) | 4.710167 / 3.745712 (0.964455) | 2.377062 / 5.269862 (-2.892800) | 1.712245 / 4.565676 (-2.853431) | 0.100620 / 0.424275 (-0.323655) | 0.014302 / 0.007607 (0.006695) | 0.590813 / 0.226044 (0.364769) | 5.871991 / 2.268929 (3.603063) | 2.722229 / 55.444624 (-52.722395) | 2.323585 / 6.876477 (-4.552892) | 2.503289 / 2.142072 (0.361217) | 0.983644 / 4.805227 (-3.821583) | 0.193942 / 6.500664 (-6.306722) | 0.076493 / 0.075469 (0.001024) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.463107 / 1.841788 (-0.378681) | 17.876918 / 8.074308 (9.802610) | 16.755740 / 10.191392 (6.564348) | 0.167556 / 0.680424 (-0.512868) | 0.020514 / 0.534201 (-0.513687) | 0.508385 / 0.579283 (-0.070898) | 0.505873 / 0.434364 (0.071509) | 0.603630 / 0.540337 (0.063293) | 0.708856 / 1.386936 (-0.678080) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008504 / 0.011353 (-0.002849) | 0.005894 / 0.011008 (-0.005114) | 0.085523 / 0.038508 (0.047015) | 0.038780 / 0.023109 (0.015671) | 0.402869 / 0.275898 (0.126971) | 0.423819 / 0.323480 (0.100339) | 0.006427 / 0.007986 (-0.001559) | 0.004598 / 0.004328 (0.000269) | 0.079807 / 0.004250 (0.075556) | 0.050852 / 0.037052 (0.013799) | 0.403232 / 0.258489 (0.144743) | 0.452489 / 0.293841 (0.158648) | 0.041501 / 0.128546 (-0.087045) | 0.014996 / 0.075646 (-0.060650) | 0.101548 / 0.419271 (-0.317724) | 0.056993 / 0.043533 (0.013461) | 0.403153 / 0.255139 (0.148014) | 0.424587 / 0.283200 (0.141388) | 0.114507 / 0.141683 (-0.027176) | 1.707098 / 1.452155 (0.254943) | 1.799008 / 1.492716 (0.306291) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288003 / 0.018006 (0.269996) | 0.496526 / 0.000490 (0.496036) | 0.010923 / 0.000200 (0.010723) | 0.000159 / 0.000054 (0.000105) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033948 / 0.037411 (-0.003463) | 0.142343 / 0.014526 (0.127817) | 0.143862 / 0.176557 (-0.032695) | 0.202655 / 0.737135 (-0.534480) | 0.151177 / 0.296338 (-0.145162) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.508003 / 0.215209 (0.292794) | 5.320394 / 2.077655 (3.242740) | 2.409854 / 1.504120 (0.905734) | 2.190656 / 1.541195 (0.649462) | 2.272171 / 1.468490 (0.803681) | 0.809492 / 4.584777 (-3.775285) | 4.554412 / 3.745712 (0.808699) | 4.413643 / 5.269862 (-0.856218) | 2.374034 / 4.565676 (-2.191642) | 0.099458 / 0.424275 (-0.324817) | 0.014553 / 0.007607 (0.006946) | 0.613916 / 0.226044 (0.387871) | 6.121430 / 2.268929 (3.852502) | 2.945661 / 55.444624 (-52.498964) | 2.595247 / 6.876477 (-4.281230) | 2.734047 / 2.142072 (0.591975) | 0.952217 / 4.805227 (-3.853010) | 0.196933 / 6.500664 (-6.303731) | 0.073391 / 0.075469 (-0.002078) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.475666 / 1.841788 (-0.366122) | 18.564281 / 8.074308 (10.489973) | 16.865259 / 10.191392 (6.673867) | 0.166494 / 0.680424 (-0.513930) | 0.020655 / 0.534201 (-0.513546) | 0.495120 / 0.579283 (-0.084163) | 0.502602 / 0.434364 (0.068238) | 0.622448 / 0.540337 (0.082110) | 0.721036 / 1.386936 (-0.665900) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#40c204c777793d64e8bb8ce357e9c07b3b303e41 \"CML watermark\")\n", "Whoops mario you're off this week sorry. 
I'm taking the liberty to merge this one", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009079 / 0.011353 (-0.002274) | 0.005960 / 0.011008 (-0.005049) | 0.116530 / 0.038508 (0.078022) | 0.046649 / 0.023109 (0.023540) | 0.391906 / 0.275898 (0.116008) | 0.438892 / 0.323480 (0.115412) | 0.007134 / 0.007986 (-0.000851) | 0.004997 / 0.004328 (0.000668) | 0.085947 / 0.004250 (0.081697) | 0.059814 / 0.037052 (0.022762) | 0.396423 / 0.258489 (0.137934) | 0.455941 / 0.293841 (0.162100) | 0.042535 / 0.128546 (-0.086011) | 0.014667 / 0.075646 (-0.060980) | 0.402023 / 0.419271 (-0.017249) | 0.060381 / 0.043533 (0.016848) | 0.393829 / 0.255139 (0.138690) | 0.426557 / 0.283200 (0.143358) | 0.131519 / 0.141683 (-0.010163) | 1.758098 / 1.452155 (0.305943) | 1.848194 / 1.492716 (0.355478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236405 / 0.018006 (0.218399) | 0.611442 / 0.000490 (0.610952) | 0.005143 / 0.000200 (0.004943) | 0.000146 / 0.000054 (0.000092) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034317 / 0.037411 (-0.003094) | 0.182485 / 0.014526 (0.167959) | 0.183149 / 0.176557 (0.006592) | 0.293592 / 0.737135 (-0.443543) | 0.197137 / 0.296338 (-0.099202) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.475690 / 0.215209 (0.260481) | 4.757344 / 2.077655 (2.679690) | 2.184079 / 1.504120 (0.679959) | 1.956599 / 
1.541195 (0.415404) | 2.043041 / 1.468490 (0.574551) | 0.817602 / 4.584777 (-3.767175) | 6.432267 / 3.745712 (2.686555) | 5.999402 / 5.269862 (0.729541) | 3.095970 / 4.565676 (-1.469706) | 0.181589 / 0.424275 (-0.242686) | 0.023286 / 0.007607 (0.015679) | 1.090318 / 0.226044 (0.864274) | 7.919330 / 2.268929 (5.650401) | 2.702821 / 55.444624 (-52.741804) | 2.375442 / 6.876477 (-4.501034) | 2.543075 / 2.142072 (0.401003) | 1.011763 / 4.805227 (-3.793464) | 0.203676 / 6.500664 (-6.296988) | 0.080075 / 0.075469 (0.004606) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.875420 / 1.841788 (0.033632) | 23.059278 / 8.074308 (14.984970) | 19.250807 / 10.191392 (9.059415) | 0.323678 / 0.680424 (-0.356746) | 0.028682 / 0.534201 (-0.505519) | 0.698231 / 0.579283 (0.118948) | 0.668129 / 0.434364 (0.233765) | 0.831218 / 0.540337 (0.290880) | 0.941191 / 1.386936 (-0.445745) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.013122 / 0.011353 (0.001769) | 0.006123 / 0.011008 (-0.004886) | 0.090493 / 0.038508 (0.051985) | 0.070660 / 0.023109 (0.047551) | 0.413486 / 0.275898 (0.137588) | 0.450364 / 0.323480 (0.126884) | 0.010288 / 0.007986 (0.002302) | 0.006590 / 0.004328 (0.002261) | 0.087174 / 0.004250 (0.082923) | 0.077304 / 0.037052 (0.040252) | 0.428480 / 0.258489 (0.169991) | 0.459872 / 0.293841 (0.166032) | 0.060477 / 0.128546 (-0.068069) | 0.014859 / 0.075646 (-0.060788) | 0.103915 / 0.419271 (-0.315356) | 0.087466 / 0.043533 (0.043933) | 0.418644 / 0.255139 (0.163505) | 0.433409 / 0.283200 (0.150209) | 0.166716 / 0.141683 (0.025033) | 1.712068 / 1.452155 (0.259914) | 1.827869 / 1.492716 (0.335153) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.372491 / 0.018006 (0.354484) | 0.493426 / 0.000490 (0.492937) | 0.005497 / 0.000200 (0.005297) | 0.000129 / 0.000054 (0.000074) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | 
sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036531 / 0.037411 (-0.000880) | 0.142152 / 0.014526 (0.127626) | 0.148183 / 0.176557 (-0.028373) | 0.212918 / 0.737135 (-0.524217) | 0.154092 / 0.296338 (-0.142246) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.551733 / 0.215209 (0.336524) | 5.421498 / 2.077655 (3.343843) | 2.418848 / 1.504120 (0.914728) | 2.213185 / 1.541195 (0.671991) | 2.294881 / 1.468490 (0.826391) | 0.827031 / 4.584777 (-3.757746) | 6.365622 / 3.745712 (2.619910) | 4.927996 / 5.269862 (-0.341866) | 2.756133 / 4.565676 (-1.809544) | 0.101474 / 0.424275 (-0.322801) | 0.014523 / 0.007607 (0.006916) | 0.619082 / 0.226044 (0.393037) | 6.200132 / 2.268929 (3.931204) | 3.015590 / 55.444624 (-52.429034) | 2.711181 / 6.876477 (-4.165296) | 2.857157 / 2.142072 (0.715084) | 0.993329 / 4.805227 (-3.811898) | 0.203364 / 6.500664 (-6.297301) | 0.079167 / 0.075469 (0.003698) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.709881 / 1.841788 (-0.131907) | 24.867536 / 8.074308 (16.793228) | 21.755361 / 10.191392 (11.563969) | 0.295837 / 0.680424 (-0.384586) | 0.031934 / 0.534201 (-0.502267) | 0.709994 / 0.579283 (0.130711) | 0.779656 / 0.434364 (0.345293) | 0.780669 / 0.540337 (0.240331) | 0.712808 / 1.386936 (-0.674128) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cf4a1951bdca7175adac9c8b85550e89dcceb6fa \"CML watermark\")\n" ]
"2023-04-24T10:34:03Z"
"2023-04-26T16:04:42Z"
"2023-04-26T15:54:44Z"
MEMBER
null
When a subprocess hangs in `filter` or `map`, one should be able to get the subprocess' traceback when interrupting the main process. Right now it shows nothing. To do so I `.get()` the subprocesses' async results even if the main process is stopped with e.g. `KeyboardInterrupt`. I added a timeout in case the subprocess is hanging or crashed.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5784/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5784/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5784.diff", "html_url": "https://github.com/huggingface/datasets/pull/5784", "merged_at": "2023-04-26T15:54:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/5784.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5784" }
true
https://api.github.com/repos/huggingface/datasets/issues/5783
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5783/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5783/comments
https://api.github.com/repos/huggingface/datasets/issues/5783/events
https://github.com/huggingface/datasets/issues/5783
1,679,664,393
I_kwDODunzps5kHaUJ
5,783
Offset overflow while doing regex on a text column
{ "avatar_url": "https://avatars.githubusercontent.com/u/5066268?v=4", "events_url": "https://api.github.com/users/nishanthcgit/events{/privacy}", "followers_url": "https://api.github.com/users/nishanthcgit/followers", "following_url": "https://api.github.com/users/nishanthcgit/following{/other_user}", "gists_url": "https://api.github.com/users/nishanthcgit/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nishanthcgit", "id": 5066268, "login": "nishanthcgit", "node_id": "MDQ6VXNlcjUwNjYyNjg=", "organizations_url": "https://api.github.com/users/nishanthcgit/orgs", "received_events_url": "https://api.github.com/users/nishanthcgit/received_events", "repos_url": "https://api.github.com/users/nishanthcgit/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nishanthcgit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nishanthcgit/subscriptions", "type": "User", "url": "https://api.github.com/users/nishanthcgit" }
[]
open
false
null
[]
null
[ "Hi! This looks like an Arrow bug, but it can be avoided by reducing the `writer_batch_size`.\r\n\r\n(`ds = ds.map(get_text_caption, writer_batch_size=100)` in Colab runs without issues)\r\n", "@mariosasko I ran into this problem with load_dataset. What should I do", "@AisingioroHao0 You can also pass the `writer_batch_size` parameter to `load_dataset`, e.g., `load_dataset(\"mnist\", writer_batch_size=100)`", "@mariosasko How do I determine the optimal size of write_batch_size? My training is sometimes fast and sometimes slow. Is it because write_batch_size is too small? Each batch of the current dataloader should be the same size. I preprocessed the dataset using map" ]
"2023-04-22T19:12:03Z"
"2023-09-04T14:26:29Z"
null
NONE
null
### Describe the bug `ArrowInvalid: offset overflow while concatenating arrays` Same error as [here](https://github.com/huggingface/datasets/issues/615) ### Steps to reproduce the bug Steps to reproduce: (dataset is a few GB big so try in colab maybe) ``` import datasets import re ds = datasets.load_dataset('nishanthc/dnd_map_dataset_v0.1', split = 'train') def get_text_caption(example): regex_pattern = r'\s\d+x\d+|,\sLQ|,\sgrid|\.\w+$' example['text_caption'] = re.sub(regex_pattern, '', example['picture_text']) return example ds = ds.map(get_text_caption) ``` I am trying to apply a regex to remove certain patterns from a text column. Not sure why this error is showing up. ### Expected behavior Dataset should have a new column with processed text ### Environment info Datasets version - 2.11.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5783/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5783/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5782
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5782/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5782/comments
https://api.github.com/repos/huggingface/datasets/issues/5782/events
https://github.com/huggingface/datasets/issues/5782
1,679,622,367
I_kwDODunzps5kHQDf
5,782
Support for various audio-loading backends instead of always relying on SoundFile
{ "avatar_url": "https://avatars.githubusercontent.com/u/129098876?v=4", "events_url": "https://api.github.com/users/BoringDonut/events{/privacy}", "followers_url": "https://api.github.com/users/BoringDonut/followers", "following_url": "https://api.github.com/users/BoringDonut/following{/other_user}", "gists_url": "https://api.github.com/users/BoringDonut/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BoringDonut", "id": 129098876, "login": "BoringDonut", "node_id": "U_kgDOB7HkfA", "organizations_url": "https://api.github.com/users/BoringDonut/orgs", "received_events_url": "https://api.github.com/users/BoringDonut/received_events", "repos_url": "https://api.github.com/users/BoringDonut/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BoringDonut/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BoringDonut/subscriptions", "type": "User", "url": "https://api.github.com/users/BoringDonut" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi! \r\n\r\nYou can use `set_transform`/`with_transform` to define a custom decoding for audio formats not supported by `soundfile`:\r\n```python\r\naudio_dataset_amr = Dataset.from_dict({\"audio\": [\"audio_samples/audio.amr\"]})\r\n\r\ndef decode_audio(batch):\r\n batch[\"audio\"] = [read_ffmpeg(audio_path) for audio_path in batch[\"audio\"]]\r\n return batch\r\n\r\naudio_dataset_amr.set_transform(decode_amr) \r\n```\r\n\r\nSupporting multiple backends is more work to maintain, but we could consider this if we get more requests such as this one.", "Could it be put somewhere as an example tip or something?", "Considering the number of times a custom decoding transform has been suggested as a solution, an example in the [docs](https://huggingface.co/docs/datasets/process#format-transform) would be nice.\r\n\r\ncc @stevhliu " ]
"2023-04-22T17:09:25Z"
"2023-05-10T20:23:04Z"
"2023-05-10T20:23:04Z"
NONE
null
### Feature request Introduce an option to select from a variety of audio-loading backends rather than solely relying on the SoundFile library. For instance, if the ffmpeg library is installed, it can serve as a fallback loading option. ### Motivation - The SoundFile library, used in [features/audio.py](https://github.com/huggingface/datasets/blob/649d5a3315f9e7666713b6affe318ee00c7163a0/src/datasets/features/audio.py#L185), supports only a [limited number of audio formats](https://pysoundfile.readthedocs.io/en/latest/index.html?highlight=supported#soundfile.available_formats). - However, current methods for creating audio datasets permit the inclusion of audio files in formats not supported by SoundFile. - As a result, developers may potentially create a dataset they cannot read back. In my most recent project, I dealt with phone call recordings in `.amr` or `.gsm` formats and was genuinely surprised when I couldn't read the dataset I had just packaged a minute prior. Nonetheless, I can still accurately read these files using the librosa library, which employs the audioread library that internally leverages ffmpeg to read such files. Example: ```python audio_dataset_amr = Dataset.from_dict({"audio": ["audio_samples/audio.amr"]}).cast_column("audio", Audio()) audio_dataset_amr.save_to_disk("audio_dataset_amr") audio_dataset_amr = Dataset.load_from_disk("audio_dataset_amr") print(audio_dataset_amr[0]) ``` Results in: ``` Traceback (most recent call last): ... raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name)) soundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7f316323e4d0>: Format not recognised. ``` While I acknowledge that support for these rare file types may not be a priority, I believe it's quite unfortunate that it's possible to create an unreadable dataset in this manner. ### Your contribution I've created a [simple demo repository](https://github.com/BoringDonut/hf-datasets-ffmpeg-audio) that highlights the mentioned issue. It demonstrates how to create an .amr dataset that results in an error when attempting to read it just a few lines later. Additionally, I've made a [fork with a rudimentary solution](https://github.com/BoringDonut/datasets/blob/fea73a8fbbc8876467c7e6422c9360546c6372d8/src/datasets/features/audio.py#L189) that utilizes ffmpeg to load files not supported by SoundFile. Here you may see github actions fails to read `.amr` dataset using the version of the current dataset, but will work with the patched version: - https://github.com/BoringDonut/hf-datasets-ffmpeg-audio/actions/runs/4773780420/jobs/8487063785 - https://github.com/BoringDonut/hf-datasets-ffmpeg-audio/actions/runs/4773780420/jobs/8487063829 As evident from the GitHub action above, this solution resolves the previously mentioned problem. I'd be happy to create a proper pull request, provide runtime benchmarks and tests if you could offer some guidance on the following: - Where should I incorporate the ffmpeg (or other backends) code? For example, should I create a new file or simply add a function within the Audio class? - Is it feasible to pass the audio-loading function as an argument within the current architecture? This would be useful if I know in advance that I'll be reading files not supported by SoundFile. A few more notes: - In theory, it's possible to load audio using librosa/audioread since librosa is already expected to be installed. 
However, librosa [will soon discontinue audioread support](https://github.com/librosa/librosa/blob/aacb4c134002903ae56bbd4b4a330519a5abacc0/librosa/core/audio.py#L227). Moreover, using audioread on its own seems inconvenient because it requires a file [path as input](https://github.com/beetbox/audioread/blob/ff9535df934c48038af7be9617fdebb12078cc07/audioread/__init__.py#L108) and cannot work with bytes already loaded into memory or an open file descriptor (as mentioned in [librosa docs](https://librosa.org/doc/main/generated/librosa.load.html#librosa.load), only SoundFile backend supports an open file descriptor as an input).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5782/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5782/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5781
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5781/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5781/comments
https://api.github.com/repos/huggingface/datasets/issues/5781/events
https://github.com/huggingface/datasets/issues/5781
1,679,580,460
I_kwDODunzps5kHF0s
5,781
Error using `load_datasets`
{ "avatar_url": "https://avatars.githubusercontent.com/u/61463108?v=4", "events_url": "https://api.github.com/users/gjyoungjr/events{/privacy}", "followers_url": "https://api.github.com/users/gjyoungjr/followers", "following_url": "https://api.github.com/users/gjyoungjr/following{/other_user}", "gists_url": "https://api.github.com/users/gjyoungjr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gjyoungjr", "id": 61463108, "login": "gjyoungjr", "node_id": "MDQ6VXNlcjYxNDYzMTA4", "organizations_url": "https://api.github.com/users/gjyoungjr/orgs", "received_events_url": "https://api.github.com/users/gjyoungjr/received_events", "repos_url": "https://api.github.com/users/gjyoungjr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gjyoungjr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gjyoungjr/subscriptions", "type": "User", "url": "https://api.github.com/users/gjyoungjr" }
[]
closed
false
null
[]
null
[ "It looks like an issue with your installation of scipy, can you try reinstalling it ?", "Sorry for the late reply, but that worked @lhoestq . Thanks for the assist." ]
"2023-04-22T15:10:44Z"
"2023-05-02T23:41:25Z"
"2023-05-02T23:41:25Z"
NONE
null
### Describe the bug I tried to load a dataset using the `datasets` library in a conda jupyter notebook and got the below error. ``` ImportError: dlopen(/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/_iterative.cpython-38-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib Referenced from: <65B094A2-59D7-31AC-A966-4DB9E11D2A15> /Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/_iterative.cpython-38-darwin.so Reason: tried: '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/../../../../../../liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/../../../../../../liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/bin/../lib/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file, not in dyld cache) ``` ### Steps to reproduce the bug Run the `load_datasets` function ### Expected behavior I expected the dataset to be loaded into my notebook. ### Environment info name: review_sense channels: - apple - conda-forge dependencies: - python=3.8 - pip>=19.0 - jupyter - tensorflow-deps #- scikit-learn #- scipy - pandas - pandas-datareader - matplotlib - pillow - tqdm - requests - h5py - pyyaml - flask - boto3 - ipykernel - seaborn - pip: - tensorflow-macos==2.9 - tensorflow-metal==0.5.0 - bayesian-optimization - gym - kaggle - huggingface_hub - datasets - numpy - huggingface
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5781/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5781/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5780
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5780/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5780/comments
https://api.github.com/repos/huggingface/datasets/issues/5780/events
https://github.com/huggingface/datasets/issues/5780
1,679,367,149
I_kwDODunzps5kGRvt
5,780
TypeError: 'NoneType' object does not support item assignment
{ "avatar_url": "https://avatars.githubusercontent.com/u/38179632?v=4", "events_url": "https://api.github.com/users/v-yunbin/events{/privacy}", "followers_url": "https://api.github.com/users/v-yunbin/followers", "following_url": "https://api.github.com/users/v-yunbin/following{/other_user}", "gists_url": "https://api.github.com/users/v-yunbin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/v-yunbin", "id": 38179632, "login": "v-yunbin", "node_id": "MDQ6VXNlcjM4MTc5NjMy", "organizations_url": "https://api.github.com/users/v-yunbin/orgs", "received_events_url": "https://api.github.com/users/v-yunbin/received_events", "repos_url": "https://api.github.com/users/v-yunbin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/v-yunbin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/v-yunbin/subscriptions", "type": "User", "url": "https://api.github.com/users/v-yunbin" }
[]
closed
false
null
[]
null
[]
"2023-04-22T06:22:43Z"
"2023-04-23T08:49:18Z"
"2023-04-23T08:49:18Z"
NONE
null
command: ``` def load_datasets(formats, data_dir=datadir, data_files=datafile): dataset = load_dataset(formats, data_dir=datadir, data_files=datafile, split=split, streaming=True, **kwargs) return dataset raw_datasets = DatasetDict() raw_datasets["train"] = load_datasets("csv", args.datadir, "train.csv", split=train_split) raw_datasets["test"] = load_datasets("csv", args.datadir, "dev.csv", split=test_split) raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000)) ``` error: ``` main() File "peft_adalora_whisper_large_training.py", line 502, in main raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000)) File "/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/datasets/dataset_dict.py", line 2015, in cast_column info.features[column] = feature TypeError: 'NoneType' object does not support item assignment ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5780/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5780/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5779
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5779/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5779/comments
https://api.github.com/repos/huggingface/datasets/issues/5779/events
https://github.com/huggingface/datasets/pull/5779
1,678,669,865
PR_kwDODunzps5O3sHp
5,779
Call fs.makedirs in save_to_disk
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007490 / 0.011353 (-0.003862) | 0.004957 / 0.011008 (-0.006051) | 0.096952 / 0.038508 (0.058444) | 0.034125 / 0.023109 (0.011016) | 0.301926 / 0.275898 (0.026028) | 0.330538 / 0.323480 (0.007058) | 0.005999 / 0.007986 (-0.001987) | 0.003948 / 0.004328 (-0.000380) | 0.073024 / 0.004250 (0.068773) | 0.050020 / 0.037052 (0.012967) | 0.299987 / 0.258489 (0.041498) | 0.336077 / 0.293841 (0.042237) | 0.035781 / 0.128546 (-0.092765) | 0.012159 / 0.075646 (-0.063487) | 0.333311 / 0.419271 (-0.085960) | 0.059925 / 0.043533 (0.016392) | 0.297772 / 0.255139 (0.042633) | 0.313447 / 0.283200 (0.030247) | 0.100991 / 0.141683 (-0.040692) | 1.472182 / 1.452155 (0.020027) | 1.553010 / 1.492716 (0.060294) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214222 / 0.018006 (0.196216) | 0.441579 / 0.000490 (0.441090) | 0.001030 / 0.000200 (0.000830) | 0.000194 / 0.000054 (0.000140) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026149 / 0.037411 (-0.011262) | 0.107324 / 0.014526 (0.092798) | 0.113390 / 0.176557 (-0.063167) | 0.170282 / 0.737135 (-0.566854) | 0.120601 / 0.296338 (-0.175737) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411795 / 0.215209 (0.196585) | 4.091412 / 2.077655 (2.013757) | 
1.819597 / 1.504120 (0.315477) | 1.623413 / 1.541195 (0.082218) | 1.658959 / 1.468490 (0.190469) | 0.697671 / 4.584777 (-3.887106) | 3.868855 / 3.745712 (0.123143) | 3.220448 / 5.269862 (-2.049414) | 1.796472 / 4.565676 (-2.769204) | 0.085817 / 0.424275 (-0.338458) | 0.012422 / 0.007607 (0.004815) | 0.520302 / 0.226044 (0.294258) | 5.062477 / 2.268929 (2.793548) | 2.275065 / 55.444624 (-53.169560) | 1.936717 / 6.876477 (-4.939759) | 2.069924 / 2.142072 (-0.072148) | 0.838964 / 4.805227 (-3.966264) | 0.170632 / 6.500664 (-6.330032) | 0.066011 / 0.075469 (-0.009458) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.190673 / 1.841788 (-0.651114) | 14.679478 / 8.074308 (6.605169) | 14.099743 / 10.191392 (3.908351) | 0.142556 / 0.680424 (-0.537868) | 0.017601 / 0.534201 (-0.516600) | 0.421301 / 0.579283 (-0.157982) | 0.418035 / 0.434364 (-0.016329) | 0.503799 / 0.540337 (-0.036539) | 0.588809 / 1.386936 (-0.798127) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007556 / 0.011353 (-0.003797) | 0.005283 / 0.011008 (-0.005725) | 0.075616 / 0.038508 (0.037107) | 0.034127 / 0.023109 (0.011018) | 0.345145 / 0.275898 (0.069247) | 0.377490 / 0.323480 (0.054010) | 0.006532 / 0.007986 (-0.001454) | 0.004145 / 0.004328 (-0.000183) | 0.074724 / 0.004250 (0.070473) | 0.048658 / 0.037052 (0.011605) | 0.339989 / 0.258489 (0.081500) | 0.398240 / 0.293841 (0.104399) | 0.037433 / 0.128546 (-0.091114) | 0.012410 / 0.075646 (-0.063237) | 0.088110 / 0.419271 (-0.331162) | 0.050635 / 0.043533 (0.007103) | 0.351878 / 0.255139 (0.096739) | 0.365707 / 0.283200 (0.082508) | 0.104342 / 0.141683 (-0.037341) | 1.438009 / 1.452155 (-0.014145) | 1.533616 / 1.492716 (0.040900) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225570 / 0.018006 (0.207563) | 0.442482 / 0.000490 (0.441992) | 0.000402 / 0.000200 (0.000202) | 0.000063 / 0.000054 (0.000009) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030348 / 0.037411 (-0.007063) | 0.111402 / 0.014526 (0.096877) | 0.123365 / 0.176557 (-0.053192) | 0.175604 / 0.737135 (-0.561531) | 0.128458 / 0.296338 (-0.167881) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426054 / 0.215209 (0.210845) | 4.255050 / 2.077655 (2.177395) | 2.039568 / 1.504120 (0.535448) | 1.856842 / 1.541195 (0.315647) | 1.923792 / 1.468490 (0.455301) | 0.701023 / 4.584777 (-3.883754) | 3.746632 / 3.745712 (0.000920) | 2.055563 / 5.269862 (-3.214298) | 1.308068 / 4.565676 (-3.257608) | 0.085524 / 0.424275 (-0.338751) | 0.012103 / 0.007607 (0.004496) | 0.522929 / 0.226044 (0.296885) | 5.258133 / 2.268929 (2.989205) | 2.458440 / 55.444624 (-52.986185) | 2.141681 / 6.876477 (-4.734796) | 2.258667 / 2.142072 (0.116595) | 0.842533 / 4.805227 (-3.962694) | 0.168089 / 6.500664 (-6.332575) | 0.063707 / 0.075469 (-0.011762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.312252 / 1.841788 (-0.529536) | 14.939185 / 8.074308 (6.864877) | 14.479845 / 10.191392 (4.288453) | 0.162557 / 0.680424 (-0.517867) | 0.017660 / 0.534201 (-0.516541) | 0.423261 / 0.579283 (-0.156023) | 0.417693 / 0.434364 (-0.016671) | 0.495440 / 0.540337 (-0.044897) | 0.589932 / 1.386936 (-0.797004) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4e3c86574155961097b367d5cddda5bd13c42b09 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008796 / 0.011353 (-0.002557) | 0.005828 / 0.011008 (-0.005180) | 0.118629 / 0.038508 (0.080121) | 0.042435 / 0.023109 (0.019326) | 0.383780 / 0.275898 (0.107882) | 0.420344 / 0.323480 (0.096864) | 0.006855 / 0.007986 (-0.001130) | 0.006290 / 0.004328 (0.001962) | 0.087160 / 0.004250 (0.082910) | 0.057568 / 0.037052 (0.020516) | 0.378761 / 0.258489 (0.120272) | 0.426496 / 0.293841 (0.132655) | 0.041772 / 0.128546 (-0.086774) | 0.014226 / 0.075646 (-0.061420) | 0.400097 / 0.419271 (-0.019174) | 0.060402 / 0.043533 (0.016870) | 0.381955 / 0.255139 (0.126816) | 0.399110 / 0.283200 (0.115911) | 0.124608 / 0.141683 (-0.017075) | 1.737856 / 1.452155 (0.285702) | 1.829034 / 1.492716 (0.336318) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219941 / 0.018006 (0.201934) | 0.497156 / 0.000490 (0.496666) | 0.005094 / 0.000200 (0.004894) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032144 / 0.037411 (-0.005268) | 0.131782 / 0.014526 (0.117256) | 0.141543 / 0.176557 (-0.035014) | 0.211419 / 0.737135 (-0.525716) | 0.147338 / 0.296338 (-0.149001) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478345 / 0.215209 (0.263136) | 4.749506 / 2.077655 (2.671851) | 2.195794 / 1.504120 (0.691674) | 1.978126 / 1.541195 (0.436932) | 2.059941 / 1.468490 (0.591451) | 0.821959 / 4.584777 (-3.762818) | 5.737479 / 3.745712 (1.991767) | 2.507125 / 5.269862 (-2.762737) | 2.051772 / 4.565676 (-2.513905) | 0.100619 / 0.424275 (-0.323656) | 0.014437 / 0.007607 (0.006830) | 0.599484 / 0.226044 (0.373440) | 5.977579 / 2.268929 (3.708651) | 2.708143 / 55.444624 (-52.736482) | 2.320279 / 6.876477 (-4.556198) | 2.510172 / 2.142072 (0.368100) | 1.006279 / 4.805227 (-3.798948) | 0.199812 / 6.500664 (-6.300853) | 0.077967 / 0.075469 (0.002498) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.510171 / 1.841788 (-0.331616) | 21.099446 / 8.074308 (13.025138) | 17.634225 / 10.191392 (7.442833) | 0.223506 / 0.680424 (-0.456918) | 0.023845 / 0.534201 (-0.510356) | 0.613489 / 0.579283 (0.034206) | 0.685735 / 0.434364 (0.251371) | 0.652485 / 0.540337 (0.112148) 
| 0.734756 / 1.386936 (-0.652180) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008444 / 0.011353 (-0.002909) | 0.005789 / 0.011008 (-0.005220) | 0.088297 / 0.038508 (0.049789) | 0.040847 / 0.023109 (0.017737) | 0.411748 / 0.275898 (0.135850) | 0.452320 / 0.323480 (0.128841) | 0.006689 / 0.007986 (-0.001296) | 0.006029 / 0.004328 (0.001701) | 0.086080 / 0.004250 (0.081830) | 0.053310 / 0.037052 (0.016257) | 0.402568 / 0.258489 (0.144079) | 0.459047 / 0.293841 (0.165206) | 0.041203 / 0.128546 (-0.087343) | 0.014216 / 0.075646 (-0.061431) | 0.102729 / 0.419271 (-0.316543) | 0.057170 / 0.043533 (0.013637) | 0.407137 / 0.255139 (0.151998) | 0.429703 / 0.283200 (0.146503) | 0.123528 / 0.141683 (-0.018155) | 1.690026 / 1.452155 (0.237872) | 1.797793 / 1.492716 (0.305077) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.264581 / 0.018006 (0.246575) | 0.498981 / 0.000490 (0.498492) | 0.000462 / 0.000200 (0.000262) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034613 / 0.037411 (-0.002798) | 0.136596 / 0.014526 (0.122070) | 0.142183 / 0.176557 (-0.034374) | 0.201816 / 0.737135 (-0.535320) | 0.148843 / 0.296338 (-0.147496) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.506708 / 0.215209 (0.291499) | 5.042829 / 2.077655 (2.965175) | 2.448414 / 1.504120 (0.944295) | 2.213251 / 1.541195 (0.672056) | 2.255805 / 1.468490 
(0.787315) | 0.829929 / 4.584777 (-3.754848) | 5.145717 / 3.745712 (1.400004) | 2.493947 / 5.269862 (-2.775915) | 1.676171 / 4.565676 (-2.889506) | 0.102097 / 0.424275 (-0.322178) | 0.014545 / 0.007607 (0.006938) | 0.635473 / 0.226044 (0.409429) | 6.306767 / 2.268929 (4.037839) | 3.050284 / 55.444624 (-52.394341) | 2.653175 / 6.876477 (-4.223302) | 2.850569 / 2.142072 (0.708496) | 1.355280 / 4.805227 (-3.449947) | 0.248112 / 6.500664 (-6.252552) | 0.091993 / 0.075469 (0.016524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.837509 / 1.841788 (-0.004279) | 21.268838 / 8.074308 (13.194530) | 17.338053 / 10.191392 (7.146660) | 0.232263 / 0.680424 (-0.448161) | 0.029093 / 0.534201 (-0.505108) | 0.651056 / 0.579283 (0.071773) | 0.617623 / 0.434364 (0.183259) | 0.773921 / 0.540337 (0.233584) | 0.705118 / 1.386936 (-0.681818) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#35846fd54fa16aa72ff344d15c98b5e08c5effe0 \"CML watermark\")\n" ]
"2023-04-21T15:04:28Z"
"2023-04-26T12:20:01Z"
"2023-04-26T12:11:15Z"
MEMBER
null
We need to call `fs.makedirs` when saving a dataset using `save_to_disk`, because some fs implementations have actual directories (S3 and others don't). Close https://github.com/huggingface/datasets/issues/5775
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5779/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5779/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5779.diff", "html_url": "https://github.com/huggingface/datasets/pull/5779", "merged_at": "2023-04-26T12:11:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/5779.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5779" }
true
https://api.github.com/repos/huggingface/datasets/issues/5778
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5778/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5778/comments
https://api.github.com/repos/huggingface/datasets/issues/5778/events
https://github.com/huggingface/datasets/issues/5778
1,678,125,951
I_kwDODunzps5kBit_
5,778
Schrödinger's dataset_dict
{ "avatar_url": "https://avatars.githubusercontent.com/u/902005?v=4", "events_url": "https://api.github.com/users/liujuncn/events{/privacy}", "followers_url": "https://api.github.com/users/liujuncn/followers", "following_url": "https://api.github.com/users/liujuncn/following{/other_user}", "gists_url": "https://api.github.com/users/liujuncn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/liujuncn", "id": 902005, "login": "liujuncn", "node_id": "MDQ6VXNlcjkwMjAwNQ==", "organizations_url": "https://api.github.com/users/liujuncn/orgs", "received_events_url": "https://api.github.com/users/liujuncn/received_events", "repos_url": "https://api.github.com/users/liujuncn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/liujuncn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liujuncn/subscriptions", "type": "User", "url": "https://api.github.com/users/liujuncn" }
[]
closed
false
null
[]
null
[ "Hi ! Passing `data_files=\"path/test.json\"` is equivalent to `data_files={\"train\": [\"path/test.json\"]}`, that's why you end up with a train split. If you don't pass `data_files=`, then split names are inferred from the data files names" ]
"2023-04-21T08:38:12Z"
"2023-07-24T15:15:14Z"
"2023-07-24T15:15:14Z"
NONE
null
### Describe the bug If you use load_dataset('json', data_files="path/test.json"), it will return DatasetDict({train:...}). And if you use load_dataset("path"), it will return DatasetDict({test:...}). Why can't the output behavior be unified? ### Steps to reproduce the bug As described above. ### Expected behavior Consistent, predictable output. ### Environment info '2.11.0'
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5778/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5778/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5777
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5777/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5777/comments
https://api.github.com/repos/huggingface/datasets/issues/5777/events
https://github.com/huggingface/datasets/issues/5777
1,677,655,969
I_kwDODunzps5j_v-h
5,777
datasets.load_dataset("code_search_net", "python") : NotADirectoryError: [Errno 20] Not a directory
{ "avatar_url": "https://avatars.githubusercontent.com/u/34688597?v=4", "events_url": "https://api.github.com/users/jason-brian-anderson/events{/privacy}", "followers_url": "https://api.github.com/users/jason-brian-anderson/followers", "following_url": "https://api.github.com/users/jason-brian-anderson/following{/other_user}", "gists_url": "https://api.github.com/users/jason-brian-anderson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jason-brian-anderson", "id": 34688597, "login": "jason-brian-anderson", "node_id": "MDQ6VXNlcjM0Njg4NTk3", "organizations_url": "https://api.github.com/users/jason-brian-anderson/orgs", "received_events_url": "https://api.github.com/users/jason-brian-anderson/received_events", "repos_url": "https://api.github.com/users/jason-brian-anderson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jason-brian-anderson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jason-brian-anderson/subscriptions", "type": "User", "url": "https://api.github.com/users/jason-brian-anderson" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Note:\r\nI listed the datasets and grepped around to find what appears to be an alternative source for this:\r\n\r\nraw_datasets = load_dataset(\"espejelomar/code_search_net_python_10000_examples\", \"python\")", "Thanks for reporting, @jason-brian-anderson.\r\n\r\nYes, this is a known issue: the [CodeSearchNet](https://github.com/github/CodeSearchNet) repo has been archived (Apr 11, 2023) and their source data files are no longer accessible in their S3: e.g. https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/python.zip gives 403 Forbidden error. See:\r\n- https://huggingface.co/datasets/code_search_net/discussions/3\r\n\r\nWe have contacted one of the authors of the dataset to find a solution. I'll keep you informed.\r\n\r\nCC: @hamelsmu", "cc: @julianeagu", "This issue is fixed because we are hosting the CodeSearchNet data files in the Hugging Face Hub. See: https://huggingface.co/datasets/code_search_net/discussions/7", "I learned that @mallamanis has uploaded the dataset [here as well](https://zenodo.org/record/7908468) ", "Thanks @hamelsmu for the Zenodo link. I am adding it to the dataset card on the Hugging Face Hub, so that the community knows about this \"official\" source data. I guess this link is not well known, because some community members already hosted an \"unofficial\" version of the data on Zenodo as well: https://zenodo.org/record/7857872\r\n\r\n" ]
"2023-04-21T02:08:07Z"
"2023-06-05T05:49:52Z"
"2023-05-11T11:51:56Z"
NONE
null
### Describe the bug While checking out the [tokenizer tutorial](https://huggingface.co/course/chapter6/2?fw=pt), I noticed an error while initially downloading the Python dataset used in the examples. The [Colab with the error is here](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter6/section2.ipynb#scrollTo=hGb69Yo3eV8S) ``` from datasets import load_dataset import os os.environ["HF_DATASETS_CACHE"] = "/workspace" # This can take a few minutes to load, so grab a coffee or tea while you wait! raw_datasets = load_dataset("code_search_net", "python") ``` yields: ``` File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:524, in xlistdir(path, use_auth_token) 522 main_hop, *rest_hops = _as_str(path).split("::") 523 if is_local_path(main_hop): --> 524 return os.listdir(path) 525 else: 526 # globbing inside a zip in a private repo requires authentication 527 if not rest_hops and (main_hop.startswith("http://") or main_hop.startswith("https://")): NotADirectoryError: [Errno 20] Not a directory: '/workspace/downloads/25ceeb4c25ab737d688bd56ea92bfbb1f199fe572470456cf2d675479f342ac7/python/final/jsonl/train' ``` I was able to reproduce this error both in the Colab and on my own pytorch/pytorch container pulled from the Docker Hub official PyTorch image, so I think it may be a server-side thing. ### Steps to reproduce the bug Steps to reproduce the issue: 1. Run `raw_datasets = load_dataset("code_search_net", "python")` ### Expected behavior I expect the code not to raise an exception during the dataset pull. ### Environment info I tried the default HF_DATASETS_CACHE both on Colab and in my local container. I then pointed HF_DATASETS_CACHE to large-capacity local storage, and the problem was consistent across all 3 scenarios.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5777/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5777/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5776
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5776/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5776/comments
https://api.github.com/repos/huggingface/datasets/issues/5776/events
https://github.com/huggingface/datasets/issues/5776
1,677,116,100
I_kwDODunzps5j9sLE
5,776
Use Pandas' `read_json` in the JSON builder
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
[]
"2023-04-20T17:15:49Z"
"2023-04-20T17:15:49Z"
null
CONTRIBUTOR
null
Instead of PyArrow's `read_json`, we should use `pd.read_json` in the JSON builder for consistency with the CSV and SQL builders (e.g., to address https://github.com/huggingface/datasets/issues/5725). In Pandas 2.0, to get the same performance, we can set the `engine` to "pyarrow". The issue is that Colab still doesn't install Pandas 2.0 by default, so I think it's best to wait for this to be resolved on their side to avoid degrading decoding performance in scenarios where Pandas 2.0 is not installed.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5776/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5776/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5775
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5775/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5775/comments
https://api.github.com/repos/huggingface/datasets/issues/5775/events
https://github.com/huggingface/datasets/issues/5775
1,677,089,901
I_kwDODunzps5j9lxt
5,775
ArrowDataset.save_to_disk lost some logic of remote
{ "avatar_url": "https://avatars.githubusercontent.com/u/29817738?v=4", "events_url": "https://api.github.com/users/Zoupers/events{/privacy}", "followers_url": "https://api.github.com/users/Zoupers/followers", "following_url": "https://api.github.com/users/Zoupers/following{/other_user}", "gists_url": "https://api.github.com/users/Zoupers/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Zoupers", "id": 29817738, "login": "Zoupers", "node_id": "MDQ6VXNlcjI5ODE3NzM4", "organizations_url": "https://api.github.com/users/Zoupers/orgs", "received_events_url": "https://api.github.com/users/Zoupers/received_events", "repos_url": "https://api.github.com/users/Zoupers/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Zoupers/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Zoupers/subscriptions", "type": "User", "url": "https://api.github.com/users/Zoupers" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "We just fixed this on `main` and will do a new release soon :)" ]
"2023-04-20T16:58:01Z"
"2023-04-26T12:11:36Z"
"2023-04-26T12:11:17Z"
NONE
null
### Describe the bug https://github.com/huggingface/datasets/blob/e7ce0ac60c7efc10886471932854903a7c19f172/src/datasets/arrow_dataset.py#L1371 Here is the bug point: when I want to save from a `DatasetDict` whose items look like `[('train', Dataset({features: ..., num_rows: ...}))]`, there is no guarantee that a directory named `train` exists under `dataset_dict_path`. ### Steps to reproduce the bug 1. Mock a DatasetDict with items like the above. 2. Use save_to_disk with storage_options; you can use a local SFTP server. The code may look like the snippet below ```python from datasets import load_dataset dataset = load_dataset(...) dataset.save_to_disk('sftp:///tmp', storage_options={'host': 'localhost', 'username': 'admin'}) ``` I suppose you can reproduce the bug with these steps. ### Expected behavior It should create the folder if it does not exist, just like we do locally. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-6.2.10-arch1-1-x86_64-with-glibc2.35 - Python version: 3.10.9 - Huggingface_hub version: 0.13.2 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5775/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5775/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5774
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5774/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5774/comments
https://api.github.com/repos/huggingface/datasets/issues/5774/events
https://github.com/huggingface/datasets/pull/5774
1,676,716,662
PR_kwDODunzps5OxIMe
5,774
Fix style
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010336 / 0.011353 (-0.001017) | 0.007085 / 0.011008 (-0.003924) | 0.135577 / 0.038508 (0.097069) | 0.038301 / 0.023109 (0.015192) | 0.427919 / 0.275898 (0.152021) | 0.461451 / 0.323480 (0.137971) | 0.008929 / 0.007986 (0.000944) | 0.005260 / 0.004328 (0.000931) | 0.103481 / 0.004250 (0.099231) | 0.054885 / 0.037052 (0.017833) | 0.434956 / 0.258489 (0.176467) | 0.466915 / 0.293841 (0.173074) | 0.052403 / 0.128546 (-0.076144) | 0.021128 / 0.075646 (-0.054518) | 0.466847 / 0.419271 (0.047576) | 0.085096 / 0.043533 (0.041563) | 0.439935 / 0.255139 (0.184796) | 0.453613 / 0.283200 (0.170413) | 0.123913 / 0.141683 (-0.017769) | 1.930114 / 1.452155 (0.477959) | 2.052083 / 1.492716 (0.559366) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280612 / 0.018006 (0.262606) | 0.583937 / 0.000490 (0.583447) | 0.004542 / 0.000200 (0.004342) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035901 / 0.037411 (-0.001510) | 0.160357 / 0.014526 (0.145831) | 0.141661 / 0.176557 (-0.034896) | 0.234915 / 0.737135 (-0.502220) | 0.164110 / 0.296338 (-0.132228) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.659901 / 0.215209 (0.444692) | 6.529102 / 2.077655 (4.451447) | 2.635324 / 
1.504120 (1.131204) | 2.275777 / 1.541195 (0.734583) | 2.343205 / 1.468490 (0.874715) | 1.241310 / 4.584777 (-3.343467) | 5.683784 / 3.745712 (1.938072) | 3.377162 / 5.269862 (-1.892700) | 2.176404 / 4.565676 (-2.389273) | 0.144303 / 0.424275 (-0.279972) | 0.016352 / 0.007607 (0.008745) | 0.817383 / 0.226044 (0.591339) | 8.148356 / 2.268929 (5.879428) | 3.489277 / 55.444624 (-51.955347) | 2.848086 / 6.876477 (-4.028391) | 2.973304 / 2.142072 (0.831232) | 1.517821 / 4.805227 (-3.287407) | 0.278794 / 6.500664 (-6.221870) | 0.096385 / 0.075469 (0.020916) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631693 / 1.841788 (-0.210095) | 19.564716 / 8.074308 (11.490408) | 23.583081 / 10.191392 (13.391689) | 0.252363 / 0.680424 (-0.428061) | 0.027644 / 0.534201 (-0.506557) | 0.579634 / 0.579283 (0.000351) | 0.645702 / 0.434364 (0.211338) | 0.667302 / 0.540337 (0.126965) | 0.766425 / 1.386936 (-0.620511) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011186 / 0.011353 (-0.000167) | 0.007327 / 0.011008 (-0.003681) | 0.105441 / 0.038508 (0.066933) | 0.040293 / 0.023109 (0.017184) | 0.480557 / 0.275898 (0.204659) | 0.522049 / 0.323480 (0.198569) | 0.007779 / 0.007986 (-0.000207) | 0.007338 / 0.004328 (0.003009) | 0.104744 / 0.004250 (0.100494) | 0.059463 / 0.037052 (0.022411) | 0.494055 / 0.258489 (0.235566) | 0.534340 / 0.293841 (0.240499) | 0.062800 / 0.128546 (-0.065746) | 0.020687 / 0.075646 (-0.054959) | 0.135833 / 0.419271 (-0.283439) | 0.087472 / 0.043533 (0.043939) | 0.465019 / 0.255139 (0.209880) | 0.526713 / 0.283200 (0.243513) | 0.131424 / 0.141683 (-0.010259) | 1.884759 / 1.452155 (0.432605) | 2.015817 / 1.492716 (0.523101) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237032 / 0.018006 (0.219026) | 0.605209 / 0.000490 (0.604719) | 0.006653 / 0.000200 (0.006453) | 0.000264 / 0.000054 (0.000210) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034982 / 0.037411 (-0.002430) | 0.141409 / 0.014526 (0.126883) | 0.151635 / 0.176557 (-0.024922) | 0.217298 / 0.737135 (-0.519837) | 0.171945 / 0.296338 (-0.124393) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.678596 / 0.215209 (0.463387) | 6.802432 / 2.077655 (4.724777) | 3.021617 / 1.504120 (1.517497) | 2.722508 / 1.541195 (1.181313) | 2.728194 / 1.468490 (1.259704) | 1.245863 / 4.584777 (-3.338914) | 5.762676 / 3.745712 (2.016963) | 5.497855 / 5.269862 (0.227994) | 2.855764 / 4.565676 (-1.709912) | 0.157359 / 0.424275 (-0.266916) | 0.015562 / 0.007607 (0.007955) | 0.865559 / 0.226044 (0.639515) | 8.553052 / 2.268929 (6.284123) | 3.905544 / 55.444624 (-51.539081) | 3.272528 / 6.876477 (-3.603949) | 3.399481 / 2.142072 (1.257408) | 1.540155 / 4.805227 (-3.265072) | 0.275871 / 6.500664 (-6.224793) | 0.092346 / 0.075469 (0.016877) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.753646 / 1.841788 (-0.088142) | 20.074050 / 8.074308 (11.999742) | 23.920391 / 10.191392 (13.728999) | 0.257161 / 0.680424 (-0.423263) | 0.027805 / 0.534201 (-0.506396) | 0.565605 / 0.579283 (-0.013678) | 0.643277 / 0.434364 (0.208914) | 0.633504 / 0.540337 (0.093167) | 0.754317 / 1.386936 (-0.632619) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2d34c7968ea1a3fe7d4fa7cdf23673e0354f69ac \"CML watermark\")\n" ]
"2023-04-20T13:21:32Z"
"2023-04-20T13:34:26Z"
"2023-04-20T13:24:28Z"
MEMBER
null
Fix C419 issues
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5774/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5774/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5774.diff", "html_url": "https://github.com/huggingface/datasets/pull/5774", "merged_at": "2023-04-20T13:24:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/5774.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5774" }
true
https://api.github.com/repos/huggingface/datasets/issues/5773
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5773/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5773/comments
https://api.github.com/repos/huggingface/datasets/issues/5773/events
https://github.com/huggingface/datasets/issues/5773
1,675,984,633
I_kwDODunzps5j5X75
5,773
train_dataset does not implement __len__
{ "avatar_url": "https://avatars.githubusercontent.com/u/38179632?v=4", "events_url": "https://api.github.com/users/v-yunbin/events{/privacy}", "followers_url": "https://api.github.com/users/v-yunbin/followers", "following_url": "https://api.github.com/users/v-yunbin/following{/other_user}", "gists_url": "https://api.github.com/users/v-yunbin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/v-yunbin", "id": 38179632, "login": "v-yunbin", "node_id": "MDQ6VXNlcjM4MTc5NjMy", "organizations_url": "https://api.github.com/users/v-yunbin/orgs", "received_events_url": "https://api.github.com/users/v-yunbin/received_events", "repos_url": "https://api.github.com/users/v-yunbin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/v-yunbin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/v-yunbin/subscriptions", "type": "User", "url": "https://api.github.com/users/v-yunbin" }
[]
open
false
null
[]
null
[ "Thanks for reporting, @v-yunbin.\r\n\r\nCould you please give more details, the steps to reproduce the bug, the complete error back trace and the environment information (`datasets-cli env`)?", "this is a detail error info from transformers:\r\n```\r\nTraceback (most recent call last):\r\n File \"finetune.py\", line 177, in <module>\r\n whisper_finetune(traindir,devdir,outdir)\r\n File \"finetune.py\", line 161, in whisper_finetune\r\n trainer = Seq2SeqTrainer(\r\n File \"/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/trainer_seq2seq.py\", line 56, in __init__\r\n super().__init__(\r\n File \"/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/trainer.py\", line 567, in __init__\r\n raise ValueError(\r\nValueError: The train_dataset does not implement __len__, max_steps has to be specified. The number of steps needs to be known in advance for the learning rate scheduler.\r\n```\r\n", "How did you create `train_dataset`? The `datasets` library does not appear in your stack trace.\r\n\r\nWe need more information in order to reproduce the issue...", "```\r\ndef asr_dataset(traindir,devdir):\r\n we_voice = IterableDatasetDict()\r\n #we_voice[\"train\"] = load_from_disk(traindir,streaming=True)\r\n #we_voice[\"test\"]= load_from_disk(devdir,streaming=True)\r\n we_voice[\"train\"] = load_dataset(\"csv\",data_files=os.path.join(traindir,\"train.csv\"),split=\"train\",streaming=True)\r\n #print(load_dataset(\"csv\",data_files=os.path.join(traindir,\"train.csv\"),split=\"train\"))\r\n we_voice[\"test\"] = load_dataset(\"csv\",data_files=os.path.join(devdir,\"dev.csv\"), split=\"train\",streaming=True)\r\n we_voice = we_voice.remove_columns([\"id\"])\r\n we_voice = we_voice.cast_column(\"audio\", Audio(sampling_rate=16000))\r\n return we_voice\r\n\r\n```", "As you are using iterable datasets (`streaming=True`), their length is not defined.\r\n\r\nYou should:\r\n- Either use non-iterable datasets, which have a defined length: use `DatasetDict` and not passing `streaming=True`\r\n- Or pass `args.max_steps` to the `Trainer`", "I don't know how to give a reasonable args.max_steps...........................", "Then you should not use streaming.", "@albertvillanova I think @v-yunbin, myself, and others might be slightly confused about max_steps and how it differs from num_train_epochs.", "@lkurlandski A **step** is referring to optimizer's update after back propagation, and it's associated with a batch of data. For example, if a dataset contains 64 examples and you have an overall batch size of 4, then an epoch will have 64/4=16 batches. Therefore, in one epoch, you will have 16 optimizer **steps**." ]
"2023-04-20T04:37:05Z"
"2023-07-19T20:33:13Z"
null
NONE
null
When training with data preprocessed by the datasets library, I get the following error, which prevents me from setting the number of epochs: `ValueError: The train_dataset does not implement __len__, max_steps has to be specified. The number of steps needs to be known in advance for the learning rate scheduler.`
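A minimal sketch of the two options suggested in the thread above (the file path, example count, and batch size are placeholder assumptions, not values from the report):

```python
from datasets import load_dataset

# Option 1: load without streaming, so the dataset implements __len__
train_ds = load_dataset("csv", data_files="train.csv", split="train")

# Option 2: keep streaming=True, but derive max_steps yourself
num_examples = 64_000            # assumed number of rows in train.csv
batch_size = 4                   # assumed overall (per-step) batch size
num_epochs = 3
max_steps = (num_examples // batch_size) * num_epochs  # steps per epoch * epochs
# then pass max_steps to Seq2SeqTrainingArguments so the Trainer accepts the iterable dataset
```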
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5773/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5773/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5772
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5772/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5772/comments
https://api.github.com/repos/huggingface/datasets/issues/5772/events
https://github.com/huggingface/datasets/pull/5772
1,675,033,510
PR_kwDODunzps5OreXV
5,772
Fix JSON builder when missing keys in first row
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009262 / 0.011353 (-0.002091) | 0.006157 / 0.011008 (-0.004851) | 0.125960 / 0.038508 (0.087451) | 0.036213 / 0.023109 (0.013104) | 0.399331 / 0.275898 (0.123433) | 0.453597 / 0.323480 (0.130117) | 0.006990 / 0.007986 (-0.000995) | 0.007320 / 0.004328 (0.002991) | 0.100321 / 0.004250 (0.096070) | 0.048870 / 0.037052 (0.011818) | 0.396284 / 0.258489 (0.137795) | 0.475619 / 0.293841 (0.181778) | 0.052329 / 0.128546 (-0.076217) | 0.019564 / 0.075646 (-0.056083) | 0.430942 / 0.419271 (0.011670) | 0.063224 / 0.043533 (0.019692) | 0.391717 / 0.255139 (0.136578) | 0.448342 / 0.283200 (0.165142) | 0.114055 / 0.141683 (-0.027628) | 1.793204 / 1.452155 (0.341049) | 1.895151 / 1.492716 (0.402435) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.283699 / 0.018006 (0.265693) | 0.597194 / 0.000490 (0.596704) | 0.007143 / 0.000200 (0.006944) | 0.000602 / 0.000054 (0.000548) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034761 / 0.037411 (-0.002651) | 0.124555 / 0.014526 (0.110030) | 0.149126 / 0.176557 (-0.027430) | 0.220335 / 0.737135 (-0.516801) | 0.153109 / 0.296338 (-0.143229) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.620210 / 0.215209 (0.405001) | 6.229937 / 2.077655 (4.152282) | 2.615203 
/ 1.504120 (1.111083) | 2.239337 / 1.541195 (0.698143) | 2.262138 / 1.468490 (0.793648) | 1.196498 / 4.584777 (-3.388279) | 5.609932 / 3.745712 (1.864220) | 3.031347 / 5.269862 (-2.238515) | 2.025530 / 4.565676 (-2.540146) | 0.139828 / 0.424275 (-0.284447) | 0.015476 / 0.007607 (0.007869) | 0.768964 / 0.226044 (0.542920) | 7.728677 / 2.268929 (5.459748) | 3.336407 / 55.444624 (-52.108217) | 2.700055 / 6.876477 (-4.176422) | 2.765223 / 2.142072 (0.623151) | 1.409073 / 4.805227 (-3.396155) | 0.246849 / 6.500664 (-6.253815) | 0.081231 / 0.075469 (0.005762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.593836 / 1.841788 (-0.247952) | 18.020525 / 8.074308 (9.946216) | 21.766822 / 10.191392 (11.575430) | 0.258615 / 0.680424 (-0.421809) | 0.026895 / 0.534201 (-0.507306) | 0.529823 / 0.579283 (-0.049460) | 0.623470 / 0.434364 (0.189106) | 0.628171 / 0.540337 (0.087833) | 0.745249 / 1.386936 (-0.641687) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008624 / 0.011353 (-0.002729) | 0.006317 / 0.011008 (-0.004691) | 0.097315 / 0.038508 (0.058807) | 0.035217 / 0.023109 (0.012108) | 0.440197 / 0.275898 (0.164299) | 0.473863 / 0.323480 (0.150383) | 0.006722 / 0.007986 (-0.001264) | 0.006444 / 0.004328 (0.002116) | 0.102056 / 0.004250 (0.097806) | 0.047142 / 0.037052 (0.010089) | 0.452476 / 0.258489 (0.193986) | 0.487619 / 0.293841 (0.193778) | 0.052456 / 0.128546 (-0.076090) | 0.018735 / 0.075646 (-0.056911) | 0.114656 / 0.419271 (-0.304616) | 0.062577 / 0.043533 (0.019044) | 0.444471 / 0.255139 (0.189332) | 0.494264 / 0.283200 (0.211065) | 0.117112 / 0.141683 (-0.024571) | 1.848965 / 1.452155 (0.396810) | 1.984008 / 1.492716 (0.491292) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.290494 / 0.018006 (0.272488) | 0.588415 / 0.000490 (0.587925) | 0.000459 / 0.000200 (0.000259) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032873 / 0.037411 (-0.004538) | 0.131139 / 0.014526 (0.116614) | 0.140268 / 0.176557 (-0.036289) | 0.204561 / 0.737135 (-0.532574) | 0.147443 / 0.296338 (-0.148895) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.636899 / 0.215209 (0.421690) | 6.236139 / 2.077655 (4.158484) | 2.801468 / 1.504120 (1.297348) | 2.398808 / 1.541195 (0.857613) | 2.493150 / 1.468490 (1.024659) | 1.228845 / 4.584777 (-3.355932) | 5.675874 / 3.745712 (1.930162) | 3.084939 / 5.269862 (-2.184922) | 2.061310 / 4.565676 (-2.504367) | 0.142285 / 0.424275 (-0.281990) | 0.014972 / 0.007607 (0.007365) | 0.786599 / 0.226044 (0.560555) | 7.876036 / 2.268929 (5.607107) | 3.476136 / 55.444624 (-51.968489) | 2.847922 / 6.876477 (-4.028555) | 3.040326 / 2.142072 (0.898253) | 1.448538 / 4.805227 (-3.356690) | 0.257230 / 6.500664 (-6.243434) | 0.085137 / 0.075469 (0.009668) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.668173 / 1.841788 (-0.173615) | 18.668520 / 8.074308 (10.594212) | 20.535542 / 10.191392 (10.344150) | 0.244580 / 0.680424 (-0.435844) | 0.026364 / 0.534201 (-0.507837) | 0.531753 / 0.579283 (-0.047530) | 0.616578 / 0.434364 (0.182214) | 0.618906 / 0.540337 (0.078569) | 0.738785 / 1.386936 (-0.648151) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f7265cafa3103d77d6d52aa897088faefcd96659 \"CML watermark\")\n" ]
"2023-04-19T14:32:57Z"
"2023-04-21T06:45:13Z"
"2023-04-21T06:35:27Z"
MEMBER
null
Until now, the JSON builder only considered the keys present in the first element of the list: - Either explicitly: by passing index 0 in `dataset[0].keys()` - Or implicitly: `pa.Table.from_pylist(dataset)`, where "schema (default None): If not passed, will be inferred from the first row of the mapping values". This PR fixes the bug by considering the union of the keys present in all the rows. Fix #5726.
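An illustrative sketch of the difference (not the PR's actual code), relying on the pyarrow behaviour quoted above:

```python
import pyarrow as pa

rows = [{"a": 1}, {"a": 2, "b": "x"}]  # key "b" is missing from the first row

# Schema inferred from the first row alone: column "b" is silently dropped
table_old = pa.Table.from_pylist(rows)

# Union of the keys present in all rows: every column is kept
keys = set().union(*(row.keys() for row in rows))
columns = {key: [row.get(key) for row in rows] for key in sorted(keys)}
table_new = pa.Table.from_pydict(columns)  # has both "a" and "b"
```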
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5772/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5772/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5772.diff", "html_url": "https://github.com/huggingface/datasets/pull/5772", "merged_at": "2023-04-21T06:35:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/5772.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5772" }
true
https://api.github.com/repos/huggingface/datasets/issues/5771
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5771/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5771/comments
https://api.github.com/repos/huggingface/datasets/issues/5771/events
https://github.com/huggingface/datasets/issues/5771
1,674,828,380
I_kwDODunzps5j09pc
5,771
Support cloud storage for loading datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4", "events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}", "followers_url": "https://api.github.com/users/eli-osherovich/followers", "following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}", "gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eli-osherovich", "id": 2437102, "login": "eli-osherovich", "node_id": "MDQ6VXNlcjI0MzcxMDI=", "organizations_url": "https://api.github.com/users/eli-osherovich/orgs", "received_events_url": "https://api.github.com/users/eli-osherovich/received_events", "repos_url": "https://api.github.com/users/eli-osherovich/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions", "type": "User", "url": "https://api.github.com/users/eli-osherovich" }
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" }, { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "A duplicate of https://github.com/huggingface/datasets/issues/5281" ]
"2023-04-19T12:43:53Z"
"2023-05-07T17:47:41Z"
"2023-05-07T17:47:41Z"
CONTRIBUTOR
null
### Feature request It seems that the current implementation supports cloud storage only for `load_from_disk`. It would be nice if similar functionality existed in `load_dataset`. ### Motivation The motivation is pretty clear -- let users work with datasets located in the cloud. ### Your contribution I can help implement this.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5771/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5771/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5770
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5770/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5770/comments
https://api.github.com/repos/huggingface/datasets/issues/5770/events
https://github.com/huggingface/datasets/pull/5770
1,673,581,555
PR_kwDODunzps5OmntV
5,770
Add IterableDataset.from_spark
{ "avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4", "events_url": "https://api.github.com/users/maddiedawson/events{/privacy}", "followers_url": "https://api.github.com/users/maddiedawson/followers", "following_url": "https://api.github.com/users/maddiedawson/following{/other_user}", "gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/maddiedawson", "id": 106995444, "login": "maddiedawson", "node_id": "U_kgDOBmCe9A", "organizations_url": "https://api.github.com/users/maddiedawson/orgs", "received_events_url": "https://api.github.com/users/maddiedawson/received_events", "repos_url": "https://api.github.com/users/maddiedawson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions", "type": "User", "url": "https://api.github.com/users/maddiedawson" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi again @lhoestq this is ready for review! Not sure I have permission to add people to the reviewers list...", "Cool ! I think you can define `IterableDataset.from_spark` instead of adding `streaming=` in `Dataset.from_spark`, it can be more intuitive IMO :)", "Thanks for reviewing! I moved the streaming behavior to IterableDataset.from_spark", "Thanks Quentin! I'll flesh out the docs in a follow-up PR", "Friendly ping @lhoestq ", "Thanks @lhoestq ! I fixed the partition order thing and added more unit tests.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006165 / 0.011353 (-0.005188) | 0.004497 / 0.011008 (-0.006511) | 0.099142 / 0.038508 (0.060634) | 0.027479 / 0.023109 (0.004369) | 0.352491 / 0.275898 (0.076593) | 0.402993 / 0.323480 (0.079513) | 0.004885 / 0.007986 (-0.003100) | 0.003315 / 0.004328 (-0.001013) | 0.075787 / 0.004250 (0.071537) | 0.035320 / 0.037052 (-0.001732) | 0.368401 / 0.258489 (0.109912) | 0.409090 / 0.293841 (0.115249) | 0.030125 / 0.128546 (-0.098421) | 0.011670 / 0.075646 (-0.063976) | 0.324381 / 0.419271 (-0.094890) | 0.050815 / 0.043533 (0.007283) | 0.352598 / 0.255139 (0.097460) | 0.389189 / 0.283200 (0.105989) | 0.092873 / 0.141683 (-0.048810) | 1.485140 / 1.452155 (0.032986) | 1.545586 / 1.492716 (0.052869) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199522 / 0.018006 (0.181516) | 0.404576 / 0.000490 (0.404087) | 0.003322 / 0.000200 (0.003122) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022945 / 0.037411 (-0.014466) | 0.095512 / 0.014526 (0.080987) | 0.103077 / 0.176557 (-0.073480) | 0.163918 / 0.737135 (-0.573217) | 0.105560 / 0.296338 (-0.190779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | 
read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417360 / 0.215209 (0.202151) | 4.161693 / 2.077655 (2.084039) | 1.851941 / 1.504120 (0.347821) | 1.649872 / 1.541195 (0.108677) | 1.682099 / 1.468490 (0.213609) | 0.693187 / 4.584777 (-3.891590) | 3.462528 / 3.745712 (-0.283184) | 1.839893 / 5.269862 (-3.429968) | 1.155945 / 4.565676 (-3.409731) | 0.082611 / 0.424275 (-0.341664) | 0.012076 / 0.007607 (0.004469) | 0.514325 / 0.226044 (0.288280) | 5.155052 / 2.268929 (2.886123) | 2.307280 / 55.444624 (-53.137345) | 1.966483 / 6.876477 (-4.909994) | 2.018892 / 2.142072 (-0.123181) | 0.803068 / 4.805227 (-4.002159) | 0.152213 / 6.500664 (-6.348451) | 0.066320 / 0.075469 (-0.009149) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218578 / 1.841788 (-0.623209) | 13.563869 / 8.074308 (5.489561) | 13.954596 / 10.191392 (3.763204) | 0.151527 / 0.680424 (-0.528897) | 0.016655 / 0.534201 (-0.517546) | 0.380637 / 0.579283 (-0.198646) | 0.395854 / 0.434364 (-0.038509) | 0.459111 / 0.540337 (-0.081226) | 0.560219 / 1.386936 (-0.826717) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006427 / 0.011353 (-0.004926) | 0.004728 / 0.011008 (-0.006280) | 0.080525 / 0.038508 (0.042017) | 0.027294 / 0.023109 (0.004185) | 0.414688 / 0.275898 (0.138790) | 0.449882 / 0.323480 (0.126402) | 0.004771 / 0.007986 (-0.003214) | 0.003402 / 0.004328 (-0.000926) | 0.078748 / 0.004250 (0.074497) | 0.037046 / 0.037052 (-0.000007) | 0.417398 / 0.258489 (0.158909) | 0.462921 / 0.293841 (0.169080) | 0.030364 / 0.128546 (-0.098182) | 0.011810 / 0.075646 (-0.063837) | 0.089787 / 0.419271 (-0.329485) | 0.039806 / 0.043533 (-0.003727) | 0.403401 / 
0.255139 (0.148262) | 0.439477 / 0.283200 (0.156278) | 0.088431 / 0.141683 (-0.053252) | 1.534373 / 1.452155 (0.082219) | 1.592316 / 1.492716 (0.099600) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217701 / 0.018006 (0.199695) | 0.384770 / 0.000490 (0.384280) | 0.000437 / 0.000200 (0.000237) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024952 / 0.037411 (-0.012459) | 0.098728 / 0.014526 (0.084202) | 0.106324 / 0.176557 (-0.070233) | 0.155484 / 0.737135 (-0.581651) | 0.109503 / 0.296338 (-0.186836) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.450639 / 0.215209 (0.235430) | 4.523110 / 2.077655 (2.445455) | 2.224810 / 1.504120 (0.720690) | 2.119516 / 1.541195 (0.578321) | 2.225192 / 1.468490 (0.756702) | 0.695397 / 4.584777 (-3.889380) | 3.433559 / 3.745712 (-0.312153) | 2.633127 / 5.269862 (-2.636735) | 1.448471 / 4.565676 (-3.117206) | 0.082262 / 0.424275 (-0.342013) | 0.012246 / 0.007607 (0.004639) | 0.561243 / 0.226044 (0.335199) | 5.652711 / 2.268929 (3.383782) | 2.689771 / 55.444624 (-52.754853) | 2.359512 / 6.876477 (-4.516965) | 2.471098 / 2.142072 (0.329026) | 0.802955 / 4.805227 (-4.002272) | 0.151142 / 6.500664 (-6.349522) | 0.067494 / 0.075469 (-0.007975) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306879 / 1.841788 (-0.534909) | 14.030775 / 8.074308 (5.956467) | 12.917790 / 10.191392 (2.726398) | 0.141269 / 0.680424 (-0.539155) | 0.016264 / 0.534201 (-0.517937) | 0.411957 / 0.579283 (-0.167326) | 0.393235 / 0.434364 (-0.041129) | 0.505144 / 0.540337 (-0.035193) | 0.590660 / 1.386936 (-0.796276) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7790ebd7072eafff755fb575b392f3efa74069e4 \"CML watermark\")\n" ]
"2023-04-18T17:47:53Z"
"2023-05-17T14:07:32Z"
"2023-05-17T14:00:38Z"
CONTRIBUTOR
null
Follow-up from https://github.com/huggingface/datasets/pull/5701 Related issue: https://github.com/huggingface/datasets/issues/5678
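A hedged usage sketch of the API this PR adds (the Spark session setup and the toy DataFrame are illustrative assumptions):

```python
from datasets import IterableDataset
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark_df = spark.createDataFrame([("hello",), ("world",)], ["text"])

# Stream examples directly from the Spark DataFrame
ids = IterableDataset.from_spark(spark_df)
for example in ids:
    print(example)
```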
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5770/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5770/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5770.diff", "html_url": "https://github.com/huggingface/datasets/pull/5770", "merged_at": "2023-05-17T14:00:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/5770.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5770" }
true
https://api.github.com/repos/huggingface/datasets/issues/5769
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5769/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5769/comments
https://api.github.com/repos/huggingface/datasets/issues/5769/events
https://github.com/huggingface/datasets/issues/5769
1,673,441,182
I_kwDODunzps5jvq-e
5,769
Tiktoken tokenizers are not picklable
{ "avatar_url": "https://avatars.githubusercontent.com/u/22663468?v=4", "events_url": "https://api.github.com/users/markovalexander/events{/privacy}", "followers_url": "https://api.github.com/users/markovalexander/followers", "following_url": "https://api.github.com/users/markovalexander/following{/other_user}", "gists_url": "https://api.github.com/users/markovalexander/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/markovalexander", "id": 22663468, "login": "markovalexander", "node_id": "MDQ6VXNlcjIyNjYzNDY4", "organizations_url": "https://api.github.com/users/markovalexander/orgs", "received_events_url": "https://api.github.com/users/markovalexander/received_events", "repos_url": "https://api.github.com/users/markovalexander/repos", "site_admin": false, "starred_url": "https://api.github.com/users/markovalexander/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/markovalexander/subscriptions", "type": "User", "url": "https://api.github.com/users/markovalexander" }
[]
closed
false
null
[]
null
[ "Thanks for reporting, @markovalexander.\r\n\r\nUnfortunately, I'm not able to reproduce the issue: the `tiktoken` tokenizer can be used within `Dataset.map`, both in my local machine and in a Colab notebook: https://colab.research.google.com/drive/1DhJroZgk0sNFJ2Mrz-jYgrmh9jblXaCG?usp=sharing\r\n\r\nAre you sure you are using `datasets` version 2.11.0?" ]
"2023-04-18T16:07:40Z"
"2023-05-04T18:55:57Z"
"2023-05-04T18:55:57Z"
NONE
null
### Describe the bug Since the tiktoken tokenizer is not picklable, it is not possible to use it inside `dataset.map()` with multiprocessing enabled. However, you [made](https://github.com/huggingface/datasets/issues/5536) tiktoken's tokenizers picklable in `datasets==2.10.0` for caching. For some reason, this logic does not work in dataset processing and raises `TypeError: cannot pickle 'builtins.CoreBPE' object`. ### Steps to reproduce the bug ``` from datasets import load_dataset import tiktoken dataset = load_dataset("stas/openwebtext-10k") enc = tiktoken.get_encoding("gpt2") def process(example): ids = enc.encode(example['text']) ids.append(enc.eot_token) out = {'ids': ids, 'len': len(ids)} return out tokenized = dataset.map( process, remove_columns=['text'], desc="tokenizing the OWT splits", num_proc=2, ) ``` ### Expected behavior The dataset starts processing. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.15.0-1021-oracle-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.13.4 - PyArrow version: 9.0.0 - Pandas version: 2.0.0
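One possible workaround (an assumption, not something suggested in this thread): create the encoder inside the mapped function so that no `CoreBPE` object has to be pickled when the worker processes are spawned, at the cost of re-creating the encoding per call:

```python
import tiktoken
from datasets import load_dataset

dataset = load_dataset("stas/openwebtext-10k")

def process(example):
    enc = tiktoken.get_encoding("gpt2")  # built locally, so nothing un-picklable is captured
    ids = enc.encode(example["text"])
    ids.append(enc.eot_token)
    return {"ids": ids, "len": len(ids)}

tokenized = dataset.map(process, remove_columns=["text"], num_proc=2)
```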
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5769/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5769/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5768
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5768/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5768/comments
https://api.github.com/repos/huggingface/datasets/issues/5768/events
https://github.com/huggingface/datasets/issues/5768
1,672,494,561
I_kwDODunzps5jsD3h
5,768
load_dataset("squad") doesn't work in 2.7.1 and 2.10.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/57412770?v=4", "events_url": "https://api.github.com/users/yaseen157/events{/privacy}", "followers_url": "https://api.github.com/users/yaseen157/followers", "following_url": "https://api.github.com/users/yaseen157/following{/other_user}", "gists_url": "https://api.github.com/users/yaseen157/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yaseen157", "id": 57412770, "login": "yaseen157", "node_id": "MDQ6VXNlcjU3NDEyNzcw", "organizations_url": "https://api.github.com/users/yaseen157/orgs", "received_events_url": "https://api.github.com/users/yaseen157/received_events", "repos_url": "https://api.github.com/users/yaseen157/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yaseen157/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yaseen157/subscriptions", "type": "User", "url": "https://api.github.com/users/yaseen157" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Thanks for reporting, @yaseen157.\r\n\r\nCould you please give the complete error stack trace?", "I am not able to reproduce your issue: the dataset loads perfectly on my local machine and on a Colab notebook: https://colab.research.google.com/drive/1Fbdoa1JdNz8DOdX6gmIsOK1nCT8Abj4O?usp=sharing\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"squad\")\r\nDownloading builder script: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5.27k/5.27k [00:00<00:00, 3.22MB/s]\r\nDownloading metadata: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.36k/2.36k [00:00<00:00, 1.60MB/s]\r\nDownloading readme: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.67k/7.67k [00:00<00:00, 4.58MB/s]\r\nDownloading and preparing dataset squad/plain_text to ...t/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453...\r\nDownloading data: 30.3MB [00:00, 91.8MB/s] | 0/2 [00:00<?, ?it/s]\r\nDownloading data: 4.85MB [00:00, 75.3MB/s] \r\nDownloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.31it/s]\r\nExtracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2157.01it/s]\r\nDataset squad downloaded and prepared to .../.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 463.95it/s]\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 87599\r\n })\r\n validation: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 10570\r\n })\r\n})\r\n```", "I am at a complete loss for what's happening here. 
A quick summary, I have 3 machines to try this with:\r\n1) My windows 10 laptop\r\n2) Linux machine1, super computer login node\r\n3) Linux machine2, super computer compute node\r\n\r\nLet's define the following as a test script for the machines:\r\n\r\n```\r\nimport traceback\r\nimport datasets\r\nprint(f\"{datasets.__version__=}\")\r\ntry:\r\n ds = datasets.load_dataset(\"squad\")\r\nexcept:\r\n traceback.print_exc()\r\nelse:\r\n print(\"Success!\")\r\n```\r\n\r\nThe Windows laptop enters some sort of traceback recursion loop:\r\n\r\n> datasets.__version__='2.7.1'\r\n> Downloading and preparing dataset squad/plain_text to C:/Users/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453...\r\n> Downloading data files: 100%|██████████| 2/2 [00:00<?, ?it/s]\r\n> Traceback (most recent call last):\r\n> File \"<string>\", line 1, in <module>\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 116, in spawn_main\r\n> exitcode = _main(fd, parent_sentinel)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 125, in _main\r\n> prepare(preparation_data)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 236, in prepare\r\n> _fixup_main_from_path(data['init_main_from_path'])\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 287, in _fixup_main_from_path\r\n> main_content = runpy.run_path(main_path,\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 267, in run_path\r\n> code, fname = _get_code_from_file(run_name, path_name)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 237, in _get_code_from_file\r\n> with io.open_code(decoded_path) as f:\r\n> OSError: [Errno 22] Invalid argument: 'C:\\\\Users\\\\yr3g17\\\\OneDrive - University of Southampton\\\\Documents\\\\PhD-repository\\\\<input>'\r\n> Traceback (most recent call last):\r\n> File \"<string>\", line 1, in <module>\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 116, in spawn_main\r\n> exitcode = _main(fd, parent_sentinel)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 125, in _main\r\n> prepare(preparation_data)\r\n**this error traceback is endlessly recursive**\r\n\r\nThis is a brand new issue that started today and I didn't even realise was a thing, as I had been using my windows machine to follow tracebacks for the other machines...\r\n\r\nI suspect this issue had something to do with my filepath naming, but I couldn't confirm this when I spent time trying to debug this myself weeks ago, something to do with files being locked and never released. I'm not too concerned about my laptop not working here because I've had so many issues with Microsoft OneDrive and my filesystem.\r\n\r\nLinux machines 1 and 2 were working fine for months, but have all of a sudden stopped working. 
Trying to run linux machine 1 (login node), I get:\r\n\r\n> datasets.__version__='2.10.1'\r\n> Downloading and preparing dataset json/squad to /home/yr3g17/.cache/hugg\r\ningface/datasets/json/squad-d733af945be1d2c2/0.0.0/0f7e3662623656454fcd2\r\nb650f34e886a7db4b9104504885bd462096cc7a9f51...\r\n> Downloading data files: 100%|███████████████████████████████████████████\r\n█████████████████████████████████████████████| 2/2 [00:00<00:00, 4042.70\r\nit/s]\r\n>Extracting data files: 100%|███████████████████████████████████████\r\n███████████████████████████████████████████████████| 2/2 [00:00<00:00, 1\r\n11.15it/s]\r\n> Generating train split: 0 examples [00:00, ? examples/s]\r\n\r\n and hangs here. This has not happened to me before on the Linux machine. If I forcefully keyboard interrupt, I get:\r\n \r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 2, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/load.py\", line 1782, in load_dataset\r\n> builder_instance.download_and_prepare(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/builder.py\", line 793, in download_and_prepare\r\n> with FileLock(lock_path) if is_local else contextlib.nullcontext():\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/utils/filelock.py\", line 320, in __enter__\r\n> self.acquire()\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/utils/filelock.py\", line 282, in acquire\r\n> time.sleep(poll_intervall)\r\n\r\nWhich also appears to be file lock related! I resolved this by navigating to my ~/.cache/huggingface/datasets directory and wiping out anything to do with the squad dataset in *.lock files. Now I get:\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset_load(\"squad\")\r\n\r\n```\r\n> Downloading and preparing dataset squad/plain_text to /home/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb\r\n> 2511d223b9150cce08a837ef62ffea453...\r\n> Downloading data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 44.75it/s]\r\n> Extracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 8.54it/s]\r\n> Dataset squad downloaded and prepared to /home/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150\r\n> cce08a837ef62ffea453. Subsequent calls will reuse this data.\r\n> 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 19.77it/s]\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['id', 'title', 'context', 'question', 'answers'],\r\n> num_rows: 87599\r\n> })\r\n> validation: Dataset({\r\n> features: ['id', 'title', 'context', 'question', 'answers'],\r\n> num_rows: 10570\r\n> })\r\n> })\r\n> \r\n\r\nWhich all seems fine right, it's doing what it should be. But now, without ever leaving the IDE, I \"make a subsequent call\" to reuse the data by repeating the command. 
I encounter the following traceback\r\n\r\n`load_dataset(\"squad\")`\r\n\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1496, in load_dataset_builder\r\n> dataset_module = dataset_module_factory(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1151, in dataset_module_factory\r\n> ).get_module()\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 631, in get_module\r\n> data_files = DataFilesDict.from_local_or_remote(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 796, in from_local_or_remote\r\n> DataFilesList.from_local_or_remote(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 764, in from_local_or_remote\r\n> data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 369, in resolve_patterns_locally_or_by_urls\r\n> raise FileNotFoundError(error_msg)\r\n> FileNotFoundError: Unable to resolve any data file that matches '['train[-._ 0-9/]**', '**[-._ 0-9/]train[-._ 0-9/]**', 'training[-._ 0-9/]**', '**[-\r\n> ._ 0-9/]training[-._ 0-9/]**']' at /mainfs/home/yr3g17/.cache/huggingface/datasets/squad with any supported extension ['csv', 'tsv', 'json', 'jsonl',\r\n> 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'gr\r\n> ib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', '\r\n> mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', '\r\n> emf', 'xbm', 'xpm', 'BLP', 'BMP', 'DIB', 'BUFR', 'CUR', 'PCX', 'DCX', 'DDS', 'PS', 'EPS', 'FIT', 'FITS', 'FLI', 'FLC', 'FTC', 'FTU', 'GBR', 'GIF', 'G\r\n> RIB', 'H5', 'HDF', 'PNG', 'APNG', 'JP2', 'J2K', 'JPC', 'JPF', 'JPX', 'J2C', 'ICNS', 'ICO', 'IM', 'IIM', 'TIF', 'TIFF', 'JFIF', 'JPE', 'JPG', 'JPEG',\r\n> 'MPG', 'MPEG', 'MSP', 'PCD', 'PXR', 'PBM', 'PGM', 'PPM', 'PNM', 'PSD', 'BW', 'RGB', 'RGBA', 'SGI', 'RAS', 'TGA', 'ICB', 'VDA', 'VST', 'WEBP', 'WMF',\r\n> 'EMF', 'XBM', 'XPM', 'aiff', 'au', 'avr', 'caf', 'flac', 'htk', 'svx', 'mat4', 'mat5', 'mpc2k', 'ogg', 'paf', 'pvf', 'raw', 'rf64', 'sd2', 'sds', 'ir\r\n> cam', 'voc', 'w64', 'wav', 'nist', 'wavex', 'wve', 'xi', 'mp3', 'opus', 'AIFF', 'AU', 'AVR', 'CAF', 'FLAC', 'HTK', 'SVX', 'MAT4', 'MAT5', 'MPC2K', 'O\r\n> GG', 'PAF', 'PVF', 'RAW', 'RF64', 'SD2', 'SDS', 'IRCAM', 'VOC', 'W64', 'WAV', 'NIST', 'WAVEX', 'WVE', 'XI', 'MP3', 'OPUS', 'zip']\r\n\r\nIt doesn't even appear like I can reliably repeat this process. I'll nuke squad files in my dataset cache and run the Python code again (which downloads a new copy of the dataset to cache). 
It will either fail (as it just did in the quote above), or it will successfully recall the dataset.\r\n\r\nI repeated this nuking process a few times until calling load_dataset was reliably giving me the correct result (no filelocking issues or tracebacks). I then sent the test script as a job to the supercomputer compute nodes (which do not have internet access and therefore depend on cached data from Linux machine 1 login nodes)\r\n\r\n> Using the latest cached version of the module from /home/yr3g17/.cache/huggingface/modules/datasets_modules/datasets/squad/8730650fed465361f38ac4d810\r\n> ccdd16e8fc87b56498e52fb7e2cadaefc1f177 (last modified on Tue Feb 14 10:12:56 2023) since it couldn't be found locally at squad., or remotely on the Hugging Face Hub.\r\n> Traceback (most recent call last):\r\n> File \"/mainfs/scratch/yr3g17/squad_qanswering/3054408/0/../../main.py\", line 5, in <module>\r\n> dataset = load_dataset(\"squad\")\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1522, in load_dataset_builder\r\n> builder_instance: DatasetBuilder = builder_cls(\r\n> TypeError: 'NoneType' object is not callable\r\n\r\nand I have absolutely no idea why the second and third machines are producing different tracebacks. I have previously run these exact scripts successfully on the login and compute nodes of the supercomputer, this issue I'm raising has appeared fairly recently for me. This, is where I encounter the TypeError that I opened this issue with, which I was able to traceback (using my laptop before it too started not working) to whatever was dynamically importing \"builder_cls\". That bit of code wasn't doing importing builder_cls correctly and would effectively make the assignment \"builder_cls=None\" resulting in the TypeError. Does any of this help?", "I'm back on linux machine 1 (login node) now. 
After submitting that as a job to machine 2 and it failing with TypeError, linux machine 1 now produces identical traceback to machine 2:\r\n\r\n> (arkroyal) [yr3g17@cyan52 squad_qanswering]$ python\r\n> Python 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] on linux\r\n> Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>\r\n> from datasets import load_dataset\r\n> load_dataset(\"squad\")\r\n>\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1522, in load_dataset_builder\r\n> builder_instance: DatasetBuilder = builder_cls(\r\n> TypeError: 'NoneType' object is not callable\r\n\r\nI thought it might be useful to provide you with my cache file structure:\r\n\r\n>_home_yr3g17_.cache_huggingface_datasets_casino_default_1.1.0_302c3b1ac78c48091deabe83a11f4003c7b472a4e11a8eb92799653785bd5da1.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_imdb_plain_text_1.0.0_2fdd8b9bcadd6e7055e742a706876ba43f19faee861df134affd7a3f60fc38a1.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_squad_plain_text_1.0.0_d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_yelp_review_full_yelp_review_full_1.0.0_e8e18e19d7be9e75642fc66b198abadb116f73599ec89a69ba5dd8d1e57ba0bf.lock\r\n> casino\r\n> downloads\r\n> imdb\r\n> json\r\n> squad\r\n> squad_v2\r\n> yelp_review_full\r\n\r\nThe inside of squad/plain_text/1.0.0/ looks like\r\n\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453.incomplete_info.lock\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453_builder.lock\r\n", "I see this is quite a complex use case...\r\n\r\nLet's try multiple things:\r\n- First, update `datasets` and make sure you use the same version in all machines, so that we can easily compare different behaviors.\r\n ```\r\n pip install -U datasets\r\n ```\r\n- Second, wherever you run the `load_dataset(\"squad\")` command, make sure there is not a local directory named \"squad\". The datasets library gives priority to any local file/directory over the datasets on the Hugging Face Hub\r\n - I tell you this, because in one of your trace backs, it seems it refers to a local directory:\r\n ```\r\n Downloading and preparing dataset json/squad to /home/yr3g17/.cache/huggingface/datasets/json/squad-d733af945be1d2c2/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...\r\n ```\r\n- Third, to use the \"squad\" dataset from the Hub, you need to have internet connection, so that you can download the \"squad\" Python loading script from the Hub. Do all your machines have internet connection?\r\n - I ask this because of this error message:\r\n ```\r\n Using the latest cached version of the module from /home/yr3g17/.cache/huggingface/modules/datasets_modules/datasets/squad/8730650fed465361f38ac4d810ccdd16e8fc87b56498e52fb7e2cadaefc1f177 (last modified on Tue Feb 14 10:12:56 2023) since it couldn't be found locally at squad., or remotely on the Hugging Face Hub.\r\n ```\r\n- Fourth, to be sure that we avoid any issues with the cache, it is a good idea to remove it and regenerate it. 
Remove `.cache/huggingface/datasets` and also `.cache/huggingface/modules`\r\n- Fifth, as an additional debugging tool, let's be sure we use the latest \"squad\" Python loading script by passing the revision parameter:\r\n ```\r\n ds = load_dataset(\"squad\", revision=\"5fe18c4c680f9922d794e3f4dd673a751c74ee37\")\r\n ```", "Additionally, we just had an infrastructure issue on the Hugging Face Hub at around 11:30 today. That might have contributed to the connectivity issue... It is fixed now.\r\n\r\nhttps://status.huggingface.co/", "Hi again, thanks for your help and insight Albert Villanova.\r\n\r\nIt's all working now, so thank you for that. For the benefit of anyone else who ends up in this thread, I solved the problem by addressing Albert's advice:\r\n\r\n(1) Both Windows and Linux machine 1 (have internet access) and can now access the SQuAD dataset. The supercomputer login node can only access version 2.7.1, but my Windows laptop is running on datasets 2.11.0 just fine. I suspect it was just a perfect storm alongside the aforementioned \"infrastructure issue\".\r\n\r\n(2) I did have a local directory called squad, because I was using a local copy of evaluate's \"SQuAD\" metric. The supercomputer compute nodes do not have internet access and treat `metric = evaluate.load('<x>')` as a way of loading a metric at the local path `./<x>/<x>.py`, which could've been a related issue as I was storing the metric under `squad/squad.py`. Don't be lazy like me and store the evaluation code under a path with a name that can be misinterpreted.\r\n\r\n(3) I can't give internet access to the supercomputer compute nodes, so local files do just fine here.\r\n\r\n(4) The windows and Linux machine 1 can both access the internet and were getting fresh copies of the dataset from the huggingface hub. Linux machine 2 was working after I cleared the contents of ~/.cache/huggingface/....\r\n\r\nI feel silly now, knowing it was all so simple! Sorry about that Albert, and thanks again for the help. I've not raised a Github issue like this before, so I'm not sure if I should be close my own issues or if this is something you guys do?", "Thanks for your detailed feedback which for sure will be useful to other community members." ]
"2023-04-18T07:10:56Z"
"2023-04-20T10:27:23Z"
"2023-04-20T10:27:22Z"
NONE
null
### Describe the bug There is an issue that seems to be unique to the "squad" dataset, in which it cannot be loaded using standard methods. This issue is most quickly reproduced from the command line, using the HF examples to verify a dataset is loaded properly. This is not a problem with the "squad_v2" dataset, for example. ### Steps to reproduce the bug cmd line > $ python -c "from datasets import load_dataset; print(load_dataset('squad', split='train')[0])" OR Python IDE > from datasets import load_dataset > load_dataset("squad") ### Expected behavior I expected either to see the output described in the installation docs ([https://huggingface.co/docs/datasets/installation]) from running the very same command on the command line, or any output that does not raise Python's TypeError. There is some funky behaviour in the dataset builder portion of the codebase that means it is trying to import the squad dataset with an incorrect path, or the squad dataset couldn't be downloaded. I'm not really sure what the problem is beyond that. Messing around with caching, I did manage to get it to load the dataset once, and then couldn't repeat this. ### Environment info datasets=2.7.1 **or** 2.10.1, python=3.10.8, Linux 3.10.0-1160.36.2.el7.x86_64 **or** Windows 10-64
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5768/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5768/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5767
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5767/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5767/comments
https://api.github.com/repos/huggingface/datasets/issues/5767/events
https://github.com/huggingface/datasets/issues/5767
1,672,433,979
I_kwDODunzps5jr1E7
5,767
How to use Distill-BERT with different datasets?
{ "avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4", "events_url": "https://api.github.com/users/sauravtii/events{/privacy}", "followers_url": "https://api.github.com/users/sauravtii/followers", "following_url": "https://api.github.com/users/sauravtii/following{/other_user}", "gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sauravtii", "id": 109907638, "login": "sauravtii", "node_id": "U_kgDOBo0Otg", "organizations_url": "https://api.github.com/users/sauravtii/orgs", "received_events_url": "https://api.github.com/users/sauravtii/received_events", "repos_url": "https://api.github.com/users/sauravtii/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions", "type": "User", "url": "https://api.github.com/users/sauravtii" }
[]
closed
false
null
[]
null
[ "Closing this one in favor of the same issue opened in the `transformers` repo." ]
"2023-04-18T06:25:12Z"
"2023-04-20T16:52:05Z"
"2023-04-20T16:52:05Z"
NONE
null
### Describe the bug - `transformers` version: 4.11.3 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.12.0+cu102 (True) - Tensorflow version (GPU?): 2.10.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Steps to reproduce the bug I recently read [this](https://huggingface.co/docs/transformers/quicktour#train-with-tensorflow:~:text=The%20most%20important%20thing%20to%20remember%20is%20you%20need%20to%20instantiate%20a%20tokenizer%20with%20the%20same%20model%20name%20to%20ensure%20you%E2%80%99re%20using%20the%20same%20tokenization%20rules%20a%20model%20was%20pretrained%20with.) and was wondering how to use distill-BERT (which is pre-trained with imdb dataset) with a different dataset (for eg. [this](https://huggingface.co/datasets/yhavinga/imdb_dutch) dataset)? ### Expected behavior Distill-BERT should work with different datasets. ### Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 11.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5767/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5767/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5766
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5766/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5766/comments
https://api.github.com/repos/huggingface/datasets/issues/5766/events
https://github.com/huggingface/datasets/issues/5766
1,671,485,882
I_kwDODunzps5joNm6
5,766
Support custom feature types
{ "avatar_url": "https://avatars.githubusercontent.com/u/37540982?v=4", "events_url": "https://api.github.com/users/jmontalt/events{/privacy}", "followers_url": "https://api.github.com/users/jmontalt/followers", "following_url": "https://api.github.com/users/jmontalt/following{/other_user}", "gists_url": "https://api.github.com/users/jmontalt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jmontalt", "id": 37540982, "login": "jmontalt", "node_id": "MDQ6VXNlcjM3NTQwOTgy", "organizations_url": "https://api.github.com/users/jmontalt/orgs", "received_events_url": "https://api.github.com/users/jmontalt/received_events", "repos_url": "https://api.github.com/users/jmontalt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jmontalt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmontalt/subscriptions", "type": "User", "url": "https://api.github.com/users/jmontalt" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi ! Interesting :) What kind of new types would you like to use ?\r\n\r\nNote that you can already implement your own decoding by using `set_transform` that can decode data on-the-fly when rows are accessed", "An interesting proposal indeed. \r\n\r\nPandas and Polars have the \"extension API\", so doing something similar on our side could be useful, too. However, this requires defining a common interface for the existing feature types before discussing the API/workflow for defining/sharing custom feature types, and this could take some time.\r\n\r\nIt would also be nice if the datasets viewer could render these custom types.", "Thank you for your replies! @lhoestq I have a use case involving whole-slide images in digital pathology. These are very large images (potentially gigapixel scale), so standard image tools are not suitable. Essentially, encoding/decoding can be done from/to [`OpenSlide`](https://openslide.org/api/python/) objects. Though there may be interest in this use case from the digital pathology community, it may not be sufficiently useful to suggest adding the feature type, but there will likely be many other use cases for a generic custom feature type.\r\n\r\nThank you for pointing out `set_transform`! I will make sure to keep this in mind in the future.\r\n\r\n@mariosasko An \"extension API\" sounds like a good idea, though I understand that this needs to be properly defined, and that you will need to discuss it internally. Support from the viewer would be awesome, too, though the generalization to arbitrary types sounds challenging.\r\n\r\nFor now, happy to know that you're considering the feature. Feel free to let me know if I can do anything to support the process." ]
"2023-04-17T15:46:41Z"
"2023-05-03T21:58:43Z"
null
NONE
null
### Feature request I think it would be nice to allow registering custom feature types with the 🤗 Datasets library. For example, allow to do something along the following lines: ``` from datasets.features import register_feature_type # this would be a new function @register_feature_type class CustomFeatureType: def encode_example(self, value): """User-provided logic to encode an example of this feature.""" pass def decode_example(self, value, token_per_repo_id=None): """User-provided logic to decode an example of this feature.""" pass ``` ### Motivation Users of 🤗 Datasets, such as myself, may want to use the library to load datasets with unsupported feature types (i.e., beyond `ClassLabel`, `Image`, or `Audio`). This would be useful for prototyping new feature types and for feature types that aren't used widely enough to warrant inclusion in 🤗 Datasets. At the moment, this is only possible by monkey-patching 🤗 Datasets, which obfuscates the code and is prone to breaking with library updates. It also requires the user to write some custom code which could be easily avoided. ### Your contribution I would be happy to contribute this feature. My proposed solution would involve changing the following call to `globals()` to an explicit feature type registry, which a user-facing `register_feature_type` decorator could update. https://github.com/huggingface/datasets/blob/fd893098627230cc734f6009ad04cf885c979ac4/src/datasets/features/features.py#L1329 I would also provide an abstract base class for custom feature types which users could inherit. This would have at least an `encode_example` method and a `decode_example` method, similar to `Image` or `Audio`. The existing `encode_nested_example` and `decode_nested_example` functions would also need to be updated to correctly call the corresponding functions for the new type.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5766/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5766/timeline
null
null
null
null
false
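As an interim answer to the feature request above, the maintainers point to `set_transform`, which lets user code decode rows on the fly when they are accessed. A minimal sketch of that workaround; the decoding function here is a hypothetical stand-in (a real use case would open a whole-slide image with OpenSlide instead):

```python
from datasets import Dataset

ds = Dataset.from_dict({"slide_path": ["slide_0001.tiff", "slide_0002.tiff"]})

def decode_batch(batch):
    # Stand-in for user-provided decoding logic; replace with e.g. an
    # OpenSlide loader. The transform receives a batch (dict of lists).
    return {"slide": [path.upper() for path in batch["slide_path"]]}

# Applied lazily: rows are only decoded when they are read.
ds.set_transform(decode_batch)
print(ds[0])  # {'slide': 'SLIDE_0001.TIFF'}
```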
https://api.github.com/repos/huggingface/datasets/issues/5765
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5765/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5765/comments
https://api.github.com/repos/huggingface/datasets/issues/5765/events
https://github.com/huggingface/datasets/issues/5765
1,671,388,824
I_kwDODunzps5jn16Y
5,765
ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['text']
{ "avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4", "events_url": "https://api.github.com/users/sauravtii/events{/privacy}", "followers_url": "https://api.github.com/users/sauravtii/followers", "following_url": "https://api.github.com/users/sauravtii/following{/other_user}", "gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sauravtii", "id": 109907638, "login": "sauravtii", "node_id": "U_kgDOBo0Otg", "organizations_url": "https://api.github.com/users/sauravtii/orgs", "received_events_url": "https://api.github.com/users/sauravtii/received_events", "repos_url": "https://api.github.com/users/sauravtii/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions", "type": "User", "url": "https://api.github.com/users/sauravtii" }
[]
open
false
null
[]
null
[ "You need to remove the `text` and `text_en` columns before passing the dataset to the `DataLoader` to avoid this error:\r\n```python\r\ntokenized_datasets = tokenized_datasets.remove_columns([\"text\", \"text_en\"])\r\n```\r\n", "Thanks @mariosasko. Now I am getting this error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"client_2.py\", line 138, in <module>\r\n main()\r\n File \"client_2.py\", line 134, in main\r\n fl.client.start_numpy_client(server_address=\"localhost:8080\", client=IMDBClient())\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py\", line 208, in start_numpy_client\r\n start_client(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py\", line 142, in start_client\r\n client_message, sleep_duration, keep_going = handle(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py\", line 68, in handle\r\n return _fit(client, server_msg.fit_ins), 0, True\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py\", line 157, in _fit\r\n fit_res = client.fit(fit_ins)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py\", line 252, in _fit\r\n results = self.numpy_client.fit(parameters, ins.config) # type: ignore\r\n File \"client_2.py\", line 124, in fit\r\n train(net, trainloader, epochs=1)\r\n File \"client_2.py\", line 78, in train\r\n for batch in trainloader:\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py\", line 652, in __next__\r\n data = self._next_data()\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py\", line 692, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 49, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 49, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1525, in __getitem__\r\n return self._getitem(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1517, in _getitem\r\n pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 373, in query_table\r\n pa_subtable = _query_table_with_indices_mapping(table, key, indices=indices)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 55, in _query_table_with_indices_mapping\r\n return _query_table(table, key)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 79, in _query_table\r\n return table.fast_slice(key % table.num_rows, 1)\r\nZeroDivisionError: integer division or modulo by zero\r\n```\r\n\r\nThis is my code:\r\n\r\n```\r\nfrom collections import OrderedDict\r\nimport warnings\r\n\r\nimport flwr as fl\r\nimport torch\r\nimport numpy as np\r\n\r\nimport random\r\nfrom torch.utils.data import DataLoader\r\n\r\nfrom datasets import load_dataset, load_metric\r\n\r\nfrom transformers import AutoTokenizer, DataCollatorWithPadding\r\nfrom transformers import 
AutoModelForSequenceClassification\r\nfrom transformers import AdamW\r\n#from transformers import tokenized_datasets\r\n\r\n\r\nwarnings.filterwarnings(\"ignore\", category=UserWarning)\r\n# DEVICE = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\r\n\r\nDEVICE = \"cpu\"\r\n\r\nCHECKPOINT = \"distilbert-base-uncased\" # transformer model checkpoint\r\n\r\n\r\ndef load_data():\r\n \"\"\"Load IMDB data (training and eval)\"\"\"\r\n raw_datasets = load_dataset(\"yhavinga/imdb_dutch\")\r\n raw_datasets = raw_datasets.shuffle(seed=42)\r\n\r\n # remove unnecessary data split\r\n del raw_datasets[\"unsupervised\"]\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)\r\n\r\n def tokenize_function(examples):\r\n return tokenizer(examples[\"text\"], truncation=True)\r\n\r\n # random 100 samples\r\n population = random.sample(range(len(raw_datasets[\"train\"])), 100)\r\n\r\n tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)\r\n tokenized_datasets[\"train\"] = tokenized_datasets[\"train\"].select(population)\r\n tokenized_datasets[\"test\"] = tokenized_datasets[\"test\"].select(population)\r\n\r\n # tokenized_datasets = tokenized_datasets.remove_columns(\"text\")\r\n # tokenized_datasets = tokenized_datasets.rename_column(\"label\", \"labels\")\r\n\r\n tokenized_datasets = tokenized_datasets.remove_columns(\"attention_mask\")\r\n tokenized_datasets = tokenized_datasets.remove_columns(\"input_ids\")\r\n tokenized_datasets = tokenized_datasets.remove_columns(\"label\")\r\n # tokenized_datasets = tokenized_datasets.remove_columns(\"text_en\")\r\n\r\n # tokenized_datasets = tokenized_datasets.remove_columns(raw_datasets[\"train\"].column_names)\r\n \r\n tokenized_datasets = tokenized_datasets.remove_columns([\"text\", \"text_en\"])\r\n \r\n data_collator = DataCollatorWithPadding(tokenizer=tokenizer)\r\n trainloader = DataLoader(\r\n tokenized_datasets[\"train\"],\r\n shuffle=True,\r\n batch_size=32,\r\n collate_fn=data_collator,\r\n )\r\n\r\n testloader = DataLoader(\r\n tokenized_datasets[\"test\"], batch_size=32, collate_fn=data_collator\r\n )\r\n\r\n return trainloader, testloader\r\n\r\n\r\ndef train(net, trainloader, epochs):\r\n optimizer = AdamW(net.parameters(), lr=5e-4)\r\n net.train()\r\n for _ in range(epochs):\r\n for batch in trainloader:\r\n batch = {k: v.to(DEVICE) for k, v in batch.items()}\r\n outputs = net(**batch)\r\n loss = outputs.loss\r\n loss.backward()\r\n optimizer.step()\r\n optimizer.zero_grad()\r\n\r\n\r\ndef test(net, testloader):\r\n metric = load_metric(\"accuracy\")\r\n loss = 0\r\n net.eval()\r\n for batch in testloader:\r\n batch = {k: v.to(DEVICE) for k, v in batch.items()}\r\n with torch.no_grad():\r\n outputs = net(**batch)\r\n logits = outputs.logits\r\n loss += outputs.loss.item()\r\n predictions = torch.argmax(logits, dim=-1)\r\n metric.add_batch(predictions=predictions, references=batch[\"labels\"])\r\n loss /= len(testloader.dataset)\r\n accuracy = metric.compute()[\"accuracy\"]\r\n return loss, accuracy\r\n\r\n\r\ndef main():\r\n net = AutoModelForSequenceClassification.from_pretrained(\r\n CHECKPOINT, num_labels=2\r\n ).to(DEVICE)\r\n\r\n trainloader, testloader = load_data()\r\n\r\n # Flower client\r\n class IMDBClient(fl.client.NumPyClient):\r\n def get_parameters(self, config):\r\n return [val.cpu().numpy() for _, val in net.state_dict().items()]\r\n\r\n def set_parameters(self, parameters):\r\n params_dict = zip(net.state_dict().keys(), parameters)\r\n state_dict = OrderedDict({k: torch.Tensor(v) for k, v in 
params_dict})\r\n net.load_state_dict(state_dict, strict=True)\r\n\r\n def fit(self, parameters, config):\r\n self.set_parameters(parameters)\r\n print(\"Training Started...\")\r\n train(net, trainloader, epochs=1)\r\n print(\"Training Finished.\")\r\n return self.get_parameters(config={}), len(trainloader), {}\r\n\r\n def evaluate(self, parameters, config):\r\n self.set_parameters(parameters)\r\n loss, accuracy = test(net, testloader)\r\n return float(loss), len(testloader), {\"accuracy\": float(accuracy)}\r\n\r\n # Start client\r\n fl.client.start_numpy_client(server_address=\"localhost:8080\", client=IMDBClient())\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```", "Please also remove/comment these lines:\r\n```python\r\ntokenized_datasets = tokenized_datasets.remove_columns(\"attention_mask\")\r\ntokenized_datasets = tokenized_datasets.remove_columns(\"input_ids\")\r\ntokenized_datasets = tokenized_datasets.remove_columns(\"label\")\r\n```", "Thanks @mariosasko .\r\n\r\nNow, I am trying out this [tutorial](https://flower.dev/docs/quickstart-huggingface.html) which basically trains distil-BERT with IMDB dataset (very similar to this [tutorial](https://huggingface.co/docs/transformers/main/tasks/sequence_classification)). But I don't know why my accuracy isn't increasing even after training for a significant amount of time and also by using the entire dataset. Below I have attached `client.py` file:\r\n\r\n`client.py`:\r\n\r\n```\r\nfrom collections import OrderedDict\r\nimport warnings\r\n\r\nimport flwr as fl\r\nimport torch\r\nimport numpy as np\r\n\r\nimport random\r\nfrom torch.utils.data import DataLoader\r\n\r\nfrom datasets import load_dataset, load_metric\r\n\r\nfrom transformers import AutoTokenizer, DataCollatorWithPadding\r\nfrom transformers import AutoModelForSequenceClassification\r\nfrom transformers import AdamW\r\n\r\nwarnings.filterwarnings(\"ignore\", category=UserWarning)\r\n\r\nDEVICE = \"cuda:1\"\r\n\r\nCHECKPOINT = \"distilbert-base-uncased\" # transformer model checkpoint\r\n\r\n\r\ndef load_data():\r\n \"\"\"Load IMDB data (training and eval)\"\"\"\r\n raw_datasets = load_dataset(\"imdb\")\r\n raw_datasets = raw_datasets.shuffle(seed=42)\r\n\r\n # remove unnecessary data split\r\n del raw_datasets[\"unsupervised\"]\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)\r\n\r\n def tokenize_function(examples):\r\n return tokenizer(examples[\"text\"], truncation=True)\r\n\r\n tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)\r\n\r\n tokenized_datasets = tokenized_datasets.remove_columns(\"text\")\r\n tokenized_datasets = tokenized_datasets.rename_column(\"label\", \"labels\")\r\n\r\n data_collator = DataCollatorWithPadding(tokenizer=tokenizer)\r\n trainloader = DataLoader(\r\n tokenized_datasets[\"train\"],\r\n shuffle=True,\r\n batch_size=32,\r\n collate_fn=data_collator,\r\n )\r\n\r\n testloader = DataLoader(\r\n tokenized_datasets[\"test\"], batch_size=32, collate_fn=data_collator\r\n )\r\n\r\n return trainloader, testloader\r\n\r\n\r\ndef train(net, trainloader, epochs):\r\n optimizer = AdamW(net.parameters(), lr=5e-5)\r\n net.train()\r\n for i in range(epochs):\r\n print(\"Epoch: \", i+1)\r\n j = 1\r\n print(\"####################### The length of the trainloader is: \", len(trainloader)) \r\n for batch in trainloader:\r\n print(\"####################### The batch number is: \", j)\r\n batch = {k: v.to(DEVICE) for k, v in batch.items()}\r\n outputs = net(**batch)\r\n loss = outputs.loss\r\n loss.backward()\r\n 
optimizer.step()\r\n optimizer.zero_grad()\r\n j += 1\r\n\r\n\r\ndef test(net, testloader):\r\n metric = load_metric(\"accuracy\")\r\n loss = 0\r\n net.eval()\r\n for batch in testloader:\r\n batch = {k: v.to(DEVICE) for k, v in batch.items()}\r\n with torch.no_grad():\r\n outputs = net(**batch)\r\n logits = outputs.logits\r\n loss += outputs.loss.item()\r\n predictions = torch.argmax(logits, dim=-1)\r\n metric.add_batch(predictions=predictions, references=batch[\"labels\"])\r\n loss /= len(testloader.dataset)\r\n accuracy = metric.compute()[\"accuracy\"]\r\n return loss, accuracy\r\n\r\n\r\ndef main():\r\n net = AutoModelForSequenceClassification.from_pretrained(\r\n CHECKPOINT, num_labels=2\r\n ).to(DEVICE)\r\n\r\n trainloader, testloader = load_data()\r\n\r\n # Flower client\r\n class IMDBClient(fl.client.NumPyClient):\r\n def get_parameters(self, config):\r\n return [val.cpu().numpy() for _, val in net.state_dict().items()]\r\n\r\n def set_parameters(self, parameters):\r\n params_dict = zip(net.state_dict().keys(), parameters)\r\n state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict})\r\n net.load_state_dict(state_dict, strict=True)\r\n\r\n def fit(self, parameters, config):\r\n self.set_parameters(parameters)\r\n print(\"Training Started...\")\r\n train(net, trainloader, epochs=1)\r\n print(\"Training Finished.\")\r\n return self.get_parameters(config={}), len(trainloader), {}\r\n\r\n def evaluate(self, parameters, config):\r\n self.set_parameters(parameters)\r\n loss, accuracy = test(net, testloader)\r\n print({\"loss\": float(loss), \"accuracy\": float(accuracy)})\r\n return float(loss), len(testloader), {\"loss\": float(loss), \"accuracy\": float(accuracy)}\r\n\r\n # Start client\r\n fl.client.start_numpy_client(server_address=\"localhost:5040\", client=IMDBClient())\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nCan I get any help, please?" ]
"2023-04-17T15:00:50Z"
"2023-04-25T13:50:45Z"
null
NONE
null
### Describe the bug Following is my code that I am trying to run, but facing an error (have attached the whole error below): My code: ``` from collections import OrderedDict import warnings import flwr as fl import torch import numpy as np import random from torch.utils.data import DataLoader from datasets import load_dataset, load_metric from transformers import AutoTokenizer, DataCollatorWithPadding from transformers import AutoModelForSequenceClassification from transformers import AdamW #from transformers import tokenized_datasets warnings.filterwarnings("ignore", category=UserWarning) # DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") DEVICE = "cpu" CHECKPOINT = "distilbert-base-uncased" # transformer model checkpoint def load_data(): """Load IMDB data (training and eval)""" raw_datasets = load_dataset("yhavinga/imdb_dutch") raw_datasets = raw_datasets.shuffle(seed=42) # remove unnecessary data split del raw_datasets["unsupervised"] tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT) def tokenize_function(examples): return tokenizer(examples["text"], truncation=True) # random 100 samples population = random.sample(range(len(raw_datasets["train"])), 100) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) tokenized_datasets["train"] = tokenized_datasets["train"].select(population) tokenized_datasets["test"] = tokenized_datasets["test"].select(population) # tokenized_datasets = tokenized_datasets.remove_columns("text") # tokenized_datasets = tokenized_datasets.rename_column("label", "labels") tokenized_datasets = tokenized_datasets.remove_columns("attention_mask") tokenized_datasets = tokenized_datasets.remove_columns("input_ids") tokenized_datasets = tokenized_datasets.remove_columns("label") tokenized_datasets = tokenized_datasets.remove_columns("text_en") # tokenized_datasets = tokenized_datasets.remove_columns(raw_datasets["train"].column_names) data_collator = DataCollatorWithPadding(tokenizer=tokenizer) trainloader = DataLoader( tokenized_datasets["train"], shuffle=True, batch_size=32, collate_fn=data_collator, ) testloader = DataLoader( tokenized_datasets["test"], batch_size=32, collate_fn=data_collator ) return trainloader, testloader def train(net, trainloader, epochs): optimizer = AdamW(net.parameters(), lr=5e-4) net.train() for _ in range(epochs): for batch in trainloader: batch = {k: v.to(DEVICE) for k, v in batch.items()} outputs = net(**batch) loss = outputs.loss loss.backward() optimizer.step() optimizer.zero_grad() def test(net, testloader): metric = load_metric("accuracy") loss = 0 net.eval() for batch in testloader: batch = {k: v.to(DEVICE) for k, v in batch.items()} with torch.no_grad(): outputs = net(**batch) logits = outputs.logits loss += outputs.loss.item() predictions = torch.argmax(logits, dim=-1) metric.add_batch(predictions=predictions, references=batch["labels"]) loss /= len(testloader.dataset) accuracy = metric.compute()["accuracy"] return loss, accuracy def main(): net = AutoModelForSequenceClassification.from_pretrained( CHECKPOINT, num_labels=2 ).to(DEVICE) trainloader, testloader = load_data() # Flower client class IMDBClient(fl.client.NumPyClient): def get_parameters(self, config): return [val.cpu().numpy() for _, val in net.state_dict().items()] def set_parameters(self, parameters): params_dict = zip(net.state_dict().keys(), parameters) state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict}) net.load_state_dict(state_dict, strict=True) def fit(self, parameters, config): 
self.set_parameters(parameters) print("Training Started...") train(net, trainloader, epochs=1) print("Training Finished.") return self.get_parameters(config={}), len(trainloader), {} def evaluate(self, parameters, config): self.set_parameters(parameters) loss, accuracy = test(net, testloader) return float(loss), len(testloader), {"accuracy": float(accuracy)} # Start client fl.client.start_numpy_client(server_address="localhost:8080", client=IMDBClient()) if __name__ == "__main__": main() ``` Error: ``` Traceback (most recent call last): File "client_2.py", line 136, in <module> main() File "client_2.py", line 132, in main fl.client.start_numpy_client(server_address="localhost:8080", client=IMDBClient()) File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 208, in start_numpy_client start_client( File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 142, in start_client client_message, sleep_duration, keep_going = handle( File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py", line 68, in handle return _fit(client, server_msg.fit_ins), 0, True File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py", line 157, in _fit fit_res = client.fit(fit_ins) File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 252, in _fit results = self.numpy_client.fit(parameters, ins.config) # type: ignore File "client_2.py", line 122, in fit train(net, trainloader, epochs=1) File "client_2.py", line 76, in train for batch in trainloader: File "/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 652, in __next__ data = self._next_data() File "/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 692, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch return self.collate_fn(data) File "/home/saurav/.local/lib/python3.8/site-packages/transformers/data/data_collator.py", line 221, in __call__ batch = self.tokenizer.pad( File "/home/saurav/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2713, in pad raise ValueError( ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['text'] ``` ### Steps to reproduce the bug Run the above code. ### Expected behavior Don't know, doing it for the first time. ### Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 11.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5765/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5765/timeline
null
null
null
null
false
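For reference, the fix that emerges from the exchange above is to keep only the tokenized columns before handing the dataset to the `DataLoader`, and to rename `label` to `labels` so `DataCollatorWithPadding` can build the batch. A condensed sketch of that pattern, assuming the same Dutch IMDB dataset and DistilBERT checkpoint used in the thread:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
raw = load_dataset("yhavinga/imdb_dutch", split="train").shuffle(seed=42).select(range(100))

tokenized = raw.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)
# Drop the raw text columns and keep only what the collator expects
# (input_ids, attention_mask, labels).
tokenized = tokenized.remove_columns(["text", "text_en"])
tokenized = tokenized.rename_column("label", "labels")

loader = DataLoader(
    tokenized,
    batch_size=32,
    shuffle=True,
    collate_fn=DataCollatorWithPadding(tokenizer=tokenizer),
)
batch = next(iter(loader))  # dict of padded tensors, ready for the model
```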
https://api.github.com/repos/huggingface/datasets/issues/5764
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5764/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5764/comments
https://api.github.com/repos/huggingface/datasets/issues/5764/events
https://github.com/huggingface/datasets/issues/5764
1,670,740,198
I_kwDODunzps5jlXjm
5,764
ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1
{ "avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4", "events_url": "https://api.github.com/users/sauravtii/events{/privacy}", "followers_url": "https://api.github.com/users/sauravtii/followers", "following_url": "https://api.github.com/users/sauravtii/following{/other_user}", "gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sauravtii", "id": 109907638, "login": "sauravtii", "node_id": "U_kgDOBo0Otg", "organizations_url": "https://api.github.com/users/sauravtii/orgs", "received_events_url": "https://api.github.com/users/sauravtii/received_events", "repos_url": "https://api.github.com/users/sauravtii/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions", "type": "User", "url": "https://api.github.com/users/sauravtii" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Thanks for reporting, @sauravtii.\r\n\r\nUnfortunately, I'm not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"josianem/imdb\")\r\n\r\nIn [2]: ds\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 25799\r\n })\r\n test: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 25000\r\n })\r\n unsupervised: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 50000\r\n })\r\n})\r\n```\r\n\r\nCould you please retry to load the dataset? Maybe there was a temporary connection issue to Dropbox.", "Thanks @albertvillanova. I am facing another issue now\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"sample.py\", line 4, in <module>\r\n dataset = load_dataset(\"josianem/imdb\")\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 738, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/info_utils.py\", line 74, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=34501348, num_examples=25799, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]\r\n```\r\n\r\nThis is my code\r\n\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\")\r\n```", "Your connection didn't work and you got an empty dataset (`num_bytes=0, num_examples=0`):\r\n```\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: \r\n[\r\n {\r\n 'expected': SplitInfo(name='train', num_bytes=34501348, num_examples=25799, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }, \r\n {\r\n 'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }, \r\n {\r\n 'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }\r\n]\r\n```\r\n\r\nCould you please try the link in your browser and see if it works? 
https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1\r\n- If it does not work, you should contact the author of the dataset in their Community tab (https://huggingface.co/datasets/josianem/imdb/discussions) and inform them, so that they can host their data elsewhere, for example on the Hugging Face Hub itself\r\n\r\nIf the link works, you should try to load the dataset but forcing the re-download of the data files (so that the cache is refreshed with the actual data file), by passing `download_mode=\"force_redownload\"`:\r\n```python\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n```", "After pasting the link in the browser, it did start the download so it seems that the link is working. But even after including the `download_mode` in my code I am facing the same issue:\r\n\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"sample.py\", line 4, in <module>\r\n dataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 704, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/home/saurav/.cache/huggingface/modules/datasets_modules/datasets/imdb/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f/imdb.py\", line 79, in _split_generators\r\n archive = dl_manager.download(_DOWNLOAD_URL)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 196, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 197, in map_nested\r\n return function(data_struct)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 217, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 289, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 606, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1\r\n```\r\n\r\nMy code:\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n```", "I have tried again to reproduce your issue without success: the dataset loads perfectly, both in my local machine and in a Colab notebook.\r\n- See: https://colab.research.google.com/drive/1dky3T0XGFuldggy22NNQQN-UqOFqvnuY?usp=sharing\r\n\r\nI think the cause maight be that you are using a very old version of `datasets`. Please, could you update it and retry?\r\n```\r\npip install -U datasets\r\n```", "That worked!! 
Thanks @albertvillanova : )\r\n\r\n```\r\nDownloading builder script: 100%|███████| 4.20k/4.20k [00:00<00:00, 6.69MB/s]\r\nDownloading metadata: 100%|█████████████| 2.60k/2.60k [00:00<00:00, 3.41MB/s]\r\nDownloading readme: 100%|███████████████| 7.52k/7.52k [00:00<00:00, 12.6MB/s]\r\nDownloading and preparing dataset imdb/plain_text to /home/saurav/.cache/huggingface/datasets/josianem___imdb/plain_text/1.0.0/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f...\r\nDownloading data: 100%|███████████████████| 301M/301M [01:32<00:00, 3.25MB/s]\r\nDataset imdb downloaded and prepared to /home/saurav/.cache/huggingface/datasets/josianem___imdb/plain_text/1.0.0/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f. Subsequent calls will reuse this data.\r\n100%|█████████████████████████████████████████| 3/3 [00:00<00:00, 794.83it/s]\r\n```\r\n\r\nThe code I used:\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n\r\n```\r\n\r\nBut when I remove `download_mode=\"force_redownload\"` I get the same error. Any guess on that?", "That is because the cache got the \"empty\" download file the first time you tried and got the connection error.\r\n\r\nThen, once you no longer get the connection error, you need to refresh the cache by passing `download_mode=\"force_redownload\"`." ]
"2023-04-17T09:08:18Z"
"2023-04-18T07:18:20Z"
"2023-04-18T07:18:20Z"
NONE
null
### Describe the bug I want to use this (https://huggingface.co/datasets/josianem/imdb) dataset therefore I am trying to load it using the following code: ``` dataset = load_dataset("josianem/imdb") ``` The dataset is not getting loaded and gives the error message as the following: ``` Traceback (most recent call last): File "sample.py", line 3, in <module> dataset = load_dataset("josianem/imdb") File "/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset builder_instance.download_and_prepare( File "/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare self._download_and_prepare( File "/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py", line 704, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/saurav/.cache/huggingface/modules/datasets_modules/datasets/imdb/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f/imdb.py", line 79, in _split_generators archive = dl_manager.download(_DOWNLOAD_URL) File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download downloaded_path_or_paths = map_nested( File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 197, in map_nested return function(data_struct) File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 289, in cached_path output_path = get_from_cache( File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 606, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1 ``` ### Steps to reproduce the bug You can reproduce the error by using the following code: ``` from datasets import load_dataset, load_metric dataset = load_dataset("josianem/imdb") ``` ### Expected behavior The dataset should get loaded (I am using this dataset for the first time so not much aware of the exact behavior). ### Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 11.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5764/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5764/timeline
null
completed
null
null
false
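The takeaway from the thread above is that once the connection error itself is resolved (here, by upgrading `datasets`), the cache entry written during the failed attempt still has to be refreshed. A small sketch of the recovery step quoted in the comments:

```python
from datasets import load_dataset

# After `pip install -U datasets`, force a re-download so the empty cache
# entry left by the earlier failed attempt is replaced with the real data.
ds = load_dataset("josianem/imdb", download_mode="force_redownload")
```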
https://api.github.com/repos/huggingface/datasets/issues/5763
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5763/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5763/comments
https://api.github.com/repos/huggingface/datasets/issues/5763/events
https://github.com/huggingface/datasets/pull/5763
1,670,476,302
PR_kwDODunzps5OcMI7
5,763
fix typo: "mow" -> "now"
{ "avatar_url": "https://avatars.githubusercontent.com/u/1967608?v=4", "events_url": "https://api.github.com/users/csris/events{/privacy}", "followers_url": "https://api.github.com/users/csris/followers", "following_url": "https://api.github.com/users/csris/following{/other_user}", "gists_url": "https://api.github.com/users/csris/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/csris", "id": 1967608, "login": "csris", "node_id": "MDQ6VXNlcjE5Njc2MDg=", "organizations_url": "https://api.github.com/users/csris/orgs", "received_events_url": "https://api.github.com/users/csris/received_events", "repos_url": "https://api.github.com/users/csris/repos", "site_admin": false, "starred_url": "https://api.github.com/users/csris/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/csris/subscriptions", "type": "User", "url": "https://api.github.com/users/csris" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006804 / 0.011353 (-0.004549) | 0.004984 / 0.011008 (-0.006024) | 0.096781 / 0.038508 (0.058273) | 0.033049 / 0.023109 (0.009939) | 0.297681 / 0.275898 (0.021783) | 0.329553 / 0.323480 (0.006073) | 0.005697 / 0.007986 (-0.002289) | 0.004019 / 0.004328 (-0.000310) | 0.072691 / 0.004250 (0.068441) | 0.046921 / 0.037052 (0.009868) | 0.311467 / 0.258489 (0.052978) | 0.337616 / 0.293841 (0.043775) | 0.042400 / 0.128546 (-0.086146) | 0.011919 / 0.075646 (-0.063727) | 0.331390 / 0.419271 (-0.087881) | 0.051004 / 0.043533 (0.007471) | 0.295317 / 0.255139 (0.040178) | 0.316570 / 0.283200 (0.033371) | 0.099283 / 0.141683 (-0.042400) | 1.430583 / 1.452155 (-0.021572) | 1.493550 / 1.492716 (0.000834) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213634 / 0.018006 (0.195628) | 0.432557 / 0.000490 (0.432067) | 0.001586 / 0.000200 (0.001386) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025249 / 0.037411 (-0.012162) | 0.105433 / 0.014526 (0.090908) | 0.113474 / 0.176557 (-0.063082) | 0.168799 / 0.737135 (-0.568336) | 0.119363 / 0.296338 (-0.176975) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412450 / 0.215209 (0.197241) | 4.117432 / 2.077655 (2.039777) | 
1.935176 / 1.504120 (0.431056) | 1.745674 / 1.541195 (0.204479) | 1.853872 / 1.468490 (0.385382) | 0.703429 / 4.584777 (-3.881348) | 3.756981 / 3.745712 (0.011269) | 3.730607 / 5.269862 (-1.539255) | 1.839052 / 4.565676 (-2.726624) | 0.087574 / 0.424275 (-0.336701) | 0.012293 / 0.007607 (0.004686) | 0.517234 / 0.226044 (0.291190) | 5.189759 / 2.268929 (2.920831) | 2.418739 / 55.444624 (-53.025885) | 2.081424 / 6.876477 (-4.795053) | 2.204464 / 2.142072 (0.062392) | 0.842768 / 4.805227 (-3.962459) | 0.169014 / 6.500664 (-6.331650) | 0.063711 / 0.075469 (-0.011758) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180636 / 1.841788 (-0.661152) | 14.816088 / 8.074308 (6.741779) | 14.290085 / 10.191392 (4.098693) | 0.165267 / 0.680424 (-0.515156) | 0.017290 / 0.534201 (-0.516911) | 0.419678 / 0.579283 (-0.159605) | 0.418164 / 0.434364 (-0.016200) | 0.492210 / 0.540337 (-0.048127) | 0.588528 / 1.386936 (-0.798408) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007144 / 0.011353 (-0.004209) | 0.005223 / 0.011008 (-0.005785) | 0.073583 / 0.038508 (0.035075) | 0.033534 / 0.023109 (0.010425) | 0.339020 / 0.275898 (0.063122) | 0.366546 / 0.323480 (0.043066) | 0.006245 / 0.007986 (-0.001741) | 0.004081 / 0.004328 (-0.000247) | 0.073089 / 0.004250 (0.068839) | 0.047024 / 0.037052 (0.009971) | 0.342540 / 0.258489 (0.084051) | 0.379743 / 0.293841 (0.085902) | 0.037551 / 0.128546 (-0.090995) | 0.012246 / 0.075646 (-0.063400) | 0.084796 / 0.419271 (-0.334476) | 0.052256 / 0.043533 (0.008723) | 0.342675 / 0.255139 (0.087536) | 0.367157 / 0.283200 (0.083957) | 0.102939 / 0.141683 (-0.038744) | 1.409039 / 1.452155 (-0.043115) | 1.526137 / 1.492716 (0.033420) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208143 / 0.018006 (0.190136) | 0.437940 / 0.000490 (0.437450) | 0.000424 / 0.000200 (0.000224) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028321 / 0.037411 (-0.009091) | 0.110417 / 0.014526 (0.095891) | 0.119449 / 0.176557 (-0.057107) | 0.168081 / 0.737135 (-0.569054) | 0.126658 / 0.296338 (-0.169681) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429302 / 0.215209 (0.214093) | 4.270547 / 2.077655 (2.192892) | 2.061323 / 1.504120 (0.557203) | 1.857877 / 1.541195 (0.316682) | 1.873317 / 1.468490 (0.404827) | 0.688750 / 4.584777 (-3.896027) | 3.767951 / 3.745712 (0.022239) | 2.011436 / 5.269862 (-3.258426) | 1.299965 / 4.565676 (-3.265712) | 0.084799 / 0.424275 (-0.339476) | 0.012082 / 0.007607 (0.004475) | 0.521981 / 0.226044 (0.295937) | 5.265333 / 2.268929 (2.996405) | 2.494326 / 55.444624 (-52.950298) | 2.144672 / 6.876477 (-4.731804) | 2.365624 / 2.142072 (0.223551) | 0.839868 / 4.805227 (-3.965359) | 0.166614 / 6.500664 (-6.334050) | 0.063804 / 0.075469 (-0.011665) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.264623 / 1.841788 (-0.577164) | 14.946515 / 8.074308 (6.872207) | 14.450115 / 10.191392 (4.258723) | 0.163878 / 0.680424 (-0.516546) | 0.017501 / 0.534201 (-0.516700) | 0.420992 / 0.579283 (-0.158291) | 0.423005 / 0.434364 (-0.011359) | 0.489505 / 0.540337 (-0.050832) | 0.594631 / 1.386936 (-0.792305) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fd893098627230cc734f6009ad04cf885c979ac4 \"CML watermark\")\n" ]
"2023-04-17T06:03:44Z"
"2023-04-17T15:01:53Z"
"2023-04-17T14:54:46Z"
CONTRIBUTOR
null
I noticed a typo as I was reading the datasets documentation. This PR contains a trivial fix changing "mow" to "now."
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5763/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5763/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5763.diff", "html_url": "https://github.com/huggingface/datasets/pull/5763", "merged_at": "2023-04-17T14:54:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/5763.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5763" }
true
https://api.github.com/repos/huggingface/datasets/issues/5762
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5762/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5762/comments
https://api.github.com/repos/huggingface/datasets/issues/5762/events
https://github.com/huggingface/datasets/issues/5762
1,670,326,470
I_kwDODunzps5jjyjG
5,762
Not able to load the pile
{ "avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4", "events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}", "followers_url": "https://api.github.com/users/surya-narayanan/followers", "following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}", "gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/surya-narayanan", "id": 17240858, "login": "surya-narayanan", "node_id": "MDQ6VXNlcjE3MjQwODU4", "organizations_url": "https://api.github.com/users/surya-narayanan/orgs", "received_events_url": "https://api.github.com/users/surya-narayanan/received_events", "repos_url": "https://api.github.com/users/surya-narayanan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions", "type": "User", "url": "https://api.github.com/users/surya-narayanan" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Thanks for reporting, @surya-narayanan.\r\n\r\nI see you already started a discussion about this on the Community tab of the corresponding dataset: https://huggingface.co/datasets/EleutherAI/the_pile/discussions/10\r\nLet's continue the discussion there!" ]
"2023-04-17T03:09:10Z"
"2023-04-17T09:37:27Z"
"2023-04-17T09:37:27Z"
NONE
null
### Describe the bug I got this error when trying to load the Pile dataset: ``` TypeError: Couldn't cast array of type struct<file: string, id: string> to {'id': Value(dtype='string', id=None)} ``` ### Steps to reproduce the bug Please visit the following sample notebook https://colab.research.google.com/drive/1JHcjawcHL6QHhi5VcqYd07W2QCEj2nWK#scrollTo=ulJP3eJCI-tB ### Expected behavior The Pile dataset should load without errors. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.13.4 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5762/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5762/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5761
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5761/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5761/comments
https://api.github.com/repos/huggingface/datasets/issues/5761/events
https://github.com/huggingface/datasets/issues/5761
1,670,034,582
I_kwDODunzps5jirSW
5,761
One or several metadata.jsonl were found, but not in the same directory or in a parent directory
{ "avatar_url": "https://avatars.githubusercontent.com/u/69686152?v=4", "events_url": "https://api.github.com/users/blghtr/events{/privacy}", "followers_url": "https://api.github.com/users/blghtr/followers", "following_url": "https://api.github.com/users/blghtr/following{/other_user}", "gists_url": "https://api.github.com/users/blghtr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/blghtr", "id": 69686152, "login": "blghtr", "node_id": "MDQ6VXNlcjY5Njg2MTUy", "organizations_url": "https://api.github.com/users/blghtr/orgs", "received_events_url": "https://api.github.com/users/blghtr/received_events", "repos_url": "https://api.github.com/users/blghtr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/blghtr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/blghtr/subscriptions", "type": "User", "url": "https://api.github.com/users/blghtr" }
[]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Also, when generated from a zip archive, the dataset contains only a few images. In my case, 20 versus 2000+ contained in the archive. The generation from folders works as expected.", "Thanks for reporting, @blghtr.\r\n\r\nYou should include the `metadata.jsonl` in your ZIP archives, at the root level directory.\r\n\r\nI agree that our documentation is not clear enough. Maybe we could improve it.", "You can find a dummy dataset example here: https://huggingface.co/datasets/albertvillanova/tmp-imagefolder-metadata\r\n\r\n```\r\ntmp-imagefolder-metadata/\r\n└── data/\r\n ├── train.zip\r\n └── valid.zip\r\n```\r\nwhere, the directory structure within the `train.zip` archive is:\r\n```\r\nmetadata.jsonl\r\ntrain/\r\n ├── bharatanatyam/\r\n └── bharatanatyam_original_113.jpg_70c297a2-e2f2-4ed8-b93c-0c03d0809fe2.jpg\r\n └── kathak/\r\n └── kathak_original_10.jpg_2c4a2c3d-47fc-4b33-9c09-38b542826632.jpg\r\n```\r\nand the metadata file contains:\r\n```\r\n{\"file_name\": \"train/bharatanatyam/bharatanatyam_original_113.jpg_70c297a2-e2f2-4ed8-b93c-0c03d0809fe2.jpg\", \"text\": \"first\"}\r\n{\"file_name\": \"train/kathak/kathak_original_10.jpg_2c4a2c3d-47fc-4b33-9c09-38b542826632.jpg\", \"text\": \"second\"}\r\n```" ]
"2023-04-16T16:21:55Z"
"2023-04-19T11:53:24Z"
null
NONE
null
### Describe the bug An attempt to generate a dataset from a zip archive using imagefolder and metadata.jsonl does not lead to the expected result. Tried all possible locations of the json file: the file in the archive is ignored (generated dataset contains only images), the file next to the archive like [here](https://huggingface.co/docs/datasets/image_dataset#imagefolder) leads to an error: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:1610, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1609 _time = time.time() -> 1610 for key, record in generator: 1611 if max_shard_size is not None and writer._num_bytes > max_shard_size: File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\packaged_modules\folder_based_builder\folder_based_builder.py:370, in FolderBasedBuilder._generate_examples(self, files, metadata_files, split_name, add_metadata, add_labels) 369 else: --> 370 raise ValueError( 371 f"One or several metadata.{metadata_ext} were found, but not in the same directory or in a parent directory of {downloaded_dir_file}." 372 ) 373 if metadata_dir is not None and downloaded_metadata_file is not None: ValueError: One or several metadata.jsonl were found, but not in the same directory or in a parent directory of C:\Users\User\.cache\huggingface\datasets\downloads\extracted\f7fb7de25fb28ae63089974524f2d271a39d83888bc456d04aa3b3d45f33e6a6\ff0745a0-a741-4d9e-b228-a93b851adf61.png. The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) Cell In[3], line 1 ----> 1 dataset = load_dataset("imagefolder", data_dir=r'C:\Users\User\data') File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\load.py:1791, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 1788 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1790 # Download and prepare data -> 1791 builder_instance.download_and_prepare( 1792 download_config=download_config, 1793 download_mode=download_mode, 1794 verification_mode=verification_mode, 1795 try_from_hf_gcs=try_from_hf_gcs, 1796 num_proc=num_proc, 1797 storage_options=storage_options, 1798 ) 1800 # Build dataset for splits 1801 keep_in_memory = ( 1802 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1803 ) File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:891, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 889 if num_proc is not None: 890 prepare_split_kwargs["num_proc"] = num_proc --> 891 self._download_and_prepare( 892 dl_manager=dl_manager, 893 verification_mode=verification_mode, 894 **prepare_split_kwargs, 895 **download_and_prepare_kwargs, 896 ) 897 # Sync info 898 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:1651, in 
GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs) 1650 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs): -> 1651 super()._download_and_prepare( 1652 dl_manager, 1653 verification_mode, 1654 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS 1655 or verification_mode == VerificationMode.ALL_CHECKS, 1656 **prepare_splits_kwargs, 1657 ) File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:986, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 982 split_dict.add(split_generator.split_info) 984 try: 985 # Prepare split will record examples associated to the split --> 986 self._prepare_split(split_generator, **prepare_split_kwargs) 987 except OSError as e: 988 raise OSError( 989 "Cannot find data file. " 990 + (self.manual_download_instructions or "") 991 + "\nOriginal error:\n" 992 + str(e) 993 ) from None File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:1490, in GeneratorBasedBuilder._prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size) 1488 gen_kwargs = split_generator.gen_kwargs 1489 job_id = 0 -> 1490 for job_id, done, content in self._prepare_split_single( 1491 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args 1492 ): 1493 if done: 1494 result = content File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:1646, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1644 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1645 e = e.__context__ -> 1646 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1648 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug 1. Organize directory structure like in the docs: folder/metadata.jsonl folder/train.zip 2. Run load_dataset("imagefolder", data_dir='folder/metadata.jsonl', split='train') ### Expected behavior Dataset generated with all additional features from metadata.jsonl ### Environment info - `datasets` version: 2.11.0 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.9.0 - Huggingface_hub version: 0.13.4 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5761/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5761/timeline
null
null
null
null
false
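A minimal, self-contained sketch of the layout described in the comments of #5761 above, assuming `imagefolder` accepts a local `data_dir` that contains only the ZIP archive; the directory names, image sizes and captions below are invented for illustration:

```python
import json
import os
import zipfile

from datasets import load_dataset
from PIL import Image

# Create two tiny placeholder images outside of the data directory.
os.makedirs("tmp_images", exist_ok=True)
Image.new("RGB", (4, 4), "red").save("tmp_images/cat_0.png")
Image.new("RGB", (4, 4), "blue").save("tmp_images/dog_0.png")

# metadata.jsonl sits at the archive root, next to the "train/" directory,
# and its "file_name" paths are relative to that root.
rows = [
    {"file_name": "train/cats/cat_0.png", "text": "first"},
    {"file_name": "train/dogs/dog_0.png", "text": "second"},
]

os.makedirs("my_dataset", exist_ok=True)
with zipfile.ZipFile("my_dataset/train.zip", "w") as zf:
    zf.writestr("metadata.jsonl", "\n".join(json.dumps(row) for row in rows))
    zf.write("tmp_images/cat_0.png", arcname="train/cats/cat_0.png")
    zf.write("tmp_images/dog_0.png", arcname="train/dogs/dog_0.png")

# The extra "text" column from metadata.jsonl should now appear alongside "image".
ds = load_dataset("imagefolder", data_dir="my_dataset", split="train")
print(ds[0]["text"])
```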
https://api.github.com/repos/huggingface/datasets/issues/5760
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5760/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5760/comments
https://api.github.com/repos/huggingface/datasets/issues/5760/events
https://github.com/huggingface/datasets/issues/5760
1,670,028,072
I_kwDODunzps5jipso
5,760
Multi-image loading in Imagefolder dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/44398246?v=4", "events_url": "https://api.github.com/users/vvvm23/events{/privacy}", "followers_url": "https://api.github.com/users/vvvm23/followers", "following_url": "https://api.github.com/users/vvvm23/following{/other_user}", "gists_url": "https://api.github.com/users/vvvm23/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vvvm23", "id": 44398246, "login": "vvvm23", "node_id": "MDQ6VXNlcjQ0Mzk4MjQ2", "organizations_url": "https://api.github.com/users/vvvm23/orgs", "received_events_url": "https://api.github.com/users/vvvm23/received_events", "repos_url": "https://api.github.com/users/vvvm23/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vvvm23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vvvm23/subscriptions", "type": "User", "url": "https://api.github.com/users/vvvm23" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Supporting this could be useful (I remember a use-case for this on the Hub). Do you agree @polinaeterna? \r\n\r\nImplementing this should be possible if we iterate over metadata files and build image/audio file paths instead of iterating over image/audio files and looking for the corresponding entries in metadata files.", "I've build a similar feature from scratch and would be interested to combine it with the datasets package.\r\n\r\nMy solution works something like this:\r\nInterpret the first element of each column as a file path. If the path exists and is a file, (try to) load the files for the entire column. Thereby, one isn't restricted to a particular column name, with comes in handy when dealing with multiple file columns.\r\n\r\nI've looked into the code to try to implement this, but didn't find the right places. I'm also open to contribute, but will need some guidance." ]
"2023-04-16T16:01:05Z"
"2023-05-16T10:14:59Z"
null
NONE
null
### Feature request Extend the `imagefolder` dataloading script to support loading multiple images per dataset entry. This only really makes sense if a metadata file is present. Currently you can use the following format (example `metadata.jsonl`): ``` {'file_name': 'path_to_image.png', 'metadata': ...} ... ``` which will return a batch with key `image` and any other metadata. I would propose extending `file_name` to also accept a list of files, which would return a batch with key `images` and any other metadata. ### Motivation This is useful for example in segmentation tasks in computer vision models, or in text-to-image models that also accept conditioning signals such as another image, feature map, or similar. Currently, if I want to do this, I would need to write a custom dataset, rather than just use `imagefolder`. ### Your contribution I would be open to doing a PR, but also happy for someone else to take it as I am not familiar with the datasets library.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5760/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5760/timeline
null
null
null
null
false
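Since the multi-image `file_name` list requested in #5760 above is not supported at the time of this thread, here is a sketch of one common workaround: load the metadata with the `json` builder and cast each path column to an `Image` feature. The column names, file layout and `metadata.jsonl` path are assumptions for illustration only.

```python
from datasets import Image, load_dataset

# Each line of metadata.jsonl is assumed to look like:
# {"image": "imgs/0.png", "conditioning_image": "maps/0.png", "caption": "..."}
ds = load_dataset("json", data_files="metadata.jsonl", split="train")

# Casting a column of path strings to Image() makes `datasets` decode the files.
ds = ds.cast_column("image", Image())
ds = ds.cast_column("conditioning_image", Image())

example = ds[0]
print(example["image"], example["conditioning_image"], example["caption"])
```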
https://api.github.com/repos/huggingface/datasets/issues/5759
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5759/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5759/comments
https://api.github.com/repos/huggingface/datasets/issues/5759/events
https://github.com/huggingface/datasets/issues/5759
1,669,977,848
I_kwDODunzps5jidb4
5,759
Can I load in list of list of dict format?
{ "avatar_url": "https://avatars.githubusercontent.com/u/72137647?v=4", "events_url": "https://api.github.com/users/LZY-the-boys/events{/privacy}", "followers_url": "https://api.github.com/users/LZY-the-boys/followers", "following_url": "https://api.github.com/users/LZY-the-boys/following{/other_user}", "gists_url": "https://api.github.com/users/LZY-the-boys/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LZY-the-boys", "id": 72137647, "login": "LZY-the-boys", "node_id": "MDQ6VXNlcjcyMTM3NjQ3", "organizations_url": "https://api.github.com/users/LZY-the-boys/orgs", "received_events_url": "https://api.github.com/users/LZY-the-boys/received_events", "repos_url": "https://api.github.com/users/LZY-the-boys/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LZY-the-boys/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LZY-the-boys/subscriptions", "type": "User", "url": "https://api.github.com/users/LZY-the-boys" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Thanks for reporting, @LZY-the-boys.\r\n\r\nCould you please give more details about what is your intended dataset structure? What are the names of the columns and the value of each row?\r\n\r\nCurrently, the JSON-Lines format is supported:\r\n- Each line correspond to one row of the dataset\r\n- Each line is composed of one JSON object, where the names are the names of the columns, and the values are the values for the row-column pair." ]
"2023-04-16T13:50:14Z"
"2023-04-19T12:04:36Z"
null
NONE
null
### Feature request My JSONL dataset has the following format: ``` [{'input':xxx, 'output':xxx},{'input:xxx,'output':xxx},...] [{'input':xxx, 'output':xxx},{'input:xxx,'output':xxx},...] ``` When I try to use `datasets.load_dataset('json', data_files=path)` or `datasets.Dataset.from_json`, it raises ``` File "site-packages/datasets/arrow_dataset.py", line 1078, in from_json ).read() File "site-packages/datasets/io/json.py", line 59, in read self.builder.download_and_prepare( File "site-packages/datasets/builder.py", line 872, in download_and_prepare self._download_and_prepare( File "site-packages/datasets/builder.py", line 967, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "site-packages/datasets/builder.py", line 1749, in _prepare_split for job_id, done, content in self._prepare_split_single( File "site-packages/datasets/builder.py", line 1892, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Motivation I want to use features like `Datasets.map` or `Datasets.shuffle`, so I need the dataset in memory to be in `arrow_dataset.Datasets` format. ### Your contribution PR
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5759/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5759/timeline
null
null
null
null
false
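As the maintainer's reply in #5759 above notes, the `json` builder expects one JSON object per line, so one way to make the nested file loadable is to flatten it first. A minimal sketch, with placeholder file names:

```python
import json

from datasets import load_dataset

# Each line of the original file is a JSON list of {"input": ..., "output": ...}
# objects; write every inner object out as its own JSON line instead.
with open("nested.jsonl") as src, open("flat.jsonl", "w") as dst:
    for line in src:
        for record in json.loads(line):
            dst.write(json.dumps(record) + "\n")

ds = load_dataset("json", data_files="flat.jsonl", split="train")
ds = ds.shuffle(seed=42)  # map/shuffle now work on the Arrow-backed dataset
print(ds[0])
```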
https://api.github.com/repos/huggingface/datasets/issues/5758
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5758/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5758/comments
https://api.github.com/repos/huggingface/datasets/issues/5758/events
https://github.com/huggingface/datasets/pull/5758
1,669,920,923
PR_kwDODunzps5OaY9S
5,758
Fixes #5757
{ "avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4", "events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}", "followers_url": "https://api.github.com/users/eli-osherovich/followers", "following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}", "gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eli-osherovich", "id": 2437102, "login": "eli-osherovich", "node_id": "MDQ6VXNlcjI0MzcxMDI=", "organizations_url": "https://api.github.com/users/eli-osherovich/orgs", "received_events_url": "https://api.github.com/users/eli-osherovich/received_events", "repos_url": "https://api.github.com/users/eli-osherovich/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions", "type": "User", "url": "https://api.github.com/users/eli-osherovich" }
[]
closed
false
null
[]
null
[ "The CI can be fixed by merging `main` into your branch. Can you do that before we merge ?", "_The documentation is not available anymore as the PR was closed or merged._", "Done.\n\nOn Thu, Apr 20, 2023 at 6:01 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> The CI can be fixed by merging main into your branch. Can you do that\n> before we merge ?\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/5758#issuecomment-1516488124>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AASS73QPLA735AMN4PFDYRTXCFFTJANCNFSM6AAAAAAXACBUQU>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n", "Nice thanks !", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007161 / 0.011353 (-0.004192) | 0.005099 / 0.011008 (-0.005909) | 0.099301 / 0.038508 (0.060793) | 0.034144 / 0.023109 (0.011034) | 0.298273 / 0.275898 (0.022375) | 0.329009 / 0.323480 (0.005529) | 0.005486 / 0.007986 (-0.002500) | 0.003887 / 0.004328 (-0.000441) | 0.074769 / 0.004250 (0.070518) | 0.047505 / 0.037052 (0.010453) | 0.306550 / 0.258489 (0.048061) | 0.335380 / 0.293841 (0.041540) | 0.034796 / 0.128546 (-0.093750) | 0.012152 / 0.075646 (-0.063495) | 0.332194 / 0.419271 (-0.087077) | 0.049661 / 0.043533 (0.006128) | 0.296832 / 0.255139 (0.041693) | 0.316417 / 0.283200 (0.033218) | 0.098234 / 0.141683 (-0.043449) | 1.494114 / 1.452155 (0.041959) | 1.566468 / 1.492716 (0.073751) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221309 / 0.018006 (0.203303) | 0.440855 / 0.000490 (0.440365) | 0.003025 / 0.000200 (0.002825) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026594 / 0.037411 (-0.010817) | 0.110406 / 0.014526 (0.095880) | 0.116117 / 0.176557 (-0.060439) | 0.173502 / 0.737135 (-0.563633) | 0.121988 / 0.296338 (-0.174351) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | 
read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403307 / 0.215209 (0.188098) | 4.034146 / 2.077655 (1.956492) | 1.852162 / 1.504120 (0.348042) | 1.675643 / 1.541195 (0.134448) | 1.748851 / 1.468490 (0.280360) | 0.703458 / 4.584777 (-3.881319) | 3.809055 / 3.745712 (0.063343) | 2.118060 / 5.269862 (-3.151801) | 1.338394 / 4.565676 (-3.227282) | 0.086319 / 0.424275 (-0.337956) | 0.012195 / 0.007607 (0.004588) | 0.520814 / 0.226044 (0.294769) | 5.201074 / 2.268929 (2.932145) | 2.418384 / 55.444624 (-53.026240) | 2.085496 / 6.876477 (-4.790980) | 2.245638 / 2.142072 (0.103565) | 0.849042 / 4.805227 (-3.956185) | 0.171912 / 6.500664 (-6.328752) | 0.065691 / 0.075469 (-0.009778) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.159985 / 1.841788 (-0.681803) | 14.910867 / 8.074308 (6.836559) | 14.473926 / 10.191392 (4.282534) | 0.181532 / 0.680424 (-0.498891) | 0.017203 / 0.534201 (-0.516998) | 0.420805 / 0.579283 (-0.158479) | 0.426455 / 0.434364 (-0.007909) | 0.497086 / 0.540337 (-0.043251) | 0.593909 / 1.386936 (-0.793027) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007688 / 0.011353 (-0.003665) | 0.005353 / 0.011008 (-0.005656) | 0.076869 / 0.038508 (0.038361) | 0.035030 / 0.023109 (0.011921) | 0.344649 / 0.275898 (0.068751) | 0.387669 / 0.323480 (0.064190) | 0.005913 / 0.007986 (-0.002072) | 0.004107 / 0.004328 (-0.000221) | 0.074111 / 0.004250 (0.069860) | 0.049351 / 0.037052 (0.012299) | 0.346061 / 0.258489 (0.087572) | 0.395499 / 0.293841 (0.101658) | 0.035549 / 0.128546 (-0.092997) | 
0.012340 / 0.075646 (-0.063307) | 0.087031 / 0.419271 (-0.332241) | 0.049088 / 0.043533 (0.005556) | 0.342774 / 0.255139 (0.087635) | 0.362037 / 0.283200 (0.078837) | 0.100329 / 0.141683 (-0.041354) | 1.442349 / 1.452155 (-0.009806) | 1.551079 / 1.492716 (0.058363) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228458 / 0.018006 (0.210452) | 0.446190 / 0.000490 (0.445701) | 0.000413 / 0.000200 (0.000213) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029884 / 0.037411 (-0.007527) | 0.117527 / 0.014526 (0.103002) | 0.123221 / 0.176557 (-0.053335) | 0.172290 / 0.737135 (-0.564845) | 0.128682 / 0.296338 (-0.167657) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420905 / 0.215209 (0.205696) | 4.199342 / 2.077655 (2.121687) | 2.007327 / 1.504120 (0.503207) | 1.814732 / 1.541195 (0.273537) | 1.893999 / 1.468490 (0.425509) | 0.712259 / 4.584777 (-3.872518) | 3.843402 / 3.745712 (0.097690) | 3.198514 / 5.269862 (-2.071348) | 1.678732 / 4.565676 (-2.886945) | 0.086435 / 0.424275 (-0.337840) | 0.012233 / 0.007607 (0.004626) | 0.526121 / 0.226044 (0.300077) | 5.190578 / 2.268929 (2.921650) | 2.473259 / 55.444624 (-52.971366) | 2.142795 / 6.876477 (-4.733682) | 2.277594 / 2.142072 (0.135521) | 0.846117 / 4.805227 (-3.959110) | 0.169458 / 6.500664 (-6.331206) | 0.065017 / 0.075469 (-0.010452) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272479 / 1.841788 (-0.569309) | 15.086473 / 8.074308 (7.012165) | 14.659728 / 10.191392 (4.468336) | 0.163915 / 0.680424 (-0.516509) | 0.017561 / 0.534201 (-0.516640) | 0.422074 / 0.579283 (-0.157209) | 0.421963 / 0.434364 (-0.012401) | 0.490321 / 0.540337 (-0.050016) | 0.586854 / 1.386936 (-0.800083) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e7ce0ac60c7efc10886471932854903a7c19f172 \"CML watermark\")\n" ]
"2023-04-16T11:56:01Z"
"2023-04-20T15:37:49Z"
"2023-04-20T15:30:48Z"
CONTRIBUTOR
null
Fixes the bug #5757
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5758/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5758/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5758.diff", "html_url": "https://github.com/huggingface/datasets/pull/5758", "merged_at": "2023-04-20T15:30:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/5758.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5758" }
true
https://api.github.com/repos/huggingface/datasets/issues/5757
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5757/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5757/comments
https://api.github.com/repos/huggingface/datasets/issues/5757/events
https://github.com/huggingface/datasets/issues/5757
1,669,910,503
I_kwDODunzps5jiM_n
5,757
Tilde (~) is not supported
{ "avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4", "events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}", "followers_url": "https://api.github.com/users/eli-osherovich/followers", "following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}", "gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eli-osherovich", "id": 2437102, "login": "eli-osherovich", "node_id": "MDQ6VXNlcjI0MzcxMDI=", "organizations_url": "https://api.github.com/users/eli-osherovich/orgs", "received_events_url": "https://api.github.com/users/eli-osherovich/received_events", "repos_url": "https://api.github.com/users/eli-osherovich/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions", "type": "User", "url": "https://api.github.com/users/eli-osherovich" }
[]
closed
false
null
[]
null
[]
"2023-04-16T11:48:10Z"
"2023-04-20T15:30:51Z"
"2023-04-20T15:30:51Z"
CONTRIBUTOR
null
### Describe the bug It seems that `~` is not recognized correctly in local paths. Whenever I try to use it, I get an exception. ### Steps to reproduce the bug ```python load_dataset("imagefolder", data_dir="~/data/my_dataset") ``` This will generate the following error: ``` EmptyDatasetError: The directory at /path/to/cwd/~/data/datasets/clementine_tagged_per_cam doesn't contain any data files ``` ### Expected behavior The dataset should load. ### Environment info datasets==2.11.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5757/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5757/timeline
null
completed
null
null
false
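Until the fix from #5758 above is released, expanding the tilde manually is a straightforward workaround; the path below is a placeholder.

```python
import os

from datasets import load_dataset

data_dir = os.path.expanduser("~/data/my_dataset")  # "~" -> "/home/<user>"
ds = load_dataset("imagefolder", data_dir=data_dir)
```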
https://api.github.com/repos/huggingface/datasets/issues/5756
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5756/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5756/comments
https://api.github.com/repos/huggingface/datasets/issues/5756/events
https://github.com/huggingface/datasets/issues/5756
1,669,678,080
I_kwDODunzps5jhUQA
5,756
Calling shuffle on a IterableDataset with streaming=True, gives "ValueError: cannot reshape array"
{ "avatar_url": "https://avatars.githubusercontent.com/u/21077341?v=4", "events_url": "https://api.github.com/users/rohfle/events{/privacy}", "followers_url": "https://api.github.com/users/rohfle/followers", "following_url": "https://api.github.com/users/rohfle/following{/other_user}", "gists_url": "https://api.github.com/users/rohfle/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rohfle", "id": 21077341, "login": "rohfle", "node_id": "MDQ6VXNlcjIxMDc3MzQx", "organizations_url": "https://api.github.com/users/rohfle/orgs", "received_events_url": "https://api.github.com/users/rohfle/received_events", "repos_url": "https://api.github.com/users/rohfle/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rohfle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rohfle/subscriptions", "type": "User", "url": "https://api.github.com/users/rohfle" }
[]
closed
false
null
[]
null
[ "Hi! I've merged a PR on the Hub with a fix: https://huggingface.co/datasets/fashion_mnist/discussions/3", "Thanks, this appears to have fixed the issue.\r\n\r\nI've created a PR for the same change in the mnist dataset: https://huggingface.co/datasets/mnist/discussions/3/files" ]
"2023-04-16T04:59:47Z"
"2023-04-18T03:40:56Z"
"2023-04-18T03:40:56Z"
NONE
null
### Describe the bug When calling shuffle on a IterableDataset with streaming=True, I get the following error: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/administrator/Documents/Projects/huggingface/jax-diffusers-sprint-consistency-models/virtualenv/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 937, in __iter__ for key, example in ex_iterable: File "/home/administrator/Documents/Projects/huggingface/jax-diffusers-sprint-consistency-models/virtualenv/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 627, in __iter__ for x in self.ex_iterable: File "/home/administrator/Documents/Projects/huggingface/jax-diffusers-sprint-consistency-models/virtualenv/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 138, in __iter__ yield from self.generate_examples_fn(**kwargs_with_shuffled_shards) File "/home/administrator/.cache/huggingface/modules/datasets_modules/datasets/mnist/fda16c03c4ecfb13f165ba7e29cf38129ce035011519968cdaf74894ce91c9d4/mnist.py", line 111, in _generate_examples images = np.frombuffer(f.read(), dtype=np.uint8).reshape(size, 28, 28) ValueError: cannot reshape array of size 59992 into shape (60000,28,28) ``` Tested with the fashion_mnist and mnist datasets ### Steps to reproduce the bug Code to reproduce ```python from datasets import load_dataset SHUFFLE_SEED = 42 SHUFFLE_BUFFER_SIZE = 10_000 dataset = load_dataset('fashion_mnist', streaming=True).shuffle(seed=SHUFFLE_SEED, buffer_size=SHUFFLE_BUFFER_SIZE) next(iter(dataset['train'])) ``` ### Expected behavior A random item from the dataset and no error ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.13.4 - PyArrow version: 11.0.0 - Pandas version: 2.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5756/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5756/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5755
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5755/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5755/comments
https://api.github.com/repos/huggingface/datasets/issues/5755/events
https://github.com/huggingface/datasets/issues/5755
1,669,048,438
I_kwDODunzps5je6h2
5,755
ImportError: cannot import name 'DeprecatedEnum' from 'datasets.utils.deprecation_utils'
{ "avatar_url": "https://avatars.githubusercontent.com/u/1405491?v=4", "events_url": "https://api.github.com/users/fivejjs/events{/privacy}", "followers_url": "https://api.github.com/users/fivejjs/followers", "following_url": "https://api.github.com/users/fivejjs/following{/other_user}", "gists_url": "https://api.github.com/users/fivejjs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fivejjs", "id": 1405491, "login": "fivejjs", "node_id": "MDQ6VXNlcjE0MDU0OTE=", "organizations_url": "https://api.github.com/users/fivejjs/orgs", "received_events_url": "https://api.github.com/users/fivejjs/received_events", "repos_url": "https://api.github.com/users/fivejjs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fivejjs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fivejjs/subscriptions", "type": "User", "url": "https://api.github.com/users/fivejjs" }
[]
closed
false
null
[]
null
[ "update the version. fix" ]
"2023-04-14T23:28:54Z"
"2023-04-14T23:36:19Z"
"2023-04-14T23:36:19Z"
NONE
null
### Describe the bug Has the module moved to a new place? ### Steps to reproduce the bug In the import step, ```python from datasets.utils.deprecation_utils import DeprecatedEnum ``` raises the error: ``` ImportError: cannot import name 'DeprecatedEnum' from 'datasets.utils.deprecation_utils' ``` ### Expected behavior The import should succeed. ### Environment info python==3.9.16 datasets==1.18.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5755/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5755/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5754
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5754/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5754/comments
https://api.github.com/repos/huggingface/datasets/issues/5754/events
https://github.com/huggingface/datasets/pull/5754
1,668,755,035
PR_kwDODunzps5OWozh
5,754
Minor tqdm fixes
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006479 / 0.011353 (-0.004874) | 0.004592 / 0.011008 (-0.006416) | 0.097239 / 0.038508 (0.058731) | 0.028609 / 0.023109 (0.005499) | 0.309225 / 0.275898 (0.033327) | 0.340015 / 0.323480 (0.016535) | 0.004857 / 0.007986 (-0.003129) | 0.004649 / 0.004328 (0.000320) | 0.074770 / 0.004250 (0.070520) | 0.038351 / 0.037052 (0.001299) | 0.313360 / 0.258489 (0.054871) | 0.350256 / 0.293841 (0.056416) | 0.030770 / 0.128546 (-0.097776) | 0.011591 / 0.075646 (-0.064055) | 0.322444 / 0.419271 (-0.096828) | 0.043704 / 0.043533 (0.000171) | 0.311790 / 0.255139 (0.056651) | 0.339183 / 0.283200 (0.055984) | 0.088041 / 0.141683 (-0.053642) | 1.490649 / 1.452155 (0.038494) | 1.561789 / 1.492716 (0.069072) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208984 / 0.018006 (0.190978) | 0.406105 / 0.000490 (0.405616) | 0.003152 / 0.000200 (0.002952) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022622 / 0.037411 (-0.014790) | 0.095819 / 0.014526 (0.081294) | 0.105132 / 0.176557 (-0.071424) | 0.165684 / 0.737135 (-0.571451) | 0.106706 / 0.296338 (-0.189632) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426126 / 0.215209 (0.210917) | 4.233864 / 2.077655 (2.156209) | 1.918727 
/ 1.504120 (0.414607) | 1.729905 / 1.541195 (0.188710) | 1.760342 / 1.468490 (0.291852) | 0.695449 / 4.584777 (-3.889328) | 3.413531 / 3.745712 (-0.332181) | 1.904557 / 5.269862 (-3.365305) | 1.270604 / 4.565676 (-3.295072) | 0.083018 / 0.424275 (-0.341257) | 0.012760 / 0.007607 (0.005152) | 0.523991 / 0.226044 (0.297947) | 5.236132 / 2.268929 (2.967204) | 2.360959 / 55.444624 (-53.083665) | 1.996533 / 6.876477 (-4.879943) | 2.072934 / 2.142072 (-0.069138) | 0.804133 / 4.805227 (-4.001094) | 0.150976 / 6.500664 (-6.349688) | 0.065503 / 0.075469 (-0.009966) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211828 / 1.841788 (-0.629960) | 13.657743 / 8.074308 (5.583435) | 13.887148 / 10.191392 (3.695756) | 0.145996 / 0.680424 (-0.534428) | 0.016562 / 0.534201 (-0.517639) | 0.380359 / 0.579283 (-0.198924) | 0.388698 / 0.434364 (-0.045666) | 0.440373 / 0.540337 (-0.099965) | 0.531753 / 1.386936 (-0.855183) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006444 / 0.011353 (-0.004909) | 0.004569 / 0.011008 (-0.006439) | 0.076239 / 0.038508 (0.037731) | 0.028462 / 0.023109 (0.005352) | 0.365540 / 0.275898 (0.089642) | 0.398242 / 0.323480 (0.074762) | 0.005785 / 0.007986 (-0.002200) | 0.003346 / 0.004328 (-0.000982) | 0.076296 / 0.004250 (0.072046) | 0.039853 / 0.037052 (0.002800) | 0.367684 / 0.258489 (0.109195) | 0.409570 / 0.293841 (0.115730) | 0.030536 / 0.128546 (-0.098010) | 0.011534 / 0.075646 (-0.064112) | 0.084962 / 0.419271 (-0.334309) | 0.042708 / 0.043533 (-0.000825) | 0.344058 / 0.255139 (0.088919) | 0.389096 / 0.283200 (0.105897) | 0.090559 / 0.141683 (-0.051124) | 1.507101 / 1.452155 (0.054946) | 1.563977 / 1.492716 (0.071260) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228740 / 0.018006 (0.210734) | 0.396890 / 0.000490 (0.396400) | 0.000392 / 0.000200 (0.000192) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025052 / 0.037411 (-0.012360) | 0.099951 / 0.014526 (0.085426) | 0.106847 / 0.176557 (-0.069710) | 0.156666 / 0.737135 (-0.580469) | 0.110344 / 0.296338 (-0.185994) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442363 / 0.215209 (0.227154) | 4.429571 / 2.077655 (2.351917) | 2.076501 / 1.504120 (0.572381) | 1.875226 / 1.541195 (0.334031) | 1.909093 / 1.468490 (0.440603) | 0.703047 / 4.584777 (-3.881730) | 3.457036 / 3.745712 (-0.288676) | 2.866648 / 5.269862 (-2.403214) | 1.524430 / 4.565676 (-3.041246) | 0.083687 / 0.424275 (-0.340588) | 0.012251 / 0.007607 (0.004643) | 0.543945 / 0.226044 (0.317901) | 5.440559 / 2.268929 (3.171630) | 2.522924 / 55.444624 (-52.921700) | 2.188770 / 6.876477 (-4.687707) | 2.249632 / 2.142072 (0.107559) | 0.813499 / 4.805227 (-3.991728) | 0.152861 / 6.500664 (-6.347803) | 0.067189 / 0.075469 (-0.008280) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284255 / 1.841788 (-0.557533) | 14.207864 / 8.074308 (6.133556) | 14.279691 / 10.191392 (4.088299) | 0.167027 / 0.680424 (-0.513396) | 0.016455 / 0.534201 (-0.517746) | 0.380798 / 0.579283 (-0.198485) | 0.390013 / 0.434364 (-0.044351) | 0.445493 / 0.540337 (-0.094845) | 0.526278 / 1.386936 (-0.860658) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3fdb46c526b9d070df0eb2d56b0ecacdace7cb9a \"CML watermark\")\n" ]
"2023-04-14T18:15:14Z"
"2023-04-20T15:27:58Z"
"2023-04-20T15:21:00Z"
CONTRIBUTOR
null
`GeneratorBasedBuilder`'s TQDM bars were not used as context managers. This PR fixes that (missed these bars in https://github.com/huggingface/datasets/pull/5560). Also, this PR modifies the single-proc `save_to_disk` to fix the issue with the TQDM bar not accumulating the progress in the multi-shard setting (again, this bug was introduced by me in the linked PR 😎)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5754/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5754/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5754.diff", "html_url": "https://github.com/huggingface/datasets/pull/5754", "merged_at": "2023-04-20T15:21:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/5754.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5754" }
true
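A generic illustration (not the library's actual code) of the context-manager pattern the PR body of #5754 above refers to: one shared bar accumulates progress across shards instead of resetting, and is closed even if writing a shard raises. The shard sizes are made up.

```python
from tqdm import tqdm

shards = [range(100), range(250), range(50)]  # pretend shard sizes

with tqdm(total=sum(len(s) for s in shards), unit=" examples") as pbar:
    for shard in shards:
        for _example in shard:
            pbar.update(1)
```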
https://api.github.com/repos/huggingface/datasets/issues/5753
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5753/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5753/comments
https://api.github.com/repos/huggingface/datasets/issues/5753/events
https://github.com/huggingface/datasets/issues/5753
1,668,659,536
I_kwDODunzps5jdblQ
5,753
[IterableDatasets] Add column followed by interleave datasets gives bogus outputs
{ "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sanchit-gandhi", "id": 93869735, "login": "sanchit-gandhi", "node_id": "U_kgDOBZhWpw", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "type": "User", "url": "https://api.github.com/users/sanchit-gandhi" }
[]
closed
false
null
[]
null
[ "Problem with the code snippet! Using global vars and functions was not a good idea with iterable datasets!\r\n\r\nIf we update to:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\noriginal_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# now add a new column to our streaming dataset using our hack\r\nname = \"new_column\"\r\ncolumn_1 = [f\"new dataset 1, row {i}\" for i in range(50)]\r\n\r\nnew_features = original_dataset.features.copy()\r\nnew_features[name] = new_features[\"file\"] # I know that \"file\" has the right column type to match our new feature\r\n\r\ndef add_column_fn_1(example, idx):\r\n if name in example:\r\n raise ValueError(f\"Error when adding {name}: column {name} is already in the dataset.\")\r\n return {name: column_1[idx]}\r\n\r\nmodified_dataset_1 = original_dataset.map(add_column_fn_1, with_indices=True, features=new_features)\r\n\r\n# now create a second modified dataset using the same trick\r\ncolumn_2 = [f\"new dataset 2, row {i}\" for i in range(50)]\r\n\r\ndef add_column_fn_2(example, idx):\r\n if name in example:\r\n raise ValueError(f\"Error when adding {name}: column {name} is already in the dataset.\")\r\n return {name: column_2[idx]}\r\n\r\nmodified_dataset_2 = original_dataset.map(add_column_fn_2, with_indices=True, features=new_features)\r\n\r\ninterleaved_dataset = interleave_datasets([modified_dataset_1, modified_dataset_2])\r\n\r\nfor i, sample in enumerate(interleaved_dataset):\r\n print(sample[\"new_column\"])\r\n if i == 10:\r\n break\r\n```\r\nwe get the correct outputs:\r\n```python\r\nnew dataset 1, row 0\r\nnew dataset 2, row 0\r\nnew dataset 1, row 1\r\nnew dataset 2, row 1\r\nnew dataset 1, row 2\r\nnew dataset 2, row 2\r\nnew dataset 1, row 3\r\nnew dataset 2, row 3\r\nnew dataset 1, row 4\r\nnew dataset 2, row 4\r\nnew dataset 1, row 5\r\n```\r\n" ]
"2023-04-14T17:32:31Z"
"2023-04-14T17:45:52Z"
"2023-04-14T17:36:37Z"
CONTRIBUTOR
null
### Describe the bug If we add a new column to our iterable dataset using the hack described in #5752, when we then interleave datasets the new column is pinned to one value. ### Steps to reproduce the bug What we're going to do here is: 1. Load an iterable dataset in streaming mode (`original_dataset`) 2. Add a new column to this dataset using the hack in #5752 (`modified_dataset_1`) 3. Create another new dataset by adding a column with the same key but different values (`modified_dataset_2`) 4. Interleave our new datasets (`modified_dataset_1` + `modified_dataset_2`) 5. Check the value of our newly added column (`new_column`) ```python from datasets import load_dataset # load an iterable dataset original_dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True) # now add a new column to our streaming dataset using our hack from 5752 name = "new_column" column = [f"new dataset 1, row {i}" for i in range(50)] new_features = original_dataset.features.copy() new_features[name] = new_features["file"] # I know that "file" has the right column type to match our new feature def add_column_fn(example, idx): if name in example: raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.") return {name: column[idx]} modified_dataset_1 = original_dataset.map(add_column_fn, with_indices=True, features=new_features) # now create a second modified dataset using the same trick column = [f"new dataset 2, row {i}" for i in range(50)] def add_column_fn(example, idx): if name in example: raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.") return {name: column[idx]} modified_dataset_2 = original_dataset.map(add_column_fn, with_indices=True, features=new_features) # interleave these datasets interleaved_dataset = interleave_datasets([modified_dataset_1, modified_dataset_2]) # now check what the value of the added column is for i, sample in enumerate(interleaved_dataset): print(sample["new_column"]) if i == 10: break ``` **Print Output:** ``` new dataset 2, row 0 new dataset 2, row 0 new dataset 2, row 1 new dataset 2, row 1 new dataset 2, row 2 new dataset 2, row 2 new dataset 2, row 3 new dataset 2, row 3 new dataset 2, row 4 new dataset 2, row 4 new dataset 2, row 5 ``` We see that we only get outputs from our second dataset. ### Expected behavior We should interleave between dataset 1 and 2 and increase in row value: ``` new dataset 1, row 0 new dataset 2, row 0 new dataset 1, row 1 new dataset 2, row 1 new dataset 1, row 2 new dataset 2, row 2 ... ``` ### Environment info - datasets version: 2.10.2.dev0 - Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5753/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5753/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5752
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5752/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5752/comments
https://api.github.com/repos/huggingface/datasets/issues/5752/events
https://github.com/huggingface/datasets/issues/5752
1,668,574,209
I_kwDODunzps5jdGwB
5,752
Streaming dataset loses `.features` attribute after `.add_column`
{ "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sanchit-gandhi", "id": 93869735, "login": "sanchit-gandhi", "node_id": "U_kgDOBZhWpw", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "type": "User", "url": "https://api.github.com/users/sanchit-gandhi" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "I believe the issue resides in this line:\r\nhttps://github.com/huggingface/datasets/blob/7c3a9b057c476c40d157bd7a5d57f49066239df0/src/datasets/iterable_dataset.py#L1415\r\n\r\nIf we pass the **new** features of the dataset to the `.map` method we can return the features after adding a column, e.g.:\r\n```python\r\nfrom datasets import load_dataset, Value\r\n\r\noriginal_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\nprint(original_dataset.features.keys())\r\n\r\n# now add a new column to our streaming dataset using our hack\r\nname = \"new_column\"\r\ncolumn = [\"some random text\" for _ in range(50)]\r\n\r\nnew_features = original_dataset.features.copy()\r\nnew_features[name] = Value(dtype=\"string\", id=None) # I know the correct column type for this feature\r\n\r\ndef add_column_fn(example, idx):\r\n if name in example:\r\n raise ValueError(f\"Error when adding {name}: column {name} is already in the dataset.\")\r\n return {name: column[idx]}\r\n\r\nmodified_dataset = original_dataset.map(add_column_fn, with_indices=True, features=new_features)\r\n\r\nprint(modified_dataset.features.keys())\r\n```\r\n**Print Output:**\r\n```\r\ndict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'])\r\ndict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id', 'new_column'])\r\n```\r\n" ]
"2023-04-14T16:39:50Z"
"2023-04-14T17:46:54Z"
null
CONTRIBUTOR
null
### Describe the bug After appending a new column to a streaming dataset using `.add_column`, we can no longer access the list of dataset features using the `.features` attribute. ### Steps to reproduce the bug ```python from datasets import load_dataset original_dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True) print(original_dataset.features.keys()) # now add a new column to our streaming dataset modified_dataset = original_dataset.add_column("new_column", ["some random text" for _ in range(50)]) print(modified_dataset.features.keys()) ``` **Print Output:** ``` dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id']) --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[1], line 8 6 # now add a new column to our streaming dataset 7 modified_dataset = original_dataset.add_column("new_column", ["some random text" for _ in range(50)]) ----> 8 print(modified_dataset.features.keys()) AttributeError: 'NoneType' object has no attribute 'keys' ``` We see that we get the features for the original dataset, but not the modified one with the added column. ### Expected behavior Features should be preserved after adding a new column, i.e. calling: ```python print(modified_dataset.features.keys()) ``` should return: ``` dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id', 'new_column']) ``` ### Environment info - `datasets` version: 2.10.2.dev0 - Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5752/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5752/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5751
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5751/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5751/comments
https://api.github.com/repos/huggingface/datasets/issues/5751/events
https://github.com/huggingface/datasets/pull/5751
1,668,333,316
PR_kwDODunzps5OVMuT
5,751
Consistent ArrayXD Python formatting + better NumPy/Pandas formatting
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010459 / 0.011353 (-0.000894) | 0.007009 / 0.011008 (-0.003999) | 0.153885 / 0.038508 (0.115377) | 0.037308 / 0.023109 (0.014199) | 0.431931 / 0.275898 (0.156033) | 0.452940 / 0.323480 (0.129461) | 0.008572 / 0.007986 (0.000586) | 0.007479 / 0.004328 (0.003150) | 0.093835 / 0.004250 (0.089584) | 0.050172 / 0.037052 (0.013120) | 0.428855 / 0.258489 (0.170366) | 0.517814 / 0.293841 (0.223974) | 0.058558 / 0.128546 (-0.069988) | 0.019550 / 0.075646 (-0.056096) | 0.449837 / 0.419271 (0.030566) | 0.069710 / 0.043533 (0.026177) | 0.444163 / 0.255139 (0.189024) | 0.469003 / 0.283200 (0.185803) | 0.114665 / 0.141683 (-0.027018) | 1.822415 / 1.452155 (0.370261) | 1.956360 / 1.492716 (0.463644) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237489 / 0.018006 (0.219483) | 0.556947 / 0.000490 (0.556457) | 0.006988 / 0.000200 (0.006789) | 0.000499 / 0.000054 (0.000444) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037047 / 0.037411 (-0.000364) | 0.133973 / 0.014526 (0.119447) | 0.137072 / 0.176557 (-0.039485) | 0.201520 / 0.737135 (-0.535615) | 0.144177 / 0.296338 (-0.152161) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.694853 / 0.215209 (0.479644) | 6.805746 / 2.077655 (4.728091) | 2.717864 / 
1.504120 (1.213744) | 2.360529 / 1.541195 (0.819335) | 2.384403 / 1.468490 (0.915913) | 1.337512 / 4.584777 (-3.247265) | 5.734090 / 3.745712 (1.988378) | 5.344909 / 5.269862 (0.075047) | 2.906218 / 4.565676 (-1.659458) | 0.160148 / 0.424275 (-0.264127) | 0.015159 / 0.007607 (0.007551) | 0.871356 / 0.226044 (0.645312) | 8.550965 / 2.268929 (6.282037) | 3.613522 / 55.444624 (-51.831103) | 2.868508 / 6.876477 (-4.007969) | 2.912263 / 2.142072 (0.770190) | 1.652548 / 4.805227 (-3.152680) | 0.274117 / 6.500664 (-6.226547) | 0.085911 / 0.075469 (0.010442) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.624798 / 1.841788 (-0.216989) | 18.413303 / 8.074308 (10.338995) | 21.742854 / 10.191392 (11.551462) | 0.255937 / 0.680424 (-0.424487) | 0.029492 / 0.534201 (-0.504709) | 0.541932 / 0.579283 (-0.037351) | 0.638594 / 0.434364 (0.204230) | 0.607427 / 0.540337 (0.067090) | 0.763046 / 1.386936 (-0.623890) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.020543 / 0.011353 (0.009190) | 0.006079 / 0.011008 (-0.004929) | 0.100558 / 0.038508 (0.062050) | 0.039474 / 0.023109 (0.016365) | 0.468889 / 0.275898 (0.192991) | 0.477731 / 0.323480 (0.154251) | 0.006999 / 0.007986 (-0.000987) | 0.005845 / 0.004328 (0.001516) | 0.110022 / 0.004250 (0.105772) | 0.056885 / 0.037052 (0.019833) | 0.447296 / 0.258489 (0.188807) | 0.489007 / 0.293841 (0.195166) | 0.055086 / 0.128546 (-0.073460) | 0.020623 / 0.075646 (-0.055024) | 0.129599 / 0.419271 (-0.289672) | 0.064316 / 0.043533 (0.020784) | 0.446681 / 0.255139 (0.191542) | 0.488897 / 0.283200 (0.205698) | 0.119121 / 0.141683 (-0.022562) | 1.836248 / 1.452155 (0.384093) | 2.002456 / 1.492716 (0.509740) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249344 / 0.018006 (0.231338) | 0.544320 / 0.000490 (0.543830) | 0.000459 / 0.000200 (0.000259) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| 
metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038771 / 0.037411 (0.001359) | 0.129527 / 0.014526 (0.115002) | 0.144681 / 0.176557 (-0.031876) | 0.208237 / 0.737135 (-0.528898) | 0.149502 / 0.296338 (-0.146836) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.668457 / 0.215209 (0.453248) | 6.729550 / 2.077655 (4.651895) | 2.741076 / 1.504120 (1.236956) | 2.394737 / 1.541195 (0.853542) | 2.415242 / 1.468490 (0.946752) | 1.322334 / 4.584777 (-3.262442) | 5.787454 / 3.745712 (2.041742) | 3.309847 / 5.269862 (-1.960015) | 2.199181 / 4.565676 (-2.366495) | 0.170740 / 0.424275 (-0.253535) | 0.015095 / 0.007607 (0.007487) | 0.864157 / 0.226044 (0.638112) | 8.701858 / 2.268929 (6.432929) | 3.617966 / 55.444624 (-51.826658) | 2.847144 / 6.876477 (-4.029332) | 3.011391 / 2.142072 (0.869319) | 1.595466 / 4.805227 (-3.209762) | 0.284010 / 6.500664 (-6.216654) | 0.091054 / 0.075469 (0.015585) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.702404 / 1.841788 (-0.139384) | 19.427130 / 8.074308 (11.352822) | 21.900446 / 10.191392 (11.709053) | 0.244088 / 0.680424 (-0.436336) | 0.027428 / 0.534201 (-0.506773) | 0.552226 / 0.579283 (-0.027057) | 0.653102 / 0.434364 (0.218738) | 0.635379 / 0.540337 (0.095042) | 0.771842 / 1.386936 (-0.615094) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#efde2a0b9ad937defc83e0ac3f14bbb90fb5f345 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | 
write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006547 / 0.011353 (-0.004806) | 0.004569 / 0.011008 (-0.006439) | 0.097782 / 0.038508 (0.059274) | 0.028157 / 0.023109 (0.005048) | 0.319017 / 0.275898 (0.043119) | 0.340758 / 0.323480 (0.017278) | 0.005078 / 0.007986 (-0.002907) | 0.003343 / 0.004328 (-0.000985) | 0.074194 / 0.004250 (0.069944) | 0.037918 / 0.037052 (0.000866) | 0.310298 / 0.258489 (0.051809) | 0.349441 / 0.293841 (0.055600) | 0.030375 / 0.128546 (-0.098171) | 0.011527 / 0.075646 (-0.064119) | 0.320499 / 0.419271 (-0.098773) | 0.042639 / 0.043533 (-0.000894) | 0.312182 / 0.255139 (0.057043) | 0.329058 / 0.283200 (0.045858) | 0.085517 / 0.141683 (-0.056165) | 1.532603 / 1.452155 (0.080448) | 1.583996 / 1.492716 (0.091279) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208286 / 0.018006 (0.190280) | 0.418696 / 0.000490 (0.418206) | 0.007051 / 0.000200 (0.006851) | 0.000409 / 0.000054 (0.000354) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024055 / 0.037411 (-0.013356) | 0.098420 / 0.014526 (0.083894) | 0.104785 / 0.176557 (-0.071771) | 0.163618 / 0.737135 (-0.573517) | 0.110006 / 0.296338 (-0.186332) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418756 / 0.215209 (0.203547) | 4.179557 / 2.077655 (2.101902) | 1.881708 / 1.504120 (0.377588) | 1.683393 / 1.541195 (0.142198) | 1.731909 / 1.468490 (0.263419) | 0.696674 / 4.584777 (-3.888103) | 3.384167 / 3.745712 (-0.361545) | 3.173479 / 5.269862 (-2.096382) | 1.620019 / 4.565676 (-2.945658) | 0.082850 / 0.424275 (-0.341426) | 0.012396 / 0.007607 (0.004789) | 0.519743 / 0.226044 (0.293699) | 5.208480 / 2.268929 (2.939552) | 2.312917 / 55.444624 (-53.131708) | 1.963486 / 6.876477 (-4.912991) | 2.084553 / 2.142072 (-0.057519) | 0.805486 / 4.805227 (-3.999742) | 0.153429 / 6.500664 (-6.347235) | 0.069451 / 0.075469 (-0.006018) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197185 / 1.841788 (-0.644603) | 14.341005 / 8.074308 (6.266696) | 14.476162 / 10.191392 (4.284770) | 0.157372 / 0.680424 (-0.523052) | 0.016444 / 0.534201 (-0.517757) | 0.383721 / 0.579283 (-0.195562) | 0.380800 / 0.434364 (-0.053564) | 0.441137 / 0.540337 (-0.099200) | 0.524778 / 1.386936 
(-0.862158) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006728 / 0.011353 (-0.004625) | 0.004536 / 0.011008 (-0.006472) | 0.076266 / 0.038508 (0.037757) | 0.028133 / 0.023109 (0.005024) | 0.351072 / 0.275898 (0.075174) | 0.375823 / 0.323480 (0.052344) | 0.005166 / 0.007986 (-0.002819) | 0.004717 / 0.004328 (0.000388) | 0.076130 / 0.004250 (0.071880) | 0.041354 / 0.037052 (0.004301) | 0.345904 / 0.258489 (0.087415) | 0.384119 / 0.293841 (0.090278) | 0.030759 / 0.128546 (-0.097787) | 0.011659 / 0.075646 (-0.063988) | 0.085269 / 0.419271 (-0.334002) | 0.042161 / 0.043533 (-0.001372) | 0.340806 / 0.255139 (0.085667) | 0.366832 / 0.283200 (0.083632) | 0.092187 / 0.141683 (-0.049495) | 1.520035 / 1.452155 (0.067880) | 1.603856 / 1.492716 (0.111140) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237763 / 0.018006 (0.219757) | 0.413406 / 0.000490 (0.412916) | 0.000415 / 0.000200 (0.000215) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026095 / 0.037411 (-0.011317) | 0.105775 / 0.014526 (0.091249) | 0.108452 / 0.176557 (-0.068105) | 0.160014 / 0.737135 (-0.577122) | 0.112385 / 0.296338 (-0.183953) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437327 / 0.215209 (0.222118) | 4.374949 / 2.077655 (2.297294) | 2.090292 / 1.504120 (0.586172) | 1.885946 / 1.541195 (0.344752) | 1.946768 / 1.468490 (0.478278) | 0.704124 / 
4.584777 (-3.880653) | 3.394994 / 3.745712 (-0.350718) | 1.905189 / 5.269862 (-3.364673) | 1.182300 / 4.565676 (-3.383376) | 0.082920 / 0.424275 (-0.341355) | 0.012781 / 0.007607 (0.005174) | 0.535467 / 0.226044 (0.309423) | 5.362799 / 2.268929 (3.093870) | 2.504825 / 55.444624 (-52.939799) | 2.180458 / 6.876477 (-4.696019) | 2.317750 / 2.142072 (0.175677) | 0.811182 / 4.805227 (-3.994045) | 0.151654 / 6.500664 (-6.349010) | 0.067925 / 0.075469 (-0.007544) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.290746 / 1.841788 (-0.551042) | 14.799309 / 8.074308 (6.725001) | 14.439722 / 10.191392 (4.248330) | 0.144358 / 0.680424 (-0.536066) | 0.016688 / 0.534201 (-0.517513) | 0.392907 / 0.579283 (-0.186376) | 0.383109 / 0.434364 (-0.051255) | 0.450069 / 0.540337 (-0.090269) | 0.532534 / 1.386936 (-0.854402) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#87c061032972509a2a1b4103763e62fb74912128 \"CML watermark\")\n", "I turned it into a draft to fix the failing tests, but CI is now green, so there is no good reason for it :)" ]
"2023-04-14T14:13:59Z"
"2023-04-20T14:43:20Z"
"2023-04-20T14:40:34Z"
CONTRIBUTOR
null
Return a list of lists instead of a list of NumPy arrays when converting the variable-shaped `ArrayXD` to Python. Additionally, improve the NumPy conversion by returning a numeric NumPy array when the offsets are equal or a NumPy object array when they aren't, and allow converting the variable-shaped `ArrayXD` to Pandas. (Reported in https://github.com/huggingface/datasets/issues/5719#issuecomment-1507579671)
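A minimal sketch of the conversion idea described above, assuming plain Python lists as input (this is not the PR's actual implementation): return a regular numeric NumPy array when every row has the same length (equal offsets) and an object array otherwise.

```python
# Illustrative sketch only, not the PR's implementation.
import numpy as np

def to_numpy(rows):
    lengths = {len(r) for r in rows}
    if len(lengths) == 1:  # equal offsets -> one numeric ndarray
        return np.array(rows)
    out = np.empty(len(rows), dtype=object)  # ragged rows -> object array
    for i, r in enumerate(rows):
        out[i] = np.asarray(r)
    return out

print(to_numpy([[1, 2], [3, 4]]).shape)     # (2, 2), numeric dtype
print(to_numpy([[1, 2], [3, 4, 5]]).dtype)  # object
```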
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5751/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5751/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5751.diff", "html_url": "https://github.com/huggingface/datasets/pull/5751", "merged_at": "2023-04-20T14:40:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/5751.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5751" }
true
https://api.github.com/repos/huggingface/datasets/issues/5750
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5750/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5750/comments
https://api.github.com/repos/huggingface/datasets/issues/5750/events
https://github.com/huggingface/datasets/issues/5750
1,668,289,067
I_kwDODunzps5jcBIr
5,750
Fails to create datasets from a generator when using Google BigQuery
{ "avatar_url": "https://avatars.githubusercontent.com/u/895720?v=4", "events_url": "https://api.github.com/users/ivanprado/events{/privacy}", "followers_url": "https://api.github.com/users/ivanprado/followers", "following_url": "https://api.github.com/users/ivanprado/following{/other_user}", "gists_url": "https://api.github.com/users/ivanprado/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ivanprado", "id": 895720, "login": "ivanprado", "node_id": "MDQ6VXNlcjg5NTcyMA==", "organizations_url": "https://api.github.com/users/ivanprado/orgs", "received_events_url": "https://api.github.com/users/ivanprado/received_events", "repos_url": "https://api.github.com/users/ivanprado/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ivanprado/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ivanprado/subscriptions", "type": "User", "url": "https://api.github.com/users/ivanprado" }
[]
closed
false
null
[]
null
[ "`from_generator` expects a generator function, not a generator object, so this should work:\r\n```python\r\nfrom datasets import Dataset\r\nfrom google.cloud import bigquery\r\n\r\nclient = bigquery.Client()\r\n\r\ndef gen()\r\n # Perform a query.\r\n QUERY = (\r\n 'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '\r\n 'WHERE state = \"TX\" '\r\n 'LIMIT 100')\r\n query_job = client.query(QUERY) # API request\r\n yield from query_job.result() # Waits for query to finish\r\n\r\nds = Dataset.from_generator(rows)\r\n\r\nfor r in ds:\r\n print(r)\r\n```", "@mariosasko your code was incomplete, so I tried to fix it:\r\n\r\n```py\r\nfrom datasets import Dataset\r\nfrom google.cloud import bigquery\r\n\r\nclient = bigquery.Client()\r\n\r\ndef gen():\r\n # Perform a query.\r\n QUERY = (\r\n 'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '\r\n 'WHERE state = \"TX\" '\r\n 'LIMIT 100')\r\n query_job = client.query(QUERY) # API request\r\n yield from query_job.result() # Waits for query to finish\r\n\r\nds = Dataset.from_generator(gen)\r\n\r\nfor r in ds:\r\n print(r)\r\n```\r\n\r\nThe error is also present in this case:\r\n\r\n```\r\n_pickle.PicklingError: Pickling client objects is explicitly not supported.\r\nClients have non-trivial state that is local and unpickleable.\r\n```\r\n\r\nI think it doesn't matter if the generator is an object or a function. The problem is that the generator is referencing an object that is not pickable (the client in this case). ", "It does matter: this function expects a generator function, as stated in the docs.\r\n\r\nThis should work:\r\n```python\r\nfrom datasets import Dataset\r\nfrom google.cloud import bigquery\r\n\r\ndef gen():\r\n client = bigquery.Client()\r\n # Perform a query.\r\n QUERY = (\r\n 'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '\r\n 'WHERE state = \"TX\" '\r\n 'LIMIT 100')\r\n query_job = client.query(QUERY) # API request\r\n yield from query_job.result() # Waits for query to finish\r\n\r\nds = Dataset.from_generator(gen)\r\n\r\nfor r in ds:\r\n print(r)\r\n```\r\n\r\nWe could allow passing non-picklable objects and use a random hash for the generated arrow file. In that case, the caching mechanism would not work, meaning repeated calls with the same set of arguments would generate new datasets instead of reusing the cached version, but this behavior is still better than raising an error.", "Thank you @mariosasko . Your last code is working indeed. Curiously, the important detail here was to wrap the client instantiation within the generator itself. If the line `client = bigquery.Client()` is moved outside, then the error is back.\r\n\r\nI see now also your point in regard to the generator being a generator function. We can close the issue if you want." ]
"2023-04-14T13:50:59Z"
"2023-04-17T12:20:43Z"
"2023-04-17T12:20:43Z"
NONE
null
### Describe the bug Creating a dataset from a generator using `Dataset.from_generator()` fails if the generator is the [Google Big Query Python client](https://cloud.google.com/python/docs/reference/bigquery/latest). The problem is that the Big Query client is not picklable. And the function `create_config_id` tries to get a hash of the generator by pickling it. So the following error is generated: ``` _pickle.PicklingError: Pickling client objects is explicitly not supported. Clients have non-trivial state that is local and unpickleable. ``` ### Steps to reproduce the bug 1. Install the big query client and datasets `pip install google-cloud-bigquery datasets` 2. Run the following code: ```py from datasets import Dataset from google.cloud import bigquery client = bigquery.Client() # Perform a query. QUERY = ( 'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` ' 'WHERE state = "TX" ' 'LIMIT 100') query_job = client.query(QUERY) # API request rows = query_job.result() # Waits for query to finish ds = Dataset.from_generator(rows) for r in ds: print(r) ``` ### Expected behavior Two options: 1. Ignore the pickle errors when computing the hash 2. Provide an escape hatch so that we can avoid calculating the hash for the generator. For example, allowing the user to provide a hash. ### Environment info python 3.9 google-cloud-bigquery 3.9.0 datasets 2.11.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5750/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5750/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5749
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5749/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5749/comments
https://api.github.com/repos/huggingface/datasets/issues/5749/events
https://github.com/huggingface/datasets/issues/5749
1,668,016,321
I_kwDODunzps5ja-jB
5,749
AttributeError: 'Version' object has no attribute 'match'
{ "avatar_url": "https://avatars.githubusercontent.com/u/54584290?v=4", "events_url": "https://api.github.com/users/gulnaz-zh/events{/privacy}", "followers_url": "https://api.github.com/users/gulnaz-zh/followers", "following_url": "https://api.github.com/users/gulnaz-zh/following{/other_user}", "gists_url": "https://api.github.com/users/gulnaz-zh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gulnaz-zh", "id": 54584290, "login": "gulnaz-zh", "node_id": "MDQ6VXNlcjU0NTg0Mjkw", "organizations_url": "https://api.github.com/users/gulnaz-zh/orgs", "received_events_url": "https://api.github.com/users/gulnaz-zh/received_events", "repos_url": "https://api.github.com/users/gulnaz-zh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gulnaz-zh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gulnaz-zh/subscriptions", "type": "User", "url": "https://api.github.com/users/gulnaz-zh" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "I got the same error, and the official website for visual genome is down. Did you solve this problem? ", "I am in the same situation now :( ", "Thanks for reporting, @gulnaz-zh.\r\n\r\nI am investigating it.", "The host server is down: https://visualgenome.org/\r\n\r\nWe are contacting the dataset authors.", "Apart form data host server being down, there is an additional issue with the `datasets` library introduced by this PR:\r\n- #5238\r\n\r\nI am working to fix it.", "PR that fixes the AttributeError: https://huggingface.co/datasets/visual_genome/discussions/2", "For the issue with their data host server being down, I have opened a discussion in the \"Community\" tab of the Hub dataset: https://huggingface.co/datasets/visual_genome/discussions/3\r\nLet's continue the discussion there.", "The authors just replied to us with their new URL: https://homes.cs.washington.edu/~ranjay/visualgenome/\r\n\r\nWe have fixed the datasets loading script, which is operative again." ]
"2023-04-14T10:48:06Z"
"2023-06-30T11:31:17Z"
"2023-04-18T12:57:08Z"
NONE
null
### Describe the bug When I run from datasets import load_dataset data = load_dataset("visual_genome", 'region_descriptions_v1.2.0') AttributeError: 'Version' object has no attribute 'match' ### Steps to reproduce the bug from datasets import load_dataset data = load_dataset("visual_genome", 'region_descriptions_v1.2.0') ### Expected behavior This is error trace: Downloading and preparing dataset visual_genome/region_descriptions_v1.2.0 to C:/Users/Acer/.cache/huggingface/datasets/visual_genome/region_descriptions_v1.2.0/1.2.0/136fe5b83f6691884566c5530313288171e053a3b33bfe3ea2e4c8b39abaf7f3... --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[6], line 1 ----> 1 data = load_dataset("visual_genome", 'region_descriptions_v1.2.0') File ~\.conda\envs\aai\Lib\site-packages\datasets\load.py:1791, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 1788 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1790 # Download and prepare data -> 1791 builder_instance.download_and_prepare( 1792 download_config=download_config, 1793 download_mode=download_mode, 1794 verification_mode=verification_mode, 1795 try_from_hf_gcs=try_from_hf_gcs, 1796 num_proc=num_proc, 1797 storage_options=storage_options, 1798 ) 1800 # Build dataset for splits 1801 keep_in_memory = ( 1802 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1803 ) File ~\.conda\envs\aai\Lib\site-packages\datasets\builder.py:891, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 889 if num_proc is not None: 890 prepare_split_kwargs["num_proc"] = num_proc --> 891 self._download_and_prepare( 892 dl_manager=dl_manager, 893 verification_mode=verification_mode, 894 **prepare_split_kwargs, 895 **download_and_prepare_kwargs, 896 ) 897 # Sync info 898 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~\.conda\envs\aai\Lib\site-packages\datasets\builder.py:1651, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs) 1650 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs): -> 1651 super()._download_and_prepare( 1652 dl_manager, 1653 verification_mode, 1654 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS 1655 or verification_mode == VerificationMode.ALL_CHECKS, 1656 **prepare_splits_kwargs, 1657 ) File ~\.conda\envs\aai\Lib\site-packages\datasets\builder.py:964, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 962 split_dict = SplitDict(dataset_name=self.name) 963 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 964 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 966 # Checksums verification 967 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums: File 
~\.cache\huggingface\modules\datasets_modules\datasets\visual_genome\136fe5b83f6691884566c5530313288171e053a3b33bfe3ea2e4c8b39abaf7f3\visual_genome.py:377, in VisualGenome._split_generators(self, dl_manager) 375 def _split_generators(self, dl_manager): 376 # Download image meta datas. --> 377 image_metadatas_dir = dl_manager.download_and_extract(self.config.image_metadata_url) 378 image_metadatas_file = os.path.join( 379 image_metadatas_dir, _get_decompressed_filename_from_url(self.config.image_metadata_url) 380 ) 382 # Download annotations File ~\.cache\huggingface\modules\datasets_modules\datasets\visual_genome\136fe5b83f6691884566c5530313288171e053a3b33bfe3ea2e4c8b39abaf7f3\visual_genome.py:328, in VisualGenomeConfig.image_metadata_url(self) 326 @property 327 def image_metadata_url(self): --> 328 if not self.version.match(_LATEST_VERSIONS["image_metadata"]): 329 logger.warning( 330 f"Latest image metadata version is {_LATEST_VERSIONS['image_metadata']}. Trying to generate a dataset of version: {self.version}. Please double check that image data are unchanged between the two versions." 331 ) 332 return f"{_BASE_ANNOTATION_URL}/image_data.json.zip" ### Environment info datasets 2.11.0 python 3.11.3
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/5749/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5749/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5748
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5748/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5748/comments
https://api.github.com/repos/huggingface/datasets/issues/5748/events
https://github.com/huggingface/datasets/pull/5748
1,667,517,024
PR_kwDODunzps5OSgNH
5,748
[BUG FIX] Issue 5739
{ "avatar_url": "https://avatars.githubusercontent.com/u/1772912?v=4", "events_url": "https://api.github.com/users/ericxsun/events{/privacy}", "followers_url": "https://api.github.com/users/ericxsun/followers", "following_url": "https://api.github.com/users/ericxsun/following{/other_user}", "gists_url": "https://api.github.com/users/ericxsun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ericxsun", "id": 1772912, "login": "ericxsun", "node_id": "MDQ6VXNlcjE3NzI5MTI=", "organizations_url": "https://api.github.com/users/ericxsun/orgs", "received_events_url": "https://api.github.com/users/ericxsun/received_events", "repos_url": "https://api.github.com/users/ericxsun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ericxsun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ericxsun/subscriptions", "type": "User", "url": "https://api.github.com/users/ericxsun" }
[]
open
false
null
[]
null
[]
"2023-04-14T05:07:31Z"
"2023-04-14T05:07:31Z"
null
NONE
null
A fix for https://github.com/huggingface/datasets/issues/5739
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5748/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5748/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5748.diff", "html_url": "https://github.com/huggingface/datasets/pull/5748", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5748.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5748" }
true
https://api.github.com/repos/huggingface/datasets/issues/5747
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5747/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5747/comments
https://api.github.com/repos/huggingface/datasets/issues/5747/events
https://github.com/huggingface/datasets/pull/5747
1,667,270,412
PR_kwDODunzps5ORtBF
5,747
[WIP] Add Dataset.to_spark
{ "avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4", "events_url": "https://api.github.com/users/maddiedawson/events{/privacy}", "followers_url": "https://api.github.com/users/maddiedawson/followers", "following_url": "https://api.github.com/users/maddiedawson/following{/other_user}", "gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/maddiedawson", "id": 106995444, "login": "maddiedawson", "node_id": "U_kgDOBmCe9A", "organizations_url": "https://api.github.com/users/maddiedawson/orgs", "received_events_url": "https://api.github.com/users/maddiedawson/received_events", "repos_url": "https://api.github.com/users/maddiedawson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions", "type": "User", "url": "https://api.github.com/users/maddiedawson" }
[]
open
false
null
[]
null
[]
"2023-04-13T23:20:03Z"
"2023-05-05T12:31:10Z"
null
CONTRIBUTOR
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5747/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5747/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5747.diff", "html_url": "https://github.com/huggingface/datasets/pull/5747", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5747.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5747" }
true
https://api.github.com/repos/huggingface/datasets/issues/5746
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5746/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5746/comments
https://api.github.com/repos/huggingface/datasets/issues/5746/events
https://github.com/huggingface/datasets/pull/5746
1,667,102,459
PR_kwDODunzps5ORIUU
5,746
Fix link in docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/7485661?v=4", "events_url": "https://api.github.com/users/bbbxyz/events{/privacy}", "followers_url": "https://api.github.com/users/bbbxyz/followers", "following_url": "https://api.github.com/users/bbbxyz/following{/other_user}", "gists_url": "https://api.github.com/users/bbbxyz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bbbxyz", "id": 7485661, "login": "bbbxyz", "node_id": "MDQ6VXNlcjc0ODU2NjE=", "organizations_url": "https://api.github.com/users/bbbxyz/orgs", "received_events_url": "https://api.github.com/users/bbbxyz/received_events", "repos_url": "https://api.github.com/users/bbbxyz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bbbxyz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bbbxyz/subscriptions", "type": "User", "url": "https://api.github.com/users/bbbxyz" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006461 / 0.011353 (-0.004892) | 0.004671 / 0.011008 (-0.006337) | 0.097329 / 0.038508 (0.058821) | 0.028380 / 0.023109 (0.005270) | 0.369892 / 0.275898 (0.093994) | 0.398244 / 0.323480 (0.074764) | 0.004795 / 0.007986 (-0.003190) | 0.004866 / 0.004328 (0.000538) | 0.075060 / 0.004250 (0.070809) | 0.035678 / 0.037052 (-0.001374) | 0.372197 / 0.258489 (0.113708) | 0.407509 / 0.293841 (0.113668) | 0.031557 / 0.128546 (-0.096989) | 0.011608 / 0.075646 (-0.064038) | 0.325467 / 0.419271 (-0.093805) | 0.042590 / 0.043533 (-0.000943) | 0.373738 / 0.255139 (0.118599) | 0.395793 / 0.283200 (0.112593) | 0.082335 / 0.141683 (-0.059348) | 1.471582 / 1.452155 (0.019427) | 1.535834 / 1.492716 (0.043117) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192432 / 0.018006 (0.174426) | 0.404423 / 0.000490 (0.403933) | 0.003252 / 0.000200 (0.003052) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025312 / 0.037411 (-0.012099) | 0.099964 / 0.014526 (0.085438) | 0.108779 / 0.176557 (-0.067777) | 0.170438 / 0.737135 (-0.566697) | 0.110116 / 0.296338 (-0.186223) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420402 / 0.215209 (0.205193) | 4.179142 / 2.077655 (2.101487) | 
1.858114 / 1.504120 (0.353994) | 1.674452 / 1.541195 (0.133257) | 1.697839 / 1.468490 (0.229349) | 0.694707 / 4.584777 (-3.890070) | 3.394321 / 3.745712 (-0.351391) | 1.918437 / 5.269862 (-3.351425) | 1.277954 / 4.565676 (-3.287723) | 0.082357 / 0.424275 (-0.341918) | 0.012206 / 0.007607 (0.004598) | 0.522093 / 0.226044 (0.296049) | 5.239604 / 2.268929 (2.970675) | 2.347764 / 55.444624 (-53.096860) | 1.996864 / 6.876477 (-4.879613) | 2.050820 / 2.142072 (-0.091253) | 0.806110 / 4.805227 (-3.999118) | 0.151061 / 6.500664 (-6.349603) | 0.066438 / 0.075469 (-0.009031) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211233 / 1.841788 (-0.630554) | 14.054422 / 8.074308 (5.980114) | 14.110141 / 10.191392 (3.918749) | 0.129962 / 0.680424 (-0.550462) | 0.017271 / 0.534201 (-0.516930) | 0.386410 / 0.579283 (-0.192873) | 0.392648 / 0.434364 (-0.041716) | 0.444940 / 0.540337 (-0.095398) | 0.533535 / 1.386936 (-0.853401) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006865 / 0.011353 (-0.004488) | 0.004662 / 0.011008 (-0.006346) | 0.077837 / 0.038508 (0.039329) | 0.028258 / 0.023109 (0.005149) | 0.346136 / 0.275898 (0.070238) | 0.380414 / 0.323480 (0.056934) | 0.005039 / 0.007986 (-0.002947) | 0.004967 / 0.004328 (0.000638) | 0.077774 / 0.004250 (0.073523) | 0.037504 / 0.037052 (0.000452) | 0.341550 / 0.258489 (0.083061) | 0.382494 / 0.293841 (0.088653) | 0.031881 / 0.128546 (-0.096665) | 0.011746 / 0.075646 (-0.063901) | 0.087087 / 0.419271 (-0.332185) | 0.043108 / 0.043533 (-0.000425) | 0.344103 / 0.255139 (0.088964) | 0.366613 / 0.283200 (0.083413) | 0.090399 / 0.141683 (-0.051284) | 1.492675 / 1.452155 (0.040520) | 1.588666 / 1.492716 (0.095950) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191859 / 0.018006 (0.173853) | 0.412514 / 0.000490 (0.412025) | 0.001953 / 0.000200 (0.001753) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025159 / 0.037411 (-0.012252) | 0.100125 / 0.014526 (0.085599) | 0.106000 / 0.176557 (-0.070556) | 0.160710 / 0.737135 (-0.576425) | 0.110449 / 0.296338 (-0.185889) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436636 / 0.215209 (0.221427) | 4.364597 / 2.077655 (2.286942) | 2.077492 / 1.504120 (0.573372) | 1.868248 / 1.541195 (0.327053) | 1.911218 / 1.468490 (0.442728) | 0.700306 / 4.584777 (-3.884471) | 3.385428 / 3.745712 (-0.360284) | 2.965384 / 5.269862 (-2.304478) | 1.522093 / 4.565676 (-3.043583) | 0.082805 / 0.424275 (-0.341470) | 0.012432 / 0.007607 (0.004825) | 0.538478 / 0.226044 (0.312433) | 5.383207 / 2.268929 (3.114278) | 2.525177 / 55.444624 (-52.919447) | 2.179632 / 6.876477 (-4.696845) | 2.280768 / 2.142072 (0.138695) | 0.805869 / 4.805227 (-3.999358) | 0.152716 / 6.500664 (-6.347948) | 0.067848 / 0.075469 (-0.007621) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.318899 / 1.841788 (-0.522889) | 14.416310 / 8.074308 (6.342002) | 14.172804 / 10.191392 (3.981412) | 0.141729 / 0.680424 (-0.538695) | 0.016785 / 0.534201 (-0.517416) | 0.378626 / 0.579283 (-0.200657) | 0.387153 / 0.434364 (-0.047211) | 0.439950 / 0.540337 (-0.100388) | 0.523958 / 1.386936 (-0.862978) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7c3a9b057c476c40d157bd7a5d57f49066239df0 \"CML watermark\")\n" ]
"2023-04-13T20:45:19Z"
"2023-04-14T13:15:38Z"
"2023-04-14T13:08:42Z"
CONTRIBUTOR
null
Fixes a broken link in the use_with_pytorch docs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5746/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5746/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5746.diff", "html_url": "https://github.com/huggingface/datasets/pull/5746", "merged_at": "2023-04-14T13:08:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/5746.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5746" }
true
https://api.github.com/repos/huggingface/datasets/issues/5745
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5745/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5745/comments
https://api.github.com/repos/huggingface/datasets/issues/5745/events
https://github.com/huggingface/datasets/pull/5745
1,667,086,143
PR_kwDODunzps5ORE2n
5,745
[BUG FIX] Issue 5744
{ "avatar_url": "https://avatars.githubusercontent.com/u/15572698?v=4", "events_url": "https://api.github.com/users/keyboardAnt/events{/privacy}", "followers_url": "https://api.github.com/users/keyboardAnt/followers", "following_url": "https://api.github.com/users/keyboardAnt/following{/other_user}", "gists_url": "https://api.github.com/users/keyboardAnt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/keyboardAnt", "id": 15572698, "login": "keyboardAnt", "node_id": "MDQ6VXNlcjE1NTcyNjk4", "organizations_url": "https://api.github.com/users/keyboardAnt/orgs", "received_events_url": "https://api.github.com/users/keyboardAnt/received_events", "repos_url": "https://api.github.com/users/keyboardAnt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/keyboardAnt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keyboardAnt/subscriptions", "type": "User", "url": "https://api.github.com/users/keyboardAnt" }
[]
open
false
null
[]
null
[ "Have met the same problem with datasets==2.8.0, pandas==2.0.0. It could be solved by installing the latest version of datasets or using datasets==2.8.0, pandas==1.5.3.", "Pandas 2.0.0 has removed support to passing `mangle_dupe_cols`.\r\n\r\nHowever, our `datasets` library does not use this parameter: it only passes it to pandas if the user passes it to `load_dataset`.\r\n\r\nYou should better:\r\n- Either \"take steps to stop the use of 'mangle_dupe_cols'\" (as it was suggested in the deprecation warning in pandas-1.5.3)\r\n- Or pin pandas (< 2.0.0) in your local requirements file\r\n\r\nPlease note that from `datasets` library, we don't want to force users to use a specific pandas version. We would like to support users as well:\r\n- that use pandas < 1.5.3\r\n- that use pandas >= 2.0.0 and that do not pass the 'mangle_dupe_cols' parameter", "`datasets` 2.11 doesn't pass `mangle_dupe_cols` unless the user specifies it indeed, so I think we're fine" ]
"2023-04-13T20:29:55Z"
"2023-04-21T15:22:43Z"
null
NONE
null
A temporary fix for https://github.com/huggingface/datasets/issues/5744.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5745/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5745/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5745.diff", "html_url": "https://github.com/huggingface/datasets/pull/5745", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5745.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5745" }
true
https://api.github.com/repos/huggingface/datasets/issues/5744
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5744/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5744/comments
https://api.github.com/repos/huggingface/datasets/issues/5744/events
https://github.com/huggingface/datasets/issues/5744
1,667,076,620
I_kwDODunzps5jXZIM
5,744
[BUG] With Pandas 2.0.0, `load_dataset` raises `TypeError: read_csv() got an unexpected keyword argument 'mangle_dupe_cols'`
{ "avatar_url": "https://avatars.githubusercontent.com/u/15572698?v=4", "events_url": "https://api.github.com/users/keyboardAnt/events{/privacy}", "followers_url": "https://api.github.com/users/keyboardAnt/followers", "following_url": "https://api.github.com/users/keyboardAnt/following{/other_user}", "gists_url": "https://api.github.com/users/keyboardAnt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/keyboardAnt", "id": 15572698, "login": "keyboardAnt", "node_id": "MDQ6VXNlcjE1NTcyNjk4", "organizations_url": "https://api.github.com/users/keyboardAnt/orgs", "received_events_url": "https://api.github.com/users/keyboardAnt/received_events", "repos_url": "https://api.github.com/users/keyboardAnt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/keyboardAnt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keyboardAnt/subscriptions", "type": "User", "url": "https://api.github.com/users/keyboardAnt" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Thanks for reporting, @keyboardAnt.\r\n\r\nWe haven't noticed any crash in our CI tests. Could you please indicate specifically the `load_dataset` command that crashes in your side, so that we can reproduce it?", "This has been fixed in `datasets` 2.11" ]
"2023-04-13T20:21:28Z"
"2023-07-06T17:01:59Z"
"2023-07-06T17:01:59Z"
NONE
null
The `load_dataset` function with Pandas `1.5.3` has no issue (just a FutureWarning) but crashes with Pandas `2.0.0`. For your convenience, I opened a draft Pull Request to fix it quickly: https://github.com/huggingface/datasets/pull/5745 --- * The FutureWarning mentioned above: ``` FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols' ```
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/5744/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5744/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5743
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5743/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5743/comments
https://api.github.com/repos/huggingface/datasets/issues/5743/events
https://github.com/huggingface/datasets/issues/5743
1,666,843,832
I_kwDODunzps5jWgS4
5,743
dataclass.py in virtual environment is overriding the stdlib module "dataclasses"
{ "avatar_url": "https://avatars.githubusercontent.com/u/71216295?v=4", "events_url": "https://api.github.com/users/syedabdullahhassan/events{/privacy}", "followers_url": "https://api.github.com/users/syedabdullahhassan/followers", "following_url": "https://api.github.com/users/syedabdullahhassan/following{/other_user}", "gists_url": "https://api.github.com/users/syedabdullahhassan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/syedabdullahhassan", "id": 71216295, "login": "syedabdullahhassan", "node_id": "MDQ6VXNlcjcxMjE2Mjk1", "organizations_url": "https://api.github.com/users/syedabdullahhassan/orgs", "received_events_url": "https://api.github.com/users/syedabdullahhassan/received_events", "repos_url": "https://api.github.com/users/syedabdullahhassan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/syedabdullahhassan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/syedabdullahhassan/subscriptions", "type": "User", "url": "https://api.github.com/users/syedabdullahhassan" }
[]
closed
false
null
[]
null
[ "We no longer depend on `dataclasses` (for almost a year), so I don't think our package is the problematic one. \r\n\r\nI think it makes more sense to raise this issue in the `dataclasses` repo: https://github.com/ericvsmith/dataclasses." ]
"2023-04-13T17:28:33Z"
"2023-04-17T12:23:18Z"
"2023-04-17T12:23:18Z"
NONE
null
### Describe the bug "e:\Krish_naik\FSDSRegression\venv\Lib\dataclasses.py" is overriding the stdlib module "dataclasses" ### Steps to reproduce the bug module issue ### Expected behavior overriding the stdlib module "dataclasses" ### Environment info VS code
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5743/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5743/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5742
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5742/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5742/comments
https://api.github.com/repos/huggingface/datasets/issues/5742/events
https://github.com/huggingface/datasets/pull/5742
1,666,209,738
PR_kwDODunzps5OOH-W
5,742
Warning specifying future change in to_tf_dataset behaviour
{ "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/amyeroberts", "id": 22614925, "login": "amyeroberts", "node_id": "MDQ6VXNlcjIyNjE0OTI1", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "repos_url": "https://api.github.com/users/amyeroberts/repos", "site_admin": false, "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "type": "User", "url": "https://api.github.com/users/amyeroberts" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006693 / 0.011353 (-0.004660) | 0.004586 / 0.011008 (-0.006422) | 0.097238 / 0.038508 (0.058730) | 0.027912 / 0.023109 (0.004802) | 0.347339 / 0.275898 (0.071441) | 0.393847 / 0.323480 (0.070368) | 0.005105 / 0.007986 (-0.002880) | 0.004750 / 0.004328 (0.000422) | 0.074671 / 0.004250 (0.070421) | 0.037912 / 0.037052 (0.000860) | 0.368973 / 0.258489 (0.110483) | 0.403983 / 0.293841 (0.110142) | 0.030817 / 0.128546 (-0.097730) | 0.011813 / 0.075646 (-0.063833) | 0.324470 / 0.419271 (-0.094802) | 0.044232 / 0.043533 (0.000699) | 0.347623 / 0.255139 (0.092484) | 0.382458 / 0.283200 (0.099259) | 0.086603 / 0.141683 (-0.055080) | 1.485778 / 1.452155 (0.033623) | 1.549776 / 1.492716 (0.057059) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200154 / 0.018006 (0.182147) | 0.440645 / 0.000490 (0.440155) | 0.003664 / 0.000200 (0.003464) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023635 / 0.037411 (-0.013776) | 0.094969 / 0.014526 (0.080443) | 0.103630 / 0.176557 (-0.072927) | 0.168655 / 0.737135 (-0.568480) | 0.105850 / 0.296338 (-0.190488) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425224 / 0.215209 (0.210015) | 4.236618 / 2.077655 (2.158963) | 1.917091 
/ 1.504120 (0.412971) | 1.746984 / 1.541195 (0.205789) | 1.817766 / 1.468490 (0.349276) | 0.700989 / 4.584777 (-3.883788) | 3.412577 / 3.745712 (-0.333135) | 3.049311 / 5.269862 (-2.220551) | 1.607692 / 4.565676 (-2.957984) | 0.083410 / 0.424275 (-0.340865) | 0.012601 / 0.007607 (0.004994) | 0.528244 / 0.226044 (0.302200) | 5.284134 / 2.268929 (3.015206) | 2.391885 / 55.444624 (-53.052740) | 2.020018 / 6.876477 (-4.856459) | 2.105908 / 2.142072 (-0.036164) | 0.801262 / 4.805227 (-4.003965) | 0.151467 / 6.500664 (-6.349197) | 0.066529 / 0.075469 (-0.008940) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.203894 / 1.841788 (-0.637894) | 13.827561 / 8.074308 (5.753253) | 14.136730 / 10.191392 (3.945338) | 0.143829 / 0.680424 (-0.536595) | 0.016410 / 0.534201 (-0.517791) | 0.378194 / 0.579283 (-0.201089) | 0.391235 / 0.434364 (-0.043129) | 0.439261 / 0.540337 (-0.101076) | 0.527181 / 1.386936 (-0.859755) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006639 / 0.011353 (-0.004714) | 0.004469 / 0.011008 (-0.006540) | 0.076495 / 0.038508 (0.037987) | 0.027880 / 0.023109 (0.004771) | 0.342807 / 0.275898 (0.066909) | 0.374258 / 0.323480 (0.050778) | 0.005543 / 0.007986 (-0.002443) | 0.003362 / 0.004328 (-0.000966) | 0.075064 / 0.004250 (0.070813) | 0.039209 / 0.037052 (0.002156) | 0.342490 / 0.258489 (0.084001) | 0.382135 / 0.293841 (0.088294) | 0.030356 / 0.128546 (-0.098191) | 0.011762 / 0.075646 (-0.063884) | 0.086031 / 0.419271 (-0.333241) | 0.041991 / 0.043533 (-0.001542) | 0.340323 / 0.255139 (0.085184) | 0.364160 / 0.283200 (0.080961) | 0.088483 / 0.141683 (-0.053200) | 1.502836 / 1.452155 (0.050681) | 1.570438 / 1.492716 (0.077722) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218486 / 0.018006 (0.200480) | 0.405251 / 0.000490 (0.404761) | 0.000398 / 0.000200 (0.000198) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025738 / 0.037411 (-0.011673) | 0.100390 / 0.014526 (0.085864) | 0.109913 / 0.176557 (-0.066644) | 0.161310 / 0.737135 (-0.575826) | 0.113269 / 0.296338 (-0.183069) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438083 / 0.215209 (0.222874) | 4.377742 / 2.077655 (2.300087) | 2.069949 / 1.504120 (0.565829) | 1.857807 / 1.541195 (0.316613) | 1.881315 / 1.468490 (0.412825) | 0.695373 / 4.584777 (-3.889404) | 3.440287 / 3.745712 (-0.305425) | 1.842888 / 5.269862 (-3.426973) | 1.146655 / 4.565676 (-3.419022) | 0.083386 / 0.424275 (-0.340889) | 0.012290 / 0.007607 (0.004683) | 0.545672 / 0.226044 (0.319628) | 5.469568 / 2.268929 (3.200639) | 2.511886 / 55.444624 (-52.932739) | 2.184210 / 6.876477 (-4.692267) | 2.329822 / 2.142072 (0.187749) | 0.804114 / 4.805227 (-4.001114) | 0.151651 / 6.500664 (-6.349013) | 0.067269 / 0.075469 (-0.008200) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272564 / 1.841788 (-0.569223) | 14.180708 / 8.074308 (6.106400) | 14.181657 / 10.191392 (3.990265) | 0.131443 / 0.680424 (-0.548981) | 0.016513 / 0.534201 (-0.517688) | 0.383786 / 0.579283 (-0.195497) | 0.397678 / 0.434364 (-0.036686) | 0.447003 / 0.540337 (-0.093334) | 0.539453 / 1.386936 (-0.847483) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#649d5a3315f9e7666713b6affe318ee00c7163a0 \"CML watermark\")\n" ]
"2023-04-13T11:10:00Z"
"2023-04-21T13:18:14Z"
"2023-04-21T13:11:09Z"
CONTRIBUTOR
null
Adds a warning specifying the future change in `to_tf_dataset` behaviour once #5602 is merged in
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5742/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5742/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5742.diff", "html_url": "https://github.com/huggingface/datasets/pull/5742", "merged_at": "2023-04-21T13:11:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/5742.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5742" }
true
https://api.github.com/repos/huggingface/datasets/issues/5741
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5741/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5741/comments
https://api.github.com/repos/huggingface/datasets/issues/5741/events
https://github.com/huggingface/datasets/pull/5741
1,665,860,919
PR_kwDODunzps5OM9nZ
5,741
Fix CI warnings
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007448 / 0.011353 (-0.003905) | 0.005182 / 0.011008 (-0.005826) | 0.098718 / 0.038508 (0.060210) | 0.034594 / 0.023109 (0.011485) | 0.317301 / 0.275898 (0.041403) | 0.357800 / 0.323480 (0.034320) | 0.005860 / 0.007986 (-0.002126) | 0.004267 / 0.004328 (-0.000061) | 0.074876 / 0.004250 (0.070626) | 0.048002 / 0.037052 (0.010950) | 0.333360 / 0.258489 (0.074871) | 0.362080 / 0.293841 (0.068239) | 0.035957 / 0.128546 (-0.092589) | 0.012245 / 0.075646 (-0.063401) | 0.332970 / 0.419271 (-0.086301) | 0.050825 / 0.043533 (0.007293) | 0.313936 / 0.255139 (0.058797) | 0.340684 / 0.283200 (0.057485) | 0.106630 / 0.141683 (-0.035053) | 1.427898 / 1.452155 (-0.024257) | 1.547518 / 1.492716 (0.054801) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296952 / 0.018006 (0.278945) | 0.515708 / 0.000490 (0.515218) | 0.004225 / 0.000200 (0.004025) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029365 / 0.037411 (-0.008046) | 0.111142 / 0.014526 (0.096616) | 0.124414 / 0.176557 (-0.052142) | 0.185227 / 0.737135 (-0.551908) | 0.129545 / 0.296338 (-0.166793) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403303 / 0.215209 (0.188094) | 4.044138 / 2.077655 (1.966483) | 
1.803622 / 1.504120 (0.299502) | 1.615436 / 1.541195 (0.074242) | 1.703576 / 1.468490 (0.235086) | 0.706398 / 4.584777 (-3.878379) | 3.912995 / 3.745712 (0.167283) | 4.004575 / 5.269862 (-1.265287) | 2.101592 / 4.565676 (-2.464085) | 0.087280 / 0.424275 (-0.336995) | 0.012564 / 0.007607 (0.004957) | 0.508484 / 0.226044 (0.282440) | 5.089351 / 2.268929 (2.820422) | 2.269022 / 55.444624 (-53.175602) | 1.933375 / 6.876477 (-4.943102) | 2.136783 / 2.142072 (-0.005289) | 0.862624 / 4.805227 (-3.942603) | 0.172107 / 6.500664 (-6.328557) | 0.066694 / 0.075469 (-0.008775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.172513 / 1.841788 (-0.669275) | 15.877519 / 8.074308 (7.803211) | 14.687476 / 10.191392 (4.496084) | 0.189392 / 0.680424 (-0.491032) | 0.017334 / 0.534201 (-0.516866) | 0.420201 / 0.579283 (-0.159082) | 0.418502 / 0.434364 (-0.015862) | 0.489130 / 0.540337 (-0.051207) | 0.580678 / 1.386936 (-0.806258) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007942 / 0.011353 (-0.003411) | 0.005312 / 0.011008 (-0.005696) | 0.074684 / 0.038508 (0.036176) | 0.035952 / 0.023109 (0.012843) | 0.349672 / 0.275898 (0.073774) | 0.377157 / 0.323480 (0.053678) | 0.006399 / 0.007986 (-0.001586) | 0.005769 / 0.004328 (0.001441) | 0.074283 / 0.004250 (0.070032) | 0.053217 / 0.037052 (0.016165) | 0.342545 / 0.258489 (0.084056) | 0.383663 / 0.293841 (0.089822) | 0.037234 / 0.128546 (-0.091312) | 0.012349 / 0.075646 (-0.063298) | 0.086522 / 0.419271 (-0.332749) | 0.049888 / 0.043533 (0.006355) | 0.337686 / 0.255139 (0.082547) | 0.361564 / 0.283200 (0.078365) | 0.104902 / 0.141683 (-0.036781) | 1.478259 / 1.452155 (0.026104) | 1.576376 / 1.492716 (0.083660) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.339760 / 0.018006 (0.321753) | 0.530946 / 0.000490 (0.530456) | 0.000474 / 0.000200 (0.000274) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029685 / 0.037411 (-0.007726) | 0.109409 / 0.014526 (0.094883) | 0.125579 / 0.176557 (-0.050978) | 0.175378 / 0.737135 (-0.561757) | 0.130672 / 0.296338 (-0.165667) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428456 / 0.215209 (0.213247) | 4.238731 / 2.077655 (2.161077) | 2.046703 / 1.504120 (0.542583) | 1.850701 / 1.541195 (0.309506) | 1.909290 / 1.468490 (0.440800) | 0.714314 / 4.584777 (-3.870463) | 3.816056 / 3.745712 (0.070344) | 2.118567 / 5.269862 (-3.151295) | 1.348017 / 4.565676 (-3.217659) | 0.087140 / 0.424275 (-0.337135) | 0.012546 / 0.007607 (0.004938) | 0.538041 / 0.226044 (0.311997) | 5.381822 / 2.268929 (3.112893) | 2.525685 / 55.444624 (-52.918939) | 2.178659 / 6.876477 (-4.697817) | 2.381054 / 2.142072 (0.238981) | 0.844404 / 4.805227 (-3.960823) | 0.171802 / 6.500664 (-6.328862) | 0.065630 / 0.075469 (-0.009839) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262187 / 1.841788 (-0.579600) | 16.197668 / 8.074308 (8.123360) | 15.148636 / 10.191392 (4.957244) | 0.152601 / 0.680424 (-0.527823) | 0.020238 / 0.534201 (-0.513963) | 0.420141 / 0.579283 (-0.159142) | 0.416295 / 0.434364 (-0.018068) | 0.487051 / 0.540337 (-0.053286) | 0.581942 / 1.386936 (-0.804994) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9615e5af75b190c4e7b66792f9ba444f352765a0 \"CML watermark\")\n" ]
"2023-04-13T07:17:02Z"
"2023-04-13T09:48:10Z"
"2023-04-13T09:40:50Z"
MEMBER
null
Fix warnings in our CI tests.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5741/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5741/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5741.diff", "html_url": "https://github.com/huggingface/datasets/pull/5741", "merged_at": "2023-04-13T09:40:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/5741.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5741" }
true
https://api.github.com/repos/huggingface/datasets/issues/5740
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5740/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5740/comments
https://api.github.com/repos/huggingface/datasets/issues/5740/events
https://github.com/huggingface/datasets/pull/5740
1,664,132,130
PR_kwDODunzps5OHI08
5,740
Fix CI mock filesystem fixtures
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007003 / 0.011353 (-0.004350) | 0.004854 / 0.011008 (-0.006154) | 0.096982 / 0.038508 (0.058474) | 0.033218 / 0.023109 (0.010109) | 0.314088 / 0.275898 (0.038190) | 0.351315 / 0.323480 (0.027835) | 0.005679 / 0.007986 (-0.002307) | 0.005404 / 0.004328 (0.001075) | 0.071773 / 0.004250 (0.067522) | 0.044593 / 0.037052 (0.007540) | 0.323643 / 0.258489 (0.065154) | 0.357172 / 0.293841 (0.063331) | 0.036782 / 0.128546 (-0.091764) | 0.012146 / 0.075646 (-0.063501) | 0.334874 / 0.419271 (-0.084397) | 0.051475 / 0.043533 (0.007942) | 0.305949 / 0.255139 (0.050810) | 0.339326 / 0.283200 (0.056126) | 0.101509 / 0.141683 (-0.040174) | 1.458254 / 1.452155 (0.006099) | 1.535252 / 1.492716 (0.042535) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.264837 / 0.018006 (0.246831) | 0.441444 / 0.000490 (0.440955) | 0.003331 / 0.000200 (0.003131) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026529 / 0.037411 (-0.010882) | 0.105924 / 0.014526 (0.091398) | 0.117191 / 0.176557 (-0.059365) | 0.176606 / 0.737135 (-0.560529) | 0.123452 / 0.296338 (-0.172887) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412351 / 0.215209 (0.197142) | 4.135468 / 2.077655 (2.057813) | 1.912820 / 1.504120 (0.408700) | 1.738993 / 1.541195 (0.197798) | 1.754228 / 1.468490 
(0.285738) | 0.692239 / 4.584777 (-3.892538) | 3.765672 / 3.745712 (0.019959) | 2.081141 / 5.269862 (-3.188720) | 1.425153 / 4.565676 (-3.140523) | 0.085055 / 0.424275 (-0.339220) | 0.011918 / 0.007607 (0.004311) | 0.517573 / 0.226044 (0.291529) | 5.179809 / 2.268929 (2.910881) | 2.471620 / 55.444624 (-52.973005) | 2.140634 / 6.876477 (-4.735843) | 2.200150 / 2.142072 (0.058077) | 0.831662 / 4.805227 (-3.973566) | 0.168828 / 6.500664 (-6.331836) | 0.062755 / 0.075469 (-0.012714) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196890 / 1.841788 (-0.644898) | 14.826423 / 8.074308 (6.752114) | 14.020782 / 10.191392 (3.829390) | 0.161275 / 0.680424 (-0.519149) | 0.017467 / 0.534201 (-0.516734) | 0.422278 / 0.579283 (-0.157005) | 0.424053 / 0.434364 (-0.010311) | 0.490768 / 0.540337 (-0.049570) | 0.584490 / 1.386936 (-0.802446) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007102 / 0.011353 (-0.004250) | 0.005145 / 0.011008 (-0.005863) | 0.073823 / 0.038508 (0.035315) | 0.032947 / 0.023109 (0.009838) | 0.336978 / 0.275898 (0.061080) | 0.368961 / 0.323480 (0.045481) | 0.006052 / 0.007986 (-0.001934) | 0.003970 / 0.004328 (-0.000358) | 0.072925 / 0.004250 (0.068674) | 0.044502 / 0.037052 (0.007450) | 0.340849 / 0.258489 (0.082360) | 0.381487 / 0.293841 (0.087646) | 0.037207 / 0.128546 (-0.091339) | 0.012095 / 0.075646 (-0.063551) | 0.085206 / 0.419271 (-0.334065) | 0.056236 / 0.043533 (0.012703) | 0.334048 / 0.255139 (0.078909) | 0.360442 / 0.283200 (0.077242) | 0.104402 / 0.141683 (-0.037281) | 1.446907 / 1.452155 (-0.005248) | 1.542430 / 1.492716 (0.049713) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238720 / 0.018006 (0.220714) | 0.445857 / 0.000490 (0.445367) | 0.009280 / 0.000200 (0.009080) | 0.000150 / 0.000054 (0.000095) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028414 / 0.037411 (-0.008998) | 0.110506 / 0.014526 (0.095981) | 0.124593 / 0.176557 (-0.051964) | 0.170951 / 0.737135 (-0.566184) | 0.128033 / 0.296338 (-0.168305) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426206 / 0.215209 (0.210997) | 4.267289 / 2.077655 (2.189634) | 2.026880 / 1.504120 (0.522760) | 1.844052 / 1.541195 (0.302858) | 1.897697 / 1.468490 (0.429207) | 0.713545 / 4.584777 (-3.871232) | 3.815052 / 3.745712 (0.069339) | 3.217091 / 5.269862 (-2.052770) | 1.790546 / 4.565676 (-2.775130) | 0.087501 / 0.424275 (-0.336774) | 0.012136 / 0.007607 (0.004529) | 0.534495 / 0.226044 (0.308451) | 5.325913 / 2.268929 (3.056984) | 2.484309 / 55.444624 (-52.960315) | 2.149721 / 6.876477 (-4.726756) | 2.158764 / 2.142072 (0.016692) | 0.855273 / 4.805227 (-3.949954) | 0.170374 / 6.500664 (-6.330290) | 0.064053 / 0.075469 (-0.011416) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253171 / 1.841788 (-0.588617) | 15.254562 / 8.074308 (7.180254) | 14.242119 / 10.191392 (4.050727) | 0.159298 / 0.680424 (-0.521126) | 0.017504 / 0.534201 (-0.516696) | 0.419710 / 0.579283 (-0.159574) | 0.417879 / 0.434364 (-0.016485) | 0.486328 / 0.540337 (-0.054009) | 0.578933 / 1.386936 (-0.808003) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bc38663c8e2c2b0b246791c3ed8bddbff163dd64 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008476 / 0.011353 (-0.002877) | 0.005745 / 0.011008 (-0.005263) | 0.115307 / 0.038508 (0.076799) | 0.039356 / 0.023109 (0.016247) | 0.367155 / 0.275898 (0.091257) | 0.422147 / 0.323480 (0.098667) | 0.006817 / 0.007986 (-0.001168) | 0.004652 / 0.004328 (0.000323) | 0.084045 / 0.004250 (0.079795) | 0.055483 / 0.037052 (0.018431) | 0.364249 / 0.258489 (0.105760) | 0.415975 / 0.293841 (0.122134) | 0.041322 / 0.128546 (-0.087224) | 0.014178 / 0.075646 (-0.061469) | 0.392658 / 0.419271 (-0.026614) | 0.060156 / 0.043533 (0.016623) | 0.373938 / 0.255139 (0.118799) | 0.397494 / 0.283200 (0.114294) | 0.113811 / 0.141683 (-0.027872) | 1.688581 / 1.452155 (0.236427) | 1.790374 / 1.492716 (0.297658) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222203 / 0.018006 (0.204196) | 0.471109 / 0.000490 (0.470619) | 0.007071 / 0.000200 (0.006871) | 0.000156 / 0.000054 (0.000102) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032112 / 0.037411 (-0.005299) | 0.118726 / 0.014526 (0.104200) | 0.134918 / 0.176557 (-0.041639) | 0.207766 / 0.737135 (-0.529369) | 0.139756 / 0.296338 (-0.156582) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.479858 / 0.215209 (0.264649) | 4.798428 / 2.077655 (2.720773) | 2.221573 / 1.504120 (0.717453) | 1.964956 / 1.541195 (0.423761) | 2.021763 / 1.468490 (0.553273) | 0.820401 / 4.584777 (-3.764376) | 4.533887 / 3.745712 (0.788175) | 4.121332 / 5.269862 (-1.148529) | 2.195807 / 4.565676 (-2.369869) | 0.103133 / 0.424275 (-0.321142) | 0.014620 / 0.007607 (0.007013) | 0.605012 / 0.226044 (0.378967) | 5.966623 / 2.268929 (3.697694) | 2.844118 / 55.444624 (-52.600506) | 2.463569 / 6.876477 (-4.412907) | 2.597177 / 2.142072 (0.455105) | 0.983201 / 4.805227 (-3.822026) | 0.199500 / 6.500664 (-6.301164) | 0.078387 / 0.075469 (0.002918) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.401083 / 1.841788 (-0.440705) | 17.258725 / 8.074308 (9.184417) | 16.825992 / 10.191392 (6.634600) | 0.216762 / 0.680424 (-0.463662) | 0.021135 / 0.534201 (-0.513066) | 0.513688 / 0.579283 (-0.065595) | 0.488892 / 0.434364 (0.054529) | 0.566745 / 0.540337 (0.026408) | 0.688958 / 1.386936 
(-0.697978) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007948 / 0.011353 (-0.003405) | 0.005981 / 0.011008 (-0.005027) | 0.084474 / 0.038508 (0.045966) | 0.037952 / 0.023109 (0.014843) | 0.383359 / 0.275898 (0.107461) | 0.409324 / 0.323480 (0.085844) | 0.006641 / 0.007986 (-0.001344) | 0.004785 / 0.004328 (0.000456) | 0.083214 / 0.004250 (0.078964) | 0.053177 / 0.037052 (0.016125) | 0.393147 / 0.258489 (0.134658) | 0.438496 / 0.293841 (0.144655) | 0.042090 / 0.128546 (-0.086456) | 0.013373 / 0.075646 (-0.062273) | 0.097585 / 0.419271 (-0.321686) | 0.056359 / 0.043533 (0.012826) | 0.378113 / 0.255139 (0.122974) | 0.403874 / 0.283200 (0.120674) | 0.123503 / 0.141683 (-0.018180) | 1.639557 / 1.452155 (0.187403) | 1.759787 / 1.492716 (0.267071) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242534 / 0.018006 (0.224528) | 0.459040 / 0.000490 (0.458550) | 0.000454 / 0.000200 (0.000254) | 0.000066 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031747 / 0.037411 (-0.005664) | 0.125823 / 0.014526 (0.111297) | 0.138985 / 0.176557 (-0.037571) | 0.194371 / 0.737135 (-0.542764) | 0.148905 / 0.296338 (-0.147433) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.508201 / 0.215209 (0.292992) | 5.007519 / 2.077655 (2.929865) | 2.412956 / 1.504120 (0.908836) | 2.143378 / 1.541195 (0.602183) | 2.192966 / 1.468490 (0.724476) | 0.828497 / 
4.584777 (-3.756280) | 4.496457 / 3.745712 (0.750745) | 2.397546 / 5.269862 (-2.872315) | 1.522889 / 4.565676 (-3.042787) | 0.099904 / 0.424275 (-0.324371) | 0.014561 / 0.007607 (0.006954) | 0.627417 / 0.226044 (0.401373) | 6.296441 / 2.268929 (4.027512) | 2.962858 / 55.444624 (-52.481767) | 2.543083 / 6.876477 (-4.333394) | 2.711884 / 2.142072 (0.569811) | 0.997969 / 4.805227 (-3.807259) | 0.200283 / 6.500664 (-6.300382) | 0.075934 / 0.075469 (0.000465) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.541707 / 1.841788 (-0.300081) | 17.791559 / 8.074308 (9.717251) | 16.782877 / 10.191392 (6.591485) | 0.171954 / 0.680424 (-0.508470) | 0.020506 / 0.534201 (-0.513695) | 0.504189 / 0.579283 (-0.075094) | 0.501655 / 0.434364 (0.067291) | 0.583120 / 0.540337 (0.042782) | 0.694931 / 1.386936 (-0.692005) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53355f308f4ffb9b4071f5d420b5c6767799ef1c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007613 / 0.011353 (-0.003740) | 0.005057 / 0.011008 (-0.005951) | 0.099147 / 0.038508 (0.060639) | 0.035358 / 0.023109 (0.012249) | 0.303442 / 0.275898 (0.027544) | 0.336898 / 0.323480 (0.013418) | 0.006216 / 0.007986 (-0.001770) | 0.004085 / 0.004328 (-0.000244) | 0.074567 / 0.004250 (0.070317) | 0.050917 / 0.037052 (0.013865) | 0.301786 / 0.258489 (0.043297) | 0.341362 / 0.293841 (0.047521) | 0.037019 / 0.128546 (-0.091528) | 0.011977 / 0.075646 (-0.063669) | 0.334688 / 0.419271 (-0.084583) | 0.051326 / 0.043533 (0.007793) | 0.299878 / 0.255139 (0.044739) | 0.325571 / 0.283200 (0.042371) | 0.110744 / 0.141683 (-0.030939) | 1.480898 / 1.452155 (0.028743) | 1.566917 / 1.492716 (0.074201) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253249 / 0.018006 (0.235242) | 0.558576 / 0.000490 (0.558086) | 0.003838 / 0.000200 (0.003638) | 0.000085 / 0.000054 
(0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028731 / 0.037411 (-0.008681) | 0.110643 / 0.014526 (0.096117) | 0.119560 / 0.176557 (-0.056996) | 0.178010 / 0.737135 (-0.559126) | 0.130286 / 0.296338 (-0.166053) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400190 / 0.215209 (0.184981) | 3.999326 / 2.077655 (1.921672) | 1.797332 / 1.504120 (0.293212) | 1.610808 / 1.541195 (0.069613) | 1.679949 / 1.468490 (0.211459) | 0.696539 / 4.584777 (-3.888238) | 3.784766 / 3.745712 (0.039054) | 2.205008 / 5.269862 (-3.064854) | 1.501697 / 4.565676 (-3.063979) | 0.085553 / 0.424275 (-0.338723) | 0.012223 / 0.007607 (0.004616) | 0.494858 / 0.226044 (0.268813) | 4.968535 / 2.268929 (2.699606) | 2.258759 / 55.444624 (-53.185865) | 1.926236 / 6.876477 (-4.950241) | 2.072155 / 2.142072 (-0.069917) | 0.838354 / 4.805227 (-3.966873) | 0.168810 / 6.500664 (-6.331854) | 0.064347 / 0.075469 (-0.011122) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.166696 / 1.841788 (-0.675091) | 14.721287 / 8.074308 (6.646979) | 14.319272 / 10.191392 (4.127880) | 0.144534 / 0.680424 (-0.535890) | 0.017502 / 0.534201 (-0.516699) | 0.422682 / 0.579283 (-0.156601) | 0.424426 / 0.434364 (-0.009938) | 0.493561 / 0.540337 (-0.046777) | 0.586765 / 1.386936 (-0.800171) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007764 / 0.011353 (-0.003589) | 0.005516 / 0.011008 (-0.005492) | 0.074745 / 0.038508 (0.036237) | 0.034364 / 0.023109 (0.011255) | 0.344318 / 0.275898 (0.068420) | 0.374779 / 0.323480 (0.051299) | 0.005904 / 0.007986 (-0.002082) | 0.004323 / 0.004328 (-0.000005) | 0.073191 / 0.004250 (0.068941) | 0.051549 / 0.037052 (0.014496) | 0.341792 / 0.258489 (0.083303) | 0.387576 / 0.293841 (0.093735) | 0.037483 / 0.128546 (-0.091063) | 0.012410 / 0.075646 (-0.063237) | 0.086480 / 0.419271 (-0.332791) | 0.050035 / 0.043533 (0.006502) | 0.335475 / 0.255139 (0.080336) | 0.361436 / 0.283200 (0.078236) | 0.106890 / 0.141683 (-0.034792) | 1.464032 / 1.452155 (0.011877) | 1.563490 / 1.492716 (0.070774) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268765 / 0.018006 (0.250758) | 0.563811 / 0.000490 (0.563321) | 0.004904 / 0.000200 (0.004704) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029885 / 0.037411 (-0.007526) | 0.113885 / 0.014526 (0.099359) | 0.124283 / 0.176557 (-0.052274) | 0.173619 / 0.737135 (-0.563517) | 0.131781 / 0.296338 (-0.164557) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420296 / 0.215209 (0.205087) | 4.167656 / 2.077655 (2.090001) | 1.982356 / 1.504120 (0.478237) | 1.792181 / 1.541195 (0.250986) | 1.871459 / 1.468490 (0.402969) | 0.707066 / 4.584777 (-3.877711) | 3.835922 / 3.745712 (0.090210) | 3.506796 / 5.269862 (-1.763066) | 1.857172 / 4.565676 (-2.708505) | 0.086219 / 0.424275 (-0.338056) | 0.012404 / 0.007607 (0.004796) | 0.512393 / 0.226044 (0.286348) | 5.111623 / 2.268929 (2.842695) | 2.493523 / 55.444624 (-52.951101) | 2.188220 / 6.876477 (-4.688257) | 2.319096 / 2.142072 (0.177024) | 0.844084 / 4.805227 (-3.961144) | 0.171130 / 6.500664 (-6.329534) | 0.065913 / 0.075469 (-0.009556) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284768 / 1.841788 (-0.557020) | 15.334610 / 8.074308 (7.260301) | 14.724436 / 10.191392 (4.533044) | 0.188425 / 0.680424 (-0.491999) | 0.017984 / 0.534201 (-0.516217) | 0.428150 / 0.579283 (-0.151133) | 0.429013 / 0.434364 (-0.005351) | 0.500818 / 0.540337 (-0.039519) | 0.592879 / 1.386936 (-0.794057) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ee68da958c2fab3a26d9f0efb1e207ecbcf7ce15 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006870 / 0.011353 (-0.004483) | 0.004702 / 0.011008 (-0.006306) | 0.099258 / 0.038508 (0.060750) | 0.029008 / 0.023109 (0.005899) | 0.330599 / 0.275898 (0.054701) | 0.361163 / 0.323480 (0.037683) | 0.005020 / 0.007986 (-0.002965) | 0.003474 / 0.004328 (-0.000855) | 0.075902 / 0.004250 (0.071651) | 0.037462 / 0.037052 (0.000410) | 0.336213 / 0.258489 (0.077724) | 0.370645 / 0.293841 (0.076804) | 0.032435 / 0.128546 (-0.096111) | 0.011686 / 0.075646 (-0.063960) | 0.326040 / 0.419271 (-0.093232) | 0.043750 / 0.043533 (0.000217) | 0.332629 / 0.255139 (0.077490) | 0.353302 / 0.283200 (0.070102) | 0.090421 / 0.141683 (-0.051262) | 1.470097 / 1.452155 (0.017942) | 1.544908 / 1.492716 (0.052191) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213418 / 0.018006 (0.195411) | 0.434808 / 0.000490 (0.434319) | 0.005949 / 0.000200 (0.005749) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023085 / 0.037411 (-0.014327) | 0.098222 / 0.014526 (0.083696) | 0.104543 / 0.176557 (-0.072013) | 0.165423 / 0.737135 (-0.571713) | 0.108732 / 0.296338 (-0.187606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433933 / 0.215209 
(0.218724) | 4.334358 / 2.077655 (2.256704) | 2.013984 / 1.504120 (0.509864) | 1.862981 / 1.541195 (0.321787) | 1.873936 / 1.468490 (0.405446) | 0.699857 / 4.584777 (-3.884920) | 3.417815 / 3.745712 (-0.327897) | 1.946403 / 5.269862 (-3.323459) | 1.308683 / 4.565676 (-3.256994) | 0.083297 / 0.424275 (-0.340978) | 0.012610 / 0.007607 (0.005003) | 0.540877 / 0.226044 (0.314832) | 5.408293 / 2.268929 (3.139365) | 2.529574 / 55.444624 (-52.915050) | 2.201047 / 6.876477 (-4.675429) | 2.392966 / 2.142072 (0.250894) | 0.812719 / 4.805227 (-3.992509) | 0.154013 / 6.500664 (-6.346651) | 0.067614 / 0.075469 (-0.007855) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.228150 / 1.841788 (-0.613638) | 14.037090 / 8.074308 (5.962782) | 14.259416 / 10.191392 (4.068024) | 0.155554 / 0.680424 (-0.524870) | 0.016521 / 0.534201 (-0.517680) | 0.379615 / 0.579283 (-0.199668) | 0.421352 / 0.434364 (-0.013012) | 0.446512 / 0.540337 (-0.093825) | 0.531802 / 1.386936 (-0.855134) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006629 / 0.011353 (-0.004724) | 0.004432 / 0.011008 (-0.006577) | 0.076662 / 0.038508 (0.038154) | 0.027674 / 0.023109 (0.004565) | 0.341667 / 0.275898 (0.065769) | 0.376493 / 0.323480 (0.053014) | 0.005076 / 0.007986 (-0.002910) | 0.004655 / 0.004328 (0.000326) | 0.075698 / 0.004250 (0.071448) | 0.036905 / 0.037052 (-0.000147) | 0.342394 / 0.258489 (0.083905) | 0.383330 / 0.293841 (0.089489) | 0.031729 / 0.128546 (-0.096817) | 0.011582 / 0.075646 (-0.064064) | 0.085721 / 0.419271 (-0.333551) | 0.042012 / 0.043533 (-0.001521) | 0.342063 / 0.255139 (0.086924) | 0.367335 / 0.283200 (0.084136) | 0.089641 / 0.141683 (-0.052042) | 1.520353 / 1.452155 (0.068198) | 1.643653 / 1.492716 (0.150937) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178995 / 0.018006 (0.160989) | 0.436544 / 0.000490 (0.436055) | 0.002311 / 0.000200 (0.002111) | 0.000081 / 0.000054 
(0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025386 / 0.037411 (-0.012026) | 0.099717 / 0.014526 (0.085192) | 0.110809 / 0.176557 (-0.065747) | 0.162931 / 0.737135 (-0.574204) | 0.110430 / 0.296338 (-0.185909) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438592 / 0.215209 (0.223382) | 4.372560 / 2.077655 (2.294905) | 2.069686 / 1.504120 (0.565567) | 1.860576 / 1.541195 (0.319382) | 1.898161 / 1.468490 (0.429671) | 0.698353 / 4.584777 (-3.886424) | 3.462440 / 3.745712 (-0.283272) | 1.868602 / 5.269862 (-3.401260) | 1.160498 / 4.565676 (-3.405179) | 0.082869 / 0.424275 (-0.341406) | 0.012690 / 0.007607 (0.005083) | 0.533278 / 0.226044 (0.307233) | 5.386214 / 2.268929 (3.117285) | 2.519243 / 55.444624 (-52.925382) | 2.171109 / 6.876477 (-4.705368) | 2.272617 / 2.142072 (0.130544) | 0.805843 / 4.805227 (-3.999384) | 0.152275 / 6.500664 (-6.348389) | 0.068038 / 0.075469 (-0.007431) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291967 / 1.841788 (-0.549821) | 14.386474 / 8.074308 (6.312166) | 14.180693 / 10.191392 (3.989301) | 0.131714 / 0.680424 (-0.548710) | 0.016596 / 0.534201 (-0.517605) | 0.384293 / 0.579283 (-0.194990) | 0.404051 / 0.434364 (-0.030313) | 0.452167 / 0.540337 (-0.088170) | 0.542718 / 1.386936 (-0.844218) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f9c770bb1a43fa7fe390286d7535266d3964d067 \"CML watermark\")\n" ]
"2023-04-12T08:52:35Z"
"2023-04-13T11:01:24Z"
"2023-04-13T10:54:13Z"
MEMBER
null
This PR fixes the fixtures of our CI mock filesystems. Before, we had to pass `clobber=True` to `fsspec.register_implementation` to overwrite the previously added "mock" filesystem that was still present. That meant the mock filesystem fixture was not working properly, because the previously added "mock" filesystem should have been deleted by the fixture. This PR fixes the mock filesystem fixtures so that the "mock" filesystem is properly deleted from the inner `fsspec` registry. Tests were added to check the correct behavior of the mock filesystem fixtures. Related to: - #5733
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5740/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5740/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5740.diff", "html_url": "https://github.com/huggingface/datasets/pull/5740", "merged_at": "2023-04-13T10:54:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/5740.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5740" }
true
https://api.github.com/repos/huggingface/datasets/issues/5739
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5739/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5739/comments
https://api.github.com/repos/huggingface/datasets/issues/5739/events
https://github.com/huggingface/datasets/issues/5739
1,663,762,901
I_kwDODunzps5jKwHV
5,739
weird result during dataset split when data path starts with `/data`
{ "avatar_url": "https://avatars.githubusercontent.com/u/1772912?v=4", "events_url": "https://api.github.com/users/ericxsun/events{/privacy}", "followers_url": "https://api.github.com/users/ericxsun/followers", "following_url": "https://api.github.com/users/ericxsun/following{/other_user}", "gists_url": "https://api.github.com/users/ericxsun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ericxsun", "id": 1772912, "login": "ericxsun", "node_id": "MDQ6VXNlcjE3NzI5MTI=", "organizations_url": "https://api.github.com/users/ericxsun/orgs", "received_events_url": "https://api.github.com/users/ericxsun/received_events", "repos_url": "https://api.github.com/users/ericxsun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ericxsun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ericxsun/subscriptions", "type": "User", "url": "https://api.github.com/users/ericxsun" }
[]
open
false
null
[]
null
[ "Same problem.", "hi! \r\nI think you can run python from `/data/train/raw/` directory and load dataset as `load_dataset(\"code_contests\")` to mitigate this issue as a workaround. \r\n@ericxsun Do you want to open a PR to fix the regex? As you already found the solution :) ", "> hi! I think you can run python from `/data/train/raw/` directory and load dataset as `load_dataset(\"code_contests\")` to mitigate this issue as a workaround. @ericxsun Do you want to open a PR to fix the regex? As you already found the solution :)\r\n\r\nSure, please see https://github.com/huggingface/datasets/pull/5748 @polinaeterna ", "I think `string_to_dict` is ok, and that the issue is that it gets `'/data2/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet'` as input instead of `'data/test-00000-of-00001-9c49eeff30aacaa8.parquet'`. The path should be relative to the directory being loaded by `load_dataset`" ]
"2023-04-12T04:51:35Z"
"2023-04-21T14:20:59Z"
null
NONE
null
### Describe the bug The regex defined here https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/utils/py_utils.py#L158 will cause a weird result during dataset split when data path starts with `/data` ### Steps to reproduce the bug 1. clone dataset into local path ``` cd /data/train/raw/ git lfs clone https://huggingface.co/datasets/deepmind/code_contests.git ls /data/train/raw/code_contests # README.md data dataset_infos.json ls /data/train/raw/code_contests/data # test-00000-of-00001-9c49eeff30aacaa8.parquet # train-[0-9]+-of-[0-9]+-xx.parquet # valid-00000-of-00001-5e672c5751f060d3.parquet ``` 2. loading data from local ``` from datasets import load_dataset dataset = load_dataset('/data/train/raw/code_contests') FileNotFoundError: Unable to resolve any data file that matches '['data/train/raw/code_contests/data/train-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*']' at /data/train/raw/code_contests with any supported extension ``` weird path `data/train/raw/code_contests/data/train-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*` While dive deep into `LocalDatasetModuleFactoryWithoutScript` defined in [load.py](https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/load.py#L627) and _get_data_files_patterns https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/data_files.py#L228. I found the weird behavior caused by `string_to_dict` 3. check `string_to_dict` ``` p = '/data/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet' split_pattern = 'data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*' string_to_dict(p, split_pattern) # {'split': 'train/raw/code_contests/data/test'} p = '/data2/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet' string_to_dict(p, split_pattern) {'split': 'test'} ``` go deep into string_to_dict https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/utils/py_utils.py#L158. 4. test the regex: <img width="680" alt="image" src="https://user-images.githubusercontent.com/1772912/231351129-75179f01-fb9f-4f12-8fa9-0dfcc3d5f3bd.png"> <img width="679" alt="image" src="https://user-images.githubusercontent.com/1772912/231351025-009f3d83-2cf3-4e15-9ed4-6b9663dcb2ee.png"> ### Expected behavior statement in `steps to reproduce the bug` 3. check `string_to_dict` ``` p = '/data/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet' split_pattern = 'data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*' string_to_dict(p, split_pattern) # {'split': 'train/raw/code_contests/data/test'} p = '/data2/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet' string_to_dict(p, split_pattern) {'split': 'test'} ``` ### Environment info - linux(debian) - python 3.7 - datasets 2.8.0
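The over-capture reported above can be reproduced with a few lines of plain `re`. The snippet below is a hedged illustration only, not the library's `string_to_dict` implementation nor the fix from the linked PR: it shows that a `{split}` group allowed to match `/` starts at the first `data/` in the absolute path and swallows the intermediate directories, while restricting the group to a single path component recovers the expected split name.

```python
import re

# Hypothetical patterns for illustration (not datasets' actual code).
# A split group that may contain "/" anchors on the first "data/" in the path:
greedy = r"data/(?P<split>.+?)-[0-9]{5}-of-[0-9]{5}.*\..*"
# Restricting the group to one path component makes the match start at the
# correct "data/" directory instead:
per_component = r"data/(?P<split>[^/]+)-[0-9]{5}-of-[0-9]{5}[^/]*\.[^/]*"

path = "/data/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet"
print(re.search(greedy, path).groupdict())         # {'split': 'train/raw/code_contests/data/test'}
print(re.search(per_component, path).groupdict())  # {'split': 'test'}
```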
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5739/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5739/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5738
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5738/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5738/comments
https://api.github.com/repos/huggingface/datasets/issues/5738/events
https://github.com/huggingface/datasets/issues/5738
1,663,477,690
I_kwDODunzps5jJqe6
5,738
load_dataset("text","dataset.txt") loads the wrong dataset!
{ "avatar_url": "https://avatars.githubusercontent.com/u/41713505?v=4", "events_url": "https://api.github.com/users/Tylersuard/events{/privacy}", "followers_url": "https://api.github.com/users/Tylersuard/followers", "following_url": "https://api.github.com/users/Tylersuard/following{/other_user}", "gists_url": "https://api.github.com/users/Tylersuard/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Tylersuard", "id": 41713505, "login": "Tylersuard", "node_id": "MDQ6VXNlcjQxNzEzNTA1", "organizations_url": "https://api.github.com/users/Tylersuard/orgs", "received_events_url": "https://api.github.com/users/Tylersuard/received_events", "repos_url": "https://api.github.com/users/Tylersuard/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Tylersuard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tylersuard/subscriptions", "type": "User", "url": "https://api.github.com/users/Tylersuard" }
[]
closed
false
null
[]
null
[ "You need to provide a text file as `data_files`, not as a configuration:\r\n\r\n```python\r\nmy_dataset = load_dataset(\"text\", data_files=\"TextFile.txt\")\r\n```\r\n\r\nOtherwise, since `data_files` is `None`, it picks up Colab's sample datasets from the `content` dir." ]
"2023-04-12T01:07:46Z"
"2023-04-19T12:08:27Z"
"2023-04-19T12:08:27Z"
NONE
null
### Describe the bug I am trying to load my own custom text dataset using the load_dataset function. My dataset is a bunch of ordered text, think along the lines of Shakespeare plays. However, after I load the dataset and inspect it, the dataset is a table with a bunch of latitude and longitude values! What in the world?? ### Steps to reproduce the bug my_dataset = load_dataset("text","TextFile.txt") my_dataset ### Expected behavior I expected the dataset to contain the actual data from the text document that I used. ### Environment info Google Colab
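A hedged sketch of the fix suggested in the comments above: the second positional argument of `load_dataset` is a configuration name, not a file path, so the text file has to be passed through `data_files`. The file name is the one from the report; substitute your own.

```python
from datasets import load_dataset

# Pass the text file via `data_files` instead of as a configuration name.
my_dataset = load_dataset("text", data_files="TextFile.txt")

print(my_dataset)               # DatasetDict with a "train" split built from the file
print(my_dataset["train"][0])   # first line of the file, e.g. {'text': '...'}
```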
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5738/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5738/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5737
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5737/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5737/comments
https://api.github.com/repos/huggingface/datasets/issues/5737/events
https://github.com/huggingface/datasets/issues/5737
1,662,919,811
I_kwDODunzps5jHiSD
5,737
ClassLabel Error
{ "avatar_url": "https://avatars.githubusercontent.com/u/10896776?v=4", "events_url": "https://api.github.com/users/mrcaelumn/events{/privacy}", "followers_url": "https://api.github.com/users/mrcaelumn/followers", "following_url": "https://api.github.com/users/mrcaelumn/following{/other_user}", "gists_url": "https://api.github.com/users/mrcaelumn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mrcaelumn", "id": 10896776, "login": "mrcaelumn", "node_id": "MDQ6VXNlcjEwODk2Nzc2", "organizations_url": "https://api.github.com/users/mrcaelumn/orgs", "received_events_url": "https://api.github.com/users/mrcaelumn/received_events", "repos_url": "https://api.github.com/users/mrcaelumn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mrcaelumn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrcaelumn/subscriptions", "type": "User", "url": "https://api.github.com/users/mrcaelumn" }
[]
closed
false
null
[]
null
[ "Hi, you can use the `cast_column` function to change the feature type from a `Value(int64)` to `ClassLabel`:\r\n\r\n```py\r\ndataset = dataset.cast_column(\"label\", ClassLabel(names=[\"label_1\", \"label_2\", \"label_3\"]))\r\nprint(dataset.features)\r\n{'text': Value(dtype='string', id=None),\r\n 'label': ClassLabel(names=['label_1', 'label_2', 'label_3'], id=None)}\r\n```", "thank you @stevhliu, its worked. " ]
"2023-04-11T17:14:13Z"
"2023-04-13T16:49:57Z"
"2023-04-13T16:49:57Z"
NONE
null
### Describe the bug I am still getting the error "call() takes 1 positional argument but 2 were given" even after ensuring that the value being passed to the label object is a single value and that the ClassLabel object has been created with the correct number of label classes. ### Steps to reproduce the bug from datasets import ClassLabel, Dataset 1. Create the ClassLabel object with 3 label values and their corresponding names label_test = ClassLabel(num_classes=3, names=["label_1", "label_2", "label_3"]) 2. Define a dictionary with text and label fields data = { 'text': ['text_1', 'text_2', 'text_3'], 'label': [1, 2, 3], } 3. Create a Hugging Face dataset from the dictionary dataset = Dataset.from_dict(data) print(dataset.features) 4. Map the label values to their corresponding label names using the label object dataset = dataset.map(lambda example: {'text': example['text'], 'label': label_test(example['label'])}) 5. Print the resulting dataset print(dataset) ### Expected behavior I expect the label type to be ClassLabel instead of int. ### Environment info python 3.9 google colab
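A hedged sketch of the approach suggested in the comments above: instead of calling the `ClassLabel` object inside `map`, build the dataset with integer ids and cast the column. Note that `ClassLabel` ids are 0-based, so the sketch uses `[0, 1, 2]` rather than the `[1, 2, 3]` from the report.

```python
from datasets import ClassLabel, Dataset

# Cast the integer column to ClassLabel rather than mapping each value
# through the ClassLabel object.
data = {"text": ["text_1", "text_2", "text_3"], "label": [0, 1, 2]}  # 0-based ids
dataset = Dataset.from_dict(data)
dataset = dataset.cast_column("label", ClassLabel(names=["label_1", "label_2", "label_3"]))

print(dataset.features)
# {'text': Value(dtype='string', id=None),
#  'label': ClassLabel(names=['label_1', 'label_2', 'label_3'], id=None)}
print(dataset[0])  # {'text': 'text_1', 'label': 0}
```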
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5737/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5737/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5736
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5736/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5736/comments
https://api.github.com/repos/huggingface/datasets/issues/5736/events
https://github.com/huggingface/datasets/issues/5736
1,662,286,061
I_kwDODunzps5jFHjt
5,736
FORCE_REDOWNLOAD raises "Directory not empty" exception on second run
{ "avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4", "events_url": "https://api.github.com/users/rcasero/events{/privacy}", "followers_url": "https://api.github.com/users/rcasero/followers", "following_url": "https://api.github.com/users/rcasero/following{/other_user}", "gists_url": "https://api.github.com/users/rcasero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rcasero", "id": 1219084, "login": "rcasero", "node_id": "MDQ6VXNlcjEyMTkwODQ=", "organizations_url": "https://api.github.com/users/rcasero/orgs", "received_events_url": "https://api.github.com/users/rcasero/received_events", "repos_url": "https://api.github.com/users/rcasero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rcasero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcasero/subscriptions", "type": "User", "url": "https://api.github.com/users/rcasero" }
[]
open
false
null
[]
null
[ "Hi ! I couldn't reproduce your issue :/\r\n\r\nIt seems that `shutil.rmtree` failed. It is supposed to work even if the directory is not empty, but you still end up with `OSError: [Errno 39] Directory not empty:`. Can you make sure another process is not using this directory at the same time ?" ]
"2023-04-11T11:29:15Z"
"2023-04-21T15:27:40Z"
null
NONE
null
### Describe the bug Running `load_dataset(..., download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)` twice raises a `Directory not empty` exception on the second run. ### Steps to reproduce the bug I cannot test this on datasets v2.11.0 due to #5711, but this happens in v2.10.1. 1. Set up a script `my_dataset.py` to generate and load an offline dataset. 2. Load it with ```python ds = datasets.load_dataset(path=/path/to/my_dataset.py, name='toy', data_dir=/path/to/my_dataset.py, cache_dir=cache_dir, download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD, ) ``` It loads fine ``` Dataset my_dataset downloaded and prepared to /path/to/cache/toy-..e05e/1.0.0/...5b4c. Subsequent calls will reuse this data. ``` 3. Try to load it again with the same snippet and the splits are generated, but at the end of the loading process it raises the error ``` 2023-04-11 12:10:19,965: DEBUG: open file: /path/to/cache/toy-..e05e/1.0.0/...5b4c.incomplete/dataset_info.json Traceback (most recent call last): File "<string>", line 2, in <module> File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/load.py", line 1782, in load_dataset builder_instance.download_and_prepare( File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/builder.py", line 852, in download_and_prepare with incomplete_dir(self._output_dir) as tmp_output_dir: File "/path/to/conda/environment/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/builder.py", line 826, in incomplete_dir shutil.rmtree(dirname) File "/path/to/conda/environment/lib/python3.10/shutil.py", line 730, in rmtree onerror(os.rmdir, path, sys.exc_info()) File "/path/to/conda/environment/lib/python3.10/shutil.py", line 728, in rmtree os.rmdir(path) OSError: [Errno 39] Directory not empty: '/path/to/cache/toy-..e05e/1.0.0/...5b4c' ``` ### Expected behavior Regenerate the dataset from scratch and reload it. ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.8 - PyArrow version: 11.0.0 - Pandas version: 1.5.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5736/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5736/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5735
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5735/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5735/comments
https://api.github.com/repos/huggingface/datasets/issues/5735/events
https://github.com/huggingface/datasets/pull/5735
1,662,150,903
PR_kwDODunzps5OAY3A
5,735
Implement sharding on merged iterable datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4", "events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}", "followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers", "following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}", "gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Hubert-Bonisseur", "id": 48770768, "login": "Hubert-Bonisseur", "node_id": "MDQ6VXNlcjQ4NzcwNzY4", "organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs", "received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events", "repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions", "type": "User", "url": "https://api.github.com/users/Hubert-Bonisseur" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi ! What if one of the sub-iterables only has one shard ? In that case I don't think we'd end up with a correctly interleaved dataset, since only rank 0 would yield examples from this sub-iterable", "Hi ! \r\nI just tested this out with the code below and it seems to be ok. Both datasets are alternating and we get all the examples with no duplicates.\r\n\r\nOn thing to keep in mind is that the max amount of workers is equal to the lowest amount of shard amongst the datasets to be merged (1 in this example).\r\n\r\n ```python\r\nfrom torch.utils.data import DataLoader\r\n\r\nfrom datasets import load_dataset, interleave_datasets\r\n\r\n\r\ndef process_dataset_train(batch):\r\n return {\"input\": f'train: {batch[\"review\"][:20]}'}\r\n\r\n\r\ndef process_dataset_test(batch):\r\n return {\"input\": f'test: {batch[\"review\"][:20]}'}\r\n\r\n\r\ndef identity_collator(x):\r\n return x\r\n\r\n\r\nif __name__ == \"__main__\":\r\n ds = load_dataset(\"lhoestq/demo1\")\r\n ds[\"train\"] = ds[\"train\"].map(process_dataset_train, remove_columns=ds[\"train\"].column_names)\r\n ds[\"test\"] = ds[\"test\"].map(process_dataset_test, remove_columns=ds[\"test\"].column_names)\r\n\r\n ds1 = ds[\"train\"].to_iterable_dataset(num_shards=5)\r\n ds2 = ds[\"test\"].to_iterable_dataset(num_shards=1)\r\n\r\n ds_merged = interleave_datasets([ds1, ds2], stopping_strategy=\"all_exhausted\")\r\n\r\n dataloader = DataLoader(ds_merged, collate_fn=identity_collator, num_workers=1, batch_size=1)\r\n\r\n for i, element in enumerate(dataloader):\r\n print(i, element)\r\n\r\n```\r\n\r\n```\r\n0 [{'input': 'train: Great app! The new v'}]\r\n1 [{'input': 'test: Works with RTL and N'}]\r\n2 [{'input': \"train: Great It's not fully\"}]\r\n3 [{'input': 'test: Works with RTL SDR W'}]\r\n4 [{'input': 'train: Works on a Nexus 6p '}]\r\n5 [{'input': 'test: Awsome App! Easy to '}]\r\n6 [{'input': 'train: The bandwidth seemed'}]\r\n7 [{'input': \"test: I'll forgo the refun\"}]\r\n8 [{'input': 'train: Works well with my H'}]\r\n9 [{'input': 'test: looks like a great p'}]\r\n```", "<s> Could you try with `num_workers>1` ? </s>\r\n\r\nedit: Oh I see\r\n\r\n> On thing to keep in mind is that the max amount of workers is equal to the lowest amount of shard amongst the datasets to be merged (1 in this example).", "Great ! It's ok to have the max amount of workers is equal to the lowest amount of shard :)\r\n\r\nSo in the case of `num_workers>min(n_shards_per_dataset)` maybe some workers should turn off, and a warning can probably be shown. 
This is already the case if you use a single dataset with a single shard and `num_workers>1`.\r\n\r\n\r\nRight now it seems to raise an error:\r\n\r\n```python\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 979, in __iter__\r\n yield from self._iter_pytorch(ex_iterable)\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 912, in _iter_pytorch\r\n for key, example in ex_iterable.shard_data_sources(worker_info.id, worker_info.num_workers):\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 259, in shard_data_sources\r\n [iterable.shard_data_sources(worker_id, num_workers) for iterable in self.ex_iterables],\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 259, in <listcomp>\r\n [iterable.shard_data_sources(worker_id, num_workers) for iterable in self.ex_iterables],\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 125, in shard_data_sources\r\n requested_gen_kwargs = _merge_gen_kwargs([gen_kwargs_list[i] for i in shard_indices])\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/utils/sharding.py\", line 76, in _merge_gen_kwargs\r\n for key in gen_kwargs_list[0]\r\nIndexError: list index out of range\r\n```", "Good point. I have fixed the n_shards property of merged iterable datasets so that this warning is raised properly", "Hey @lhoestq, what do you think of the last modifications ? ", "Hello! No problem :)\r\n\r\n- About HorizontallyConcatenatedMultiSourcesExamplesIterable, I've haven't been able to create a bug with sharding. So either I missed something or it's working somehow:\r\n\r\n```python\r\nfrom torch.utils.data import DataLoader\r\n\r\nfrom datasets import load_dataset, interleave_datasets, concatenate_datasets\r\n\r\n\r\ndef process_dataset_train(batch):\r\n return {\"input\": f'train: {batch[\"review\"][:20]}'}\r\n\r\n\r\ndef process_dataset_test(batch):\r\n return {\"input\": f'test: {batch[\"review\"][:20]}'}\r\n\r\n\r\ndef identity_collator(x):\r\n return x\r\n\r\n\r\nif __name__ == \"__main__\":\r\n ds = load_dataset(\"lhoestq/demo1\")\r\n ds[\"train\"] = ds[\"train\"].map(process_dataset_train, remove_columns=ds[\"train\"].column_names)\r\n ds[\"test\"] = ds[\"test\"].map(process_dataset_test, remove_columns=ds[\"test\"].column_names)\r\n ds[\"test\"] = ds[\"test\"].rename_columns({\"input\": \"input2\"})\r\n\r\n ds1 = ds[\"train\"].to_iterable_dataset(num_shards=5)\r\n ds2 = ds[\"test\"].to_iterable_dataset(num_shards=3)\r\n\r\n ds_merged = concatenate_datasets([ds1, ds2], axis=1)\r\n\r\n #n_shards is always 1 for HorizontallyConcatenatedMultiSourcesExamplesIterable\r\n dataloader = DataLoader(ds_merged, collate_fn=identity_collator, num_workers=1, batch_size=1)\r\n\r\n for i, element in enumerate(dataloader):\r\n print(i, element)\r\n```\r\n\r\n```\r\n0 [{'input': 'train: Great app! The new v', 'input2': 'test: Works with RTL and N'}]\r\n1 [{'input': \"train: Great It's not fully\", 'input2': 'test: Works with RTL SDR W'}]\r\n2 [{'input': 'train: Works on a Nexus 6p ', 'input2': 'test: Awsome App! Easy to '}]\r\n3 [{'input': 'train: The bandwidth seemed', 'input2': \"test: I'll forgo the refun\"}]\r\n4 [{'input': 'train: Works well with my H', 'input2': 'test: looks like a great p'}]\r\n```\r\n\r\n- I've added a test but I'm not completely happy with it. 
My issue is that multiprocessing makes interleaving not completely deterministic as samples are yielded whenever ready by each process, if I'm correct.\r\nAs a result I opted to check for the amount of samples yielded and make that they are all unique, which should be equivalent.\r\nBut now my issue is that the \"first_exhausted\" method breaks the loop when one of the datasets of one of the shards is empty which means that all shards stop yielding and we could be missing up to n_workers samples. I don't know if this is the behaviour expected, but I had to modify the test to accomodate this.\r\n\r\nWhat are your thoughts about this ?", "Ah indeed it works because it's set to be only 1 shard - my bad :)", "> But now my issue is that the \"first_exhausted\" method breaks the loop when one of the datasets of one of the shards is empty which means that all shards stop yielding and we could be missing up to n_workers samples. I don't know if this is the behaviour expected, but I had to modify the test to accomodate this.\r\n\r\nThis looks reasonable, maybe this can be documented in the `interleave_datasets` docstring ?\r\n```\r\nNote for iterable datasets:\r\n\r\nIn a distributed setup or in PyTorch DataLoader workers, the stopping strategy is applied per process.\r\nTherefore the \"first_exhausted\" strategy on an sharded iterable dataset can generate less samples in total (up to 1 missing sample per subdataset per worker).\r\n```", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006441 / 0.011353 (-0.004912) | 0.004551 / 0.011008 (-0.006457) | 0.099144 / 0.038508 (0.060636) | 0.028163 / 0.023109 (0.005054) | 0.386342 / 0.275898 (0.110444) | 0.398347 / 0.323480 (0.074867) | 0.004836 / 0.007986 (-0.003150) | 0.004724 / 0.004328 (0.000395) | 0.076277 / 0.004250 (0.072027) | 0.036305 / 0.037052 (-0.000747) | 0.377179 / 0.258489 (0.118690) | 0.410694 / 0.293841 (0.116853) | 0.030196 / 0.128546 (-0.098351) | 0.011436 / 0.075646 (-0.064211) | 0.325911 / 0.419271 (-0.093360) | 0.043709 / 0.043533 (0.000177) | 0.375801 / 0.255139 (0.120662) | 0.396511 / 0.283200 (0.113311) | 0.088346 / 0.141683 (-0.053337) | 1.483427 / 1.452155 (0.031272) | 1.553708 / 1.492716 (0.060992) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | 
get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.190974 / 0.018006 (0.172968) | 0.451309 / 0.000490 (0.450819) | 0.004045 / 0.000200 (0.003845) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023814 / 0.037411 (-0.013597) | 0.096922 / 0.014526 (0.082396) | 0.101506 / 0.176557 (-0.075050) | 0.164694 / 0.737135 (-0.572441) | 0.106899 / 0.296338 (-0.189439) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432164 / 0.215209 (0.216954) | 4.308076 / 2.077655 (2.230421) | 2.092434 / 1.504120 (0.588314) | 1.937405 / 1.541195 (0.396210) | 1.988030 / 1.468490 (0.519540) | 0.695476 / 4.584777 (-3.889301) | 3.436413 / 3.745712 (-0.309299) | 2.892954 / 5.269862 (-2.376908) | 1.519906 / 4.565676 (-3.045771) | 0.082579 / 0.424275 (-0.341696) | 0.012233 / 0.007607 (0.004626) | 0.531329 / 0.226044 (0.305284) | 5.365272 / 2.268929 (3.096344) | 2.391452 / 55.444624 (-53.053172) | 2.051116 / 6.876477 (-4.825361) | 2.140663 / 2.142072 (-0.001410) | 0.807262 / 4.805227 (-3.997966) | 0.151290 / 6.500664 (-6.349374) | 0.066137 / 0.075469 (-0.009333) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.193106 / 1.841788 (-0.648682) | 13.577240 / 8.074308 (5.502932) | 14.280126 / 10.191392 (4.088734) | 0.142538 / 0.680424 (-0.537886) | 0.016641 / 0.534201 (-0.517560) | 0.386318 / 0.579283 (-0.192965) | 0.385991 / 0.434364 (-0.048373) | 0.440712 / 0.540337 (-0.099625) | 0.524189 / 1.386936 (-0.862747) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after 
write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006628 / 0.011353 (-0.004725) | 0.004664 / 0.011008 (-0.006344) | 0.077254 / 0.038508 (0.038746) | 0.028369 / 0.023109 (0.005259) | 0.343076 / 0.275898 (0.067178) | 0.376491 / 0.323480 (0.053011) | 0.005298 / 0.007986 (-0.002687) | 0.004853 / 0.004328 (0.000524) | 0.075927 / 0.004250 (0.071677) | 0.039951 / 0.037052 (0.002899) | 0.346225 / 0.258489 (0.087736) | 0.382367 / 0.293841 (0.088526) | 0.031133 / 0.128546 (-0.097413) | 0.011666 / 0.075646 (-0.063981) | 0.086383 / 0.419271 (-0.332889) | 0.042885 / 0.043533 (-0.000647) | 0.343885 / 0.255139 (0.088746) | 0.366840 / 0.283200 (0.083640) | 0.095942 / 0.141683 (-0.045741) | 1.528972 / 1.452155 (0.076817) | 1.586392 / 1.492716 (0.093676) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223952 / 0.018006 (0.205946) | 0.410767 / 0.000490 (0.410277) | 0.001014 / 0.000200 (0.000814) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024210 / 0.037411 (-0.013201) | 0.100308 / 0.014526 (0.085782) | 0.106899 / 0.176557 (-0.069658) | 0.156514 / 0.737135 (-0.580621) | 0.109548 / 0.296338 (-0.186790) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434763 / 0.215209 (0.219554) | 4.348485 / 2.077655 (2.270831) | 2.064255 / 1.504120 (0.560135) | 1.864394 / 1.541195 (0.323199) | 1.899732 / 1.468490 (0.431242) | 0.694147 / 4.584777 (-3.890630) | 3.357898 / 3.745712 (-0.387815) | 2.909155 / 5.269862 (-2.360707) | 1.424790 / 4.565676 (-3.140886) | 0.082597 / 0.424275 (-0.341678) | 0.012442 / 0.007607 (0.004835) | 0.538758 / 0.226044 (0.312713) | 5.390288 / 2.268929 (3.121359) | 2.532016 / 55.444624 (-52.912609) | 2.185724 / 6.876477 (-4.690753) | 2.274176 / 2.142072 (0.132104) | 0.804785 / 4.805227 (-4.000442) | 0.152649 / 6.500664 (-6.348015) | 0.067707 / 0.075469 (-0.007762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285219 / 1.841788 (-0.556568) | 13.958098 / 8.074308 (5.883790) | 14.043653 / 10.191392 (3.852261) | 0.144526 / 0.680424 (-0.535898) | 0.016813 / 0.534201 (-0.517388) | 
0.390286 / 0.579283 (-0.188997) | 0.389184 / 0.434364 (-0.045180) | 0.470810 / 0.540337 (-0.069527) | 0.562391 / 1.386936 (-0.824545) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4bb172c9772858c188f85ffc9a51f8cb1da292a0 \"CML watermark\")\n" ]
"2023-04-11T10:02:25Z"
"2023-04-27T16:39:04Z"
"2023-04-27T16:32:09Z"
CONTRIBUTOR
null
This PR allows sharding of merged iterable datasets. Iterable datasets merged with, for instance, the `interleave_datasets` command are comprised of multiple sub-iterables, one for each dataset that has been merged. With this PR, sharding a merged iterable results in multiple merged datasets, each comprised of sharded sub-iterables, ensuring that there is no duplication of data. As a result, it is now possible to set any number of workers in the dataloader, as long as it is lower than or equal to the lowest number of shards among the datasets. Before, it had to be set to 0. I previously talked about this issue on the forum [here](https://discuss.huggingface.co/t/interleaving-iterable-dataset-with-num-workers-0/35801)
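A hedged usage sketch distilled from the PR discussion (the dataset name is the small demo dataset used in the thread, and the shard counts are arbitrary): after this change, an interleaved iterable dataset can be consumed with `num_workers > 0`, as long as the number of workers does not exceed the smallest `num_shards` among the merged datasets.

```python
from torch.utils.data import DataLoader
from datasets import load_dataset, interleave_datasets

ds = load_dataset("lhoestq/demo1")  # small demo dataset from the PR thread
ds1 = ds["train"].to_iterable_dataset(num_shards=4)
ds2 = ds["test"].to_iterable_dataset(num_shards=4)

merged = interleave_datasets([ds1, ds2], stopping_strategy="all_exhausted")

# Each DataLoader worker receives its own subset of shards from every
# sub-iterable, so no example is yielded twice across workers.
loader = DataLoader(merged, batch_size=1, num_workers=2, collate_fn=lambda batch: batch)
for i, batch in enumerate(loader):
    print(i, batch)
```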
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5735/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5735/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5735.diff", "html_url": "https://github.com/huggingface/datasets/pull/5735", "merged_at": "2023-04-27T16:32:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/5735.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5735" }
true
https://api.github.com/repos/huggingface/datasets/issues/5734
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5734/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5734/comments
https://api.github.com/repos/huggingface/datasets/issues/5734/events
https://github.com/huggingface/datasets/issues/5734
1,662,058,028
I_kwDODunzps5jEP4s
5,734
Remove temporary pin of fsspec
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
"2023-04-11T09:04:17Z"
"2023-04-11T11:04:52Z"
"2023-04-11T11:04:52Z"
MEMBER
null
Once the root cause is found and fixed, remove the temporary pin introduced by: - #5731
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5734/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5734/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5733
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5733/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5733/comments
https://api.github.com/repos/huggingface/datasets/issues/5733/events
https://github.com/huggingface/datasets/pull/5733
1,662,039,191
PR_kwDODunzps5OAA04
5,733
Unpin fsspec
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006240 / 0.011353 (-0.005113) | 0.004392 / 0.011008 (-0.006616) | 0.097276 / 0.038508 (0.058768) | 0.027262 / 0.023109 (0.004153) | 0.303203 / 0.275898 (0.027305) | 0.331878 / 0.323480 (0.008398) | 0.004706 / 0.007986 (-0.003279) | 0.004428 / 0.004328 (0.000100) | 0.074666 / 0.004250 (0.070416) | 0.036154 / 0.037052 (-0.000899) | 0.302997 / 0.258489 (0.044508) | 0.340350 / 0.293841 (0.046509) | 0.031011 / 0.128546 (-0.097535) | 0.011616 / 0.075646 (-0.064031) | 0.323671 / 0.419271 (-0.095601) | 0.042062 / 0.043533 (-0.001471) | 0.311381 / 0.255139 (0.056242) | 0.324697 / 0.283200 (0.041498) | 0.084248 / 0.141683 (-0.057435) | 1.471651 / 1.452155 (0.019496) | 1.533414 / 1.492716 (0.040697) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193555 / 0.018006 (0.175549) | 0.393452 / 0.000490 (0.392962) | 0.002348 / 0.000200 (0.002148) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022523 / 0.037411 (-0.014889) | 0.096552 / 0.014526 (0.082026) | 0.101746 / 0.176557 (-0.074810) | 0.163145 / 0.737135 (-0.573990) | 0.106417 / 0.296338 (-0.189921) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448589 / 0.215209 (0.233380) | 4.467803 / 2.077655 (2.390148) | 
2.178745 / 1.504120 (0.674625) | 1.983339 / 1.541195 (0.442145) | 2.056554 / 1.468490 (0.588064) | 0.697571 / 4.584777 (-3.887206) | 3.363967 / 3.745712 (-0.381745) | 1.872526 / 5.269862 (-3.397336) | 1.258245 / 4.565676 (-3.307432) | 0.082954 / 0.424275 (-0.341321) | 0.012306 / 0.007607 (0.004699) | 0.545096 / 0.226044 (0.319052) | 5.468706 / 2.268929 (3.199777) | 2.645333 / 55.444624 (-52.799292) | 2.287659 / 6.876477 (-4.588818) | 2.346768 / 2.142072 (0.204696) | 0.803730 / 4.805227 (-4.001497) | 0.151037 / 6.500664 (-6.349627) | 0.066404 / 0.075469 (-0.009065) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.192982 / 1.841788 (-0.648806) | 13.631225 / 8.074308 (5.556917) | 13.830053 / 10.191392 (3.638661) | 0.141901 / 0.680424 (-0.538523) | 0.016500 / 0.534201 (-0.517701) | 0.373268 / 0.579283 (-0.206015) | 0.380123 / 0.434364 (-0.054241) | 0.430786 / 0.540337 (-0.109551) | 0.512669 / 1.386936 (-0.874267) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006161 / 0.011353 (-0.005192) | 0.004399 / 0.011008 (-0.006609) | 0.076210 / 0.038508 (0.037702) | 0.026791 / 0.023109 (0.003681) | 0.341523 / 0.275898 (0.065625) | 0.370400 / 0.323480 (0.046920) | 0.004495 / 0.007986 (-0.003491) | 0.003204 / 0.004328 (-0.001125) | 0.075444 / 0.004250 (0.071194) | 0.035914 / 0.037052 (-0.001138) | 0.343806 / 0.258489 (0.085317) | 0.384320 / 0.293841 (0.090479) | 0.031438 / 0.128546 (-0.097109) | 0.011253 / 0.075646 (-0.064393) | 0.085364 / 0.419271 (-0.333908) | 0.041407 / 0.043533 (-0.002126) | 0.338831 / 0.255139 (0.083692) | 0.364357 / 0.283200 (0.081158) | 0.087417 / 0.141683 (-0.054266) | 1.520624 / 1.452155 (0.068470) | 1.572432 / 1.492716 (0.079716) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232403 / 0.018006 (0.214396) | 0.388187 / 0.000490 (0.387698) | 0.001158 / 0.000200 (0.000958) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024596 / 0.037411 (-0.012816) | 0.101203 / 0.014526 (0.086677) | 0.105243 / 0.176557 (-0.071314) | 0.158215 / 0.737135 (-0.578920) | 0.110277 / 0.296338 (-0.186061) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.435661 / 0.215209 (0.220452) | 4.350151 / 2.077655 (2.272496) | 2.072372 / 1.504120 (0.568252) | 1.870675 / 1.541195 (0.329480) | 1.910883 / 1.468490 (0.442393) | 0.697384 / 4.584777 (-3.887393) | 3.399377 / 3.745712 (-0.346335) | 2.685008 / 5.269862 (-2.584854) | 1.476843 / 4.565676 (-3.088834) | 0.083177 / 0.424275 (-0.341098) | 0.012413 / 0.007607 (0.004806) | 0.542543 / 0.226044 (0.316498) | 5.431422 / 2.268929 (3.162494) | 2.506419 / 55.444624 (-52.938206) | 2.166342 / 6.876477 (-4.710135) | 2.164421 / 2.142072 (0.022348) | 0.800609 / 4.805227 (-4.004618) | 0.150527 / 6.500664 (-6.350137) | 0.065780 / 0.075469 (-0.009689) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.293409 / 1.841788 (-0.548379) | 13.814898 / 8.074308 (5.740590) | 13.940416 / 10.191392 (3.749024) | 0.149377 / 0.680424 (-0.531047) | 0.016462 / 0.534201 (-0.517739) | 0.393748 / 0.579283 (-0.185535) | 0.384327 / 0.434364 (-0.050037) | 0.489900 / 0.540337 (-0.050437) | 0.574608 / 1.386936 (-0.812328) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f2607935c4e45c70c44fcb698db0363ca7ba83d4 \"CML watermark\")\n" ]
"2023-04-11T08:52:12Z"
"2023-04-11T11:11:45Z"
"2023-04-11T11:04:51Z"
MEMBER
null
In `fsspec` 2023.4.0, the default value of `clobber` when registering an implementation was changed from True to False. See: - https://github.com/fsspec/filesystem_spec/pull/1237 This PR recovers the previous behavior by passing `clobber=True` when registering mock implementations. This PR also removes the temporary pin introduced by: - #5731 Fix #5734.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5733/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5733/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5733.diff", "html_url": "https://github.com/huggingface/datasets/pull/5733", "merged_at": "2023-04-11T11:04:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/5733.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5733" }
true
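The `clobber` change that PR #5733 works around can be illustrated with a small self-contained sketch; the `MockFileSystem` class and the double registration below are hypothetical stand-ins for the repository's test fixtures, not the actual code:

```python
import fsspec
from fsspec.spec import AbstractFileSystem


class MockFileSystem(AbstractFileSystem):
    """Hypothetical stand-in for the mock filesystems used in test fixtures."""

    protocol = "mock"


# Since fsspec 2023.4.0, register_implementation() defaults to clobber=False,
# so re-registering the same protocol name raises:
#   ValueError: Name (mock) already in the registry and clobber is False
# Passing clobber=True restores the previous "overwrite silently" behavior.
fsspec.register_implementation("mock", MockFileSystem, clobber=True)

# A second registration (e.g. from another test fixture) now succeeds instead of raising.
fsspec.register_implementation("mock", MockFileSystem, clobber=True)
```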
https://api.github.com/repos/huggingface/datasets/issues/5732
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5732/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5732/comments
https://api.github.com/repos/huggingface/datasets/issues/5732/events
https://github.com/huggingface/datasets/issues/5732
1,662,020,571
I_kwDODunzps5jEGvb
5,732
Enwik8 should support the standard split
{ "avatar_url": "https://avatars.githubusercontent.com/u/10287371?v=4", "events_url": "https://api.github.com/users/lucaslingle/events{/privacy}", "followers_url": "https://api.github.com/users/lucaslingle/followers", "following_url": "https://api.github.com/users/lucaslingle/following{/other_user}", "gists_url": "https://api.github.com/users/lucaslingle/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucaslingle", "id": 10287371, "login": "lucaslingle", "node_id": "MDQ6VXNlcjEwMjg3Mzcx", "organizations_url": "https://api.github.com/users/lucaslingle/orgs", "received_events_url": "https://api.github.com/users/lucaslingle/received_events", "repos_url": "https://api.github.com/users/lucaslingle/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucaslingle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucaslingle/subscriptions", "type": "User", "url": "https://api.github.com/users/lucaslingle" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/10287371?v=4", "events_url": "https://api.github.com/users/lucaslingle/events{/privacy}", "followers_url": "https://api.github.com/users/lucaslingle/followers", "following_url": "https://api.github.com/users/lucaslingle/following{/other_user}", "gists_url": "https://api.github.com/users/lucaslingle/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucaslingle", "id": 10287371, "login": "lucaslingle", "node_id": "MDQ6VXNlcjEwMjg3Mzcx", "organizations_url": "https://api.github.com/users/lucaslingle/orgs", "received_events_url": "https://api.github.com/users/lucaslingle/received_events", "repos_url": "https://api.github.com/users/lucaslingle/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucaslingle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucaslingle/subscriptions", "type": "User", "url": "https://api.github.com/users/lucaslingle" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/10287371?v=4", "events_url": "https://api.github.com/users/lucaslingle/events{/privacy}", "followers_url": "https://api.github.com/users/lucaslingle/followers", "following_url": "https://api.github.com/users/lucaslingle/following{/other_user}", "gists_url": "https://api.github.com/users/lucaslingle/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucaslingle", "id": 10287371, "login": "lucaslingle", "node_id": "MDQ6VXNlcjEwMjg3Mzcx", "organizations_url": "https://api.github.com/users/lucaslingle/orgs", "received_events_url": "https://api.github.com/users/lucaslingle/received_events", "repos_url": "https://api.github.com/users/lucaslingle/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucaslingle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucaslingle/subscriptions", "type": "User", "url": "https://api.github.com/users/lucaslingle" } ]
null
[ "#self-assign", "The Enwik8 pipeline is not present in this codebase, and is hosted elsewhere. I have opened a PR [there](https://huggingface.co/datasets/enwik8/discussions/4) instead. " ]
"2023-04-11T08:38:53Z"
"2023-04-11T09:28:17Z"
"2023-04-11T09:28:16Z"
NONE
null
### Feature request The HuggingFace Datasets library currently supports two BuilderConfigs for Enwik8. One config yields individual lines as examples, while the other config yields the entire dataset as a single example. Both support only a monolithic split: it is all grouped as "train". The HuggingFace Datasets library should include a BuilderConfig for Enwik8 with train, validation, and test sets derived from the first 90 million bytes, next 5 million bytes, and last 5 million bytes, respectively. This Enwik8 split is standard practice in LM papers, as elaborated and motivated below. ### Motivation Enwik8 is commonly split into 90M, 5M, 5M consecutive bytes. This is done in the Transformer-XL [codebase](https://github.com/kimiyoung/transformer-xl/blob/44781ed21dbaec88b280f74d9ae2877f52b492a5/getdata.sh#L34), and is additionally mentioned in the Sparse Transformers [paper](https://arxiv.org/abs/1904.10509) and the Compressive Transformers [paper](https://arxiv.org/abs/1911.05507). This split is pretty much universal among language modeling papers. One may obtain the splits by manual wrangling, using the data yielded by the ```enwik8-raw``` BuilderConfig. However, this undermines the seamless functionality of the library: one must slice the single raw example, extract it into three tensors, and wrap each in a separate dataset. This becomes even more of a nuisance if using the current Enwik8 HuggingFace dataset as a TfdsDataSource with [SeqIO](https://github.com/google/seqio), where a pipeline of preprocessors is typically included in a SeqIO Task definition, to be applied immediately after loading the data with TFDS. ### Your contribution Supporting this functionality in HuggingFace Datasets will only require an additional BuilderConfig for Enwik8 and a few additional lines of code. I will submit a PR.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5732/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5732/timeline
null
completed
null
null
false
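As a rough sketch of the manual wrangling that issue #5732 wants to avoid, the standard 90M/5M/5M byte split could be carved out of the single raw example by hand. The `"text"` field name and the exact `enwik8-raw` loading call are assumptions based on the issue text, not verified against the hosted dataset:

```python
from datasets import Dataset, load_dataset

# The "enwik8-raw" config yields the whole dump as a single example.
raw = load_dataset("enwik8", "enwik8-raw", split="train")
data = raw[0]["text"].encode("utf-8")  # the standard split is defined over bytes

# First 90M bytes for train, next 5M for validation, last 5M for test.
train_bytes = data[:90_000_000]
valid_bytes = data[90_000_000:95_000_000]
test_bytes = data[95_000_000:]

# Wrap each chunk in its own Dataset; errors="ignore" guards against
# multi-byte characters cut at the split boundaries.
splits = {
    name: Dataset.from_dict({"text": [chunk.decode("utf-8", errors="ignore")]})
    for name, chunk in [("train", train_bytes), ("validation", valid_bytes), ("test", test_bytes)]
}
```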
https://api.github.com/repos/huggingface/datasets/issues/5731
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5731/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5731/comments
https://api.github.com/repos/huggingface/datasets/issues/5731/events
https://github.com/huggingface/datasets/pull/5731
1,662,012,913
PR_kwDODunzps5N_7Un
5,731
Temporarily pin fsspec
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009735 / 0.011353 (-0.001618) | 0.010410 / 0.011008 (-0.000598) | 0.134986 / 0.038508 (0.096478) | 0.038392 / 0.023109 (0.015283) | 0.414451 / 0.275898 (0.138553) | 0.447775 / 0.323480 (0.124295) | 0.007223 / 0.007986 (-0.000763) | 0.006373 / 0.004328 (0.002045) | 0.102631 / 0.004250 (0.098381) | 0.048516 / 0.037052 (0.011464) | 0.410179 / 0.258489 (0.151690) | 0.467773 / 0.293841 (0.173932) | 0.053163 / 0.128546 (-0.075384) | 0.019801 / 0.075646 (-0.055845) | 0.452708 / 0.419271 (0.033436) | 0.068691 / 0.043533 (0.025159) | 0.405482 / 0.255139 (0.150343) | 0.457669 / 0.283200 (0.174470) | 0.113464 / 0.141683 (-0.028219) | 1.918143 / 1.452155 (0.465988) | 2.033123 / 1.492716 (0.540407) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274564 / 0.018006 (0.256557) | 0.608855 / 0.000490 (0.608366) | 0.006266 / 0.000200 (0.006066) | 0.000105 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033704 / 0.037411 (-0.003708) | 0.130982 / 0.014526 (0.116456) | 0.143862 / 0.176557 (-0.032694) | 0.212622 / 0.737135 (-0.524513) | 0.148899 / 0.296338 (-0.147439) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.670968 / 0.215209 (0.455759) | 6.602911 / 2.077655 (4.525256) | 2.644290 
/ 1.504120 (1.140171) | 2.268593 / 1.541195 (0.727399) | 2.325393 / 1.468490 (0.856903) | 1.388156 / 4.584777 (-3.196621) | 5.958569 / 3.745712 (2.212857) | 3.310756 / 5.269862 (-1.959106) | 2.390953 / 4.565676 (-2.174724) | 0.147416 / 0.424275 (-0.276859) | 0.015201 / 0.007607 (0.007594) | 0.794109 / 0.226044 (0.568064) | 7.984855 / 2.268929 (5.715926) | 3.382275 / 55.444624 (-52.062349) | 2.676102 / 6.876477 (-4.200375) | 2.846743 / 2.142072 (0.704671) | 1.467523 / 4.805227 (-3.337704) | 0.283184 / 6.500664 (-6.217480) | 0.088655 / 0.075469 (0.013186) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.632765 / 1.841788 (-0.209022) | 19.102473 / 8.074308 (11.028165) | 25.632535 / 10.191392 (15.441143) | 0.255628 / 0.680424 (-0.424795) | 0.034655 / 0.534201 (-0.499546) | 0.564593 / 0.579283 (-0.014690) | 0.668339 / 0.434364 (0.233975) | 0.648414 / 0.540337 (0.108076) | 0.766735 / 1.386936 (-0.620201) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009658 / 0.011353 (-0.001695) | 0.006690 / 0.011008 (-0.004318) | 0.099151 / 0.038508 (0.060643) | 0.037092 / 0.023109 (0.013983) | 0.470354 / 0.275898 (0.194456) | 0.525863 / 0.323480 (0.202383) | 0.007593 / 0.007986 (-0.000393) | 0.006637 / 0.004328 (0.002308) | 0.098782 / 0.004250 (0.094532) | 0.058524 / 0.037052 (0.021471) | 0.502569 / 0.258489 (0.244080) | 0.526410 / 0.293841 (0.232569) | 0.059486 / 0.128546 (-0.069060) | 0.019742 / 0.075646 (-0.055904) | 0.119715 / 0.419271 (-0.299556) | 0.065269 / 0.043533 (0.021736) | 0.483327 / 0.255139 (0.228188) | 0.506148 / 0.283200 (0.222948) | 0.123178 / 0.141683 (-0.018505) | 1.916624 / 1.452155 (0.464470) | 2.051410 / 1.492716 (0.558694) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286481 / 0.018006 (0.268475) | 0.597300 / 0.000490 (0.596810) | 0.008906 / 0.000200 (0.008706) | 0.000128 / 0.000054 (0.000074) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031406 / 0.037411 (-0.006005) | 0.146748 / 0.014526 (0.132222) | 0.152898 / 0.176557 (-0.023658) | 0.212535 / 0.737135 (-0.524600) | 0.155577 / 0.296338 (-0.140761) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.660989 / 0.215209 (0.445780) | 6.688530 / 2.077655 (4.610875) | 3.039278 / 1.504120 (1.535159) | 2.660357 / 1.541195 (1.119162) | 2.696912 / 1.468490 (1.228422) | 1.259760 / 4.584777 (-3.325017) | 5.922452 / 3.745712 (2.176740) | 5.304200 / 5.269862 (0.034338) | 2.823928 / 4.565676 (-1.741748) | 0.148118 / 0.424275 (-0.276157) | 0.015575 / 0.007607 (0.007968) | 0.794404 / 0.226044 (0.568360) | 8.233651 / 2.268929 (5.964722) | 3.777482 / 55.444624 (-51.667142) | 3.064924 / 6.876477 (-3.811552) | 3.117803 / 2.142072 (0.975731) | 1.479559 / 4.805227 (-3.325668) | 0.254070 / 6.500664 (-6.246594) | 0.086806 / 0.075469 (0.011337) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.735515 / 1.841788 (-0.106273) | 18.934157 / 8.074308 (10.859848) | 22.645248 / 10.191392 (12.453856) | 0.227073 / 0.680424 (-0.453351) | 0.030650 / 0.534201 (-0.503551) | 0.594619 / 0.579283 (0.015336) | 0.653304 / 0.434364 (0.218940) | 0.707484 / 0.540337 (0.167147) | 0.823327 / 1.386936 (-0.563610) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#273392966e434286f4f5ba2ad596730bff11056d \"CML watermark\")\n" ]
"2023-04-11T08:33:15Z"
"2023-04-11T08:57:45Z"
"2023-04-11T08:47:55Z"
MEMBER
null
Fix #5730.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5731/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5731/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5731.diff", "html_url": "https://github.com/huggingface/datasets/pull/5731", "merged_at": "2023-04-11T08:47:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/5731.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5731" }
true
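For context, a temporary pin like the one in PR #5731 is usually expressed as a version bound in the package requirements. The exact specifier used in the repository is not reproduced here, so the line below is purely illustrative:

```python
# Illustrative requirements entry excluding the fsspec release that changed
# the clobber default; the real pin in the repository may differ.
REQUIRED_PKGS = [
    "fsspec<2023.4.0",
]
```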
https://api.github.com/repos/huggingface/datasets/issues/5730
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5730/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5730/comments
https://api.github.com/repos/huggingface/datasets/issues/5730/events
https://github.com/huggingface/datasets/issues/5730
1,662,007,926
I_kwDODunzps5jEDp2
5,730
CI is broken: ValueError: Name (mock) already in the registry and clobber is False
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
"2023-04-11T08:29:46Z"
"2023-04-11T08:47:56Z"
"2023-04-11T08:47:56Z"
MEMBER
null
CI is broken for `test_py310`. See: https://github.com/huggingface/datasets/actions/runs/4665326892/jobs/8258580948 ``` =========================== short test summary info ============================ ERROR tests/test_builder.py::test_builder_with_filesystem_download_and_prepare - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_builder.py::test_builder_with_filesystem_download_and_prepare_reload - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_dataset_dict.py::test_dummy_datasetdict_serialize_fs - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_file_utils.py::test_get_from_cache_fsspec - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_filesystem.py::test_is_remote_filesystem - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xexists[tmp_path/file.txt-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xexists[tmp_path/file_that_doesnt_exist.txt-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xexists[mock://top_level/second_level/date=2019-10-01/a.parquet-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xexists[mock://top_level/second_level/date=2019-10-01/file_that_doesnt_exist.parquet-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xlistdir[tmp_path-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xlistdir[mock://-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xlistdir[mock://top_level-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xlistdir[mock://top_level/second_level/date=2019-10-01-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[tmp_path-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[tmp_path/file.txt-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[mock://-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[mock://top_level-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[mock://dir_that_doesnt_exist-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisfile[tmp_path/file.txt-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisfile[tmp_path/file_that_doesnt_exist.txt-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisfile[mock://-False] - ValueError: Name (mock) already in the registry 
and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisfile[mock://top_level/second_level/date=2019-10-01/a.parquet-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xgetsize[tmp_path/file.txt-100] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xgetsize[mock://-0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xgetsize[mock://top_level/second_level/date=2019-10-01/a.parquet-100] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[tmp_path/*.txt-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[mock://*-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[mock://top_*-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[mock://top_level/second_level/date=2019-10-0[1-4]-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[mock://top_level/second_level/date=2019-10-0[1-4]/*-expected_paths4] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xwalk[tmp_path-expected_outputs0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xwalk[mock://top_level/second_level-expected_outputs1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[tmp_path/file.txt-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[tmp_path/file_that_doesnt_exist.txt-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[mock://top_level/second_level/date=2019-10-01/a.parquet-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[mock://top_level/second_level/date=2019-10-01/file_that_doesnt_exist.parquet-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[tmp_path-*.txt-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://-*-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://-top_*-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://top_level/second_level-date=2019-10-0[1-4]-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False ERROR 
tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://top_level/second_level-date=2019-10-0[1-4]/*-expected_paths4] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[tmp_path-*.txt-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://-date=2019-10-0[1-4]-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://-date=2019-10-0[1-4]/*-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]/*-expected_paths4] - ValueError: Name (mock) already in the registry and clobber is False ===== 2105 passed, 18 skipped, 38 warnings, 46 errors in 236.22s (0:03:56) ===== ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5730/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5730/timeline
null
completed
null
null
false
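The long list of `ValueError: Name (mock) already in the registry and clobber is False` errors in issue #5730 above is the test-suite symptom of the same registry behavior sketched after PR #5733: every test that sets up the `mock://` filesystem re-registers the protocol, and with the new default that second registration fails instead of overwriting.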
https://api.github.com/repos/huggingface/datasets/issues/5729
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5729/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5729/comments
https://api.github.com/repos/huggingface/datasets/issues/5729/events
https://github.com/huggingface/datasets/pull/5729
1,661,929,923
PR_kwDODunzps5N_pvI
5,729
Fix nondeterministic sharded data split order
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The error in the CI was unrelated to this PR. I have merged main branch once that has been fixed.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006954 / 0.011353 (-0.004399) | 0.004947 / 0.011008 (-0.006061) | 0.086564 / 0.038508 (0.048056) | 0.031167 / 0.023109 (0.008058) | 0.262285 / 0.275898 (-0.013613) | 0.295753 / 0.323480 (-0.027727) | 0.005389 / 0.007986 (-0.002596) | 0.004130 / 0.004328 (-0.000198) | 0.065127 / 0.004250 (0.060877) | 0.042511 / 0.037052 (0.005458) | 0.263497 / 0.258489 (0.005008) | 0.307456 / 0.293841 (0.013615) | 0.031338 / 0.128546 (-0.097209) | 0.011023 / 0.075646 (-0.064623) | 0.295625 / 0.419271 (-0.123647) | 0.045813 / 0.043533 (0.002280) | 0.259369 / 0.255139 (0.004230) | 0.279325 / 0.283200 (-0.003875) | 0.099748 / 0.141683 (-0.041934) | 1.252572 / 1.452155 (-0.199583) | 1.347069 / 1.492716 (-0.145647) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249726 / 0.018006 (0.231720) | 0.556882 / 0.000490 (0.556392) | 0.008237 / 0.000200 (0.008037) | 0.000294 / 0.000054 (0.000239) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026879 / 0.037411 (-0.010533) | 0.105141 / 0.014526 (0.090615) | 0.115473 / 0.176557 (-0.061084) | 0.172989 / 0.737135 (-0.564147) | 0.120433 / 0.296338 (-0.175906) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400022 / 0.215209 (0.184812) | 3.965402 / 2.077655 (1.887747) | 1.805257 / 1.504120 (0.301138) | 1.610136 / 1.541195 (0.068941) | 1.661162 / 1.468490 (0.192672) | 0.695311 / 4.584777 (-3.889466) | 3.753757 / 3.745712 (0.008045) | 2.060609 / 5.269862 (-3.209253) | 1.333251 / 4.565676 (-3.232426) | 0.085790 / 0.424275 (-0.338485) | 0.012256 / 0.007607 (0.004649) | 0.502133 / 0.226044 (0.276088) | 5.040979 / 2.268929 (2.772051) | 2.310919 / 55.444624 (-53.133705) | 2.010534 / 6.876477 (-4.865943) | 2.132961 / 2.142072 (-0.009111) | 0.837636 / 4.805227 (-3.967592) | 0.169838 / 6.500664 (-6.330826) | 0.065003 / 0.075469 (-0.010466) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218674 / 1.841788 (-0.623114) | 14.696076 / 8.074308 (6.621768) | 14.559492 / 10.191392 (4.368100) | 0.167761 / 0.680424 (-0.512663) | 0.017747 / 0.534201 (-0.516454) | 0.421624 / 0.579283 (-0.157659) | 0.414086 / 0.434364 (-0.020278) | 0.501398 / 0.540337 (-0.038940) | 0.596099 / 1.386936 (-0.790837) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007230 / 0.011353 (-0.004123) | 0.005345 / 0.011008 (-0.005664) | 0.073739 / 0.038508 (0.035231) | 0.033440 / 0.023109 (0.010330) | 0.339790 / 0.275898 (0.063892) | 0.367857 / 0.323480 (0.044377) | 0.005927 / 0.007986 (-0.002058) | 0.004279 / 0.004328 (-0.000049) | 0.074247 / 0.004250 (0.069996) | 0.048971 / 0.037052 (0.011918) | 0.340235 / 0.258489 (0.081746) | 0.380521 / 0.293841 (0.086680) | 0.035322 / 0.128546 (-0.093225) | 0.012416 / 0.075646 (-0.063230) | 0.086060 / 0.419271 (-0.333212) | 0.049331 / 0.043533 (0.005799) | 0.342871 / 0.255139 (0.087732) | 0.355673 / 0.283200 (0.072473) | 0.111976 / 0.141683 (-0.029707) | 1.462530 / 1.452155 (0.010375) | 1.550336 / 1.492716 (0.057620) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.266560 / 0.018006 (0.248554) | 0.550886 / 0.000490 (0.550396) | 0.001069 / 0.000200 (0.000869) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028701 / 0.037411 (-0.008711) | 0.110535 / 0.014526 (0.096010) | 0.122846 / 0.176557 (-0.053711) | 0.176395 / 0.737135 (-0.560740) | 0.128653 / 0.296338 (-0.167685) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431693 / 0.215209 (0.216484) | 4.283691 / 2.077655 (2.206036) | 2.013967 / 1.504120 (0.509847) | 1.823914 / 1.541195 (0.282719) | 1.872055 / 1.468490 (0.403565) | 0.703318 / 4.584777 (-3.881459) | 3.783412 / 3.745712 (0.037699) | 2.950147 / 5.269862 (-2.319715) | 1.826159 / 4.565676 (-2.739518) | 0.086897 / 0.424275 (-0.337379) | 0.012512 / 0.007607 (0.004905) | 0.526730 / 0.226044 (0.300685) | 5.263871 / 2.268929 (2.994943) | 2.552163 / 55.444624 (-52.892462) | 2.276216 / 6.876477 (-4.600261) | 2.419934 / 2.142072 (0.277862) | 0.848235 / 4.805227 (-3.956993) | 0.170405 / 6.500664 (-6.330259) | 0.064979 / 0.075469 (-0.010491) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276780 / 1.841788 (-0.565008) | 15.100829 / 8.074308 (7.026521) | 15.117531 / 10.191392 (4.926139) | 0.147129 / 0.680424 (-0.533295) | 0.017806 / 0.534201 (-0.516395) | 0.422975 / 0.579283 (-0.156308) | 0.430286 / 0.434364 (-0.004078) | 0.501405 / 0.540337 (-0.038932) | 0.596810 / 1.386936 (-0.790126) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f6ee2e6603fe81638256d37a6aa7ad0400e31a83 \"CML watermark\")\n" ]
"2023-04-11T07:34:20Z"
"2023-04-26T15:12:25Z"
"2023-04-26T15:05:12Z"
MEMBER
null
This PR makes the order of the split names deterministic. Before, it was nondeterministic because we were iterating over `set` elements. Fix #5728.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5729/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5729/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5729.diff", "html_url": "https://github.com/huggingface/datasets/pull/5729", "merged_at": "2023-04-26T15:05:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/5729.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5729" }
true
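The root cause that PR #5729 fixes can be reduced to a tiny example. The split names below come from the failing assertion in issue #5728; the `sorted()` call is only one way to impose a stable order, and the actual patch may order the names differently:

```python
# Iterating over a set yields an order that can change between runs and
# Python builds, so a test comparing split names as a list is flaky.
split_names = {"train", "random"}
flaky_order = list(split_names)      # sometimes ['train', 'random'], sometimes ['random', 'train']

# Imposing an explicit order (here: sorted) makes the result reproducible.
stable_order = sorted(split_names)   # always ['random', 'train']
print(flaky_order, stable_order)
```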
https://api.github.com/repos/huggingface/datasets/issues/5728
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5728/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5728/comments
https://api.github.com/repos/huggingface/datasets/issues/5728/events
https://github.com/huggingface/datasets/issues/5728
1,661,925,932
I_kwDODunzps5jDvos
5,728
The order of data split names is nondeterministic
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
"2023-04-11T07:31:25Z"
"2023-04-26T15:05:13Z"
"2023-04-26T15:05:13Z"
MEMBER
null
After this CI error: https://github.com/huggingface/datasets/actions/runs/4639528358/jobs/8210492953?pr=5718 ``` FAILED tests/test_data_files.py::test_get_data_files_patterns[data_file_per_split4] - AssertionError: assert ['random', 'train'] == ['train', 'random'] At index 0 diff: 'random' != 'train' Full diff: - ['train', 'random'] + ['random', 'train'] ``` I have checked locally and found out that the data split order is nondeterministic. This is caused by the use of `set` for sharded splits.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5728/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5728/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5727
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5727/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5727/comments
https://api.github.com/repos/huggingface/datasets/issues/5727/events
https://github.com/huggingface/datasets/issues/5727
1,661,536,363
I_kwDODunzps5jCQhr
5,727
load_dataset fails with FileNotFound error on Windows
{ "avatar_url": "https://avatars.githubusercontent.com/u/122648572?v=4", "events_url": "https://api.github.com/users/joelkowalewski/events{/privacy}", "followers_url": "https://api.github.com/users/joelkowalewski/followers", "following_url": "https://api.github.com/users/joelkowalewski/following{/other_user}", "gists_url": "https://api.github.com/users/joelkowalewski/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/joelkowalewski", "id": 122648572, "login": "joelkowalewski", "node_id": "U_kgDOB093_A", "organizations_url": "https://api.github.com/users/joelkowalewski/orgs", "received_events_url": "https://api.github.com/users/joelkowalewski/received_events", "repos_url": "https://api.github.com/users/joelkowalewski/repos", "site_admin": false, "starred_url": "https://api.github.com/users/joelkowalewski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joelkowalewski/subscriptions", "type": "User", "url": "https://api.github.com/users/joelkowalewski" }
[]
closed
false
null
[]
null
[ "Hi! Can you please paste the entire error stack trace, not only the last few lines?", "`----> 1 dataset = datasets.load_dataset(\"glue\", \"ax\")\r\n\r\nFile ~\\anaconda3\\envs\\huggingface\\Lib\\site-packages\\datasets\\load.py:1767, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1762 verification_mode = VerificationMode(\r\n 1763 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS\r\n 1764 )\r\n 1766 # Create a dataset builder\r\n-> 1767 builder_instance = load_dataset_builder(\r\n 1768 path=path,\r\n 1769 name=name,\r\n 1770 data_dir=data_dir,\r\n 1771 data_files=data_files,\r\n 1772 cache_dir=cache_dir,\r\n 1773 features=features,\r\n 1774 download_config=download_config,\r\n 1775 download_mode=download_mode,\r\n 1776 revision=revision,\r\n 1777 use_auth_token=use_auth_token,\r\n 1778 storage_options=storage_options,\r\n 1779 **config_kwargs,\r\n 1780 )\r\n 1782 # Return iterable dataset in case of streaming\r\n 1783 if streaming:\r\n\r\nFile ~\\anaconda3\\envs\\huggingface\\Lib\\site-packages\\datasets\\load.py:1498, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, storage_options, **config_kwargs)\r\n 1496 download_config = download_config.copy() if download_config else DownloadConfig()\r\n 1497 download_config.use_auth_token = use_auth_token\r\n-> 1498 dataset_module = dataset_module_factory(\r\n 1499 path,\r\n 1500 revision=revision,\r\n 1501 download_config=download_config,\r\n 1502 download_mode=download_mode,\r\n 1503 data_dir=data_dir,\r\n 1504 data_files=data_files,\r\n 1505 )\r\n 1507 # Get dataset builder class from the processing script\r\n 1508 builder_cls = import_main_class(dataset_module.module_path)\r\n\r\nFile ~\\anaconda3\\envs\\huggingface\\Lib\\site-packages\\datasets\\load.py:1211, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1209 raise e1 from None\r\n 1210 if isinstance(e1, FileNotFoundError):\r\n-> 1211 raise FileNotFoundError(\r\n 1212 f\"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. \"\r\n 1213 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\r\n 1214 ) from None\r\n 1215 raise e1 from None\r\n 1216 else:`", "Okay, this is the issue:\r\n```\r\nFileNotFoundError: [WinError 3] The system cannot find the path specified: \r\n'C:\\\\Users\\\\...\\\\.cache\\\\huggingface'\r\n``` \r\n\r\nI don't remember seeing this error before.\r\n\r\nI guess it could happen in a multi-process environment if one of the processes deletes the `datasets` cache as the other one is loading a dataset (with `load_dataset`), so make sure that's not the case. Also, you can disable the Windows max path length limit (if enabled), but this is most likely not the problem.", "Closing due to inactivity." ]
"2023-04-10T23:21:12Z"
"2023-07-21T14:08:20Z"
"2023-07-21T14:08:19Z"
NONE
null
### Describe the bug Although I can import and run the datasets library in a Colab environment, I cannot successfully load any data on my own machine (Windows 10) despite following the install steps: (1) create conda environment (2) activate environment (3) install with: ``conda` install -c huggingface -c conda-forge datasets` Then ``` from datasets import load_dataset # this or any other example from the website fails with the FileNotFoundError glue = load_dataset("glue", "ax") ``` **Below I have pasted the error omitting the full path**: ``` raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at C:\Users\...\glue\glue.py or any data file in the same directory. Couldn't find 'glue' on the Hugging Face Hub either: FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\Users\\...\\.cache\\huggingface' ``` ### Steps to reproduce the bug On Windows 10 1) create a minimal conda environment (with just Python) (2) activate environment (3) install datasets with: ``conda` install -c huggingface -c conda-forge datasets` (4) import load_dataset and follow example usage from any dataset card. ### Expected behavior The expected behavior is to load the file into the Python session running on my machine without error. ### Environment info ``` # Name Version Build Channel aiohttp 3.8.4 py311ha68e1ae_0 conda-forge aiosignal 1.3.1 pyhd8ed1ab_0 conda-forge arrow-cpp 11.0.0 h57928b3_13_cpu conda-forge async-timeout 4.0.2 pyhd8ed1ab_0 conda-forge attrs 22.2.0 pyh71513ae_0 conda-forge aws-c-auth 0.6.26 h1262f0c_1 conda-forge aws-c-cal 0.5.21 h7cda486_2 conda-forge aws-c-common 0.8.14 hcfcfb64_0 conda-forge aws-c-compression 0.2.16 h8a79959_5 conda-forge aws-c-event-stream 0.2.20 h5f78564_4 conda-forge aws-c-http 0.7.6 h2545be9_0 conda-forge aws-c-io 0.13.19 h0d2781e_3 conda-forge aws-c-mqtt 0.8.6 hd211e0c_12 conda-forge aws-c-s3 0.2.7 h8113e7b_1 conda-forge aws-c-sdkutils 0.1.8 h8a79959_0 conda-forge aws-checksums 0.1.14 h8a79959_5 conda-forge aws-crt-cpp 0.19.8 he6d3b81_12 conda-forge aws-sdk-cpp 1.10.57 h64004b3_8 conda-forge brotlipy 0.7.0 py311ha68e1ae_1005 conda-forge bzip2 1.0.8 h8ffe710_4 conda-forge c-ares 1.19.0 h2bbff1b_0 ca-certificates 2023.01.10 haa95532_0 certifi 2022.12.7 pyhd8ed1ab_0 conda-forge cffi 1.15.1 py311h7d9ee11_3 conda-forge charset-normalizer 2.1.1 pyhd8ed1ab_0 conda-forge colorama 0.4.6 pyhd8ed1ab_0 conda-forge cryptography 40.0.1 py311h28e9c30_0 conda-forge dataclasses 0.8 pyhc8e2a94_3 conda-forge datasets 2.11.0 py_0 huggingface dill 0.3.6 pyhd8ed1ab_1 conda-forge filelock 3.11.0 pyhd8ed1ab_0 conda-forge frozenlist 1.3.3 py311ha68e1ae_0 conda-forge fsspec 2023.4.0 pyh1a96a4e_0 conda-forge gflags 2.2.2 ha925a31_1004 conda-forge glog 0.6.0 h4797de2_0 conda-forge huggingface_hub 0.13.4 py_0 huggingface idna 3.4 pyhd8ed1ab_0 conda-forge importlib-metadata 6.3.0 pyha770c72_0 conda-forge importlib_metadata 6.3.0 hd8ed1ab_0 conda-forge intel-openmp 2023.0.0 h57928b3_25922 conda-forge krb5 1.20.1 heb0366b_0 conda-forge libabseil 20230125.0 cxx17_h63175ca_1 conda-forge libarrow 11.0.0 h04c43f8_13_cpu conda-forge libblas 3.9.0 16_win64_mkl conda-forge libbrotlicommon 1.0.9 hcfcfb64_8 conda-forge libbrotlidec 1.0.9 hcfcfb64_8 conda-forge libbrotlienc 1.0.9 hcfcfb64_8 conda-forge libcblas 3.9.0 16_win64_mkl conda-forge libcrc32c 1.1.2 h0e60522_0 conda-forge libcurl 7.88.1 h68f0423_1 conda-forge libexpat 2.5.0 h63175ca_1 conda-forge libffi 3.4.2 h8ffe710_5 conda-forge libgoogle-cloud 2.8.0 hf2ff781_1 conda-forge libgrpc 1.52.1 h32da247_1 
conda-forge libhwloc 2.9.0 h51c2c0f_0 conda-forge libiconv 1.17 h8ffe710_0 conda-forge liblapack 3.9.0 16_win64_mkl conda-forge libprotobuf 3.21.12 h12be248_0 conda-forge libsqlite 3.40.0 hcfcfb64_0 conda-forge libssh2 1.10.0 h9a1e1f7_3 conda-forge libthrift 0.18.1 h9ce19ad_0 conda-forge libutf8proc 2.8.0 h82a8f57_0 conda-forge libxml2 2.10.3 hc3477c8_6 conda-forge libzlib 1.2.13 hcfcfb64_4 conda-forge lz4-c 1.9.4 hcfcfb64_0 conda-forge mkl 2022.1.0 h6a75c08_874 conda-forge multidict 6.0.4 py311ha68e1ae_0 conda-forge multiprocess 0.70.14 py311ha68e1ae_3 conda-forge numpy 1.24.2 py311h0b4df5a_0 conda-forge openssl 3.1.0 hcfcfb64_0 conda-forge orc 1.8.3 hada7b9e_0 conda-forge packaging 23.0 pyhd8ed1ab_0 conda-forge pandas 2.0.0 py311hf63dbb6_0 conda-forge parquet-cpp 1.5.1 2 conda-forge pip 23.0.1 pyhd8ed1ab_0 conda-forge pthreads-win32 2.9.1 hfa6e2cd_3 conda-forge pyarrow 11.0.0 py311h6a6099b_13_cpu conda-forge pycparser 2.21 pyhd8ed1ab_0 conda-forge pyopenssl 23.1.1 pyhd8ed1ab_0 conda-forge pysocks 1.7.1 pyh0701188_6 conda-forge python 3.11.3 h2628c8c_0_cpython conda-forge python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge python-tzdata 2023.3 pyhd8ed1ab_0 conda-forge python-xxhash 3.2.0 py311ha68e1ae_0 conda-forge python_abi 3.11 3_cp311 conda-forge pytz 2023.3 pyhd8ed1ab_0 conda-forge pyyaml 6.0 py311ha68e1ae_5 conda-forge re2 2023.02.02 h63175ca_0 conda-forge requests 2.28.2 pyhd8ed1ab_1 conda-forge setuptools 67.6.1 pyhd8ed1ab_0 conda-forge six 1.16.0 pyh6c4a22f_0 conda-forge snappy 1.1.10 hfb803bf_0 conda-forge tbb 2021.8.0 h91493d7_0 conda-forge tk 8.6.12 h8ffe710_0 conda-forge tqdm 4.65.0 pyhd8ed1ab_1 conda-forge typing-extensions 4.5.0 hd8ed1ab_0 conda-forge typing_extensions 4.5.0 pyha770c72_0 conda-forge tzdata 2023c h71feb2d_0 conda-forge ucrt 10.0.22621.0 h57928b3_0 conda-forge urllib3 1.26.15 pyhd8ed1ab_0 conda-forge vc 14.3 hb6edc58_10 conda-forge vs2015_runtime 14.34.31931 h4c5c07a_10 conda-forge wheel 0.40.0 pyhd8ed1ab_0 conda-forge win_inet_pton 1.1.0 pyhd8ed1ab_6 conda-forge xxhash 0.8.1 hcfcfb64_0 conda-forge xz 5.2.10 h8cc25b3_1 yaml 0.2.5 h8ffe710_2 conda-forge yarl 1.8.2 py311ha68e1ae_0 conda-forge zipp 3.15.0 pyhd8ed1ab_0 conda-forge zlib 1.2.13 hcfcfb64_4 conda-forge zstd 1.5.4 hd43e919_0 ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5727/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5727/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5726
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5726/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5726/comments
https://api.github.com/repos/huggingface/datasets/issues/5726/events
https://github.com/huggingface/datasets/issues/5726
1,660,944,807
I_kwDODunzps5jAAGn
5,726
Fallback JSON Dataset loading does not load all values when features specified manually
{ "avatar_url": "https://avatars.githubusercontent.com/u/3610788?v=4", "events_url": "https://api.github.com/users/myluki2000/events{/privacy}", "followers_url": "https://api.github.com/users/myluki2000/followers", "following_url": "https://api.github.com/users/myluki2000/following{/other_user}", "gists_url": "https://api.github.com/users/myluki2000/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/myluki2000", "id": 3610788, "login": "myluki2000", "node_id": "MDQ6VXNlcjM2MTA3ODg=", "organizations_url": "https://api.github.com/users/myluki2000/orgs", "received_events_url": "https://api.github.com/users/myluki2000/received_events", "repos_url": "https://api.github.com/users/myluki2000/repos", "site_admin": false, "starred_url": "https://api.github.com/users/myluki2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/myluki2000/subscriptions", "type": "User", "url": "https://api.github.com/users/myluki2000" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Thanks for reporting, @myluki2000.\r\n\r\nI am working on a fix." ]
"2023-04-10T15:22:14Z"
"2023-04-21T06:35:28Z"
"2023-04-21T06:35:28Z"
NONE
null
### Describe the bug The fallback JSON dataset loader located here: https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L130-L153 does not load the values of features correctly when features are specified manually and not all features have a value in the first entry of the dataset. I'm pretty sure this is not supposed to be expected behavior? To fix this you'd have to change this line: https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L140 to pass a schema to pyarrow which has the same structure as the features argument passed to the load_dataset() method. ### Steps to reproduce the bug Consider a dataset JSON like this: ``` [ { "instruction": "Do stuff", "output": "Answer stuff" }, { "instruction": "Do stuff2", "input": "Additional Input2", "output": "Answer stuff2" } ] ``` Using this code to load the dataset: ``` from datasets import load_dataset, Features, Value features = { "instruction": Value("string"), "input": Value("string"), "output": Value("string") } features = Features(features) ds = load_dataset("json", data_files="./ds.json", features=features) for row in ds["train"]: print(row) ``` we get a dataset that looks like this: | **Instruction** | **Input** | **Output** | |-----------------|--------------------|-----------------| | "Do stuff" | None | "Answer Stuff" | | "Do stuff2" | None | "Answer Stuff2" | ### Expected behavior The input column should contain values other than None for dataset entries that have the "input" attribute set: | **Instruction** | **Input** | **Output** | |-----------------|--------------------|-----------------| | "Do stuff" | None | "Answer Stuff" | | "Do stuff2" | "Additional Input2" | "Answer Stuff2" | ### Environment info Python 3.10.10 Datasets 2.11.0 Windows 10
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5726/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5726/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5725
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5725/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5725/comments
https://api.github.com/repos/huggingface/datasets/issues/5725/events
https://github.com/huggingface/datasets/issues/5725
1,660,455,202
I_kwDODunzps5i-Iki
5,725
How to limit the number of examples in dataset, for testing?
{ "avatar_url": "https://avatars.githubusercontent.com/u/845175?v=4", "events_url": "https://api.github.com/users/ndvbd/events{/privacy}", "followers_url": "https://api.github.com/users/ndvbd/followers", "following_url": "https://api.github.com/users/ndvbd/following{/other_user}", "gists_url": "https://api.github.com/users/ndvbd/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ndvbd", "id": 845175, "login": "ndvbd", "node_id": "MDQ6VXNlcjg0NTE3NQ==", "organizations_url": "https://api.github.com/users/ndvbd/orgs", "received_events_url": "https://api.github.com/users/ndvbd/received_events", "repos_url": "https://api.github.com/users/ndvbd/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ndvbd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ndvbd/subscriptions", "type": "User", "url": "https://api.github.com/users/ndvbd" }
[]
closed
false
null
[]
null
[ "Hi! You can use the `nrows` parameter for this:\r\n```python\r\ndata = load_dataset(\"json\", data_files=data_path, nrows=10)\r\n```", "@mariosasko I get:\r\n\r\n`TypeError: __init__() got an unexpected keyword argument 'nrows'`", "I misread the format in which the dataset is stored - the `nrows` parameter works for CSV, but not JSON.\r\n\r\nThis means the only option is first to create a DataFrame and then convert it to a Dataset object:\r\n```python\r\nimport pandas as pd\r\nfrom datasets import Dataset\r\n\r\ndf = pd.read_json(data_path, lines=True, nrows=10)\r\nds = Dataset.from_pandas(df)\r\n```" ]
"2023-04-10T08:41:43Z"
"2023-04-21T06:16:24Z"
"2023-04-21T06:16:24Z"
NONE
null
### Describe the bug I am using this command: `data = load_dataset("json", data_files=data_path)` However, I want to add a parameter to limit the number of loaded examples to 10 for development purposes, but can't find such a parameter. ### Steps to reproduce the bug In the description. ### Expected behavior To be able to limit the number of loaded examples. ### Environment info Nothing special
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5725/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5725/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5724
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5724/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5724/comments
https://api.github.com/repos/huggingface/datasets/issues/5724/events
https://github.com/huggingface/datasets/issues/5724
1,659,938,135
I_kwDODunzps5i8KVX
5,724
Error after shuffling streaming IterableDatasets with downloaded dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/41177966?v=4", "events_url": "https://api.github.com/users/szxiangjn/events{/privacy}", "followers_url": "https://api.github.com/users/szxiangjn/followers", "following_url": "https://api.github.com/users/szxiangjn/following{/other_user}", "gists_url": "https://api.github.com/users/szxiangjn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/szxiangjn", "id": 41177966, "login": "szxiangjn", "node_id": "MDQ6VXNlcjQxMTc3OTY2", "organizations_url": "https://api.github.com/users/szxiangjn/orgs", "received_events_url": "https://api.github.com/users/szxiangjn/received_events", "repos_url": "https://api.github.com/users/szxiangjn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/szxiangjn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/szxiangjn/subscriptions", "type": "User", "url": "https://api.github.com/users/szxiangjn" }
[]
closed
false
null
[]
null
[ "Moving `\"en\"` to the end of the path instead of passing it as a config name should fix the error:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('/path/to/your/data/dir/en', streaming=True, split='train')\r\ndataset = dataset.shuffle(buffer_size=10_000, seed=42)\r\nnext(iter(dataset))\r\n```\r\n\r\nPS: https://github.com/huggingface/datasets/pull/5331, once merged, will allow us to define C4's configs in its README, making downloading it much more user-friendly." ]
"2023-04-09T16:58:44Z"
"2023-04-20T20:37:30Z"
"2023-04-20T20:37:30Z"
NONE
null
### Describe the bug I downloaded the C4 dataset and used streaming IterableDatasets to read it. Everything worked normally until I used `dataset = dataset.shuffle(seed=42, buffer_size=10_000)` to shuffle the dataset. The shuffled dataset throws the following error when used with `next(iter(dataset))`: ``` File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 937, in __iter__ for key, example in ex_iterable: File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 627, in __iter__ for x in self.ex_iterable: File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 138, in __iter__ yield from self.generate_examples_fn(**kwargs_with_shuffled_shards) File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 763, in wrapper for key, table in generate_tables_fn(**kwargs): File "/data/miniconda3/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 101, in _generate_tables batch = f.read(self.config.chunksize) File "/data/miniconda3/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 372, in read_with_retries out = read(*args, **kwargs) File "/data/miniconda3/lib/python3.9/gzip.py", line 300, in read return self._buffer.read(size) File "/data/miniconda3/lib/python3.9/_compression.py", line 68, in readinto data = self.read(len(byte_view)) File "/data/miniconda3/lib/python3.9/gzip.py", line 487, in read if not self._read_gzip_header(): File "/data/miniconda3/lib/python3.9/gzip.py", line 435, in _read_gzip_header raise BadGzipFile('Not a gzipped file (%r)' % magic) gzip.BadGzipFile: Not a gzipped file (b've') ``` I found that there is no problem using the dataset this way without shuffling. Also, using `dataset = datasets.load_dataset('c4', 'en', split='train', streaming=True)`, which downloads the dataset on the fly instead of loading it from local files, causes no problems even after shuffling. ### Steps to reproduce the bug 1. Download C4 dataset from https://huggingface.co/datasets/allenai/c4 2. ``` import datasets dataset = datasets.load_dataset('/path/to/your/data/dir', 'en', streaming=True, split='train') dataset = dataset.shuffle(buffer_size=10_000, seed=42) next(iter(dataset)) ``` ### Expected behavior `next(iter(dataset))` should give me a sample from the dataset ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.4.32-1-tlinux4-0001-x86_64-with-glibc2.28 - Python version: 3.9.16 - Huggingface_hub version: 0.13.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5724/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5724/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5722
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5722/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5722/comments
https://api.github.com/repos/huggingface/datasets/issues/5722/events
https://github.com/huggingface/datasets/issues/5722
1,659,837,510
I_kwDODunzps5i7xxG
5,722
Distributed Training Error on Customized Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/16603773?v=4", "events_url": "https://api.github.com/users/wlhgtc/events{/privacy}", "followers_url": "https://api.github.com/users/wlhgtc/followers", "following_url": "https://api.github.com/users/wlhgtc/following{/other_user}", "gists_url": "https://api.github.com/users/wlhgtc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wlhgtc", "id": 16603773, "login": "wlhgtc", "node_id": "MDQ6VXNlcjE2NjAzNzcz", "organizations_url": "https://api.github.com/users/wlhgtc/orgs", "received_events_url": "https://api.github.com/users/wlhgtc/received_events", "repos_url": "https://api.github.com/users/wlhgtc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wlhgtc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wlhgtc/subscriptions", "type": "User", "url": "https://api.github.com/users/wlhgtc" }
[]
closed
false
null
[]
null
[ "Hmm the error doesn't seem related to data loading.\r\n\r\nRegarding `split_dataset_by_node`: it's generally used to split an iterable dataset (e.g. when streaming) in pytorch DDP. It's not needed if you use a regular dataset since the pytorch DataLoader already assigns a subset of the dataset indices to each node." ]
"2023-04-09T11:04:59Z"
"2023-07-24T14:50:46Z"
"2023-07-24T14:50:46Z"
NONE
null
Hi guys, recently I tried to use `datasets` to train a dual encoder. I built my own dataset according to the nice [tutorial](https://huggingface.co/docs/datasets/v2.11.0/en/dataset_script). Here is my code: ```python class RetrivalDataset(datasets.GeneratorBasedBuilder): """CrossEncoder dataset.""" BUILDER_CONFIGS = [RetrivalConfig(name="DuReader")] # DEFAULT_CONFIG_NAME = "DuReader" def _info(self): return datasets.DatasetInfo( features=datasets.Features( { "id": datasets.Value("string"), "question": datasets.Value("string"), "documents": Sequence(datasets.Value("string")), } ), supervised_keys=None, ) def _split_generators(self, dl_manager): """Returns SplitGenerators.""" train_file = self.config.data_dir + self.config.train_file valid_file = self.config.data_dir + self.config.valid_file logger.info(f"Training on {self.config.train_file}") logger.info(f"Evaluating on {self.config.valid_file}") return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={"file_path": train_file} ), datasets.SplitGenerator( name=datasets.Split.VALIDATION, gen_kwargs={"file_path": valid_file} ), ] def _generate_examples(self, file_path): with jsonlines.open(file_path, "r") as f: for record in f: label = record["label"] question = record["question"] # dual encoder all_documents = record["all_documents"] positive_paragraph = all_documents.pop(label) all_documents = [positive_paragraph] + all_documents u_id = "{}_#_{}".format( md5_hash(question + "".join(all_documents)), "".join(random.sample(string.ascii_letters + string.digits, 7)), ) item = { "question": question, "documents": all_documents, "id": u_id, } yield u_id, item ``` It works well on a single GPU, but I got the following errors when using DDP: ```python Detected mismatch between collectives on ranks. Rank 1 is running collective: CollectiveFingerPrint(OpType=BARRIER), but Rank 0 is running collective: CollectiveFingerPrint(OpType=ALLGATHER_COALESCED) ``` Here is my training script on a machine with two A100s: ```bash export TORCH_DISTRIBUTED_DEBUG=DETAIL export TORCH_SHOW_CPP_STACKTRACES=1 export NCCL_DEBUG=INFO export NCCL_DEBUG_SUBSYS=INIT,COLL,ENV nohup torchrun --nproc_per_node 2 train.py experiments/de-big.json >logs/de-big.log 2>&1& ``` I am not sure if this error is related to my dataset code when using DDP. I noticed PR #5369, but I don't know when and where I should use the `split_dataset_by_node` function. @lhoestq I hope you could help me?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5722/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5722/timeline
null
completed
null
null
false